Kerry Patterson is coauthor of four New York Times bestsellers, Crucial Conversations, Crucial Accountability, Influencer, and Change Anything.
A couple of months ago I wrote a piece on epistemology—or how we know what we know. I addressed methods for cheating, the dangers of drawing causal conclusions from correlational data, and the insidious threat of post hoc analysis (and yes, it was every bit as scintillating as this summary suggests). This month I complete my thoughts on the subject with four more vignettes that help answer the questions: Should I trust what you’re telling me? How do I separate the wheat from the chaff? I’ll start with a threat to all of us . . .
Bad Research. One day while I was chatting with a graduate student about research methods, he pointed out that he didn’t have much faith in research because, in his mind, it wasn’t all that scientific. He then went on to explain that he was once part of a research team that observed birds of prey within their various habitats. The hypothesis they were testing was that a certain bird would hang suspended over one habitat more often than another bird which preferred a different habitat.
The trouble was that as researchers observed the birds riding the updrafts, it was tricky to divine which was which. The birds often appeared as little more than specks against the blue sky. Undeterred by the ambiguity, my friend explained that he wrote down the bird that was supposed to match the habitat because he “knew it would confirm the professor’s hypothesis.” From this experience he concluded that research is bad. He should have concluded that bad research is bad.
What made this research bad?
The grad student needed to be “blind to condition.” That is, he shouldn’t have known the hypothesis because it might bias him—which it did.
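In practice, blinding an observer can be as simple as coding the conditions so that whoever records the data can’t tell which condition matches the hypothesis. Here’s a minimal Python sketch of that idea; the habitat names and site codes are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical habitats from the bird study, given neutral codes so the
# observer logging sightings cannot tell which one matches the
# professor's hypothesis. All names here are invented for illustration.
habitats = ["ridge", "marsh"]
codes = ["site_A", "site_B"]
random.shuffle(codes)

# The mapping is held by a third party and revealed only after all the
# data are in; the observer records sightings against the codes alone.
code_for = dict(zip(habitats, codes))

observation = (code_for["ridge"], "large raptor, hovering")
print(observation[0] in codes)
```

The observer still has to squint at specks in the sky, but now a guess can’t systematically favor the hypothesis, because the observer doesn’t know which code the hypothesis predicts.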
A Lack of Control. My wife once took a children’s literature class. The teacher explained that one “seminal” study discovered that if children are reading more than two years behind grade level, and if you read aloud to them, then their scores would miraculously return to normal. My wife asked how the research had been conducted. It turns out that researchers had identified students who were two or more years behind reading level and then read aloud to them. Their scores improved. Ergo, reading aloud causes reading scores to improve.
What’s wrong with this study?
The reading scores may have improved simply because the students matured over the year and they caught up naturally. We’ll never know for sure, because the researchers didn’t simultaneously track a control group. Without a control group, you can’t conclude that the intervention caused anything. Who knows what else might have been going on in the environment to cause the improvement?
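A toy simulation makes the problem concrete. Assume, purely for illustration, that every child naturally gains about one grade level of reading skill over the year, with or without anyone reading aloud to them; all the scores below are invented.

```python
import random

random.seed(0)

# Invented reading scores for 100 students starting two years behind.
start = [random.gauss(3.0, 0.5) for _ in range(100)]

# Assumption for illustration: every child matures by about one grade
# level over the year, regardless of any intervention.
maturation = 1.0

treated = start[:50]   # read aloud to
control = start[50:]   # not read aloud to

end_treated = [s + maturation for s in treated]
end_control = [s + maturation for s in control]

gain_treated = sum(e - s for e, s in zip(end_treated, treated)) / len(treated)
gain_control = sum(e - s for e, s in zip(end_control, control)) / len(control)

# Both groups gained a full grade level. Track only the treated group
# and the whole gain looks like the intervention's doing.
print(round(gain_treated, 2), round(gain_control, 2))
```

The control group is what lets you subtract out maturation, seasonal effects, and everything else that would have happened anyway.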
A Random Idea. One day, when I was six, I was searching for frog eggs in the swamps behind my back yard when it suddenly hit me that I had been slogging through filthy, slimy gunk all day long and that I just might have come into contact with a polio germ. I immediately sprinted home, boiled water, and tried to gargle with it. Never before had cleanliness concerned me an iota, but on this day in 1952 the fear of contracting polio was on everyone’s mind. Nobody knew how you caught the frightening disease, it struck down young children, and its victims could end up in an iron lung for the rest of their lives. Thus the painful gargling.
Later that year when medical researchers called for volunteers to test a new polio vaccine, my parents eagerly signed me up. After receiving four doses of a very thick vaccine that had to be pumped into my tiny arm through a painfully thick needle, I was awarded a small metal button that proclaimed me a “Polio Pioneer.” I had been one of thousands of children who had served as experimental subjects, and the good news was that the vaccine had worked. Fewer of us who had received the shots came down with the frightening disease than children whose parents hadn’t signed them up to be guinea pigs.
The victory celebration didn’t last very long. Within weeks an embarrassing announcement followed. The research was flawed.
The scientists who were trying to save thousands of people untold suffering didn’t know how to design an experiment. Someone who did know a thing or two about research design asked the senior designers a pesky question: “What if the parents who volunteered their kids to be tested treat their offspring differently than parents who didn’t volunteer?” Perhaps people who willingly expose their kids to medical research are more scientifically oriented. Maybe they’re more educated. Maybe they keep their houses cleaner. If any or all of this were true, then maybe the kids who received the shots experienced a lower incidence of polio because of the washings and cleaning and not because of the inoculations.
Now what’s a scientist to do? You can’t ask kids to number off and then give half of them a shot and leave the other half untested. That’s illegal and unethical. But you can call for volunteers once again and then give half the kids the vaccine and half a placebo. So that’s what the researchers did. They created a new strain of vaccine, called for volunteers (my parents couldn’t sign me up fast enough) and gave us shots. Of course, they didn’t tell us kids that half of us were getting water shot into our arms.
We discovered the trickery later that year when, with all of the drama associated with the Academy Awards, our teachers handed out sealed envelopes. Inside my envelope I found a small slip of paper with the word “vaccine” printed on it. I had won. I had been properly vaccinated and once again the newly formulated vaccine had proven to be effective. My neighbor Bobby Kaiser opened his envelope only to find the word “water.” In order to be properly inoculated, Bobby was forced to take four more shots—ending with a grand total of twelve arm puncturings. And why? Because researchers didn’t understand that not only must you create experimental and control groups when designing an experiment, but you must also randomly assign people to their conditions. Anything less than this and you can’t trust the findings.
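Random assignment itself is simple to carry out: pool the volunteers, shuffle, and split. A minimal sketch, with an invented volunteer list:

```python
import random

random.seed(42)

# Invented volunteer pool for illustration.
volunteers = [f"child_{i:02d}" for i in range(10)]

# Shuffle, then split: chance, not the parents' traits, decides who
# gets vaccine and who gets placebo, so education, cleanliness, and
# every other hidden difference tends to spread evenly across groups.
random.shuffle(volunteers)
half = len(volunteers) // 2
vaccine_group = volunteers[:half]
placebo_group = volunteers[half:]

print(len(vaccine_group), len(placebo_group))
```

Because the coin flip, not the family, decides who lands in which group, any difference in outcomes can be pinned on the vaccine rather than on the kind of parents who volunteer.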
Lying with Statistics. When listening to sales pitches you often hear things like “individuals who have been immersed in our award-winning Executive Leadership Training Program are ‘significantly more effective’ than those who haven’t!” That sounds good. Participants don’t just do better, they do significantly better.
Do you know why this might actually be a bit of a lie?
Here’s what’s typically going on behind the scenes. A group of subjects is exposed to a treatment—say, leadership training. The subjects then score 3.5 on a 7-point effectiveness scale whereas people who missed the training score 3.4. The designers don’t like this oh-so-tiny improvement so they launder their results in a way that gets them what they want.
Here’s what they do. If they then put, say, 2,000 more people through the training and the expanded population continues to score a tenth of a point higher than people who aren’t treated, the difference actually may be statistically significant. This, unfortunately, is a poor choice of terms. What statisticians are really saying when they use the word “significant” is that 95 times out of a hundred—if the two groups were actually alike—you wouldn’t get that .1 difference. So, you can be 95 percent sure that the difference is real. In the leadership training case, that means you’re 95 percent sure that those who paid for the leadership training improved a very trivial amount—a puny one-tenth on a 7-point scale. By misapplying the statistical term “significance,” the training designers report that their trained group did significantly better.
The solution to this deliberate obfuscation is rather simple. When someone reports “significant” results, ask to see their raw data. Demand to see the difference of the scores between groups. You might discover that the number of subjects was large but the difference between groups was puny. From this you’ll know that the person claiming “significant” improvement is misapplying the term.
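You can see the trick with a little arithmetic. The sketch below runs a standard two-sample z-test on the training example; the one-tenth difference and the 7-point scale come from the scenario above, while the standard deviation of 1.0 and the group sizes are assumptions chosen for illustration. The same tiny difference is nowhere near “significant” with thirty people per group, yet crosses the 95 percent threshold with two thousand.

```python
import math

def two_sample_z_p(diff, sd, n):
    """Two-sided p-value for a difference in means between two groups
    of size n, assuming each group's scores have standard deviation sd."""
    se = math.sqrt(2 * sd**2 / n)       # standard error of the difference
    z = diff / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

diff = 0.1  # 3.5 vs. 3.4 on the 7-point scale
sd = 1.0    # assumed spread of individual scores (invented)

for n in (30, 2000):
    p = two_sample_z_p(diff, sd, n)
    print(n, "significant" if p < 0.05 else "not significant")
```

Notice what changed between the two lines: not the size of the improvement, only the number of subjects. “Significant” here means “probably not zero,” never “big,” which is exactly why you should ask to see the raw difference between groups.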
So there you have it. Hardly a day passes that someone out there doesn’t make some audacious “research-based” claim about what we need to do to improve our lives. Naturally, before we start drinking brine-shrimp juice, or standing on our heads during the nightly news, or pumping water into our colons we ask, “How do you know this method actually works? It doesn’t sound all that fun and if it doesn’t work, well, why should we suffer?” These are the right questions to ask. And whether you studied epistemology in school or not, it turns out you need to be an epistemologist to sort out the wheat from the chaff. Otherwise you’ll need to get used to being “chaff-ted.”