
Kerrying On: Some Thoughts on Knowing

ABOUT THE AUTHOR
Kerry Patterson

Kerry Patterson is coauthor of four New York Times bestsellers: Crucial Conversations, Crucial Accountability, Influencer, and Change Anything.


In the spring of 1958, in my seventh-grade Civics class, Mr. Edwards introduced a classroom full of slack-jawed students to the mysterious field of epistemology. Between explaining to Sam Barker that it was bad to flick the back of Carol Simpson’s left ear and chastising Curt LaMay for taking his shoes off and putting his feet on his desk, Mr. Edwards raised the mind-boggling question: How do we know what we know?

“We often say that we know certain things to be true,” the balding Mr. Edwards expounded, “but how do we know that we know?”

At first, this curious query yielded blank stares. However, as the concept of needing to validate what we accept as knowledge sank into our largely unused craniums, it wasn’t long until we were caught up in the intellectual chase. After thinking about Mr. Edwards’s question for a second, I came back with, “If we discover the secret for knowing how we know something—how do we know that?”

As my classmates came up to speed with the idea of an infinite string of “how do we know that?” questions, I added another idea, one my father had brought to my attention the previous summer as we drove across the country, having tired of playing “I spy with my little eye.”

“Suppose there’s this planet called Zaltoid. The Grand Fubah in charge of Zaltoid has a dog named Fido—only spelled with a P-H, not an F. One day Phido eats a spoiled piece of road kill, falls into a restless sleep, and has a horrible nightmare.

“Now here’s the interesting part,” I said, quoting my father. “What if we are that nightmare? What if this whole universe is nothing more than someone or something else’s dream?”

Since I raised those questions that day, I have never received a satisfactory explanation for how we know anything. Apparently I’ve flunked epistemology. Nevertheless, I do have some good news. I have learned a fair amount about examining the validity of what other human beings claim to be true. This is important because people are constantly trying to get one another to believe one wild idea after another, and many prop up their arguments with all sorts of supposed “scientific evidence.”

For instance, can we really believe that four out of five dentists recommend gum to their patients who chew gum? And sure, we all know that kids who watch more than five hours of TV a day are more violent than kids who watch less TV, but is this because they watch violent TV? I, for one, longed for slow-motion decapitations as part of my daily childhood TV diet, and yet today I lift spiders off the floor and take them outside rather than step on them. And finally, has anyone ever trusted statistics? Have you seen the people who teach it?

Today I’m prepared to answer these penetrating questions. Over two decades ago, I spent six straight years in graduate school, where I learned how you can be duped by nefarious tricksters who employ faulty thinking, dreadful research techniques, and clever statistical ploys—all aimed at convincing naïve listeners of the veracity of something that may not be all that true. I wish I had learned the basics of statistical analysis and research design earlier in life. I want my own children to know about them today. I wish the university students I continue to teach were far less willing to accept questionable research findings—just because they’re printed to three decimal places in a serifed font.

So here goes. Here are three popular methods people (often researchers) use to support their conclusions. As you read the examples, see if you can catch the errors or, in some cases, the tricks they routinely employ.

Bald-faced Cheating. First, let’s consider those eager dentists out there who recommend gum to their patients who chew gum. Does sugarless gum help clean teeth, or might it pull off your crowns? What are the actual pluses and minuses? It’s hard to know, because the research wasn’t really conducted. Here’s how I found out.

When I was a Coast Guard officer, I ate lunch every day alongside a table filled with doctors and dentists. One day, one of the dentists pulled out a letter with a flourish. “Take a look at this baby!” he exclaimed with a grin. It was a letter from the people who made and sold the gum. The letter explained: “Dear Dr. Snodgrass, if you recommend our gum to your patients who chew gum, don’t return this letter. If you don’t recommend our gum, please answer the following fifty questions and mail your response to us.” No return envelope was provided. So Dr. Snodgrass threw away the letter—as did four out of five other dentists.

I start with this particularly slippery example of questionable research methods because it represents the worst of research trickery. It’s premeditated and self-serving; it presents a lie as the truth and then dresses the lie up with numbers.
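To see how the trick turns silence into endorsement, here’s a minimal sketch in Python. Every number in it is invented (the letter in the story reported no real counts); the point is only that when non-response is counted as agreement, the survey design can manufacture almost any statistic it wants.

```python
# Hypothetical numbers only: a sketch of how "don't reply if you agree"
# manufactures a statistic. Dentists who recommend the gum are told to do
# nothing; dentists who don't must answer fifty questions with no return
# envelope, so nearly all of them throw the letter away.

dentists_surveyed = 1000
truly_recommend = 200            # invented: only 1 in 5 actually recommend it

non_recommenders = dentists_surveyed - truly_recommend
replies_returned = int(non_recommenders * 0.02)   # invented: 2% bother to reply

# Every letter that is not returned gets counted as an endorsement.
counted_as_recommending = dentists_surveyed - replies_returned
print(f"{counted_as_recommending / dentists_surveyed:.0%} of dentists 'recommend' our gum")
# -> 98% of dentists 'recommend' our gum, even though only 20% actually do.
```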

The solution to this type of shenanigan is obvious, but not always simple. If a research finding is important enough that you’re about to act on it, study the original research. Learn what the researchers actually examined and how they examined it. Don’t take anything at face value. Without knowing how people actually conducted the research, you can’t begin to understand, let alone trust, the reported findings.

Faulty Conclusion. A large petroleum provider runs a nationwide magazine ad pointing out that children who watch more than five hours of TV a day are more violent than children who watch less TV. From this finding, the do-gooders behind the ad make two recommendations. One, don’t let your kids watch so much TV. Two, take particular care to avoid violent TV.

What’s wrong with this ad?

While it’s hard to fault the advice, you can’t offer it based on this research finding. The authors assume that correlational data imply causality. Just because violence increases with TV watching doesn’t mean that watching TV causes the violence. The causality could run in the other direction: kids who are violent (for whatever reason) might simply like watching TV. Or a third variable could be at work: parents who let their kids watch a lot of TV may themselves be inattentive and violent, and these unhealthy behaviors drive both the TV watching and the violence.

Question correlational data. A correlation doesn’t establish the direction of causality, and both measures could be puppets of a third variable altogether.
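To make the third-variable problem concrete, here’s a minimal sketch in Python. All of the numbers are invented: a hidden “inattentive parenting” score drives both TV hours and violence, and the two end up strongly correlated even though, by construction, neither causes the other.

```python
import random

random.seed(1)

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

tv_hours, violence = [], []
for _ in range(1000):
    inattentive = random.random()                    # hidden confounder, 0..1
    tv = 2 + 6 * inattentive + random.gauss(0, 1)    # TV time driven by parenting
    viol = 1 + 4 * inattentive + random.gauss(0, 1)  # violence driven by parenting
    tv_hours.append(tv)
    violence.append(viol)

# A strong correlation appears even though neither variable causes the other.
print(f"correlation(tv, violence) = {correlation(tv_hours, violence):.2f}")
```

Cut the confounder out of the simulation and the correlation vanishes, which is exactly the possibility an ad built on correlational data can never rule out.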

After-the-fact Conclusions. This one’s a bit tricky, and I learned about it in an unexpected way. One day my teenage son and I were walking along a rocky beach in Puget Sound. After hitting a few rocks with makeshift bats and skipping flat rocks over the placid surface, we eventually settled in on the age-old contest of rock throwing for accuracy, which I wasn’t very good at. So I decided to impress my son in a new way. I grabbed a plum-sized rock, told my son to watch closely, and then threw the rock as far as I could chuck it. It landed on a small piece of driftwood some forty yards away and careened into the water. “Do you see that tiny piece of driftwood I just hit?” I asked. He indicated that he did. “Okay, hotshot,” I continued. “Now you hit that piece of driftwood.”

‘Hey!” he chided me. “I would be a lot more impressed if you had told me what you were going to hit before you threw the rock.”

Researchers use this same technique all the time. For example, a group of organizational theorists enter healthy organizations as well as ones that are limping along. After looking for differences between the two groups, the researchers explain exactly why the one group is healthy and the other isn’t. In the healthy companies people are empowered, or their bosses are LMNOP-types, or they all wear yellow accessories, or whatever.

Making ambitious pronouncements after completing a series of observations is, indeed, similar to stumbling on a target and then acting as if you knew about it all along. Such research methods could actually serve us well if they were followed by experiments that tested the conclusion and tracked the results. But they rarely are. Instead, researchers fling a rock in hopes of hitting something and then convince themselves that they knew what they were going to hit long before they so much as moved a muscle.

Review after-the-fact pronouncements with caution. While observational research can be enormously helpful in generating hypotheses, don’t treat the observations as experimental findings worth acting on.
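Here’s a minimal sketch in Python of the same rock-throwing move. Every “trait” below is pure noise, yet if you scan enough traits after the data are in, one of them will always appear to separate the healthy companies from the struggling ones by an impressive margin.

```python
import random

random.seed(7)

NUM_TRAITS = 50   # empowerment, LMNOP bosses, yellow accessories, ...
GROUP_SIZE = 10   # companies per group

def group_mean(n):
    """Average trait score for one group; pure noise by construction."""
    return sum(random.gauss(0, 1) for _ in range(n)) / n

best_gap, best_trait = 0.0, None
for trait in range(NUM_TRAITS):
    healthy = group_mean(GROUP_SIZE)
    limping = group_mean(GROUP_SIZE)
    if abs(healthy - limping) > best_gap:
        best_gap, best_trait = abs(healthy - limping), trait

# The widest gap looks like a discovery, though no trait truly differs.
print(f"trait #{best_trait} 'explains' organizational health (gap = {best_gap:.2f})")
```

The honest next step, which the rock thrower and the theorist alike usually skip, is to treat that winning trait as a hypothesis and test it on fresh data.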

In summary, research findings shouldn’t be accepted as if they had somehow been carved in stone on a mountainside. Findings should be used as a starting point for a thoughtful discussion. If you’re about to bet on a study by actually doing something, pull out the study and examine it with a fine-tooth comb. Don’t be tricked by the decimals.
