When Qualitative Research Works

  • Dev Patnaik
  • Steve Frechette
  • June 24, 2022

Industry leaders have been successfully using qualitative research for decades. But for many years, we didn’t really understand why it worked. Now we know, and understanding these dynamics can supercharge your research.

Self-driving cars, drone delivery, augmented reality. The world is changing, and companies need to innovate to stay relevant. But real innovation isn’t about chasing the next new thing. It’s about learning how to become resilient.

In an attempt to build this resilience, companies have long turned to quantitative data. They seek out larger and more sophisticated datasets, yearning to uncover insights about customer behavior. The rise of big data is the natural outcome of that trend.

Quantitative data is great, especially for business intelligence; however, without the unique insights that only qualitative research provides, the most valuable opportunities remain hidden.

A large consumer packaged goods company faced just this challenge. For years, they worked to make incrementally better multi-blade razors: sharper razors, safer razors, faster razors. And yet, growth had started to plateau. Reversing this trend wouldn’t come from flashier blades.

Seeking an answer, the company launched a global shaving survey, investigating the shaving habits of 20,000 people across the world. They tabulated the responses, crunched the numbers, and studied the output. What they found was unsurprising: people want a clean, close shave. This was the same old playbook they had been operating from for decades. On the face of it, there were no new opportunities for growth.


The team decided to take a step back and reframe the study. Working with Jump, they conducted qualitative research designed to uncover the social and psychological needs that men have around hair removal. That meant more than just figuring out how they shave. They needed to look at men’s shaving needs in the broader context of their lives. This wasn’t a study of habits. It was a question of culture.

The team shadowed men in the morning, standing in the bathroom as participants went through their morning routines, narrating their actions and feelings. They provided cameras to fathers and sons and had them film each other during their morning prep times. They conducted in-depth interviews with participants to unpack what they were observing and hearing. And finally, they ran game nights, where participants played a board game that was custom-designed to elicit stories about everyday life.

After conducting the fieldwork, and analyzing the information they collected, the team found that using razors wasn’t just about getting a clean, close shave. Men actually care about shaping and styling their facial hair. And furthermore, the presence of facial hair, not just the act of shaving it off, is an important signifier of adulthood for boys. Shaving is how you create a look.

Company executives were surprised by the results. They knew that shaving was an important part of being a man, but they questioned how many men were actively trying to shape their facial hair. After all, weren’t most men clean-shaven?

The team decided to go back to the global shaving survey. This time, they mined the database for one question: how many people have facial hair? The answer was over 50%. The data was there all along. No one had thought to ask the question.

Analyzing data in a vacuum, without understanding the underlying experiences and motivations of customers, obscures critical insights. It’s difficult to know how to cut the data, and how to make sense of what might surface. But interviewing just a handful of people can provide groundbreaking insights and bring that data to life.

Ethnography shouldn’t work. But it does.


Ethnography was originally developed in the social sciences to understand people and cultures. Researchers spend time observing local customs, speak with members of the community, and participate in everyday life. Across industries, companies have seen dramatic benefits from applying this immersive, qualitative research.

And yet, these same researchers were unable to provide a rigorous scientific rationale for how and why ethnography worked so well. Conventional wisdom, rooted in statistics, would suggest that you simply can’t come to valid conclusions from studying just a handful of people. However, companies have been able to produce meaningful insights from studying samples of fewer than twenty people. As with the shaving study, these insights have then been validated by larger quantitative surveys. This shouldn’t work. But it does.

Back in the late eighties, Abbie Griffin at MIT was exploring the benefits of new product development. A large part of her work revolved around customer needs. Griffin wanted to know how many people she would need to interview to gather a comprehensive set of needs on a given topic.

Diving into this question, Griffin interviewed thirty potential customers of food-carrying and storing devices (things like coolers and picnic baskets). The interviews were transcribed, and each one was read by seven analysts. After reviewing the interviews, the team merged the needs and eliminated duplicates. They found a set of 230 unique needs, which represented about 90% of all possible needs the team could have gathered on the topic.

With this in hand, Griffin wanted to know how many needs she could have obtained from interviewing fewer than thirty people. Because the participants revealed different numbers of needs, she had to consider the order of the interviews. So Griffin generated 70,000 possible orderings, which told her the percentage of total needs revealed by each successive customer. She plotted the results on a chart.

Griffin found that fewer new needs are uncovered in each successive interview. She found that the first three customers would reveal approximately 40% of the total needs. The next three would reveal an additional 20%. And the three after that would reveal only about 10%. The trend shows how it becomes increasingly unlikely to uncover large numbers of new needs late in a study. In fact, the bulk of needs are identified within a dozen interviews.
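The shape of Griffin’s curve is easy to reproduce with a toy simulation. The parameters below (a pool of 230 needs, with each participant mentioning roughly 30 of them at random) are hypothetical stand-ins, not Griffin’s actual data; the point is the diminishing-returns shape, not the exact percentages.

```python
import random

def simulate_coverage(total_needs=230, needs_per_person=30,
                      people=12, trials=2000, seed=0):
    """Average cumulative fraction of unique needs uncovered after each
    successive interview, assuming each participant mentions a random
    subset of the total need pool. All parameters are illustrative."""
    rng = random.Random(seed)
    cumulative = [0.0] * people
    for _ in range(trials):
        seen = set()  # needs heard so far in this simulated study
        for i in range(people):
            seen.update(rng.sample(range(total_needs), needs_per_person))
            cumulative[i] += len(seen) / total_needs
    return [c / trials for c in cumulative]

for i, c in enumerate(simulate_coverage(), start=1):
    print(f"after {i:2d} interviews: {c:.0%} of needs uncovered")
```

Each successive interview mostly re-covers needs that earlier participants already mentioned, so the per-interview gain keeps shrinking — the same pattern Griffin observed in her data.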

Griffin’s work demonstrated that qualitative research was effective at revealing a large amount of information with just a handful of participants. Similar results were also found in a commercial setting, in which needs were gathered around complex office equipment. But why does this methodology work so well?

The magic of ethnography is in cultural consensus.

Cultural consensus is the degree to which everyone in a group agrees with each other about a subject.

On the face of it, statistics might suggest that you need to have a very large sample for accurate results. If you were trying to find out the colors of golf balls in a big bag, for example, you’d want to pull out a lot of golf balls before claiming you knew the colors. However, if the golf balls had been talking to each other, and some of them were experts on who else is in the bag, you’d only need a few to find out all the colors.

In the mid-eighties, Kim Romney and Bill Batchelder of UC Irvine, and Susan Weller of UPenn, wanted to understand the relationship between the amount of experience participants have in a subject and the number of participants required to obtain accurate results. In qualitative research, this expertise in a subject is called cultural knowledge. Cultural knowledge is anything learned through experience (such as the name of an object or the rules of a game) that others also agree upon. The more experience individuals have in a subject, the more likely they are to agree with others about it, and thus the more likely their answers are to be correct.

This measure of cultural knowledge was important to the authors because it would tell them, and other researchers, how confident they could be in the answers they’d hear when interviewing people about a topic. Would they hear the actual needs? Could these needs be applied to a broader group?

So the team derived a model that would help them measure the cultural knowledge of each participant. To test it, the authors chose 40 medium-difficulty questions from almanacs, atlases, trivia books, friends, and colleagues. These questions would represent culture. They put the questions to 41 randomly selected students in the UC Irvine student union, then built a matrix from the results to analyze the proportion of matching answers between the students.

The authors found that the higher the competence of the individuals (that is, the more questions each person answered correctly), the fewer people were needed in total to answer all of the questions correctly in the study. In fact, the authors found that this could be accomplished with just four individuals.

In addition to the experience level of participants (competence), the desired accuracy of the responses (confidence) also determines sample size. The more you want to make sure that the answers are accurate for that subject, the more participants are required. With that said, the numbers are largely illustrative. The study was run with true/false trivia questions, which is far cleaner than trying to understand consumer needs. As the authors note, “…it is best not to take [the table] too literally. The numbers are meant as a rough guide and to illustrate that it’s possible to get stable results with fairly small samples.”
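The competence–confidence trade-off can be illustrated with a simplified majority-vote calculation — a Condorcet-style sketch, not the authors’ actual Bayesian model. Assume each informant independently answers a true/false question correctly with probability equal to their competence, and ask how large a panel must be before its majority answer reaches a target confidence level.

```python
from math import comb

def majority_correct(n, competence):
    """Probability that a simple majority of n informants (n odd) is right,
    assuming independent answers, each correct with the given competence."""
    return sum(comb(n, k) * competence**k * (1 - competence)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def informants_needed(competence, confidence=0.95, max_n=99):
    """Smallest odd panel size whose majority vote reaches the target
    confidence, or None if no panel up to max_n suffices."""
    for n in range(1, max_n + 1, 2):
        if majority_correct(n, competence) >= confidence:
            return n
    return None

print(informants_needed(0.9))  # highly competent informants: a tiny panel
print(informants_needed(0.6))  # weakly competent informants: a large one
```

With 90%-competent informants, a panel of three already clears 95% confidence; at 60% competence, the same confidence takes a panel of dozens — which echoes the authors’ warning that their published numbers are a rough guide, not a recipe.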

Let’s pause for a moment. To recap: cultural knowledge is anything learned through experience. To measure a person’s cultural knowledge, the authors built a model and tested it with a set of questions. They found that the people with more knowledge could answer more questions correctly. In other words, the more that each golf ball knows about the others in the bag, the fewer you need to pull out to learn the colors of all the golf balls.

This may seem straightforward (the more knowledgeable the people, the fewer it takes to answer the questions correctly), but the implications of this are powerful for qualitative research. When running qualitative studies about a specific subject, it’s important to choose subject experts. What makes someone a subject expert? As noted previously, it’s how much experience a person has in a subject, and the more experience a person has, the more likely that person is to agree with others in the group about that subject. This is referred to as the person’s cultural competence. Subject experts have high cultural competence. Often, this means the group of experts will have similar life experiences, whether it’s riding motorcycles with family, shopping at malls with friends, or shaving in the morning.

Cultural consensus is achieved when these subject experts are gathered together. It’s the degree to which everyone in the group agrees with each other about a subject. Achieving cultural consensus increases the accuracy of the responses and reduces the number of people required for us to learn about the subject.

Here’s a simple example that illustrates this point:

Let’s say you’re on vacation in Mumbai, and as a test, you want to see if you can learn the rules of baseball without using books, a smartphone, or a computer. So you ask random passersby on the street. It turns out that many have never watched a baseball game before. Others have observed games but never paid enough attention to care about the rules. And a few give you responses based on the game of cricket. Eventually, you end up getting most of the rules of baseball, along with some erroneous rules, but it takes hundreds of conversations to get there.

The question was clear (what are the rules of baseball?), but cultural competence wasn’t high (the people you spoke with didn’t know a lot about the game).

Fast forward a week, and you’re back home in Chicago. You decide to try this experiment again. Instead of randomly intercepting people on the streets of Mumbai, you head out to a little league baseball game near your home. You ask the coach if you could speak with the players after the game. You get the thumbs up, and when the game ends, you ask each player, one at a time, to tell you the rules of baseball. The first player covers the vast majority of the rules. The second player repeats many of them but also adds in some new ones. After just a handful of conversations, you’ve learned the rules of baseball.

When you defined the subject area (the game of baseball), located subject experts (little league baseball players), and targeted just this group, you were able to learn about the subject quickly and accurately.

That’s the real reason why qualitative research works. People are, by definition, subject matter experts in their own lives. And so, as you speak with people who have similar lives (similar ages, geographies, or cultural backgrounds, for example), you don’t need to talk to many of them to get to the right answers.

Harness cultural consensus by properly choosing participants


Cultural competence and cultural consensus work together to make qualitative research work. Define the subject area, find people that know a lot about that subject, and study just a handful of those people.

Still, at different stages in the process, teams can make critical missteps, often unsuspectingly, that can derail the study. At the start of a project, teams may not clearly define the subject area that they are studying. Without a clearly defined subject area, it becomes impossible to locate subject experts. And without subject experts, it’s impossible to achieve cultural consensus. For example, trying to study banking services, in its broadest sense, would be very difficult. Which banking services, and for whom? However, studying university graduates who manage student loans online is a tightly scoped study where the path to finding subject experts is clear.

Teams will sometimes combine different customer groups who aren’t all experts in the same subject. The temptation is to try and expand the impact of the study by covering a larger breadth of customers, but doing this actually breaks down cultural consensus and thus the accuracy of the research. Studying how customers purchase cars might lead you to speak with a range of people, from young Millennials to Gen X parents. In actuality, the ways that these groups buy cars are likely quite different. Trying to lump them in the same study would break cultural consensus.

Alternatively, teams may let the sample size bloat. The “more is better” temptation is strong. If recruiters locate extra participants who seem promising, it can be difficult to turn them down. And with too many participants, teams get bogged down in data, and it becomes difficult to synthesize the information.

With recruiting finalized, and field research complete, teams will sometimes take participant responses at face value without conducting analysis. This often stems from a desire to get to market faster. However, participants are often unable to directly communicate deeper motivations or emotions. It takes time to unpack the interviews. If you were to speak with consumers in the hopes of designing a better point-and-shoot camera, you might hear that you should increase the number of megapixels. Then, all of a sudden, the iPhone comes along and it turns out that no one even carries a point-and-shoot camera anymore.

Finally, with a set of insights in-hand, teams can be quick to share the findings without considering the additional support that quantitative research provides. When studies inform large-scale changes to an organization, such as a redesigned patient experience in the emergency room, having quantitative support can help build buy-in and provide further validation of the work.

Prepare your team with a solid foundation before embarking on a qualitative research study. As the work unfolds, you will be faced with competing priorities and visions. Knowing where you can bend, and where you need to stand your ground, is crucial.

Here’s what to do:

1. Clearly define what you want to study. Without defining the area of study, it’s not possible to find experts in that area, and without those experts, the research lacks credibility. So clearly define the subject area and what will make someone an expert. This could include basic demographics like gender and age, activities like shopping tendencies and pastimes, or mindsets such as religious beliefs and political leanings. Be wary of beginning the study without a focal question. Remember our example: university graduates who manage student loans online (rather than a generalized study on banking services).

2. Select a consistent group of participants. Bringing a consistent group of participants together will achieve cultural consensus, and this will increase the accuracy of the responses. With greater accuracy, the results can then be applied to the broader population that the study represents. Be wary of trying to slip in other customer segments (for example, a few dads into a study about how moms shop) to seem inclusive or to appease a stakeholder. If they don’t fit the group, it’s better to run two separate studies, each with a homogeneous group of participants, rather than sacrificing the credibility of the research.

3. Recruit about 8-12 people for each group. Griffin interviewed thirty people when she studied needs around food-carrying and storing devices. Her results suggest that 10-20 participants are ample. Romney, Batchelder, and Weller were able to achieve strong results with as few as just four participants. In the decades since then, practitioners have found that 8-12 people work well in creating a sample that is large enough to capture a range of needs, and reveal meaningful patterns about a subject, but also small enough to be feasible. The more this number bloats, the more expensive it becomes to run the study, the longer it will take, and the more confusing it can become to try and synthesize all the information gathered.

4. Spend time analyzing the interviews. Participants often struggle to directly communicate their underlying motivations and feelings. They may say one thing, but actually mean something very different. Remember that conducting the research is just one step in the process. It’s important to devote ample time to analyzing the results to push for a greater understanding of what’s going on. Be wary of taking soundbites at face value.

5. Use qualitative research, then quantitative research. If validity is critical, such as in building support for large-scale changes, follow up with quantitative samples. This “qual then quant” approach lets you figure out what the most important insights are before seeking further validation.

The best way to learn these principles is to go out and actually run a study. Intuition and point of view are important, but good practitioners know how the process works. Structure your plan and rally your team. Then find those subject experts, get out of the office, and chat with them. Walk in their shoes. See what their life is like. Doing so will shed light on your data, add strength to your business decisions, and position your company for resilience. And the beauty of ethnography is that you may just learn something about your own life as well.

Further Reading

1. A. Kimball Romney, Susan C. Weller, and William H. Batchelder. “Culture as Consensus: A Theory of Culture and Informant Accuracy.” American Anthropologist, New Series, Vol. 88, No. 2 (1986), pp. 313-338.

2. Abbie Griffin. “Functionally Integrating New Product Development.” PhD thesis, MIT Sloan School of Management, June 1989.

3. Abbie Griffin and John R. Hauser. “The Voice of the Customer.” Marketing Science Vol. 12, No. 1 (1993).

Dev Patnaik


Dev Patnaik is the CEO of Jump Associates, the leading independent strategy and innovation firm. He’s a board member of Conscious Capitalism. Dev has been a trusted advisor to CEOs at some of the world’s most admired companies, including Starbucks, Target, Nike, Universal and Virgin.