Breaking the Rules Part I: How Real-World Experience Has Led Me to Challenge “Good Research” Practices

Having worked for over three decades in research management positions…in ad agencies, for a financial services company, and for nearly 20 years now as a Design Researcher at Sundberg-Ferar, I feel I’ve learned a thing or two about “good” research. Ironically, I’m finding that the more experience I gain, the less I follow established market research guidelines. And there’s a reason for that…I believe I’m getting better results.

Let me give you some examples.

Guideline: In a focus group, begin by having panelists introduce themselves, to “break the ice.”

Reality: Why would you want to start your session with something that stresses people out? This seemingly cordial way of helping strangers loosen up is inherently flawed. Being suddenly forced to speak about yourself puts you in the spotlight. Your heart rate accelerates. Sweat beads form on your brow. You’re about to be judged by your peers. One panelist may not want to announce that she’s 39 years old and not married. Another isn’t thrilled to tell this group of strangers that he works in an unglamorous or low-paying occupation, or that he lives in a not-so-desirable community. Skip the introductions. Get to the fun stuff. The sense of relief in the room will be palpable.

Guideline: Do not recruit research respondents who are employed in marketing or media occupations, and certainly not anyone who works as a market researcher!

Reality: Why would you want to do that, particularly in Design Research or User Experience Research? People are people. Confidentiality issues aside, marketers and researchers can be bright, thoughtful and articulate people. When I’m investigating usability issues related to vending machines, or how people assemble a tub shower enclosure, or get in and out of a golf car, or use their garden tools, it doesn’t matter what they do for a living. (Unless, of course, they work in the vending machine, tub/shower, golf car or garden tool industry, in which case they should never have made it through the recruitment screening process.) So rest assured, marketing brethren: we embrace equal opportunity recruitment practices, and we’re happy to include you!

Guideline: When unclear about what a respondent/interviewee is attempting to communicate, under no circumstances offer potential interpretations. Simply continue to probe with statements like “Tell me more…” or “I’m not sure I understand…” until the matter is clarified.

Reality: Okay, this is a touchy one. I get it. There is a fine line between probing and leading. When, as an interviewer or moderator, I need clarification, I will use the standard probes like “Help me to better understand what you mean,” but sometimes the respondent is incapable of expressing himself or herself to the degree you’d like. After a second or third “Tell me more…” people get frustrated, both with you, for what they perceive to be your inability to understand them, and with themselves, for their inability to give you what you’re looking for.

I have been accused of “leading the witness” when I’ve said to a focus group participant: “So, if I’m understanding you correctly, it sounds like you’re saying ______ or that you’re feeling ______. Is that accurate, or isn’t that quite what you meant?” Many a research expert will tell you that once you’ve done that, the respondent is likely to acquiesce, to agree with your interpretation even if it’s not accurate, either to please you or because it’s easier than attempting to further explain themselves. I disagree. It’s been my experience that when you show genuine interest in the respondent, and state clearly that “I want to make sure I understand you correctly,” people are more than willing to respond with, “No, that’s not exactly what I meant, it’s more like this…” And voilà, I get it now! This technique is more appropriate in some situations than in others, but generally, we obtain deeper, more meaningful insights from a natural, conversational, person-to-person exchange than we do from the more clinical and detached “tell me more” approach.


Breaking the Rules Part II: How Real-World Experience Has Led Me to Challenge “Good Research” Practices

The research techniques and methodologies that we use at Sundberg-Ferar are myriad. In my previous article, I focused on questionable practices related to qualitative research — specifically with regard to focus groups and individual interviews. This time, I’m pointing the finger at quantitative research.

Being a Design Researcher, I tend to participate in every product or user experience study available to me. Call me a geek, but I enjoy seeing how other researchers design questionnaires: the wording they use, the responses they allow on closed-ended questions, and the rating scales they employ. These days, there is no shortage of such opportunities, as it seems everyone wants to track everything about what we do, where we go, and how we feel. What frustrates me is that much of what I’m seeing doesn’t make sense. Let me explain.

I’m sure you’ve experienced this. The cashier at your local grocery store mentions an online survey that she’d like you to take. The link is printed at the bottom of your receipt. When you get home, you think, why not? Maybe I will win the $500 in free groceries, or whatever. Now, if you are the type of person who makes it a point to complete such surveys, you will quickly find out that the questions in this survey are the same ones you were asked on the survey from the hardware store where you bought a garden hose, from the pharmacy where you purchased cough medicine, and from the gas station where you bought an energy drink and lottery tickets.

You see, long ago, somebody, somewhere determined that a “friendly greeting” and a “problem solved” and your strong “willingness to recommend” were highly correlated with your satisfaction with, and loyalty to, an institution. So now everybody asks those same questions. This, my friends, is lazy research. And to make matters worse, management (again, somebody somewhere) is using this data to determine salaries, bonuses, and staff promotion eligibility.

Look at how flawed this kind of research is.

Q. Did the cashier thank you for your business? (Yes or No)
Hmm. I don’t know. I wasn’t paying attention. Honestly, I can’t remember.
Okay, granted, sometimes they’ll give you an “I don’t recall” option, but not always. The system is set up so that you can’t complete the survey (and be entered in the sweepstakes) unless you provide a response to every question, so you pick one. No harm done, right? Well, unfortunately, if enough people randomly pick No, that cashier will be reprimanded, or worse.
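Here’s a quick back-of-the-envelope simulation of that problem (a hypothetical sketch; every number is assumed for illustration, not taken from any real survey). Suppose the cashier really does say thank you 95% of the time, but 40% of respondents weren’t paying attention and have to guess in order to finish the survey.

```python
import random

# Hypothetical sketch: a forced-choice Yes/No question with no
# "I don't recall" option. All rates are assumptions for illustration.

random.seed(42)

TRUE_THANK_RATE = 0.95  # assumed true rate of "thank you for your business"
DIDNT_NOTICE = 0.40     # assumed share of respondents who can't remember

def simulate_response() -> bool:
    """One survey answer: True means the respondent picked Yes."""
    if random.random() < DIDNT_NOTICE:
        return random.random() < 0.5          # forced guess: a 50/50 coin flip
    return random.random() < TRUE_THANK_RATE  # attentive respondent answers truthfully

responses = [simulate_response() for _ in range(10_000)]
measured = sum(responses) / len(responses)

print(f"True rate:     {TRUE_THANK_RATE:.0%}")  # 95%
print(f"Measured rate: {measured:.0%}")         # lands near 77%
```

In this toy scenario, the cashier’s measured score settles around 77% even though her true rate is 95%. That 18-point gap is pure noise from forced guessing, and it’s exactly the kind of gap that gets someone reprimanded.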

Here’s another one.
Q. How would you rate this store for the availability of the advertised sale items? (Highly Satisfied, Somewhat Satisfied, Somewhat Dissatisfied or Highly Dissatisfied)
Well, I have no idea. I came in for ice cream and hamburger buns. I don’t even know what items were being promoted. So, again, because I HAVE TO respond in order to complete the survey, I pick one of the response choices, and thereby perpetuate the accumulation of inaccurate data.

And I’ve saved the best one for last. This is my favorite.
Q. How likely are you to recommend this [store/business] to others? (Very Likely, Somewhat Likely, Neither Likely nor Unlikely, Somewhat Unlikely or Very Unlikely)
Let’s see…I was driving to a client’s office when I pulled into this fast food restaurant for a coffee. Although the transaction was fast and the drive-thru attendant was pleasant, the idea that I’m going to recommend this particular establishment to my friends, relatives or co-workers, none of whom live anywhere near here, is preposterous. Was the coffee any better? Did I receive any special treatment or perks that would compel me to tell someone to go out of their way to stop here? No.

I’d like to suggest that unless you include some contextual reference here, it’s not worth asking. The addition of a simple lead-in phrase such as “If you were asked about this location…” would certainly reduce respondent confusion and ambiguity, and greatly advance the usefulness of your data.

To put a finer point on this, here’s a true story.

I happened to know the head of housekeeping at a nearby hospital, and told him about the problem I had with some of the questions on the post-discharge survey I received after an overnight stay there. One, of course, was the “likelihood to recommend” question. The reason I picked that hospital for my procedure was that it was fairly close to my home and that it was “in network” for the health insurance provided by my employer. But since none of my close friends or relatives live anywhere near me, and they are all relatively healthy, it was highly unlikely that I would be recommending this hospital to anyone. Did my “Very Unlikely” response mean I wasn’t satisfied with my experience there? Not at all, but unfortunately, I knew it would probably be interpreted that way.

I went on to tell my friend that there were also questions on this survey about my satisfaction with the attitude and efficiency of the housekeeping staff. When I told him that I selected “Neither Satisfied nor Dissatisfied” because I was discharged early in the morning, before housekeeping made it to my room, he groaned. He said the organization regards anything less than “Highly Satisfied” as a negative…and scores like mine directly impact his performance evaluation and compensation, and those of his staff. I was infuriated. So, without mentioning any names, I wrote to the President of the hospital. In my letter, I pointed out the multiple issues I had with the survey, and noted specific questions where the response choices were insufficient or inappropriate. He wrote back and thanked me for my input, but explained that the matter was out of his hands. This questionnaire was being used throughout his hospital’s entire health care system, across the country, and was mandatory. Case closed.
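For anyone curious what that scoring rule does to the numbers, here’s a minimal sketch of “top-box” scoring, where anything below the highest rating is treated as a negative. The ratings below are hypothetical, and the scoring formula is my assumption, not the hospital’s actual method.

```python
# Minimal sketch of "top-box" scoring: only the highest rating counts as
# positive. The responses and the rule itself are assumptions for
# illustration, not the hospital's actual data or formula.

responses = [
    "Highly Satisfied",
    "Highly Satisfied",
    "Somewhat Satisfied",                  # counts against the score anyway
    "Neither Satisfied nor Dissatisfied",  # my answer: never saw housekeeping
    "Highly Satisfied",
]

top_box = sum(r == "Highly Satisfied" for r in responses)
score = top_box / len(responses)

print(f"Top-box score: {score:.0%}")  # 60%, though no one reported dissatisfaction
```

Under a rule like that, my “never saw them” answer drags the department’s score down exactly as far as a genuinely dissatisfied patient’s would.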

Bad research. Data points like this are being collected everywhere we eat, work, shop and play, and the results are being misinterpreted and misused. Why? Maybe because it’s the easiest way for corporate management to check off the “Voice of the Customer” box. Research practitioners: we need to stand up and oppose the implementation of this type of shoddy research. There’s a right way and a wrong way to design a questionnaire, and it doesn’t take an expert to recognize that, unfortunately, much of what we see out there today is representative of the latter.

More “Breaking the Rules” posts to come…