Self-reported information is not perfect. But it is even less perfect in some cases than in others.
- Point: I can tell you from having conducted hundreds of studies with IT professionals that certain things get over-reported. Plans to invest in hot new technologies are a classic example. Not intentional, perhaps, but it happens.
- Point: Social desirability is a known issue as well. For example, asking people directly about exercise and dental hygiene is known to be problematic. Some studies suggest people over-report voting (saying they voted when they did not). Some research even suggests that the impact of social desirability on survey responses may vary by country, further complicating the interpretation of research results (Steenkamp, de Jong, and Baumgartner, Journal of Marketing Research, Vol. XLVI, 2009).
- Point: The survey process itself, according to some research, has an impact on behaviors, making self-reported data iffy. Surveying people about purchase intentions may actually change their behavior. After all, some people only form an intention when asked about it (you may not have thought about whether you planned to buy a new PC this year until you got a survey asking you about it). Further, some research suggests that a surveyed group that reports plans may be more likely to behave that way, a sort of self-validating effect that makes them less representative of the non-surveyed population (Chandon, Morwitz, and Reinartz, Journal of Marketing, April 2005).
So at minimum, it appears that purchase intent and socially desirable items are at particular risk of inaccurate self-reporting.
What does this mean for researchers?
When designing research projects we have to be vigilant:
- Are we asking about items that we can expect most people to answer accurately?
- For a given topic and research objective, are we OK knowing that there may be a big gap between perception and reality?
Given the limitations of self-reported data, survey research (especially about sensitive topics and purchase intent) may simply be the wrong methodology for some projects.
Luckily, there are alternatives. For example, there has been a huge increase in the amount of actual behavioral data available to researchers in recent years. Increasingly sophisticated CRM databases, purchase data, and observational data (such as Internet behaviors) provide access to actual behavior—what people are buying, what they are looking at, the sequences that precede a purchase, and more.
Another option is ethnography. Some researchers find that observing people can be more reliable, and more insightful, than asking them to self-report.
The bottom line: Intentions aside, survey respondents simply can’t accurately self-report some items of interest to researchers. Can they get us “close enough”?
Sure, if we are aware of the limits and apply the research with appropriate caveats.
Go ahead and ask people if they plan to purchase the latest techno widget in the next 6 months. The results say something about openness to marketing messages. But I wouldn’t use them for sales forecasting.
4 comments
Jeffrey Henning has posted a counterpoint to this article on his blog: http://blog.vovici.com/blog/bid/28654/Self-Reported-Data-Point-Counterpoint
Cathy Harrison (@virtualMR) has posted a “verdict” on this debate: http://blog.cmbinfo.com/bid/41165/Debating-the-Usefulness-of-Self-Reported-Market-Research-Data
This is an all-too-common viewpoint, but unfortunately it is a bit naive. All data have limitations, biases, and idiosyncrasies, and it requires training, judgment, and experience to understand which kinds of data are appropriate for a given application.
Regarding self-reported survey data, companies have been making accurate sales forecasts for new products and other similar applications with models like BASES and ASSESSOR for 40 years. These models routinely forecast sales a year or more into the future within error ranges of +/- 10-20%. But you have to know what you’re doing.
You can indeed sometimes forecast sales and other parameters with behavioral data. At other times, using behavioral data is a bit like trying to drive your car forward while looking out the back window. I’m sure someone can do it with practice, but I wouldn’t do it if I had other alternatives.
In our pricing practice we sometimes talk with companies that want to use historical behavioral data to make pricing decisions, and this can indeed work. But how do you set the price for a new product that’s never been on the market? Or decide whether to raise your price to a level where it’s never been before? There’s no history to work with. That’s when properly calibrated models, based on well-designed surveys collecting self-reported information, can be very helpful. And accurate.
Categorical statements about the usefulness of different classes of data (e.g., self-reported data is bad, qualitative is bad/misleading, etc.) don’t really help clients make better decisions or help to advance our profession. It’s a little like a carpenter saying, “Hammers are bad! Saws are much better!” Today you need to use ALL of the different data types at different times, in different roles, understanding their strengths, weaknesses, and limitations.
Rob
Couldn’t agree more: “Today you need to use ALL of the different data types at different times, in different roles, understanding their strengths, weaknesses, and limitations.” Alas, too often I find people automatically opting for a survey methodology, even when it is not the best choice. That ends up causing dissatisfaction with the research process itself, which is very unfortunate.
Clients also have to consider their individual markets. Example: Asking about purchase intentions in certain mature consumer product categories is very different from asking about them in high-tech categories.