It’s been one week since you released the feature your customers have been requesting for over a year. As the product owner, you feel dread as you look at the analytics and see a low adoption rate. You’re left wondering, “Why is adoption so low when this feature was requested in every one of our product surveys?”
This scenario is common in the fast-paced world of Enterprise SaaS. Under constant pressure to improve the product, Product Managers (PMs) often rely on surveys as their primary source of voice-of-customer data because surveys can be sent out at high volume and generally have quick turnaround times. However, surveys can be misleading: they rely on what people say, and answers are often constrained by the construction of the survey itself.
Overreliance on “Say Data” leads to the development of products people say they want but don’t actually need. Say Data captures what users say they feel about a concept, or how they guess they would act in a certain situation, and it is not a predictor of future behavior. People often make confident but false predictions about how they will behave, especially when presented with a new and unfamiliar design.
To ensure a product will be adopted post launch, PMs should supplement their Say Data with “Do Data”. Do Data is behavioral evidence, and it is a more reliable predictor of future behavior. In the scenario above, for example, the PM should have been testing prototypes of the feature with customers throughout the product lifecycle to understand how customers would use the new feature alongside their current use of the product.
A McKinsey study found that the most successful innovators periodically tested and validated customer preferences during the development process, which made them better able to identify and fix design concerns early on and to minimize project delays.
"Don’t listen to what your customers are saying, watch what they’re doing."
Observe how the users are interacting with the product. Nielsen Norman Group suggests there are many ways to run an optimal user test or field study, but ultimately, getting user data boils down to the basic rules of usability:
- Watch what people actually do
- Do not believe what people say they do
- Definitely don’t believe what people predict they may do in the future
People’s inability to predict their own future behavior is compounded when you rely on poorly designed surveys. Constructing a survey may seem easy at a surface level, but there are many pitfalls to consider before creating one. SurveyMonkey lists the five most common mistakes as leading questions, loaded questions, compound questions, using absolutes, and language use.
When it comes to feature requests, scaling questions pose the greatest risk. If you ask customers to rate features on a scale from “very important” to “not important”, you will be left with some features labeled “very important” and others “important”. How do you distinguish between important and very important? Do you build only the very important features and leave the important ones in the backlog? Or vice versa? These questions can only be answered by speaking with customers and observing their process.