It’s been one week since you released the feature your customers have been requesting for over a year. As the product owner, you feel dread as you look at the analytics and see a low adoption rate. You’re left wondering, “Why is adoption so low when this feature came up in every one of our product surveys?”

This scenario is common in the fast-paced world of Enterprise SaaS. Under constant pressure to improve the product, Product Managers (PMs) often rely on surveys as their source of voice-of-customer data because surveys can be sent out at high volume and generally have quick turnaround times. However, surveys can be misleading: they rely on what people say, and answers are often constrained by how the survey is constructed.

Overreliance on “Say Data” leads to the development of products people say they want but don’t actually need. Say Data is what users vocalize about how they feel about a concept, or how they guess they would act in a certain situation, and it is not a predictor of future behavior.[1] People often make confident but false predictions about their future behavior, especially when presented with a new and unfamiliar design.[2]

To ensure a product will be adopted post-launch, PMs should supplement their Say Data with “Do Data”. Do Data is behavioral evidence, and it can be a far more reliable predictor of future behavior.[3] In the scenario above, for example, the PM should have been testing prototypes of the feature with customers throughout the product lifecycle to understand how they would use the new feature in conjunction with their current use of the product.

A McKinsey study found that the most successful innovators periodically tested and validated customer preferences during the development process, which made them better able to identify and fix design concerns early on and minimize project delays.[4]

"Don’t listen to what your customers are saying, watch what they’re doing."

- Chris Lazzarini

Observe how users interact with the product. Nielsen Norman Group suggests there are many ways to run an effective user test or field study, but ultimately, gathering user data boils down to the basic rules of usability: watch what people actually do, do not believe what people say they do, and definitely do not believe what people predict they may do in the future.[5]

The problem of people’s inability to predict their future behavior is compounded by poorly designed surveys. Constructing a survey may seem easy at a surface level, but there are many pitfalls to consider before creating one. SurveyMonkey lists the five most common mistakes as leading questions, loaded questions, compound questions, using absolutes, and unclear language.

When it comes to feature requests, rating-scale questions pose the greatest risk. If customers are asked to rate features on a scale from “very important” to “not important,” you will be left with some features labeled “very important” and others merely “important.” How do you distinguish between the two? Do you build only the “very important” features and leave the “important” ones in the backlog, or vice versa? These questions can only be answered by speaking with customers and observing their process.

[1] https://medium.com/@peerinsight/moving-from-say-to-do-fdeb7cea225d

[2] https://www.insightsassociation.org/article/caution-how-market-researchers-are-contributing-product-failure

[3] https://medium.com/@peerinsight/moving-from-say-to-do-fdeb7cea225d

[4] https://www.mckinsey.com/business-functions/operations/our-insights/the-path-to-successful-new-products

[5] https://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/

 
