
Don’t Trust What Your Users are Saying

3 minute read | By: Chris Lazzarini

It’s been one week since you released the feature your customers have been requesting for over a year. As the product owner, you feel a sense of dread as you look at the analytics and see a low adoption rate. You’re left wondering, “Why is adoption so low if this feature was consistently requested in our product surveys?”

This scenario is common in the fast-paced world of Enterprise SaaS. Facing constant demands to improve the product, Product Managers (PMs) often rely on surveys as their primary source of voice-of-customer data because surveys can be sent out at high volume and generally have quick turnaround times. However, surveys can be misleading: they rely on what people say, and the answers are often constrained by how the survey is constructed.

Overreliance on “Say Data” leads to building products people say they want but don’t actually need. Say Data captures what users feel about a concept or how they guess they would act in a certain situation, and it is not a predictor of future behavior.[1] People often make confident but false predictions about their future behavior, especially when presented with a new and unfamiliar design.[2]

To ensure a product will be adopted post-launch, PMs should supplement their Say Data with “Do Data”. Do Data is behavioral evidence, and it can be a more reliable predictor of future behavior.[3] In the scenario above, for example, the PM should have been testing prototypes of the feature with customers throughout the product lifecycle to understand how they would use the new feature alongside their existing use of the product.

A McKinsey study found that most successful innovators periodically tested and validated customer preferences during the development process, which made them better able to identify and fix design concerns early and minimize project delays.[4]

"Don’t listen to what your customers are saying, watch what they’re doing."

- Chris Lazzarini

Observe how users interact with the product. The Nielsen Norman Group suggests there are many ways to run an optimal user test or field study, but ultimately, getting user data boils down to the basic rule of usability: watch what users actually do rather than relying on what they say they do.
The problem of people’s inability to predict their future behavior is compounded when relying on poorly designed surveys. Constructing a survey may seem easy on the surface, but there are many pitfalls to consider before creating one. SurveyMonkey lists the five most common mistakes as leading questions, loaded questions, compound questions, using absolutes, and ambiguous language.

When it comes to feature requests, scaling questions pose the greatest risk. If customers are asked to rate features on a scale from “very important” to “not important”, you will be left with some features labeled “very important” and others merely “important”. How do you distinguish between the two? Do you build only the very important features and leave the important ones in the backlog, or vice versa? These questions can only be answered by speaking with customers and observing their process.