Eleven years ago, when I was working on my first ever tech project (the one that turned into my startup, taught me everything, and started my tech career), I remember finding out about analytics tools and being blown away. “You mean you can find out exactly where users are dropping off, which features they use the most, and essentially gauge how happy they are with specific features?”
As my career went on and I worked at various startups as a UX Designer with Product Management duties, I was surprised to see how underutilized these analytics tools were. Every startup seemed to have them installed, because they’re supposed to, but never actually used them. I thought, “oh, it’s because they’re just a scrappy startup.” Then I started working as a Product Manager with a Y Combinator team that had just been acquired by a bigger corporation, and ‘scrappy startup’ was no longer an excuse in my mind for why user analytics was underutilized. As one of only a few Product Managers for a banking app spanning iPhone, Android, mobile web, and desktop web, I realized there simply wasn’t enough time to balance all the agile meetings, the feature requests from C-level executives, developers, the marketing team, and app store user feedback, and still make data-driven product decisions.
For the last 5 years I’ve been working with various Fortune 500 companies and startups, and I’ve noticed that the answer is, at best, “yes, we track analytics and everything is installed; yes, we want to be data driven; but no, we haven’t actually looked at our data in a way that guides product decisions.” Maybe I just haven’t come across a company with a truly data-driven culture, but I have some theories as to why user analytics data is underutilized in product decision making:
Not enough bandwidth for product managers to analyze and make sense of data
Analytics tracking gets outdated with every new iteration and feature
Data doesn’t feel statistically significant
Too many questions on “how” to interpret the data
As part of my UX consulting work, I have been conducting usability tests and have found that a simple Net Promoter Score survey at the end of each test is an easy way to gauge improvement with each iteration of the clickable prototype or product being tested. For example, when the score jumps two points with every batch of five user tests, that’s a strong signal the latest iteration is better.
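For anyone unfamiliar with how the score itself is derived, NPS is the percentage of promoters (9s and 10s on the standard 0–10 “how likely are you to recommend this?” question) minus the percentage of detractors (0 through 6). Here is a minimal Python sketch of that calculation; the function name and sample scores are my own illustration, not taken from any tool mentioned here:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from responses on the standard 0-10 recommendation scale."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative scores from two batches of five usability tests:
iteration_1 = [9, 7, 6, 8, 9]    # 2 promoters, 1 detractor
iteration_2 = [9, 9, 7, 8, 10]   # 3 promoters, 0 detractors
print(nps(iteration_1), nps(iteration_2))  # 20 60
```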
So I came up with a hypothesis: what if products ran NPS surveys within various sections of their product, and maybe even compared the scores against each other? Since it’s a single data point, there isn’t much bandwidth needed for interpretation. Since NPS measures sentiment rather than behavior, it doesn’t need much traffic before the results feel meaningful. And since it’s simple, “how” to interpret the data doesn’t become a point of argument.
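To make that concrete, here is a rough sketch of what per-section comparison could look like; the section names and scores below are hypothetical, and this is just one way to slice the data, not how any particular tool does it:

```python
from collections import defaultdict

def nps(scores):
    # Same calculation as the sketch above: % promoters minus % detractors.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses, each tagged with the product section
# where the survey was shown.
responses = [
    ("onboarding", 9), ("onboarding", 10), ("onboarding", 8),
    ("checkout", 4), ("checkout", 6), ("checkout", 9),
    ("settings", 7), ("settings", 8),
]

by_section = defaultdict(list)
for section, score in responses:
    by_section[section].append(score)

# A section scoring well below the others is a candidate for attention.
for section, scores in sorted(by_section.items()):
    print(f"{section:<12} NPS {nps(scores):+d} (n={len(scores)})")
```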
I looked online and found some solutions that provide this functionality, but they were either too complex in terms of user experience or pricing, or they lacked the ability to measure NPS scores contextually.
So I built a bare-bones minimum viable product to test this hypothesis: Userglee.com, for actionable Net Promoter Scores. There’s a lot I can add to it and a lot of directions I can take it, but right now I’m looking for feedback on the MVP to see if there’s anything worth pursuing further.