Here are three stories about how I leveraged product-usage data to influence my team's decision-making and product direction.
"Using data to support your design decisions is the golden ticket to getting support because it is the most scientific way to demonstrate that your designs are having the intended effect."
– Tom Greever, author of Articulating Design Decisions
It promotes buy-in from stakeholders.
It keeps cross-functional teams accountable to design goals.
Here's how I've used data in my product design cycle.
We have limited time: what should be prioritized, and why?
What could yield the most results?
What do users care about the most?
What does success mean to the business in terms of numbers?
What do we expect? Revenue, adoption, conversion rate?
What is the baseline?
How is the feature performing compared to our success metrics?
What are users interacting with the most?
What has the response from the community been?
In one quarter of 2024, there were 19 UI-improvement ideas listed as candidates for the roadmap. What should be prioritized?
Data collected from FullStory and a BigQuery dashboard surfaced the most-interacted-with features and integrations.
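For illustration only, here's a minimal sketch of the kind of BigQuery query that could surface the most-used features. The project, dataset, table, and column names are hypothetical, not our actual schema.

```python
# Hypothetical sketch: rank features by interaction volume in BigQuery.
# Project, dataset, table, and column names are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

query = """
    SELECT feature_name,
           COUNT(*) AS interactions,
           COUNT(DISTINCT user_id) AS unique_users
    FROM `my-analytics-project.product_events.ui_interactions`
    WHERE event_date BETWEEN '2024-01-01' AND '2024-03-31'
    GROUP BY feature_name
    ORDER BY interactions DESC
    LIMIT 20
"""

# Print the top features so the team can eyeball the ranking.
for row in client.query(query).result():
    print(f"{row.feature_name}: {row.interactions} interactions, "
          f"{row.unique_users} unique users")
```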
A scrape of Slack support channels exposed what customers and internal users were struggling with.
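A lightweight version of such a scrape could be as simple as tallying keyword mentions across a support channel's history. This sketch uses the slack_sdk library; the channel ID, token, and keyword list are placeholders, not our actual setup.

```python
# Hypothetical sketch: tally recurring complaint keywords in a Slack support channel.
# The channel ID, token handling, and keyword list are illustrative.
from collections import Counter
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token with channels:history scope
keywords = ["slow", "confusing", "broken", "can't find", "error"]
counts = Counter()

cursor = None
while True:
    # Page through the channel's message history.
    resp = client.conversations_history(channel="C0123456789",
                                        cursor=cursor, limit=200)
    for msg in resp["messages"]:
        text = msg.get("text", "").lower()
        for kw in keywords:
            if kw in text:
                counts[kw] += 1
    cursor = resp.get("response_metadata", {}).get("next_cursor")
    if not cursor:
        break

print(counts.most_common())
```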
As a result, the list was narrowed down to 8 items of various sizes, each of which made it onto the roadmap with a clear rationale.
The number of installations of the two Messaging apps supported by our product showed a vast difference in popularity between them, and pointed to where efforts should be directed.
Success metrics defined for a cross-product feature during a Lean UX workshop:
Increase the cross-usage between IRM products (from 30% to 60%)
Increase in billable IRM users
At least 10 Pro and/or Advanced orgs adopting this feature
Increase in migration from competitor
Admittedly, some of these metrics are vague and hard to measure, but they gave our team a starting point for digging into the data and finding answers.
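As an illustration of how one of these could be made measurable, here's a minimal sketch of a cross-usage calculation, defined as the share of active orgs using both IRM products. The data shapes and org names are made up.

```python
# Hypothetical sketch: cross-usage % = orgs active in both IRM products
# divided by orgs active in at least one. Data shape is illustrative.
def cross_usage_pct(product_a_orgs: set[str], product_b_orgs: set[str]) -> float:
    """Share of active orgs that use both products, as a percentage."""
    either = product_a_orgs | product_b_orgs
    both = product_a_orgs & product_b_orgs
    return 100 * len(both) / len(either) if either else 0.0

# Example: 3 of 10 active orgs use both products -> 30% baseline.
a = {f"org{i}" for i in range(8)}             # orgs using product A
b = {"org0", "org1", "org2", "org8", "org9"}  # orgs using product B
print(f"{cross_usage_pct(a, b):.0f}%")        # prints 30%
```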
Tracking the same feature after its launch allowed us to evaluate it against the success metrics we had established early in development.
Increase in cross-usage between two IRM products ✅
At least 10 Pro and/or Advanced orgs adopting this feature ✅
Increase in billable IRM users ❌
Increase in migration from competitor ❌
📝 Further qualitative user research exposed discoverability issues with this feature, which were documented as a proposal for the subsequent roadmap.
✨ To focus on the what, not the why, when looking into data. Interpreting it right off the bat can lead to faulty conclusions.
Time to research!
✨ To be comfortable when data shows an underperforming feature. It does not speak to my worth as a designer.
Time to iterate!
✨ To be humble and learn as I find biases and blind spots in my own data experiments. How can I learn from it and do better next time?
Time to experiment!