Design by Numbers: A Data-Driven UX Process
Brian Rimel (@brianrimel), UX Consultant, OpenSource Connections
User-Centered Design
Internal enterprise applications
Access to users
Why Data?
Balancing the qualitative and the quantitative
You can’t always trust your users
Limited data doesn’t tell the whole story
The HEART Framework
Src: https://library.gv.com/how-to-choose-the-right-ux-metrics-for-your-product-5f46359ab5be
PULSE Metrics: Page views, Uptime, Latency, Seven-day active users, Earnings
Unnecessary Data Creates Noise
HEART Metrics: Happiness, Engagement, Adoption, Retention, Task Success
Happiness: Satisfaction or delight
System Usability Scale, Net Promoter Score
Engagement: Level of involvement
Number of visits per user per week
Adoption: New users/uses of a feature
Number of accounts created in the last 7 days
Retention: Rate at which existing users return
Percentage of seven-day active users that are still active 30 days later
Task Success: Traditional behavior metrics for efficiency, effectiveness, and error rate
Percentage of completion errors for a given task
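As a sketch of how these definitions translate into measurement, the Retention metric above (seven-day actives still active 30 days later) can be computed from a raw event log. The event data and field layout here are hypothetical, not from the talk:

```python
from datetime import date, timedelta

# Hypothetical event log: (user_id, event_date) pairs pulled from analytics.
events = [
    ("alice", date(2024, 1, 1)),
    ("alice", date(2024, 1, 31)),
    ("bob",   date(2024, 1, 2)),
]

def active_users(events, start, days=7):
    """Users with at least one event in the window [start, start + days)."""
    end = start + timedelta(days=days)
    return {user for user, day in events if start <= day < end}

def retention_rate(events, start, gap=30, window=7):
    """Share of users active in the first window who are active again in a
    same-sized window `gap` days later (the seven-day/30-day metric above)."""
    first = active_users(events, start, window)
    later = active_users(events, start + timedelta(days=gap), window)
    return len(first & later) / len(first) if first else 0.0
```

With the sample log, only alice returns in the later window, so retention is 50%.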
Goals
Signals
Metrics
Example: Welcome Wizard

             | Goals                                              | Signals                                  | Metrics
Happiness    | The user feels the welcome wizard is easy to use   | Level of user satisfaction               | Mean SUS Score
Engagement   | -                                                  | -                                        | -
Adoption     | -                                                  | -                                        | -
Retention    | -                                                  | -                                        | -
Task Success | The welcome wizard should be as simple as possible | The number of errors during the process  | Rate of error per step
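The Task Success metric above, rate of error per step, could be computed from usability-test session records along these lines (the session structure and step names are illustrative, not from the talk):

```python
# Hypothetical usability-test sessions: wizard steps attempted, plus the
# step names (illustrative) at which errors occurred.
sessions = [
    {"steps_completed": 4, "errors": ["account-details"]},
    {"steps_completed": 4, "errors": []},
    {"steps_completed": 2, "errors": ["account-details", "preferences"]},
]

def error_rate_per_step(sessions):
    """Total observed errors divided by total wizard steps attempted."""
    total_errors = sum(len(s["errors"]) for s in sessions)
    total_steps = sum(s["steps_completed"] for s in sessions)
    return total_errors / total_steps if total_steps else 0.0
```

Here, 3 errors over 10 attempted steps gives a 30% error rate per step.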
Goals should be SMART: Specific, Measurable, Attainable, Realistic, Time-Based
Normalize the Data
What does an increase in total active users tell us?
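The point of normalizing: a raw count of active users can rise simply because more accounts exist. Dividing by the account total separates engagement from growth. A minimal illustration with made-up numbers:

```python
def active_rate(active_users, total_accounts):
    """Active users as a share of all accounts, so growth in the account
    base can no longer masquerade as growth in engagement."""
    return active_users / total_accounts

# Made-up numbers: the raw active count rises (400 -> 450), yet the
# normalized rate shows engagement actually fell (40% -> 30%).
before = active_rate(400, 1_000)
after = active_rate(450, 1_500)
```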
A Limited-Data Process
Initial Metrics Gathering
Existing metrics influence feature priority
Kano Survey for feature-level satisfaction
Kano Survey
src: http://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results
Feature         | Must-be | One-Dimensional | Attractive | Unimportant | Undesired
Advanced Search | 87%     | 8%              | 4%         | 1%          | 0%
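Percentages like these come from classifying each respondent's functional/dysfunctional answer pair against the standard Kano evaluation table. A sketch with hypothetical responses; category names follow the classic model, where the deck's "Unimportant" and "Undesired" roughly correspond to Indifferent and Reverse:

```python
from collections import Counter

# Five-point Kano answer scale (exact wording varies slightly by source).
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

def kano_category(functional, dysfunctional):
    """Classify one respondent's (functional, dysfunctional) answer pair
    using the standard Kano evaluation table."""
    if functional == dysfunctional and functional in (LIKE, DISLIKE):
        return "Questionable"          # contradictory answers
    if functional == LIKE and dysfunctional == DISLIKE:
        return "One-dimensional"
    if functional == LIKE:
        return "Attractive"
    if functional == DISLIKE or dysfunctional == LIKE:
        return "Reverse"
    if dysfunctional == DISLIKE:
        return "Must-be"
    return "Indifferent"

# Hypothetical batch of 100 responses shaped like the Advanced Search row:
responses = ([(EXPECT, DISLIKE)] * 87 + [(LIKE, DISLIKE)] * 8 +
             [(LIKE, NEUTRAL)] * 4 + [(NEUTRAL, NEUTRAL)] * 1)
counts = Counter(kano_category(f, d) for f, d in responses)
```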
Prioritizing Features
Ranked list of ten features, with Advanced Search at #3 (other features elided)
From Kano Survey: 87% Must-be feature
From Usage Statistics: 22% Engagement/Week
Why the discrepancy?
Advanced Search: Goals & Metrics

             | Goals                                            | Signals                                              | Metrics
Happiness    | The user feels comfortable using advanced search | Level of confidence                                  | SUS Survey
Engagement   | The features enable consistent searching         | Number of advanced searches                          | Searches per day per user
Adoption     | -                                                | -                                                    | -
Retention    | -                                                | -                                                    | -
Task Success | The advanced search process is easily understood | User enters a query but does not complete the search | Percentage of abandoned searches
User Interview & Testing
Identify discrepancy between stated importance and usage metrics
Establish baseline metrics
Measure satisfaction: SUS Survey
System Usability Scale (SUS)
src: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
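SUS scoring is standardized: ten items on a 1-5 scale, where odd-numbered items contribute (answer - 1), even-numbered items contribute (5 - answer), and the sum is multiplied by 2.5 to yield 0-100. A mean SUS score like the ones reported below can then be computed across respondents:

```python
from statistics import mean

def sus_score(answers):
    """Score one 10-item SUS response (each answer on a 1-5 scale).
    Odd-numbered items contribute (answer - 1), even-numbered items
    contribute (5 - answer); the sum is scaled by 2.5 to 0-100."""
    assert len(answers) == 10
    total = sum(answer - 1 if i % 2 == 0 else 5 - answer
                for i, answer in enumerate(answers))
    return total * 2.5

def mean_sus(all_answers):
    """A study's headline number: the mean score across respondents."""
    return mean(sus_score(a) for a in all_answers)
```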
Review Findings

Metric            | Initial Testing
Mean SUS Score    | 56
Error Rate / Step | 21%
Okay, so where is the problem?
Let's map it!
Mapping the Journey
Develop Prototypes
User Testing of Prototype
Continue measuring baseline metrics
A/B Testing
Follow-up SUS Survey
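When comparing baseline against prototype in an A/B test, a two-proportion z-test is one common way to check that a difference in a rate (such as error rate per step) is not noise. A self-contained sketch, not a method prescribed by the talk:

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value).
    `successes` here would be, e.g., steps with an observed error."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

With 100 observations per variant, a 21% vs 12% error rate gives p ≈ 0.09: suggestive, but a reminder that sample size matters before declaring a winner.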
Results & Recommendations
Great! But what does this mean?
Context is critical to interpretation
Metric            | Initial Testing | Prototype Testing
Mean SUS Score    | 56              | 73
Error Rate / Step | 21%             | 12%
The Customer Journey
Long-Term Metrics
Tracking Engagement, Adoption, Retention, and Task Success over time
Periodic usability testing of the full application
The Data-Driven Process
Tools
Kibana Dashboard
“Extremely satisfied is like extremely edible.”
- Jared Spool
Key Takeaways
• Collaboratively define SMART goals
• Revisit and challenge goals
• Continuously monitor metrics over time
• Balance quantitative and qualitative measures
Questions
References
• Google HEART Metrics Study: http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36299.pdf
• Kano Survey: http://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results
• SUS Survey: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html