Quantifying the User Experience, 1st Edition

Practical Statistics for User Research

Quantifying the User Experience, 1st Edition, by Jeff Sauro and James Lewis. ISBN 9780123849687


Morgan Kaufmann




235 × 191 mm

Finally, the book usability practitioners need to confidently quantify, qualify, and justify their data

Print Book + eBook

USD 59.94 (list price USD 99.90; buy both together and save 40%)

Print Book

In Stock
USD 49.95

eBook

VST (VitalSource Bookshelf) format
DRM-free included formats: EPUB, Mobi (for Kindle), PDF
USD 49.95

Key Features

  • Provides practical guidance on solving usability testing problems with statistics for any project, including those using Six Sigma practices
  • Shows practitioners which test to use, why each test works, and best practices in application, along with easy-to-use Excel formulas and web calculators for analyzing data
  • Recommends ways for practitioners to communicate results to stakeholders in plain English
  • Resources and tools available at the authors’ site: http://www.measuringu.com/




Quantifying the User Experience: Practical Statistics for User Research offers a practical guide for using statistics to solve quantitative problems in user research. Many designers and researchers view usability and design as qualitative activities, which do not require attention to formulas and numbers. However, usability practitioners and user researchers are increasingly expected to quantify the benefits of their efforts. The impact of good and bad designs can be quantified in terms of conversions, completion rates, completion times, perceived satisfaction, recommendations, and sales. The book discusses ways to quantify user research; summarize data and compute margins of error; determine appropriate sample sizes; standardize usability questionnaires; and settle controversies in measurement and statistics. Each chapter concludes with a list of key points and references. Most chapters also include a set of problems and answers that enable readers to test their understanding of the material. This book is a valuable resource for those engaged in measuring the behavior and attitudes of people during their interaction with interfaces.
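One signature method the book covers is the adjusted-Wald ("add two successes and two failures") confidence interval for a completion rate (Chapter 3). As an illustration of the kind of computation involved, here is a minimal Python sketch; it is an independent sketch of the standard formula, not the authors' Excel calculator or web tools.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald confidence interval for a completion rate.

    z = 1.96 gives an approximately 95% interval. The adjustment adds
    z^2/2 successes and z^2 trials to the observed counts (roughly two
    successes and four trials at 95%, hence the nickname), which keeps
    the interval accurate even for the small samples common in
    usability testing.
    """
    p_adj = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    # Clamp to the valid range for a proportion.
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 9 of 10 users complete a task.
low, high = adjusted_wald_ci(9, 10)
```

With 9 of 10 completions, the interval runs from roughly 57% to 100%, a concrete reminder of how wide the plausible range remains at typical usability-test sample sizes.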


Usability and user experience practitioners, software and web-development professionals, marketers, information architects, interaction designers, business analysts, market researchers, and students in these and related fields

Jeff Sauro

Jeff Sauro is a Six Sigma-trained statistical analyst and founding principal of Measuring Usability LLC. For fifteen years he has conducted usability and statistical analysis for companies such as PayPal, Walmart, Autodesk, and Kelley Blue Book, and has worked for companies such as Oracle, Intuit, and General Electric. Jeff has published over fifteen peer-reviewed research articles and is on the editorial board of the Journal of Usability Studies. He is a regular presenter and instructor at the Computer Human Interaction (CHI) and Usability Professionals Association (UPA) conferences. Jeff received his master's degree in Learning, Design and Technology from Stanford University with a concentration in statistical concepts. Prior to Stanford, he received his B.S. in Information Management & Technology and B.S. in Television, Radio and Film from Syracuse University. He lives with his wife and three children in Denver, CO.

Affiliations and Expertise

Usability Metrics and Statistical Analyst, Measuring Usability LLC, CO, USA

James Lewis

Dr. James R. (Jim) Lewis is a senior human factors engineer (at IBM since 1981) with a current focus on the design and evaluation of speech applications and is the author of Practical Speech User Interface Design. He is a Certified Human Factors Professional with a Ph.D. in Experimental Psychology (Psycholinguistics), an M.A. in Engineering Psychology, and an M.M. in Music Theory and Composition. Jim is an internationally recognized expert in usability testing and measurement, contributing (by invitation) the chapter on usability testing for the 3rd and 4th editions of the Handbook of Human Factors and Ergonomics and presenting tutorials on usability testing and metrics at various professional conferences. Jim is an IBM Master Inventor with 77 patents issued to date by the US Patent Office. He currently serves on the editorial boards of the International Journal of Human-Computer Interaction and the Journal of Usability Studies, and is on the scientific advisory board of the Center for Research and Education on Aging and Technology Enhancement (CREATE). He is a member of the Usability Professionals Association (UPA), the Human Factors and Ergonomics Society (HFES), the Association for Psychological Science (APS) and the American Psychological Association (APA), and is a 5th degree black belt and certified instructor with the American Taekwondo Association (ATA).

Affiliations and Expertise

Senior Human Factors Engineer, IBM, FL, USA

Table of Contents


About the Authors

Chapter 1 Introduction and How to Use This Book


The Organization of This Book

How to Use This Book

What Test Should I Use?

What Sample Size Do I Need?

You Don’t Have to Do the Computations by Hand

Key Points from the Chapter


Chapter 2 Quantifying User Research

What Is User Research?

Data from User Research

Usability Testing

Sample Sizes

Representativeness and Randomness

Data Collection

Completion Rates

Usability Problems

Task Time


Satisfaction Ratings

Combined Scores

A/B Testing

Clicks, Page Views, and Conversion Rates

Survey Data

Rating Scales

Net Promoter Scores

Comments and Open-ended Data

Requirements Gathering

Key Points from the Chapter


Chapter 3 How Precise Are Our Estimates? Confidence Intervals


Confidence Interval = Twice the Margin of Error

Confidence Intervals Provide Precision and Location

Three Components of a Confidence Interval

Confidence Interval for a Completion Rate

Confidence Interval History

Wald Interval: Terribly Inaccurate for Small Samples

Exact Confidence Interval

Adjusted-Wald Interval: Add Two Successes and Two Failures

Best Point Estimates for a Completion Rate

Confidence Interval for a Problem Occurrence

Confidence Interval for Rating Scales and Other Continuous Data

Confidence Interval for Task-time Data

Mean or Median Task Time?

Geometric Mean

Confidence Interval for Large Sample Task Times

Confidence Interval Around a Median

Key Points from the Chapter


Chapter 4 Did We Meet or Exceed Our Goal?


One-Tailed and Two-Tailed Tests

Comparing a Completion Rate to a Benchmark

Small-Sample Test

Large-Sample Test

Comparing a Satisfaction Score to a Benchmark

Do at Least 75% Agree? Converting Continuous Ratings to Discrete

Comparing a Task Time to a Benchmark

Key Points from the Chapter


Chapter 5 Is There a Statistical Difference between Designs?


Comparing Two Means (Rating Scales and Task Times)

Within-subjects Comparison (Paired t-test)

Comparing Task Times

Between-subjects Comparison (Two-sample t-test)

Assumptions of the t-tests

Comparing Completion Rates, Conversion Rates, and A/B Testing



Key Points from the Chapter


Chapter 6 What Sample Sizes Do We Need? Part 1: Summative Studies


Why Do We Care?

The Type of Usability Study Matters

Basic Principles of Summative Sample Size Estimation

Estimating Values

Comparing Values

What Can I Do to Control Variability?

Sample Size Estimation for Binomial Confidence Intervals

Binomial Sample Size Estimation for Large Samples

Binomial Sample Size Estimation for Small Samples

Sample Size for Comparison with a Benchmark Proportion

Sample Size Estimation for Chi-Square Tests (Independent Proportions)

Sample Size Estimation for McNemar Exact Tests (Matched Proportions)

Key Points from the Chapter


Chapter 7 What Sample Sizes Do We Need? Part 2: Formative Studies


Using a Probabilistic Model of Problem Discovery to Estimate Sample Sizes for Formative User Research

The Famous Equation: P(x ≥ 1) = 1 − (1 − p)^n

Deriving a Sample Size Estimation Equation from 1 − (1 − p)^n

Using the Tables to Plan Sample Sizes for Formative User Research

Assumptions of the Binomial Probability Model

Additional Applications of the Model

Estimating the Composite Value of p for Multiple Problems or Other Events

Adjusting Small Sample Composite Estimates of p

Estimating the Number of Problems Available for Discovery and the Number of Undiscovered Problems

What Affects the Value of p?

What Is a Reasonable Problem Discovery Goal?

Reconciling the “Magic Number 5” with “Eight is not Enough”

Some History: The 1980s

Some More History: The 1990s

The Derivation of the “Magic Number 5”

Eight Is Not Enough: A Reconciliation

More About the Binomial Probability Formula and its Small Sample Adjustment

Origin of the Binomial Probability Formula

How Does the Deflation Adjustment Work?

Other Statistical Models for Problem Discovery

Criticisms of the Binomial Model for Problem Discovery

Expanded Binomial Models

Capture–recapture Models

Why Not Use One of These Other Models When Planning Formative User Research?

Key Points from the Chapter


Chapter 8 Standardized Usability Questionnaires


What Is a Standardized Questionnaire?

Advantages of Standardized Usability Questionnaires

What Standardized Usability Questionnaires Are Available?

Assessing the Quality of Standardized Questionnaires: Reliability, Validity, and Sensitivity

Number of Scale Steps

Poststudy Questionnaires

QUIS (Questionnaire for User Interaction Satisfaction)

SUMI (Software Usability Measurement Inventory)

PSSUQ (Post-study System Usability Questionnaire)

SUS (System Usability Scale)

Experimental Comparison of Poststudy Usability Questionnaires

Post-Task Questionnaires

ASQ (After-scenario Questionnaire)

SEQ (Single Ease Question)

SMEQ (Subjective Mental Effort Question)

ER (Expectation Ratings)

UME (Usability Magnitude Estimation)

Experimental Comparisons of Post-task Questionnaires

Questionnaires for Assessing Perceived Usability of Websites

WAMMI (Website Analysis and Measurement Inventory)

SUPR-Q (Standardized Universal Percentile Rank Questionnaire)

Other Questionnaires for Assessing Websites

Other Questionnaires of Interest

CSUQ (Computer System Usability Questionnaire)

USE (Usefulness, Satisfaction, and Ease of Use)

UMUX (Usability Metric for User Experience)

HQ (Hedonic Quality)

ACSI (American Customer Satisfaction Index)

NPS (Net Promoter Score)

CxPi (Forrester Customer Experience Index)

TAM (Technology Acceptance Model)

Key Points from the Chapter


Chapter 9 Six Enduring Controversies in Measurement and Statistics


Is It Okay to Average Data from Multipoint Scales?

On One Hand

On the Other Hand

Our Recommendation

Do You Need to Test at Least 30 Users?

On One Hand

On the Other Hand

Our Recommendation

Should You Always Conduct a Two-Tailed Test?

On One Hand

On the Other Hand

Our Recommendation

Can You Reject the Null Hypothesis When p > 0.05?

On One Hand

On the Other Hand

Our Recommendation

Can You Combine Usability Metrics into Single Scores?

On One Hand

On the Other Hand

Our Recommendation

What If You Need to Run More than One Test?

On One Hand

On the Other Hand

Our Recommendation

Key Points from the Chapter


Chapter 10 Wrapping Up


Getting More Information

Good Luck!

Key Points from the Chapter


Appendix: A Crash Course in Fundamental Statistical Concepts


Types of Data

Populations and Samples


Measuring Central Tendency



Geometric Mean

Standard Deviation and Variance

The Normal Distribution


Area Under the Normal Curve

Applying the Normal Curve to User Research Data

Central Limit Theorem

Standard Error of the Mean

Margin of Error


Significance Testing and p-Values

How Much Do Sample Means Fluctuate?

The Logic of Hypothesis Testing

Errors in Statistics

Key Points from the Appendix


Quotes and Reviews

"Quantifying the User Experience will make a terrific textbook for any series of UX research courses…I highly recommend this book to anyone who wants to integrate quantitative data into their UX practice."--Technical Communication, May 2013
"…as a whole, it provides a pragmatic approach to quantifying UX, without oversimplifying or claiming too much. It delivers what it promises. This book is valuable for both practitioners and students, in virtually any discipline. It can help psychologists transfer their statistical knowledge to UX practice, practitioners quickly assess their envisioned design and analysis, engineers demystify UX, and students appreciate UX’s merits."--ComputingReviews.com, March 19, 2013
"The most unique contributions of this book are the logic and practicality used to describe the appropriate application of those measures…Sauro and Lewis strike a perfect balance between the complexity of statistical theory and the simplicity of applying statistics practically. Whether you wish to delve deeper into the enduring controversies in statistics, or simply wish to understand the difference between a t-test and Chi-square, you will find your answer in this book. Quantifying the User Experience is an invaluable resource for those who are conducting user research in industry."--User Experience, Vol. 13, Issue 1, 1st Quarter
"Written in a conversational style for those who measure behavior and attitudes of people as they interact with technology interfaces, this guide walks readers through common questions and problems encountered when conducting, analyzing, and reporting on user research projects using statistics, such as problems related to estimates and confidence intervals, sample sizes, and standardized usability questionnaires. For readers with varied backgrounds in statistics, the book includes discussion of concepts as necessary and gives examples from real user research studies. The book begins with a background chapter overviewing common ways to quantify user research and a review of fundamental statistical concepts. The material provides enough detail in its formulas and examples to let readers do all computations in Excel, and a website offers an Excel calculator for purchase created by the authors, which performs all the computations covered in the book. An appendix offers a crash course on fundamental statistical concepts."--Reference and Research Book News, August 2012, page 186-7

