Empirical Decision Rules for Improving the Uncertainty Reporting of Small Sample System Usability Scale Scores
dc.contributor.author | Clark, Nicholas J. | |
dc.contributor.author | Dabkowski, Matthew F. | |
dc.contributor.author | Driscoll, Patrick J. | |
dc.contributor.author | Kennedy, Dereck | |
dc.contributor.author | Kloo, Ian | |
dc.contributor.author | Shi, Heidy | |
dc.date.accessioned | 2023-09-22T14:45:35Z | |
dc.date.available | 2023-09-22T14:45:35Z | |
dc.date.issued | 2021 | |
dc.description.abstract | The System Usability Scale (SUS) is a short, survey-based approach used to determine the usability of a system from an end-user perspective once a prototype is available for assessment. Individual scores are gathered using a ten-question survey with the survey results reported in terms of central tendency (sample mean) as an estimate of the system’s usability (the SUS study score), and confidence intervals (CIs) on the sample mean are used to communicate uncertainty levels associated with this point estimate. When the number of individuals surveyed is large, the SUS study scores and accompanying confidence intervals relying upon the central limit theorem for support are appropriate. However, when only a small number of users are surveyed, reliance on the central limit theorem falls short, resulting in CIs that suffer from parameter bound violations and interval widths that confound mappings to adjective and other constructed scales. These shortcomings are especially pronounced when the underlying SUS score data is skewed, as it is in many instances. This paper introduces an empirically based remedy for such small-sample circumstances, proposing a set of decision rules that leverage either an extended bias-corrected accelerated (BCa) bootstrap confidence interval (CI) or an empirical Bayesian credibility interval about the sample mean to restore and bolster subsequent CI accuracy. Data from historical SUS assessments are used to highlight shortfalls in current practices and to demonstrate the improvements these alternate approaches offer while remaining statistically defensible. A freely available, online application is introduced and discussed that automates SUS analysis under these decision rules, thereby assisting usability practitioners in adopting the advocated approaches. | |
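As a companion to the abstract, the following is a minimal sketch of a standard BCa bootstrap confidence interval on a small-sample SUS study score, the general technique the paper extends. The scores below are hypothetical, and SciPy (>= 1.7) is assumed to be available; this illustrates the base method, not the paper's extended BCa variant or its decision rules.

```python
import numpy as np
from scipy.stats import bootstrap

# Hypothetical SUS scores from n = 8 respondents (left-skewed,
# as SUS data often is); scores lie on the 0-100 SUS scale.
sus_scores = np.array([52.5, 70.0, 77.5, 80.0, 82.5, 85.0, 90.0, 92.5])

res = bootstrap(
    (sus_scores,),          # data is passed as a sequence of samples
    np.mean,                # statistic: the SUS study score (sample mean)
    confidence_level=0.95,
    method="BCa",           # bias-corrected and accelerated interval
    n_resamples=9999,
    random_state=0,         # fix the resampling for reproducibility
)
low, high = res.confidence_interval
print(f"SUS study score: {sus_scores.mean():.1f}, "
      f"95% BCa CI: ({low:.1f}, {high:.1f})")
```

Because BCa endpoints are percentiles of the bootstrap distribution of resampled means, they cannot fall outside the range of the observed scores, which avoids the parameter bound violations (e.g., CI limits above 100) that the abstract attributes to CLT-based intervals on small samples.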
dc.description.sponsorship | Department of Mathematical Sciences | |
dc.identifier.citation | Nicholas Clark, Matthew Dabkowski, Patrick J. Driscoll, Dereck Kennedy, Ian Kloo & Heidy Shi (2021) Empirical Decision Rules for Improving the Uncertainty Reporting of Small Sample System Usability Scale Scores, International Journal of Human–Computer Interaction, 37:13, 1191-1206, DOI: 10.1080/10447318.2020.1870831 | |
dc.identifier.doi | https://doi.org/10.1080/10447318.2020.1870831 | |
dc.identifier.issn | 1044-7318 | |
dc.identifier.issn | 1532-7590 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14216/683 | |
dc.publisher | International Journal of Human–Computer Interaction | |
dc.relation.ispartof | International Journal of Human–Computer Interaction | |
dc.subject | System Usability Scale (SUS) | |
dc.title | Empirical Decision Rules for Improving the Uncertainty Reporting of Small Sample System Usability Scale Scores | |
dc.type | journal-article | |
local.peerReviewed | Yes |