Today is my 1 year anniversary working as a UX researcher at Reddit. In the distribution of active UX researchers, I'm probably at a medium amount of experience in the field: grad school starting in 2015, internships from 2017-2019, and full time from 2019 on. I've had the chance to work on a variety of products: developer tools powering the open source web, Medicare enrollment processes across industry leaders, commerce foundations for 3 billion+ users, and more. Each role taught me so much that I actively draw upon today, and each was a fundamental piece in shaping the way I approach UX research.

What is a good task completion rate? In a classic psychology response, it depends. And it's grown more complicated than it was 10 years ago.

I wanted to share one more thing about my recent proceedings paper from HFES: what information it provides that could actually help you measure your product's usability more accurately when using the Single Usability Metric (SUM).

In its original form, the SUM created a bias in completion rate calculations: the original method inflates scores from 50%-78% and deflates scores from 78%-100%. This bias is based on the article from MeasuringU that reports a typical task completion rate of 78%. Back when I wrote the paper, my main goal was to get the calculations to center more naturally around that 78% 'average' completion rate.

However, the range of 'good' completion rates now varies much more. The 78% figure comes from a 2011 article, and the average completion rate is surely different today given new application paradigms and unmoderated platforms for testing.

Our target % completion could be much lower: product teams sometimes build benchmarking experiences directly into their apps. Those users are less primed for the context of a 'study' and its tasks, so a reasonably good score will likely reflect a less attentive user sample.

Our target % completion could also be much higher: many teams now rely on self-reported completion rates. (I think this is because benchmarking software cannot easily verify app-based task completion the way it can web-based tasks.) MeasuringU reports an average of about 90% for self-reported completion rates. This means the original SUM method would artificially inflate scores over a 40% range (50%-90% completion)!

The calculation I provide in the article lets you tailor the SUM score to whatever target completion rate is most relevant for your context.

A Completion Rate Conundrum: Reducing Bias in the Single Usability Metric - Carl J. (Message me if you don't have access and would like my draft copy.)
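To make the tailored-target idea concrete, here is a minimal sketch in the spirit of the SUM approach: standardize an observed completion rate as a z-score against a benchmark and map it through the normal CDF, but with the target rate as a parameter rather than a hard-coded 78%. This is my own illustration, not the exact calculation from the paper; the function name and formula are assumptions for demonstration.

```python
from math import erf, sqrt

def completion_score(p_observed, n, p_target=0.78):
    """Map an observed completion rate to a 0-1 score relative to a
    chosen target rate (illustrative sketch, not the published method).

    p_observed: observed completion proportion (strictly between 0 and 1)
    n: number of users in the sample
    p_target: the benchmark to center on -- configurable instead of
              always using the classic 78% average.
    """
    # Standard error of a proportion estimated from n users
    se = sqrt(p_observed * (1 - p_observed) / n)
    # z-score of the observed rate relative to the chosen target
    z = (p_observed - p_target) / se
    # Normal CDF converts the z-score into a percentile-style score:
    # 0.5 means "exactly at target", above 0.5 means "better than target"
    return 0.5 * (1 + erf(z / sqrt(2)))
```

With a 90% self-reported target, a team observing 90% completion scores a neutral 0.5 instead of the inflated score the fixed 78% benchmark would produce.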