Systematic Literature Review of Usability and Feasibility Evaluation Studies for Mobile Healthcare Applications



Eva Appel*, University Health Network, Toronto, Canada
Lora Appel*, University Health Network, Toronto, Canada


Track: Research
Presentation Topic: Mobile & Tablet Health Applications
Presentation Type: Poster presentation
Submission Type: Single Presentation

Last modified: 2014-11-11

Abstract


Background: Mobile apps are changing the way doctors and patients approach health care. The IMS Institute for Healthcare Informatics reports that more than 40,000 healthcare apps are available for download from the iTunes App Store, and the FDA estimates that 500 million smartphone users will use some type of health app by 2015. Alongside well-adopted, frequently used apps, the overwhelming majority are opened once and then forgotten: analytics indicate that less than 20% of all downloaded apps stay on the device, and even fewer are used regularly. Given the large volume and rapid growth of health-related apps, formative evaluation guidelines, metrics, and methodologies are needed to ensure that key areas of evaluation are not overlooked and that results are valid and reliable, and to help compare and prioritize apps so that only those with the greatest likelihood of being adopted undergo further costly efficacy and effectiveness testing.

Objective: To determine whether the published literature proposes standards, frameworks, or validated measures to guide the design of usability, feasibility, and adoption evaluation studies for m-health apps. We performed a systematic literature review following the PRISMA statement. Our findings describe how apps have been evaluated to date and help inform a more rigorous design of future studies.

Methods: In January 2014, we searched Medline via Ovid for articles discussing the evaluation (usability, feasibility, or adoption) of mobile software applications designed for healthcare. Of 188 articles initially returned by the searches, 58 published in English were included in this study. Articles evaluating solely the functionality or clinical efficacy of interventions were excluded, as were meta-analyses. No restriction was placed on publication date; the earliest eligible article was published in 2003.

Results: Of the 58 papers included, 40% evaluated usability, 26% feasibility, and 10% adoption; the rest (24%) evaluated a combination of these. Of the papers that explicitly listed user groups, 41% targeted patients, 54% health-care providers, and only 5% family members or other users. Sample sizes ranged from 2 to 10,999 participants (median=20, mode=10). Seven unique data collection and evaluation methods were identified: surveys (63%), user scenarios (41%), analytics (29%), interviews (17%), focus groups (14%), observations (7%), and self-reported journals (7%). Sixty-five percent of the studies employed two or more methods, and 71% of those mixed qualitative and quantitative approaches. Twenty-four percent of the studies mentioned using theoretical models or frameworks, such as the Technology Acceptance Model (4), Diffusion of Innovations (2), RE-AIM (1), Zhang’s unified framework for usability, UFuRT (1), and the ISO 9241-11 usability standard (1); three papers proposed new frameworks.

Conclusions: Mobile apps promise to revolutionize healthcare, and a great deal is invested in their development. Our findings indicate that few studies explicitly define their concepts (such as feasibility or adoption), describing them only implicitly through outcome measures; even then, terminology is inconsistent, resulting in overlapping notions and incomparable measures. Usability testing is currently better defined, with a focus on three metrics: efficiency, effectiveness, and user satisfaction. Most studies report successful results, but the criteria for success are rarely explicitly stated and differ greatly across studies.
We suggest that the research community establish evaluation guidelines and methodologies, as well as a standardized set of terms and metrics (a framework), to produce better-quality, lower-cost evaluation studies for m-health apps. Standards such as CONSORT, PRISMA, and MOOSE are powerful examples of the enormous value a common language brings to the quality of reporting. There should be little room for ambiguity in the interpretation of science, whether in clinical research or in evaluations of technological interventions.
To the best of our knowledge, this is the first study to classify and summarize existing methods for evaluating the usability, feasibility, and adoption of mobile healthcare applications in a systematic literature review.




This work is licensed under a Creative Commons Attribution 3.0 License.