An Adaptive eHealth Information Literacy Assessment for Pre-Professional Health Students
Abstract
Background: The Research Readiness Self-Assessment is an online interactive tool that tests eHealth information literacy competencies. In addition to assessing skills and knowledge, it captures related competencies, such as self-reported beliefs about health media and their limitations. After assessment takers respond to all questions and complete the interactive exercises, they receive individually tailored, automated feedback about their eHealth information competencies, along with suggestions on how to further build these competencies.
Objective: To convert an existing tool, a hybrid of a test and a survey, into an adaptive test. The original instrument takes between 30 and 45 minutes to complete, and all participants answer the same questions in the same order. In contrast, an adaptive test will contain fewer items and take less time to complete. It will adjust to each participant's ability level and will not present questions that are too hard or too easy for that person. In most adaptive tests, the first item is picked at a medium difficulty level for the test population. A correct response to this item is followed by a more difficult item; an incorrect response is followed by an easier item (see the sketch below). If the difficulty of the first item is estimated with greater accuracy, fewer test items will be needed to measure a person's eHealth information skills with satisfactory precision. The intent is to use self-reported measures that are predictive of skills to improve the estimate of the difficulty of the first test item.
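The item-selection rule described above can be illustrated with a minimal sketch. All names here (Item, next_item, the item bank) are hypothetical; a production computerized adaptive test would typically use maximum-information item selection under an IRT model rather than this simple up/down step rule:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    difficulty: float  # Rasch item difficulty, in logits

def next_item(items, target, answered_ids, step=0.5, was_correct=None):
    """Return the unanswered item whose difficulty is closest to the target.

    The target starts at a medium difficulty; it moves up by `step` after
    a correct response and down by `step` after an incorrect one.
    """
    if was_correct is True:
        target += step
    elif was_correct is False:
        target -= step
    pool = [it for it in items if it.item_id not in answered_ids]
    return min(pool, key=lambda it: abs(it.difficulty - target))

# Example: start at medium difficulty (0 logits), then adapt after a correct answer.
bank = [Item(f"q{i}", d) for i, d in enumerate([-1.0, -0.5, 0.0, 0.5, 1.0], start=1)]
first = next_item(bank, target=0.0, answered_ids=set())                        # picks q3 (0.0)
harder = next_item(bank, first.difficulty, {first.item_id}, was_correct=True)  # picks q4 (0.5)
```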
Methods: The first step is to validate the original instrument and the underlying eHealth information competency model. Content validity is established through a literature review and subject matter expert (SME) evaluations. The SMEs are 25 medical librarians and health faculty who complete the original assessment and provide detailed feedback on its items and overall design. Construct validity evidence is gathered by comparing the scores of individuals at different levels of research experience. Reliability estimates are obtained for items in the following scales: objectively measured eHealth information skills; self-reported beliefs about health media; research and library experience; and computer skills. Inter-scale correlations are computed. A regression model is used to predict objectively measured skills from the self-reported scales. The Rasch model is used to estimate the difficulty of each test item.
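As an illustration of two of these analyses, the following sketch computes Cronbach's alpha for one scale and fits an ordinary-least-squares regression predicting objective skills from the self-reported scales. The data, variable names, and coefficients are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Reliability of one scale; `scores` is a respondents-by-items matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Illustrative data only: 200 respondents with three self-reported
# scale scores and an objectively measured skills score.
n = 200
beliefs = rng.normal(size=n)
experience = rng.normal(size=n)
computer = rng.normal(size=n)
skills = 0.5 * beliefs + 0.3 * experience + 0.2 * computer + rng.normal(scale=0.5, size=n)

# Reliability of a simulated 10-item dichotomous scale driven by `skills`.
item_bank = (skills[:, None] + rng.normal(size=(n, 10)) > 0).astype(float)
print("alpha:", round(cronbach_alpha(item_bank), 2))

# Ordinary least squares: predict objective skills from self-reported scales.
X = np.column_stack([np.ones(n), beliefs, experience, computer])
beta, *_ = np.linalg.lstsq(X, skills, rcond=None)
print("intercept and slopes:", np.round(beta, 2))
```

Rasch item difficulties would in practice be estimated with dedicated psychometric software; they are taken as given in the sketches here.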
Results and Conclusions: Our preliminary findings offer evidence supporting the content and construct validity of the original assessment; scale reliabilities range from .79 to .87. Experience, education level, computer skills, and especially beliefs about health media predict eHealth information literacy scores, which are measured objectively through a series of exercises that simulate health information searches and document evaluation. There is therefore an opportunity to design an adaptive test that can be completed in a short time because it selects the first test item based on what is already known about the assessment taker, namely their responses to the self-reported measures that predict eHealth information literacy scores.
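A minimal sketch of this proposed warm start, assuming the regression prediction can be mapped onto the Rasch logit scale (under the Rasch model, an item whose difficulty equals the taker's ability is answered correctly with probability .5, which makes it maximally informative as a first item). All names and numbers below are hypothetical:

```python
def predicted_ability(beta, beliefs, experience, computer):
    """Linear prediction of ability from self-reported scales."""
    return beta[0] + beta[1] * beliefs + beta[2] * experience + beta[3] * computer

def first_item(bank, theta_hat):
    """bank: iterable of (item_id, rasch_difficulty) pairs."""
    return min(bank, key=lambda item: abs(item[1] - theta_hat))

beta = [0.1, 0.5, 0.3, 0.2]  # hypothetical regression coefficients (see Methods sketch)
theta_hat = predicted_ability(beta, beliefs=0.8, experience=0.2, computer=0.5)
bank = [("q1", -1.0), ("q2", -0.5), ("q3", 0.0), ("q4", 0.5), ("q5", 1.0)]
print(first_item(bank, theta_hat))  # starts above medium difficulty: ('q4', 0.5)
```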

This work is licensed under a Creative Commons Attribution 3.0 License.