2011 Winner: Collaborative evaluation of evidence-based point-of-care medical applications for mobile devices


Project Team

Robyn Butcher (MLIS) – Horizon Health Network
Kathleen Gadd (MLIS) – Dalhousie University and UNB
Martin MacKinnon (MD) – Dalhousie University and Horizon Health Network
Denise LeBlanc-Duchin (PhD) – Horizon Health Network and UNB

Report by Kathleen Gadd with files from the team

Project dates

Project planning: April 2011 – Nov 2011
Active research: Nov 2011 – Nov 2012
Data analysis, manuscript and report writing: Nov 2012 – ongoing
Poster presentations: 
Interprofessional Health Research day (Saint John NB): March 2012; CHLA (Hamilton ON): June 2012
Half-day medical apps workshop presentation at NLHLA: May 2012
Report date: Dec 20 2012

Description of the project

We designed a collaborative study to evaluate six point-of-care medical applications for mobile devices, and to analyze their use in making evidence-based decisions in practice. This study had two phases.

Phase 1 was an assessment of each app by a small team of librarians, using an evidence-based rubric. We graded each application in the areas of content, transparency, and sources of evidence. This resulted in each app having a numerical score in each category and overall.

Phase 2 was an assessment of the apps by medical residents completing Internal Medicine rotations. Because of the workload for participants, the difficulty of recruiting enough participants within our time constraints, and overall costs, only the top 4 apps from Phase 1 moved on to be evaluated by learners in Phase 2. The two lowest-scoring apps from Phase 1 were not evaluated by learners.

We hoped to recruit at least 30 Internal Medicine residents to evaluate the apps in terms of subjective usability and from a patient-care perspective. We loaded 5 iPod Touches with the apps so that up to 5 learners could be enrolled in the study at any one time (a 6th iPod Touch was available as a backup).

Residents enrolled in the study spent one week using each app during their time in the Internal Medicine Clinical Teaching Unit (typically a 4-8 week rotation overall, depending on the learner's year of training).

At the end of each week they were prompted via e-mail to fill out a questionnaire about their experience of using that application. Questions covered categories such as prior familiarity with medical apps, whether they would use the app again after the study, whether it answered their questions, whether it affected diagnosis, and its impact on their learning. Residents also filled out entrance and exit questionnaires that asked about their information-seeking behaviours and included open-ended questions about how others, such as patients and preceptors, felt about their use of apps.

The residents were assigned the apps in different sequences to minimize bias and confounding factors. Data were collected by an administrator with whom the residents already had contact during the normal course of their rotation.

We hoped to combine the numerical scores from Phases 1 and 2 to determine whether there was an ultimate app, preferred by both librarians and residents. We also hoped to see whether particular apps scored well or poorly in different categories when rated by different types of users.

Overview of results

Phase 1 – librarian evaluation of apps

Robyn and Kathleen independently assessed each app using an evidence-based rubric we created, which had also been widely distributed to other health information professionals for feedback. This process went well. We kept screen captures and supporting notes for each category, so any discrepancies were easy to resolve.

To evaluate the apps, we used information from the apps themselves or the vendors’ publicly accessible websites, since this would be the information an average user would have to make their own assessment of the app’s quality.

The rubric seemed to work well to separate the apps. As reported in our poster presentations, the apps received the following scores: Dynamed: 88 points; First Consult: 83 points; Medscape: 72 points; Pepid: 54 points; Epocrates: 41 points; Harrison’s Practice (Lexicomp): 23 points. The four highest-scoring apps moved on to Phase 2.

We also view the rubric itself, and the process of creating it, as a result of the study: it can now be used to evaluate new apps that have entered the market since our project began, such as the UpToDate app, and it can be improved by other health professionals who wish to use it.

Phase 2 – learner evaluation of apps

As stated above, we hoped to recruit 30 residents to evaluate the apps. This would have given the experiment enough statistical power to make claims about significant differences in the relative quality of the apps. Unfortunately, interest declined during the summer and fall, and recruitment became more difficult.

While a total of 17 participants completed the entrance and exit questionnaires, only 12 participants completed all four weekly app questionnaires. Some of these questionnaires were filled out weeks after the participant had finished their Internal Medicine rotation. Because so few weekly questionnaires were completed, and many were completed late, we feel the data they contain are unreliable and cannot be used to make any claims.

Fortunately, the entrance and exit questionnaires provide us with some very interesting results that, at the very least, reveal areas for future research. Information such as how many residents already owned mobile devices, how comfortable they were using them, how often they used them, and which information sources they preferred can be very interesting to information professionals in academic, hospital, and public libraries. Their written answers to the exit questionnaire also give us insight into their thoughts and beliefs about seeking answers to clinical questions while involved in patient care.

Overall, these results and the model of collaboration represent the value of our project, which we felt was time, effort, and money well-spent.

For those interested in conducting this type of research in the future, we have several theories on how participation could be improved. For example, while purchasing our own iPod Touches made the project easier and cheaper for us (and allowed any learner to participate), a mixed option in which some learners could have used their own devices might have encouraged participation and completion of the questionnaires.


Robyn and Kathleen have presented the librarian facet of the project on three occasions: locally at the Interprofessional Health Research Day in Saint John NB; regionally at the Newfoundland and Labrador Health Libraries Association meeting; and nationally at the Canadian Health Libraries Association conference. The entire project team is currently working on a manuscript we hope to have published in a peer-reviewed publication.

How CAUL funds were spent

The funds from the grant were spent to reimburse our principal investigator, Dr. Martin MacKinnon, for funds he had already spent on supplies for the project. Martin purchased 6 iPod Touch devices (~$250 each) as well as year-long subscriptions to medical apps: MD Consult (4 subscriptions @ $224.87 each), Epocrates (1 subscription @ $159), and Pepid (5 subscriptions @ $254.95 each). The numbers of subscriptions differ because MD Consult had become available via Dal by the time we activated the 5th iPod, and Epocrates did not progress to Phase 2 of our study. Medscape is free and Dynamed is available via Dal. We received trial licenses for LexiComp.

Dr. MacKinnon purchased the supplies with his credit card and was reimbursed via the hospital’s research department. The project would not have been possible without this grant.


Kathleen Gadd and Robyn Butcher have both changed jobs since the project was initiated.

Kathleen Gadd is now with Horizon Health Network as a health sciences librarian in Miramichi NB, and Robyn Butcher is now with University of Toronto’s Department of Family and Community Medicine as a Librarian. However, we continue to work with Martin MacKinnon and Denise LeBlanc-Duchin on writing a manuscript for publication. We are very pleased with the nature of our collaboration on this project.
