
BMJ Open

How context affects electronic health record-based test result follow-up: a mixed-methods evaluation

Shailaja Menon,1 Michael W Smith,1 Dean F Sittig,2 Nancy J Petersen,1 Sylvia J Hysong,1 Donna Espadas,1 Varsha Modi,1 Hardeep Singh1

To cite: Menon S, Smith MW, Sittig DF, et al. How context affects electronic health record-based test result follow-up: a mixed-methods evaluation. BMJ Open 2014;4:e005985. doi:10.1136/bmjopen-2014-005985

► Prepublication history and additional material are available. To view please visit the journal (doi:10.1136/bmjopen-2014-005985).

Received 30 June 2014; Revised 8 September 2014; Accepted 11 September 2014


1Department of Medicine, Baylor College of Medicine, Center for Innovations in Quality, Effectiveness and Safety, the Michael E. DeBakey Veterans Affairs Medical Center and the Section of Health Services Research, Houston, Texas, USA

2University of Texas School of Biomedical Informatics and the UT-Memorial Hermann Center for Healthcare Quality & Safety, Houston, Texas, USA

Correspondence to: Dr Hardeep Singh


ABSTRACT

Objectives: Electronic health record (EHR)-based alerts can facilitate transmission of test results to healthcare providers, helping ensure timely and appropriate follow-up. However, failure to follow-up on abnormal test results (missed test results) persists in EHR-enabled healthcare settings. We aimed to identify contextual factors associated with facility-level variation in missed test results within the Veterans Affairs (VA) health system.

Design, setting and participants: Based on a previous survey, we categorised VA facilities according to primary care providers' (PCPs') perceptions of low (n=20) versus high (n=20) risk of missed test results. We interviewed facility representatives to collect data on several contextual factors derived from a sociotechnical conceptual model of safe and effective EHR use. We compared these factors between facilities categorised as low and high perceived risk, adjusting for structural characteristics.

Results: Facilities with low perceived risk were significantly more likely to use specific strategies to prevent alerts from being lost to follow-up (p=0.0114). Qualitative analysis identified three high-risk scenarios for missed test results: alerts on tests ordered by trainees, alerts 'handed off' to another covering clinician (surrogate clinician), and alerts on patients not assigned in the EHR to a PCP. Test result management policies and procedures to address these high-risk situations varied considerably across facilities.

Conclusions: Our study identified several scenarios that pose a higher risk for missed test results in EHR-based healthcare systems. In addition to implementing provider-level strategies to prevent missed test results, healthcare organisations should consider implementing monitoring systems to track missed test results.

Strengths and limitations of this study

Effectiveness of test results management in electronic health record (EHR)-enabled settings might be influenced by several sociotechnical factors, which have not been examined in detail before.

This study uses a mixed-methods approach to examine the role of several sociotechnical factors involved in 'missed' abnormal test results.

Several generalisable high-risk scenarios for missed test results emerged.

Certain test management practices described in our study might only apply to Veterans Affairs facilities, potentially limiting their widespread generalisability.


INTRODUCTION

Electronic health record (EHR) systems can potentially reduce communication problems associated with paper-based transmission of test results.1 2 Computer-based test results can be transmitted securely and instantaneously to providers' EHR inboxes as 'alerts', reducing turnaround time to follow-up.3

Although EHRs appear to reduce the risk of missed test results,2 4 5 they do not eliminate the problem.2 3 6 Lack of timely follow-up of test results remains a major patient safety concern in most healthcare organisations.7-9 Previous work has shown that test result follow-up failures can be traced to ambiguity among providers about responsibility for follow-up,10-12 perceived 'information overload' among providers who receive large amounts of information electronically,13 and the concurrent use of paper-based and EHR systems to order and report test results.1 14 Other test result management practices may also facilitate or thwart timely follow-up. For instance, we found remarkable differences in rates of abnormal pathology result follow-up between two US Department of Veterans Affairs (VA) healthcare facilities, despite their use of a common EHR system.15 Although some variation in test result follow-up can be attributed to individual providers,12 system factors, such as organisational policies and procedures, are likely to play a substantial role.10 16 For example, in organisations using EHRs, the effectiveness of test result management may be influenced by technical factors, such as hardware and software, as well as non-technical factors, such as organisational policies, procedures and culture. These 'sociotechnical' factors include factors related to EHR technology, as well as non-technical issues at the organisational, provider and clinical-workflow levels.17 Thus far, organisation-level or facility-level information about test results management practices is poorly documented and understood, but knowledge of local organisational context may be useful in understanding organisation-wide vulnerabilities and in explaining why some healthcare settings may have fewer missed test results than others. Our study objective was to identify facility-level contextual factors that increase or decrease the risk of missed test results. Our contextual factors were derived from a sociotechnical conceptual model used16 in patient safety research in EHR-based settings; hereafter, we refer to them as sociotechnical factors.

METHODS

Study design

We used a mixed-methods approach to compare VA facilities deemed at higher and lower risk for missed test results on a variety of sociotechnical variables. Conceptually, we derived the sociotechnical variables from an eight-dimension sociotechnical model previously used by our team in EHR-related safety research, including test results management (see figure 1).17 18 The model's dimensions include both technological and non-technological dimensions (such as human, process and organisational factors)19 relevant to the study of EHRs and patient safety. We classified higher and lower risk facilities based on results of a previous survey of VA primary care providers (PCPs) in which respondents provided their perceptions of missed test results (see 'Facility selection' below).13 17 We collected data through interviews with representatives from participating facilities after obtaining approval from our local Institutional Review Board.


Based on a nationwide study of all VA facilities, we selected 40 facilities (see Facility selection for details) for our analysis. The VA has had a comprehensive EHR in place at all its facilities for over a decade.20 Most routine and abnormal laboratory and imaging test results are communicated through a notification system in the EHR known as the 'View Alert' system.12 Regional and facility-level policies and committees provide guidance for use of the system, including which specific test result notifications (alerts) are mandatory (ie, unable to be 'turned off' by providers21) and which may be customised by individual providers. Facilities also have flexibility to create certain test result management policies and procedures (such as which test results would warrant verbal notification to ordering providers) to address their local needs and contexts.

Phase I: quantitative study

Facility selection

Because we were interested primarily in facility-level differences in alert management practices, we used a three-step process to select facilities from which to recruit participants.

Figure 1 Eight-dimensional sociotechnical model of safe and effective electronic health record use.

Step 1: Calculating perceived vulnerability. We conducted a cross-sectional, web-based survey of all VA PCPs (N=5290) from June 2010 through November 2010. The survey content was guided by our eight-dimensional sociotechnical model16 and assessed PCPs' perceptions of multiple facets of EHR-based test-result notifications. The survey was developed by a multidisciplinary team who wrote and refined items using input from subject-matter experts and then pilot tested the survey for readability, clarity and ease of completion. Details of the survey development are published elsewhere.17 We classified facilities on the basis of PCPs' responses to two items in this survey17: "I missed alerts that led to delayed care" and "The alert system makes it possible for providers to miss alerts." Both survey items were rated on a five-point Likert scale from 'strongly agree' to 'strongly disagree'. Responses to these two questions were positively correlated13 with responses pertaining to information overload,11 22 23 24 which itself is related to safety, system performance, and organisational and communication practices.25 We calculated the mean of the two question scores to create an aggregate score of perceived vulnerability to missed test results. We sorted facilities by perceived vulnerability score and designated those with a score in the top 30% (3.315 or above on a five-point scale) and bottom 30% (2.947 or lower) as low and high perceived risk, respectively.
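As an illustration of this scoring step, a short script can compute the aggregate score per facility and take the top and bottom 30% by rank. The facility IDs and item means below are invented for the sketch (the actual study used the fixed cut points of 3.315 and 2.947 reported above).

```python
import statistics

# Illustrative per-facility mean ratings on the two survey items
# (five-point scale; higher = stronger disagreement that alerts are
# missed). Facility IDs and values are invented for this sketch.
facility_item_means = {
    "F01": (3.6, 3.5), "F02": (2.7, 2.9), "F03": (3.4, 3.3),
    "F04": (2.5, 2.6), "F05": (3.1, 3.0), "F06": (3.9, 3.7),
    "F07": (2.9, 3.1), "F08": (3.3, 3.4), "F09": (2.6, 2.8),
    "F10": (3.5, 3.6),
}

# Aggregate perceived-vulnerability score = mean of the two items
scores = {f: statistics.mean(v) for f, v in facility_item_means.items()}

# Rank facilities and label the top and bottom 30%
ranked = sorted(scores, key=scores.get, reverse=True)
k = round(len(ranked) * 0.30)
low_perceived_risk = ranked[:k]     # highest scores: providers disagree that alerts are missed
high_perceived_risk = ranked[-k:]   # lowest scores: providers agree that alerts are missed
print(low_perceived_risk, high_perceived_risk)
```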

Step 2: Adjusting for site characteristics. We controlled for facility-level structural characteristics using the 'nearest neighbour' methodology for creating peer groups for healthcare facilities.26 Our criteria for peer grouping by facility complexity included patient volume, academic affiliation, disease burden and patient reliance on VA for healthcare, care delivery structures, medical centre infrastructure and community environment.26

Step 3: Prioritising facility pairs. We generated a list of potential pairs of high and low perceived risk facilities with otherwise relatively similar structures (ie, for each facility pair, the structural difference score was small), so that paired facilities were structurally similar but maximally different in perceived vulnerability. We contacted 48 facilities for participation in this order of prioritisation.


We separately interviewed one patient safety manager (PSM, n=40) and one IT/EHR staff member (designated in the VA as clinical application coordinator (CAC, n=40)) at each facility. PSMs provide oversight for facility-wide patient safety programmes and serve as the point of contact for patient safety warnings and patient safety advisories. CACs coordinate the implementation of EHRs, provide ongoing support for the clinical application software and work closely with providers to resolve day-to-day issues related to EHRs. We selected these informants due to their unique roles pertinent to our sociotechnical factors of interest.


Participant recruitment

We invited a CAC and a PSM at each facility to participate in the study. We followed our initial invitation with reminder emails and telephone calls to non-respondents. Our study design required participation from a CAC and PSM at both facilities in a given pair; otherwise, we moved to the next pair on the list.

Interview guide development

We used an interview guide containing structured and open-ended questions to gather data on a broad range of sociotechnical contextual factors, each of which was mapped to at least one of the eight constructs in our conceptual model (table 1). Questions predominantly focused on the configuration and use of the EHR-based test results notification system and on specific aspects of the test result alert management process, including strategies to prevent missed alerts. The interview guide was developed with input from subject-matter experts and finalised after a thorough process of question refinement. We pilot tested the interview guide with five PSMs and four CACs and refined the questions based on their feedback.

Data collection

A sociologist (SM) conducted semistructured, 30 min telephone interviews with the PSM and CAC at each site between January 2012 and August 2012. Informed consent was obtained from all participants before starting the interview. All interviews were audio-recorded. Responses to structured interview questions were entered into a Microsoft Access database (Redmond, Washington, USA) and expressed as binary responses (eg, yes/no) for quantitative analysis. Open-ended responses were transcribed for content analysis.


Quantitative analysis

We used descriptive statistics to summarise alert management policies and practices. We initially assessed the association between facility sociotechnical characteristics and the level of perceived risk of missed test results in analyses that did not adjust for site characteristics. Continuous variables, such as the number of enabled alerts, were categorised into dichotomous groups based on examination of the empirical distributions and the clinical judgement of the research team regarding appropriate cut points. We analysed the continuous variables both as continuous, using the Wilcoxon rank-sum test, and as dichotomous, using Fisher's exact test. The Wilcoxon test did not reveal any differences between the high and low vulnerability facilities; thus, for ease of presentation, we report the Fisher's exact test statistics from the two-by-two analyses for all variables. To create final pairings between specific high and low perceived risk facilities, we used the Gale-Shapley algorithm, minimising structural difference scores between paired facilities while maximising the differences in vulnerability to missed alerts. We then conducted analyses with McNemar's test using the matched pairs of high and low risk sites. These analyses allowed us to test the association between the facility sociotechnical variables and the perceived risk groups, adjusting for multiple facility-level structural features. p Values of 0.05 or less were considered statistically significant. Quantitative analyses were performed using SAS V.9 software.
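The pairing step can be sketched with a small deferred-acceptance (Gale-Shapley) routine in which both sides rank partners by a structural difference score (lower = more similar). The facility IDs and scores below are invented; this illustrates the algorithm, not the study's actual matching code.

```python
# diff[(high_facility, low_facility)] = structural difference score;
# all IDs and values are invented for this sketch.
diff = {
    ("H1", "L1"): 0.2, ("H1", "L2"): 0.9, ("H1", "L3"): 0.5,
    ("H2", "L1"): 0.4, ("H2", "L2"): 0.1, ("H2", "L3"): 0.8,
    ("H3", "L1"): 0.7, ("H3", "L2"): 0.6, ("H3", "L3"): 0.3,
}
highs = ["H1", "H2", "H3"]
lows = ["L1", "L2", "L3"]

# Each side prefers the most structurally similar partner
prefs_h = {h: sorted(lows, key=lambda l, h=h: diff[(h, l)]) for h in highs}
rank_l = {l: {h: diff[(h, l)] for h in highs} for l in lows}

# Gale-Shapley: unmatched "high" facilities propose in preference order
match = {}                        # low facility -> high facility
next_prop = {h: 0 for h in highs}
free = list(highs)
while free:
    h = free.pop()
    l = prefs_h[h][next_prop[h]]  # best partner not yet proposed to
    next_prop[h] += 1
    if l not in match:
        match[l] = h
    elif rank_l[l][h] < rank_l[l][match[l]]:  # l prefers h (smaller diff)
        free.append(match[l])
        match[l] = h
    else:
        free.append(h)
print(sorted((h, l) for l, h in match.items()))
```

The resulting matching is stable: no unmatched high/low pair would both prefer each other over their assigned partners.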

Phase II: qualitative content analysis

We analysed responses to open-ended interview questions (see online supplementary appendix 1) after performing quantitative analyses. This sequence helped ensure that our qualitative analysis addressed information that could help to explain or contextualise any significant differences found between high and low perceived risk facilities. To classify interview transcript text, we used qualitative content analysis, which focuses on reducing text into manageable segments through application of inductive and/or deductive codes.28 29 We used a deductive approach to reduce the data to substantively relevant categories.

Three investigators, a sociologist (SM), a human factors engineer (MWS) and an industrial/organisational psychologist (SJH), reviewed interview transcripts to identify responses to open-ended questions on why test results are missed and how facilities attempted to prevent this from occurring. We specifically focused on responses to questions that explored organisational issues related to follow-up of test results, including management of unacknowledged alerts (ie, abnormal test result alerts that remained unread after a certain time), institutional practices for monitoring follow-up of test results, surrogate assignment processes, trainee-related follow-up issues and follow-up practices when the ordering/responsible provider was not readily identifiable. Two members of the research team (SM and MWS) read a subset of selected transcripts carefully, highlighting text that described alert management practices. Interview responses were classified into specific practices and further reduced to substantively relevant codes. After generating a set of preliminary codes, we validated the codes through an iterative process. For example, responses to a question regarding tests ordered by trainees were coded into the following four categories: additional recipient of alerts, communication with supervisor, presence of a specific policy regarding trainee alerts, and handling of outpatient alerts. The coded transcripts were discussed by the researchers (SM, MWS and SJH) to reach consensus when there were disagreements. We used ATLAS.ti software (Berlin, Germany) to manage textual data.

Table 1 Interview questions mapped to sociotechnical dimensions

1. Hardware and software (equipment and software required to run the applications)
Questions: Does your site have any modified software that impacts alert management? Do you generate any reports to monitor the changes made to the software?
Rationale: The EHR allows facilities to make changes to the software to address local needs, which can affect how alerts are managed.

2. Clinical content (data, information and knowledge entered, displayed or transmitted in EHRs)
Questions: What is the number of mandatory, enabled and disabled alerts? Are there any national or network level mandatory alerts?
Rationale: The notification management options within CPRS can be used to turn specific notifications on or off. Alerts can be enabled, disabled or set as mandatory, and some alerts are mandated centrally. The number of enabled and mandatory alerts can affect alert volume.

3. User interface (aspects of the EHR system that users interact with)
Questions: In a typical workday, how many providers request support? How often do you get calls from providers about missed or lost alerts?
Rationale: A poorly designed user interface can lead to difficulties in managing alerts, prompting providers to seek support to manage alerts.

4. People (humans involved in the design, development, implementation and use of HIT)
Questions: How much time is spent on View Alert training? Does the site have specific training on View Alerts?
Rationale: CPRS uses the 'View Alert' notification system to inform clinicians about critical test results. Providers should have the necessary training to process View Alerts.

5. Organisational policies (internal policies and procedures that affect all aspects of HIT management)
Questions: Does your facility have a policy on test result communication? Do you have an EHR committee for oversight?
Rationale: Having a test result communication policy is important to ensure that there is no ambiguity regarding acknowledgement and follow-up of alerts.

6. State and federal rules (external forces that facilitate or place constraints on HIT use)
Question: Are you aware of the VHA 2009 directive for communication of test results?
Rationale: The VHA 2009-019 directive mandates that patients should be notified about all test results within 14 days.

7. Workflow and communication (work processes needed to ensure proper functioning of the system)
Questions: Do you have any mechanisms to prevent alerts from falling through the cracks/alerts being missed? Do you have a case manager who is notified about certain abnormal results? Are alerts set to go to a team rather than a specific provider?
Rationale: Facilities should have mechanisms in place to make sure that critical alerts are not missed/lost; back-up procedures to prevent alerts falling through the cracks should be implemented.

8. Monitoring (measurement of system availability, use, effectiveness and unintended consequences of system use)
Questions: What monitoring practices do you have in place for follow-up of critical/abnormal diagnostic test results? Is acknowledgement and follow-up of alerts monitored at your facility?
Rationale: To keep track of critical test result follow-up, good monitoring practices should be in place.

CPRS, computerised patient record system; EHR, electronic health record; HIT, health information technology; VHA, Veterans Healthcare Administration; VISN, Veterans Integrated Service Network.


RESULTS

We approached 48 facilities, of which 8 either declined to participate in the study or were unresponsive to our requests. We recruited a total of 40 participating facilities (20 high and 20 low perceived risk).

Quantitative results

Facility-level test result management practices

Table 2 compares the proportions of sociotechnical factors endorsed by informants at high and low perceived risk facilities. Notably, the vast majority of facilities in both groups customised alert settings locally and required unread alerts to remain in the ordering provider's inbox for at least 14 days. However, only about 70% of facilities overall had some mechanism to prevent alerts from remaining unread (unacknowledged), with 50% of our high perceived risk facilities versus 90% of the low perceived risk facilities having a method in place. In the group comparisons (shown in table 2) that did not control for facility characteristics, we did not find other differences between high and low perceived risk facilities on quantitative variables.
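The unadjusted comparison behind the one significant finding (item 19 in table 2: 10 of 20 high perceived risk facilities lacked a mechanism to prevent missed alerts vs 2 of 20 low perceived risk facilities) can be reproduced with a short, dependency-free Fisher's exact computation. This is a sketch of the standard two-sided test, not the authors' SAS code.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)

    def prob(k):  # probability that row 1 contains k of the column-1 counts
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))  # smallest feasible k given the margins
    hi = min(col1, row1)            # largest feasible k
    # small tolerance guards against float round-off when comparing to p_obs
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs * (1 + 1e-9))

# Table 2, item 19 (mechanism to prevent alerts falling through the cracks):
# high perceived risk: 10 No / 10 Yes; low perceived risk: 2 No / 18 Yes
p = fisher_exact_two_sided(10, 10, 2, 18)
print(round(p, 4))  # → 0.0138
```

This matches the unadjusted p value of 0.0138 shown in table 2; the p=0.0114 reported in the abstract comes from the separate matched-pair (McNemar) analysis.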

Analysis of matched pairs

As with the group comparisons noted above, the only characteristic that differed significantly between high and low perceived risk facilities in the matched-pair analysis was having mechanisms to prevent alerts from 'falling through the cracks' (p=0.0114).

Qualitative results

Qualitative analysis of alert management practices did not reveal any systematic differences between high and low perceived risk facilities. However, from the content of these interviews, we identified three practices related to high-risk scenarios for missed test results.

High-risk scenario 1: tests ordered by trainees

Most facilities (31/40; 77.5%) were training sites for one or more medical residency programmes. Across facilities, the most common arrangement for transmitting test results was a 'dual notification' system in which results were delivered to both the resident and to one or more permanent staff members. However, for outpatient tests, some facilities defaulted to transmitting results only to the ordering provider. Thus, if the ordering provider was a resident, there was no additional recipient for these test results, making alerts vulnerable to being missed in the event that the resident changed locations between the time the test was ordered and the time the result became available. Furthermore, although residents were expected to identify a co-signer or 'surrogate' clinician and clear all pending alerts before leaving the facility at the end of a rotation or residency training, at many sites residents routinely left the facility without doing so. No monitoring mechanisms were in place to ensure that residents met these expectations.

We tell them that either their residents have got to process alerts before they leave (unrealistic), they've got to order it in their attending's name, they've got to take action when results [are returned], they have to add a care manager as an additional signer to their note to track it, I think those are the steps. We give them options, and we tell them that they need to process their orders - or set a surrogate - when they leave, but they don't. (CAC, Site 019)

High-risk scenario 2: assignment of 'surrogates'

Before clinicians leave their offices for an extended period (eg, week-long vacation), they are expected to designate another covering clinician (surrogate) to receive their alert notifications. Respondents reported using various practices to manage the surrogate assignment process. For instance, at some facilities the process was mediated through providers' supervisors and CACs, while at other facilities providers handled the process entirely themselves. There was also variability in how the surrogate assignment process was monitored. For example, some facilities had developed systems for monitoring unprocessed alerts (eg, monthly reporting), while other facilities had little or no such monitoring in place.

Two main problems with surrogates emerged in interviews. The most common concern, reported at eight facilities, was that providers failed to assign a surrogate altogether. Less often (reported at three facilities), the identified surrogate failed to act on alerts. Frequently, there was little or no communication between the surrogate and the provider who was out of office.

...if there is no surrogate that's a problem. Another issue is if when you're away, the surrogate takes care of stuff, but you don't know what happened. Sometimes the surrogate writes notes in EHR but other times the surrogate just takes care of it and moves on, and you don't know what happened until the next time you see the patient. Not really a safety concern because the surrogate does the appropriate thing, but it is a communication problem. (CAC, Site 115)

High-risk scenario 3: patients not assigned to a PCP

Alerts can only be sent to a PCP when the computer can recognise that the patient is assigned to one. All patients within the VA system are assigned to a PCP of record. However, for several reasons, including when patients are not seen by their PCPs for a certain length of time, the patients may be 'unassigned' within the EHR. In general, PCPs act as the coordinating hub and often serve as the safety net or 'back-up' for the patient's needs. Thus, if patients are not assigned to a PCP in the system, this could create ambiguity about who is responsible for coordinating care. We found that a number of facilities (18) had an assigned 'back-up' reviewer—a physician, nurse or even a CAC—to process alerts for patients not assigned to a PCP. In these cases, alerts were sent both to the ordering provider and to the designated backup recipient. However, at some facilities alerts were transmitted only to the ordering provider, which was especially problematic when the ordering provider was a resident/trainee.
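The routing behaviour described in this scenario can be sketched as a simple rule: alerts go to the ordering provider, plus the patient's PCP when one is assigned, or a designated back-up reviewer at facilities that have one. The function and role names below are invented for illustration, not VA software.

```python
# Sketch of the alert-routing logic described above. Roles are invented.
def alert_recipients(ordering_provider, pcp=None, backup_reviewer=None):
    """Return the set of people who should receive a test result alert."""
    recipients = {ordering_provider}
    if pcp is not None:
        recipients.add(pcp)              # PCP acts as the coordinating hub
    elif backup_reviewer is not None:
        recipients.add(backup_reviewer)  # safety net for unassigned patients
    # If neither exists, only the ordering provider is notified: the
    # vulnerable case when the ordering provider is a resident/trainee.
    return recipients

print(alert_recipients("resident_a", pcp=None, backup_reviewer="nurse_b"))
```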

Table 2 Comparison of low and high perceived risk facilities on sociotechnical variables

Each row shows: high perceived risk facilities n (%); low perceived risk facilities n (%); total n (%); p value.

Hardware and software, clinical content, interface-related factors
1. Does the site have modified software?
No: 20 (100.0); 17 (85.0); 37 (92.5); p=0.2308
Yes: 0 (0.0); 3 (15.0); 3 (7.5)
2. Number of enabled alerts*
10 or less: 5 (27.8); 6 (35.3); 11 (31.4); p=0.6550
11 or more: 13 (72.2); 11 (64.7); 24 (68.6)
3. Number of mandatory alerts*
10 or less: 9 (45.0); 5 (26.3); 14 (35.9); p=0.4323
11 or more: 11 (55.0); 14 (73.7); 25 (64.1)
4. VISN level mandatory alerts
No: 17 (85.0); 18 (90.0); 35 (87.5); p=1.00
Yes: 3 (15.0); 2 (10.0); 5 (12.5)
5. How long do alerts stay in the alert window?*
More than 14 days: 15 (78.9); 16 (88.9); 31 (83.8); p=0.6599
14 days or less: 4 (21.1); 2 (11.1); 6 (16.2)
6. How often do you get calls from providers about missed/lost alerts?*
Every few months: 8 (40.0); 10 (50.0); 18 (45.0); p=0.8341
At least once a month or more: 12 (60.0); 10 (50.0); 22 (55.0)

People-related factors
7. Time on EHR training*
2 h or less: 4 (21.1); 1 (5.6); 5 (13.5); p=0.3398
More than 2 h: 15 (78.9); 17 (94.4); 32 (86.5)
8. Does the site have specific training on View Alerts?
No: 11 (55.0); 11 (55.0); 22 (55.0); p=1.00
Yes: 9 (45.0); 9 (45.0); 18 (45.0)
9. Time spent on View Alert training*
10 min or less: 5 (38.5); 8 (50.0); 13 (44.8); p=0.7107
More than 10 min: 8 (61.5); 8 (50.0); 16 (55.2)
10. Does the site utilise super users?
No: 16 (80.0); 15 (75.0); 31 (77.5); p=1.00
Yes: 4 (20.0); 5 (25.0); 9 (22.5)

Workflow and communication-related factors
11. Action taken for unacknowledged alerts*
No action/do not know: 3 (15.0); 1 (5.0); 4 (10.0); p=0.605
Some action taken: 17 (85.0); 19 (95.0); 36 (90.0)
12. Alerts go to team rather than providers
No: 13 (65.0); 12 (60.0); 25 (62.5); p=1.00
Yes: 7 (35.0); 8 (40.0); 15 (37.5)

System measurement and monitoring
13. Mandatory acknowledgment and follow-up of alerts
No: 5 (25.0); 9 (45.0); 14 (35.0); p=0.3203
Yes: 15 (75.0); 11 (55.0); 26 (65.0)
14. Programming, implementation and impact tracked?
No: 14 (70.0); 13 (65.0); 27 (67.5); p=1.00
Yes: 6 (30.0); 7 (35.0); 13 (32.5)
15. Monitoring practices for follow-up of critical tests?
No: 4 (20.0); 6 (30.0); 10 (25.0); p=0.7164
Yes: 16 (80.0); 14 (70.0); 30 (75.0)

Organisational policies and procedures
16. Does test result alert policy address alert management?
No: 6 (30.0); 3 (15.0); 9 (22.5); p=0.4506
Yes: 14 (70.0); 17 (85.0); 31 (77.5)
17. Does the facility have an EHR committee?
No: 11 (55.0); 9 (45.0); 20 (50.0); p=0.7524
Yes: 9 (45.0); 11 (55.0); 20 (50.0)
18. Does your facility have a case manager who is notified of abnormal tests?
No: 15 (75.0); 14 (70.0); 29 (72.5); p=1.00
Yes: 5 (25.0); 6 (30.0); 11 (27.5)
19. Mechanism to prevent alerts falling through the cracks?
No: 10 (50.0); 2 (10.0); 12 (30.0); p=0.0138
Yes: 10 (50.0); 18 (90.0); 28 (70.0)

State and federal rules
20. Awareness of VHA directive 2009-019?
No: 4 (20.0); 2 (10.0); 6 (15.0); p=0.6614
Yes: 16 (80.0); 18 (90.0); 34 (85.0)

*Categorisation of these variables was based on examination of the empirical distribution and clinical judgement of the research team regarding appropriate cut points. The Wilcoxon rank-sum test did not reveal any differences between the high and low vulnerability facilities for these variables, which we analysed both as continuous and as categorical. For ease of presentation, we have reported Fisher's exact test statistics.
CPRS, computerised patient record system; EHR, electronic health record; HIT, health information technology; VHA, Veterans Healthcare Administration; VISN, Veterans Integrated Service Network.

The ordering provider or whoever is set up in a team of some sort will get those alerts. It could go to a team if a team is assigned, but if not, it will go to the ordering provider. When we have trainees and if team is not assigned, it is frustrating. (CAC, Site 007)

In addition to identifying high-risk situations for missed alerts, informants described monitoring strategies to ensure that test results receive follow-up. These included reporting all unacknowledged alerts to the chief of staff, performing random chart audits, and using a 'call cascade' system to escalate unacknowledged findings to additional personnel. Six low perceived risk and nine high perceived risk facilities had alert escalation systems by which a secondary provider, service chief or CAC received alerts left unacknowledged beyond a certain time period. Twelve facilities monitored unacknowledged alerts by generating reports to the chief of staff.
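The escalation systems informants described can be sketched as a simple time-based re-routing rule. The recipient chain, 14-day threshold and dates below are invented for illustration and do not reflect actual VA configuration.

```python
from datetime import datetime, timedelta

# Minimal sketch of the alert-escalation idea: an alert left
# unacknowledged past a threshold is re-routed one step up a chain of
# recipients. Chain, threshold and dates are invented for this sketch.
ESCALATION_CHAIN = ["ordering_provider", "service_chief", "chief_of_staff"]
THRESHOLD = timedelta(days=14)

def current_recipient(created, acknowledged, now):
    """Who should hold the alert now, escalating one step per full
    unacknowledged threshold period (capped at the top of the chain)."""
    if acknowledged:
        return None  # no escalation needed once the alert is processed
    steps = (now - created) // THRESHOLD
    return ESCALATION_CHAIN[min(steps, len(ESCALATION_CHAIN) - 1)]

now = datetime(2012, 6, 1)
print(current_recipient(datetime(2012, 5, 25), False, now))  # ordering_provider
print(current_recipient(datetime(2012, 5, 10), False, now))  # service_chief
```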

Unacknowledged alerts go to the supervisor, then higher up. It escalates up the line. It goes to their supervisor and then it keeps escalating up the ladder. And if it's for unsigned notes and stuff, we're not involved, but they have a meeting every Wednesday with the director--"they" is the service chiefs and the quad (the director, the chief of staff, the associate director, and the director of nursing), and they provide a report of any progress notes or encounters or all that stuff that's not been signed off on, so I guess that's another way to catch it too. (CAC, Site 109)

Most facilities (33/40, 83%) monitored follow-up of certain test results only when they considered them 'critical' (eg, life threatening or sometimes urgent at-risk results such as an abnormal chest X-ray suggestive of malignancy), but the processes for doing so were highly variable. Some facilities had a formal process of generating monthly laboratory reports to evaluate follow-up, while others relied on random chart review of critical test reports. A variety of personnel assisted in the monitoring role, including business office designees, diagnostic service (laboratory and radiology) staff, nurses, PSMs and quality coordinators. One monitoring process common to most facilities was the requirement for the diagnostic service (laboratory and radiology) to document contact with the responsible provider about critical results. PSMs at seven facilities were either not sure about the process for monitoring critical test result follow-up, or reported that no formal process existed.

The administrative officers or designee of the services are responsible to follow up on the view alerts that are reported to them by the business office—there is a person who does the monitoring. The business office prints a list and gives it to this person who is responsible to follow up with the Chief of Surgery or the provider. (PSM, Site 106)

Lab and radiology monitor this every month and they present a report to the performance improvement council every six months, in compliance with the National Patient Safety Goal. (PSM, Site 121)


Although previous studies have described failures in test result follow-up,5 6 10 30 including some focused on EHR-enabled health systems,31-35 little is known about the vulnerabilities predisposing clinicians or organisations to missed test results in EHR-based settings. Because EHRs change both individual practice and the healthcare environment in which they are implemented in many important ways, understanding the contextual factors that influence test result follow-up is essential to improving safety in this area.36-39 Our study evaluated sociotechnical factors that might affect missed test results in a single integrated health system that uses a comprehensive EHR. Some of the sociotechnical issues we identified are generalisable to many healthcare institutions and pose a higher risk for missed test results. Given certain unique vulnerabilities in EHR-based settings, our findings are noteworthy for healthcare organisations that are currently implementing EHRs to communicate test results.

We found that providers in VA facilities that used additional strategies or systems to prevent missed test results perceived less risk of missing test results. However, these preventive strategies were cursory, despite several readily identifiable high-risk areas across our study facilities. Few institutions use monitoring strategies to prevent missed test results.40 Because many of these high-risk situations are likely to be found in other institutions, we believe some of our findings are generalisable and offer several lessons. For example, test result follow-up in situations involving 'surrogate clinicians' was especially problematic; these types of hand-off situations are common in most institutions. Current EHRs have limited capabilities to facilitate failsafe hand-off communication,13 17 and this would be of specific concern to academic institutions that use EHRs for test results management.

Our findings suggest that interventions to reduce missed test results might need to target organisational factors and not just individual providers. While some local flexibility is essential, our findings suggest that future initiatives to improve test result follow-up both within and outside the VA should consider a higher degree of standardisation for the most vulnerable processes. Although the VA is an integrated system with many uniform policies and procedures throughout its facilities, we found that certain high-risk components of the test result management process were shaped by a number of ad hoc practices implemented by each facility. Context here appeared to be defined largely by facility-level practices rather than by some form of standardised or national guidance.

This study has several limitations. First, our measures of risk at the facility level were based not on an actual number of missed results but rather on subjective assessments provided by PCPs in a previous survey. Additionally, the PCP response rate varied across facilities: while the overall response rate was 52%, the lowest was 42% and only two facilities had a response rate above 60%. We did not include non-PCPs in our survey; it is possible that they have a different perception of missed test results, which could bias our findings. However, our study presents only an initial exploratory examination of differences between high and low performing facilities, and larger studies should be conducted to confirm our quantitative analyses. Because our study included only a small subset of VA facilities, we may have had insufficient power to identify other sociotechnical factors related to perceived risk of missed results. Test management practices described in our study apply to the EHR used at VA facilities, potentially limiting wider generalisability. However, other healthcare systems are implementing integrated EHRs with similar notification systems, and many of the sociotechnical factors identified are relevant to non-VA settings.5 41 42 Although we collected data on a range of variables, most interview questions were close-ended, and not all factors of interest were explored in greater depth. Nonetheless, our findings shed light on important issues such as the lack of standardisation of processes and monitoring of test results.

In conclusion, in addition to implementing provider-level strategies to prevent missed test results, healthcare organisations should consider implementing monitoring systems to track missed test results. Some of the sociotechnical factors we identified are likely applicable to many healthcare organisations and pose a higher risk for missed test results.

Acknowledgements The authors thank Daniel R Murphy, MD MPH for assistance with the graphic design of figure 1.

Contributors SM contributed to the conception and design of the project and the analysis and interpretation of the data. She drafted the article, worked with the team on revisions, and gave final approval of the version to be published. MWS contributed to the conception and design of the project, data acquisition, and the analysis and interpretation of the data. He supplied critical revisions to the manuscript and gave final approval of the version to be published. DFS contributed to the conception and design of the project and data acquisition. He supplied critical revisions to the manuscript and gave final approval of the version to be published. NJP contributed to the design of the project and data acquisition. She also provided statistical analysis support. She supplied critical revisions to the manuscript and gave final approval of the version to be published. SJH, DE and VM contributed to the design of the project, data acquisition, and the analysis and interpretation of the data. They supplied critical revisions to the manuscript and gave final approval of the version to be published. HS contributed to the conception and design of the project and the analysis and interpretation of the data. He drafted the article, provided critical revisions, and gave final approval of the version to be published.

Funding This work was supported by the VA National Center of Patient Safety and partially supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, and the Center for Innovations in Quality, Effectiveness and Safety (#CIN 13-413). SM is supported by AHRQ training fellowship in Patient Safety and Quality and partially supported with resources at the VA HSR&D Center for Innovations in Quality, Effectiveness and Safety (#CIN 13-413), at the Michael E. DeBakey VA Medical Center, Houston, TX.

Competing interests None.

Ethics approval Baylor College of Medicine Institutional Review Board.

Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement No additional data are available.

Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://


1. Singh H, Arora H, Vij MS, et al. Communication outcomes of critical imaging results in a computerized notification system. J Am Med Inform Assoc 2007;14:459-66.
2. Singh H, Thomas EJ, Sittig DF, et al. Notification of abnormal lab test results in an electronic medical record: do any safety concerns remain? Am J Med 2010;123:238-44.
3. Wahls TL, Cram PM. The frequency of missed test results and associated treatment delays in a highly computerized health system. BMC Fam Pract 2007;8:32.
4. Elder NC, Dovey SM. Classification of medical errors and preventable adverse events in primary care: a synthesis of the literature. J Fam Pract 2002;51:927-32.
5. Callen JL, Westbrook JI, Georgiou A, et al. Failure to follow-up test results for ambulatory patients: a systematic review. J Gen Intern Med 2011;27:1334-48.
6. Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med 2009;169:1123-9.
7. Office of the National Coordinator for Health Information Technology (ONC), Department of Health and Human Services. Health information technology: standards, implementation specifications, and certification criteria for electronic health record technology, 2014 edition; revisions to the permanent certification program for health information technology. Final rule. Fed Regist 2012;77:54163-292.
8. Wynia MK, Classen DC. Improving ambulatory patient safety. JAMA 2011;306:2504-5.
9. Lorincz CY, Drazen E, Sokol PE, et al. Research in ambulatory patient safety 2000-2010: a 10-year review. American Medical Association, 2011.
10. Singh H, Thomas EJ, Mani S, et al. Timely follow-up of abnormal diagnostic imaging test results in an outpatient setting: are electronic medical records achieving their potential? Arch Intern Med 2009;169:1578-86.
11. Hysong SJ, Sawhney MK, Wilson L, et al. Provider management strategies of abnormal test result alerts: a cognitive task analysis. J Am Med Inform Assoc 2010;17:71-7.
12. Hysong SJ, Sawhney MK, Wilson L, et al. Understanding the management of electronic test result notifications in the outpatient setting. BMC Med Inform Decis Mak 2011;11:22.
13. Singh H, Spitzmueller C, Petersen NJ, et al. Information overload and missed test results in electronic health record-based settings. JAMA Intern Med 2013;173:702-4.
14. Gandhi TK, Kachalia A, Thomas EJ, et al. Missed and delayed diagnoses in the ambulatory setting: a study of closed malpractice claims. Ann Intern Med 2006;145:488-96.
15. Laxmisan A, Sittig DF, Pietz K, et al. Effectiveness of an electronic health record-based intervention to improve follow-up of abnormal pathology results: a retrospective record analysis. Med Care 2012;50:898-904.
16. Sittig DF, Singh H. A new sociotechnical model for studying health information technology in complex adaptive healthcare systems. Qual Saf Health Care 2010;19(Suppl 3):i68-74.
17. Singh H, Spitzmueller C, Petersen NJ, et al. Primary care practitioners' views on test result management in EHR-enabled health systems: a national survey. J Am Med Inform Assoc 2013;20:727-35.
18. Sittig DF, Singh H. Improving test result follow-up through electronic health records requires more than just an alert. J Gen Intern Med.
19. Sittig DF, Singh H. Eight rights of safe electronic health record use. JAMA 2009;302:1111-13.
20. Brown SH, Lincoln MJ, Groen PJ, et al. VistA—US Department of Veterans Affairs national-scale HIS. Int J Med Inform 2003;69:135-56.
21. Singh H, Vij MS. Eight recommendations for policies for communicating abnormal test results. Jt Comm J Qual Patient Saf 2010;36:226-32.
22. Woods DD, Patterson ES, Roth EM. Can we ever escape from data overload? A cognitive systems diagnosis. Cogn Technol Work 2002;4:22-36.
23. Krall MA, Sittig DF. Clinician's assessments of outpatient electronic medical record alert and reminder usability and usefulness requirements. Proc AMIA Symp 2002:400-4.
24. Beasley JW, Wetterneck TB, Temte J, et al. Information chaos in primary care: implications for physician performance and patient safety. J Am Board Fam Med 2011;24:745-51.
25. Murphy DR, Reis B, Kadiyala H, et al. Electronic health record-based messages to primary care providers: valuable information or just noise? Arch Intern Med 2012;172:283-5.
26. Byrne MM, Daw CN, Nelson HA, et al. Method to develop health care peer groups for quality and financial comparisons across hospitals. Health Serv Res 2009;44(2 Pt 1):577-92.
27. Gale D, Shapley LS. College admissions and the stability of marriage. Am Math Mon 1962;69:9-15.
28. Weber RP. Basic content analysis. 2nd edn. Newbury Park, CA: Sage Publications Inc, 1990.
29. Elo S, Kyngas H. The qualitative content analysis process. J Adv Nurs 2008;62:107-15.
30. Chen ET, Eder M, Elder NC, et al. Crossing the finish line: follow-up of abnormal test results in a multisite community health center. J Natl Med Assoc 2010;102:720-5.
31. Callen J, Georgiou A, Li J, et al. The safety implications of missed test results for hospitalised patients: a systematic review. BMJ Qual Saf 2011;20:194-9.
32. Callen J, Paoloni R, Georgiou A, et al. The rate of missed test results in an emergency department: an evaluation using an electronic test order and results viewing system. Methods Inf Med 2010;49:37-43.
33. Gordon JR, Wahls T, Carlos RC, et al. Failure to recognize newly identified aortic dilations in a health care system with an advanced electronic medical record. Ann Intern Med 2009;151:21-7, W5.
34. Roy CL, Rothschild JM, Dighe AS, et al. An initiative to improve the management of clinically significant test results in a large health care network. Jt Comm J Qual Patient Saf 2013;39:517-27.
35. Cram P, Rosenthal GE, Ohsfeldt R, et al. Failure to recognize and act on abnormal test results: the case of screening bone densitometry. Jt Comm J Qual Patient Saf 2005;31:90-7.
36. Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf 2011;20:611-17.
37. Ovretveit JC, Shekelle PG, Dy SM, et al. How does context affect interventions to improve patient safety? An assessment of evidence from studies of five patient safety practices and proposals for research. BMJ Qual Saf 2011;20:604-10.
38. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health 2007;28:413-33.
39. Eccles MP, Armstrong D, Baker R, et al. An implementation research agenda. Implement Sci 2009;4:18.
40. Graber ML, Trowbridge RL, Myers JS, et al. The next organizational challenge: finding and addressing diagnostic error. Jt Comm J Qual Patient Saf 2014;40:102-10.
41. Poon E, Gandhi T, Sequist T, et al. "I wish I had seen this test result earlier!" Dissatisfaction with test result management systems in primary care. Arch Intern Med 2004;164:2223-8.
42. Poon EG, Wang SJ, Gandhi TK, et al. Design and implementation of a comprehensive outpatient Results Manager. J Biomed Inform.