Journal of the American Medical Informatics Association (JAMIA)
. 2019 Aug 28;26(12):1505–1514. doi: 10.1093/jamia/ocz126

Physicians’ gender and their use of electronic health records: findings from a mixed-methods usability study

Saif Khairat 1,2, Cameron Coleman 1,3, Paige Ottmar 4, Thomas Bice 5, Ross Koppel 6,7, Shannon S Carson 5
PMCID: PMC7647147  PMID: 31504578

Abstract

Objective

Physician burnout associated with electronic health records (EHRs) is a major concern in health care. A comprehensive assessment of differences among physicians in EHR performance, efficiency, and satisfaction has not been conducted. We sought to examine relationships among physicians’ performance, efficiency, perceived workload, satisfaction, and usability in using the EHR, with comparisons by age, gender, professional role, and years of experience with the EHR.

Materials and Methods

We conducted mixed-methods assessments of medical intensivists' EHR use and perceptions. Using simulated cases, we employed standardized scales, performance measures, and extensive interviews. The NASA Task Load Index (TLX), System Usability Scale (SUS), and Questionnaire for User Interaction Satisfaction surveys were deployed.

Results

The study enrolled 25 intensive care unit (ICU) physicians (11 residents, 9 fellows, 5 attendings); 12 (48%) were men, with a mean age of 33 (range, 28-55) years and a mean of 4 (interquartile range, 2.0-5.5) years of Epic experience. Overall task performance scores were similar for men (90% ± 9.3%) and women (92% ± 4.4%), with no statistically significant differences (P = .374). Female physicians trended toward greater efficiency in completion time (difference = 7.1 minutes; P = .207) and mouse clicks (difference = 54; P = .13), though these differences were not statistically significant. Men reported significantly higher perceived EHR workload stress than women (difference = 17.5; P < .001) and significantly higher levels of frustration with the EHR (difference = 33.15; P < .001). Women reported significantly higher satisfaction with the ease of use of the EHR interface than men (difference = 0.66; P = .03). Women’s perceived overall usability of the EHR was marginally higher than men’s (difference = 10.31; P = .06).

Conclusions

Among ICU physicians, we measured significant gender-based differences in perceived EHR workload stress, satisfaction, and usability—corresponding to objective patterns in EHR efficiency. Understanding the reasons for these differences may help reduce burnout and guide improvements to physician performance, efficiency, and satisfaction with EHR use.

Keywords: EHR, performance, efficiency, satisfaction, burnout, critical care

INTRODUCTION

Burnout from physicians’ use of electronic health records (EHRs) has become a paramount concern in health care.1,2 Increasing EHR time requirements—and the tradeoff of reduced time with patients—are repeatedly cited as major contributors to physician burnout.3 EHRs increase the amount of time providers spend on reviewing and documenting clinical information.4 Findings that physicians spend an average of 44%-65% of their time at computers, as opposed to only 24% in communication with patients,5,6 are disconcerting, especially given efforts toward greater patient-centeredness in healthcare. When surveyed about EHR use, 46.5% of more than 6300 physicians disagreed or strongly disagreed that time spent on clerical tasks was reasonable.1 In addition to limiting face-to-face time with patients, other EHR factors also contribute to physician frustration, including click-heavy, data-busy screens and stringent documentation requirements (leading to “note bloat”).2,7 Importantly, current EHR systems are linked to increased probabilities of medical errors as a result of poor usability, information overload, and other unintended consequences.8–15

Information overload generated by continuous data flow creates barriers to finding relevant data in EHRs. While new technologies enable continuous patient monitoring, the voluminous additional data contribute to information overload in high-risk environments such as intensive care units (ICUs). Critical care providers are confronted with more than 200 variables during rounds.16 On average, critically ill patients generate a median of 1348 individual data points per day.17 However, providers take on average only 2 minutes to gather, synthesize, and act on patient data.18 Information overload and failures in information processing are directly linked to cognitive errors and misdiagnosis.19

Gender differences among physicians have been studied over the years in pursuit of higher performance and better outcomes. For example, previous studies investigated gender differences in patient-centered conversations, physician-patient communication, and decision-making processes.20–22 The role of women in medicine is expected to continue to grow; currently, women comprise 30% of the physician workforce and 50% of medical students.23 Because use of electronic health records consumes a significant amount of physicians’ time and effort, there is a need to study both male and female physicians’ experiences interacting with EHRs. Improved understanding of enablers and challenges may help optimize the overall EHR experience for all physicians.

The objective of this study is to better understand the relationship between EHR use and ICU physician performance, satisfaction, and workload. We studied physicians’ performance on clinical tasks, efficiency using EHRs, perceived workload stress, satisfaction, and perceptions of burnout associated with EHRs. We examined systematic differences by gender.

MATERIALS AND METHODS

Study design

We conducted a cross-sectional, mixed-methods assessment of the relationship between medical intensivists and the EHR (Epic; Epic Systems, Verona, WI) at one institution. We investigated this relationship by age, gender, professional role, and experience with the EHR.

We have previously reported the methodology for the current study in detail.24 In brief, physicians participated 1-by-1 in a 3-part study that included: (1) completion of 4 simulation patient cases in the EHR, (2) completion of 4 surveys, and (3) participation in a face-to-face, semistructured interview. For each participant, all 3 parts of the study took place during a single encounter. We obtained written informed consent from all participants. Participants were compensated with $100 gift cards. This study was reviewed and approved by the Institutional Review Board.

For the study’s simulation component, 4 cases representative of typical medical ICU (MICU) patients were developed by a board-certified pulmonary and critical care attending and an internal medicine resident (T.B. and C.C.), neither of whom participated in the study. Descriptions of the test cases are summarized subsequently, while the accompanying clinical tasks are discussed in greater detail in previously published work.24 Two research assistants and an Epic consultant built the patient cases into the institutional EHR training environment, which closely mimics the live Epic environment.

Cases:

  • A 44-year-old woman with multisystem organ failure. Participants review clinical documentation, manage medications, and respond to consultations.

  • A 60-year-old woman with acute hypoxic respiratory failure. Participants review clinical documentation and flowsheets, evaluate changes and mechanical ventilation, and analyze microbiology data.

  • A 25-year-old man with severe infection (sepsis). Participants assess flowsheets, laboratory data, antibiotics, and fluid management.

  • A 56-year-old male trauma patient with postoperative heart failure and volume overload. Participants identify weight trends during previous visits and manage intravenous fluids and medications.

We provided standardized instructions to each participant to complete each patient scenario. Participants logged into their usual EHR view and reviewed the clinical cases sequentially, completing the assigned clinical tasks by providing verbal responses to questions posed by the experimenter or by performing actions in the EHR as indicated. Case questions were created by a domain expert (T.B.) and included questions about consultations received by the patient, the labs ordered, explanations of modifications to orders, justifications for current ventilator settings and changes to them, lists of intravenous fluids and their changes, and the latest patient vital signs.

We recorded users’ times, mouse clicks, and keystrokes. We also documented participants’ age, gender, professional role, number of years of Epic experience (self-reported), and estimated number of hours using Epic per week (self-reported).

Following the simulations, the primary author (S.K.) conducted 1-on-1 interviews, accompanied by a note-taker who recorded interviewee responses. Participants were asked to share their perceptions and elaborate on (1) the accessibility of information in the EHR and (2) the association between EHR use and physician burnout. Interviews were audio recorded and transcribed verbatim by 2 trained research assistants. Interview transcripts were subsequently coded and analyzed by a qualitative research expert using standard software (Dedoose; University of California, Los Angeles, Los Angeles, CA).

Setting and participants

The study was conducted at a tertiary academic medical center in the Southeast with a 30-bed MICU. All testing took place in a standardized EHR usability laboratory on site, equipped with a computer workstation, away from live clinical environments. We recruited participants through flyers and departmental emails. Eligible participants (1) were physicians in the MICU (ie, faculty or trainee) and (2) had MICU critical care experience using Epic.

Outcomes

Primary outcomes were task performance scores, efficiency of use (clicks, time, screens), and perception of workload, usability, and satisfaction. Secondary outcomes were interview responses addressing EHR usability and association with burnout (Table 1).

Table 1.

Overview of outcome measures

Measure Instrument/measurement strategy Description
Performance/accuracy Scores on clinical questions/tasks with 4 simulation cases
  • Total performance score based on 21 questions/clinical tasks

  • Scoring: 1 point (correct), 0.5 points (partially correct), 0 points (incorrect)

  • All responses scored by the physician who developed the cases

Efficiency Standard usability software (TURF, Houston, TX)
  • Time to complete the 4 cases

  • Total mouse clicks

  • Number of EHR screen visits in the simulation patient test cases

  • Keyboard strokes and screen capture recorded as adjunct measures

Usability System Usability Scale25–28 (administered electronically)
  • Includes 10 items, each scored on 5-point Likert-type scale (range, 1 [worst] to 5 [best])

  • Total System Usability Scale score calculated on a 100-point scale, with the following ranges:

    • 0-50 = Unacceptable; grade “F”

    • 51-80 = Acceptable; grade “C”

    • >80 = Excellent; grade “A”

    • >68 = above mean usability; < 68 = below mean usability29

Satisfaction Questionnaire for User Interaction Satisfaction30 (administered via paper)
  • Includes 20 items

  • Items are scored on a 9-point Likert-type scale (range, 0 [worst] to 9 [best])

Workload NASA-Task Load Index31,32 (administered via paper)
  • Workload scores calculated across 6 domains: mental demand, physical demand, temporal demand, performance, effort, and frustration
  • Performance variable was reversed scored as per the survey tool design

  • Total combined workload scoring: (range, 0 [best] to 100 [worst])

  • The threshold for “overwork” is 55, per prior healthcare studies33

EHR, electronic health record.
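The SUS and NASA-TLX totals described in Table 1 can be reproduced with the instruments' standard scoring formulas. The Python sketch below uses hypothetical item responses (not study data) and assumes the unweighted ("raw") TLX variant, with the performance domain reverse scored as the table specifies:

```python
# Illustrative scoring for the SUS and raw NASA-TLX instruments in Table 1.
# Responses below are hypothetical; the standard scoring formulas are used.

def sus_score(responses):
    """Standard SUS scoring: 10 items rated 1-5; odd items contribute
    (r - 1), even items (5 - r); the sum is scaled by 2.5 to 0-100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

def tlx_workload(domains, reverse_performance=True):
    """Raw (unweighted) NASA-TLX: mean of six 0-100 domain ratings.
    Per the study design, the performance domain is reverse scored."""
    d = dict(domains)
    if reverse_performance:
        d["performance"] = 100 - d["performance"]
    return sum(d.values()) / len(d)

# Hypothetical participant
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))   # 80.0 -> above-mean usability
print(tlx_workload({"mental": 65, "physical": 15, "temporal": 50,
                    "performance": 45, "effort": 66, "frustration": 61}))
# 52.0 -> below the overwork threshold of 55
```

The thresholds in Table 1 (SUS > 68 for above-mean usability; TLX > 55 for overwork) can then be applied directly to these totals.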

Statistical analysis

Descriptive statistics were used for subgroup comparisons. Statistical analyses were conducted using independent-samples t tests and 2-way analyses of variance. First, independent-samples t tests were conducted on all 3 surveys (NASA-Task Load Index, Questionnaire for User Interaction Satisfaction, and System Usability Scale) and on performance data to examine differences in responses between genders. Participants were then categorized by age (25-29, 30-34, and 35+ years), which generally corresponds to level of clinical training, and a 2-way analysis of variance was conducted with gender and age category as fixed factors to determine whether these 2 variables interacted on either the survey responses or performance. All statistical testing included adjustment to control for potential confounding by age or clinical role. We used IBM SPSS Statistics version 22.0 (IBM Corp, Armonk, NY).
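As a sketch of the first analysis step, an independent-samples t test comparing genders on a survey score could look like the following. The data are hypothetical, chosen only to resemble the reported NASA-TLX group means, and the study itself used SPSS rather than Python:

```python
# Sketch of the between-gender comparison described above: an independent-
# samples t test on survey scores. The scores below are hypothetical.
from scipy import stats

men_tlx = [54, 48, 60, 57, 51, 55, 49, 58, 53, 56, 52, 59]          # n = 12
women_tlx = [35, 40, 28, 44, 31, 38, 42, 30, 36, 39, 33, 41, 37]    # n = 13

# Pooled-variance (Student) t test, the SPSS default for equal variances
t_stat, p_value = stats.ttest_ind(men_tlx, women_tlx)
diff = sum(men_tlx) / len(men_tlx) - sum(women_tlx) / len(women_tlx)
print(f"mean difference = {diff:.1f}, t = {t_stat:.2f}, P = {p_value:.4f}")
```

The subsequent 2-way analysis of variance (gender × age category as fixed factors) follows the same pattern but tests the interaction term in addition to the main effects.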

RESULTS

Twenty-five ICU physicians participated in the study (11 residents, 9 fellows, and 5 attending physicians; Table 2). Twelve (48%) were men; mean age was 33 (range, 28-55) years, and mean weekly Epic use was 31.1 (interquartile range, 7.95-52.1) hours. Mean prior experience with Epic was 4 (interquartile range, 2.0-5.5) years. With such a narrow range of experience, further subgroup comparisons by this variable were not conducted. Similarly, age and clinical role tracked very closely. Therefore, subgroup analyses focused on gender.

Table 2.

Baseline characteristics of physician participants

All (N = 25) Men (n = 12) Women (n = 13)
Age, y 33.2 ± 6.1 34.9 ± 7.6 31.5 ± 3.1
Clinical role
 Resident 11 5 6
 Fellow 9 3 6
 Attending 5 4 1
Epic experience a
Years 4.2 ± 1.3 4.3 ± 1.4 4.0 ± 1.0
Hours per week 32.6 ± 22 29.8 ± 17.9 35.3 ± 25.4

Values are mean ± SD or n.

aSelf-reported.

Performance

As shown in Table 3, the mean performance score was 19.1 of a possible 21 points, equaling an accuracy rate of 91.2% ± 7.1%. Overall task performance scores were similar for men (90% ± 9.3%) and women (92% ± 4.4%), with no statistically significant differences (difference = 2.0%; 95% confidence interval [CI], –8.6% to 3.3%; P = .374). Scores for each individual patient case were likewise similar, with men and women performing equally well.

Table 3.

Breakdown of task performance and EHR efficiency during simulation testing

Simulation patient case (task performance; efficiency measures: time, mouse clicks, EHR screens visited)

1. Multisystem organ failure (total score = 6 points)
  • Task performance: All 5.0 ± 0.72 (83.3%); Men 4.9 ± 0.86 (81.3%); Women 5.1 ± 0.58 (85.3%)
  • Time (minutes:seconds): All 10:07 ± 3:33; Men 12:05 ± 5:27; Women 9:28 ± 3:50
  • Mouse clicks: All 91 ± 38; Men 101 ± 46; Women 84 ± 28
  • EHR screens visited: All 23 ± 5; Men 26 ± 5; Women 23 ± 5

2. Acute hypoxic respiratory failure (total score = 4 points)
  • Task performance: All 3.7 ± 0.35 (92.5%); Men 3.8 ± 0.34 (93.8%); Women 3.7 ± 0.38 (91.4%)
  • Time (minutes:seconds): All 6:01 ± 3:31; Men 5:56 ± 2:23; Women 5:33 ± 2:38
  • Mouse clicks: All 55 ± 22; Men 61 ± 24; Women 48 ± 19
  • EHR screens visited: All 17 ± 5; Men 19 ± 5; Women 17 ± 6

3. Sepsis (total score = 7 points)
  • Task performance: All 6.7 ± 0.69 (95.1%); Men 6.6 ± 0.90 (94.0%); Women 6.7 ± 0.44 (96.2%)
  • Time (minutes:seconds): All 9:13 ± 1:12; Men 10:20 ± 3:18; Women 8:19 ± 1:50
  • Mouse clicks: All 107 ± 32; Men 116 ± 37; Women 101 ± 27
  • EHR screens visited: All 27 ± 7; Men 29 ± 6; Women 28 ± 9

4. Volume overload (total score = 4 points)
  • Task performance: All 3.8 ± 0.65 (95.0%); Men 3.7 ± 0.89 (91.7%); Women 3.9 ± 0.28 (98.0%)
  • Time (minutes:seconds): All 7:29 ± 2:00; Men 8:25 ± 3:15; Women 6:45 ± 2:31
  • Mouse clicks: All 74 ± 28; Men 80 ± 36; Women 68 ± 16
  • EHR screens visited: All 18 ± 6; Men 20 ± 7; Women 18 ± 4

Total score (accuracy), all cases (total score = 21 points)
  • Task performance: All 19.2 ± 1.5 (91.2%); Men 18.9 ± 1.96 (90.0%); Women 19.4 ± 0.93 (92.0%)
  • Time (minutes:seconds): All 34:43 ± 8:28; Men 38:40 ± 10:48; Women 31:36 ± 8:42
  • Mouse clicks: All 327 ± 87; Men 355 ± 101; Women 301 ± 67
  • EHR screens visited: All 85 ± 19; Men 83 ± 21; Women 86 ± 20
Difference, men vs women
  • Task performance: difference = 0.5 points; 95% CI, –0.753 to 1.753; P = .374
  • Time: difference = 7:04; 95% CI, –1.02 to 15.2; P = .21
  • Mouse clicks: difference = 54; 95% CI, –16 to 124; P = .13
  • EHR screens visited: difference = 3; 95% CI, –14 to 20; P = .71

Values are mean ± SD or mean ± SD (%).

CI, confidence interval; EHR, electronic health record.

Efficiency

Though the differences were not statistically significant, women consistently had higher levels of efficiency than men on several indicators, including task completion time and total mouse clicks. As per Table 3, participants completed all 4 cases in a mean time of 34.7 ± 8.5 minutes, but did not spend an equivalent amount of time on each case. In general, female physicians spent less total time completing all 4 cases, with a mean total time of 31.6 ± 8.7 minutes while men spent a mean total of 38.7 ± 10.8 minutes (difference = 7.1; 95% CI, –3.23 to 16.13; P = .207). No interactions were observed between either age group or clinical role and task completion time.

Clicks: Participants recorded a total mean number of 327 ± 87 mouse clicks in completing all 4 test cases (Table 3). The distribution of mouse clicks corresponded to the time spent on each case. Women recorded fewer total mouse clicks compared with men, with a mean number of 301 ± 67 mouse clicks and 355 ± 101 mouse clicks, respectively (difference = 54; 95% CI, –13.411 to 121.211; P = .13). Female physicians visited 3 more EHR screens in total (n = 86) as compared with male physicians (n = 83) (difference = 3; 95% CI, –14 to 20; P = .71).
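For readers who want to check figures like these, the difference and 95% CI can be recovered from the reported group summaries (mean, SD, n) alone. The sketch below assumes a pooled-variance (Student) t interval, which approximately reproduces the mouse-click CI in Table 3; whether the authors used exactly this construction is an assumption:

```python
# Recovering a mean difference and 95% CI from group summary statistics,
# assuming a pooled-variance (Student) t interval.
import math
from scipy.stats import t

def diff_ci(m1, s1, n1, m2, s2, n2, alpha=0.05):
    """Difference of two means with a pooled-variance t confidence interval."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))        # standard error of the difference
    crit = t.ppf(1 - alpha / 2, n1 + n2 - 2)       # critical t value
    d = m1 - m2
    return d, d - crit * se, d + crit * se

# Mouse clicks from Table 3: men 355 ± 101 (n = 12), women 301 ± 67 (n = 13)
d, low, high = diff_ci(355, 101, 12, 301, 67, 13)
print(f"difference = {d:.0f}, 95% CI {low:.0f} to {high:.0f}")
```

Running this on the click summaries yields a difference of 54 clicks with a CI close to the –16 to 124 reported in Table 3.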

Gender differences in perceived workload, satisfaction, and usability

Physicians’ reported workload, satisfaction, and usability scores were calculated from the NASA-Task Load Index, Questionnaire for User Interaction Satisfaction, and System Usability Scale (SUS), respectively. Across the board, we found significant gender differences for each of the 3 surveys (Figure 1).

Figure 1.


Results from instruments measuring perceived workload (NASA-Task Load Index [NASA TLX] Survey). White circles represent outliers. Light gray territory highlights scores above the median while dark gray shows scores below the median. The median is indicated in red. Differences that are significant at the .05 level (2-tailed) are denoted by 1 asterisk, while differences that are significant at the .01 level (2-tailed) are denoted by 2 asterisks.

Workload: Overall, male physicians reported a significantly higher perceived EHR workload than female physicians, with mean workloads of 54.3 ± 5.95 and 36.8 ± 10.57, respectively (difference = 17.5; 95% CI, 41.8 to 48.6; P < .001); women's mean perceived workload was almost a third lower than men's. Using industry-standard thresholds for overwork (total Task Load Index >55), male participants exhibited overwork during the EHR simulation whereas women did not (Supplementary Appendix A).33

Effort and frustration: Male physicians reported significantly higher levels of frustration with the EHR than women, with mean frustration scores more than twice as high: 61.25 ± 18.8 vs 28.1 ± 21.6 (difference = 33.15; 95% CI, 19.0 to 47.3; P < .001). As shown in Figure 1, male physicians also reported significantly more effort in completing tasks in the EHR than women, with a mean score of 66.3 ± 10.7 for men vs 41.1 ± 20.8 for women (difference = 25.2; 95% CI, 11.2 to 38.9; P < .001). Similarly, men reported that completing tasks in the EHR was significantly more mentally demanding, with a mean score of 65.0 ± 18.8 for men vs 45.9 ± 21.6 for women (difference = 19.1; 95% CI, 2.2 to 35.9; P = .02). For performance, female physicians had a mean score of 50.7 ± 14.3 and male physicians 44.1 ± 22.2, a difference that was not statistically significant (P = .38). For physical demand, male physicians reported a mean score of 15 ± 8.2 and female physicians 11.1 ± 10.2, also not statistically significant (P = .31). No interactions were found between either age group or clinical role and perceived EHR workload.

EHR satisfaction: Female physicians reported significantly higher satisfaction with the ease of use of the EHR interface than men (mean, 4.08 ± 0.49 vs 3.42 ± 0.99; difference = 0.66; 95% CI, 0.02 to 1.32; P = .03). Female physicians also found the EHR system significantly easier to learn to operate (mean, 5.77 ± 1.74 vs 4.17 ± 1.47; difference = 1.6; 95% CI, 0.256 to 2.94; P = .002). Women reported marginally higher satisfaction than men with information accessibility (difference = 1.08; 95% CI, –1.05 to 2.81; P = .06) and with the ability to learn new features through trial and error (difference = 1.31; 95% CI, –0.178 to 2.97; P = .07). No interactions were found between either age group or clinical role and satisfaction (Figure 2 and Supplementary Appendix B).

Figure 2.


Results from instruments measuring perceived satisfaction with the EHR (Questionnaire for User Interaction Satisfaction). White circles represent outliers. Light gray territory highlights scores above the median while dark gray shows scores below the median. The median is indicated in red. Differences that are significant at the .05 level (2-tailed) are denoted by 1 asterisk, while differences that are significant at the .01 level (2-tailed) are denoted by 2 asterisks.

EHR usability: Women’s perceived overall usability of the EHR was only marginally higher than men’s, with a mean total of 66.35 ± 11.75 of 100 for women vs 56.04 ± 14.04 for men (difference = 10.31; 95% CI, 10.30 to 15.25; P = .06). Both scores fall in the “marginal usability” range by traditional interpretation of the SUS tool. Female physicians reported significantly lower ratings than male physicians for both EHR complexity (difference = 0.77; 95% CI, –1.125 to –0.070; P = .03) and EHR cumbersomeness (difference = 0.69; 95% CI, –1.414 to –0.137; P = .01). There were no significant differences in reported confidence using the EHR (difference = 0.38; 95% CI, –0.370 to 1.139; P = .30) (Figures 3 and 4 and Supplementary Appendix C).

Figure 3.


Results from instruments measuring perceived individual usability of the electronic health record system (System Usability Scale). White circles represent outliers. Light gray territory highlights scores above the median while dark gray shows scores below the median. The median is indicated in red. Differences that are significant at the .05 level (2-tailed) are denoted by 1 asterisk, while differences that are significant at the .01 level (2-tailed) are denoted by 2 asterisks.

Figure 4.


Results from instruments measuring total perceived usability of the electronic health record system (System Usability Scale [SUS] survey). White circles represent outliers. Light gray territory highlights scores above the median while dark gray shows scores below the median. The median is indicated in red. Differences that are significant at the .05 level (2-tailed) are denoted by 1 asterisk, while differences that are significant at the .01 level (2-tailed) are denoted by 2 asterisks.

Semistructured interview findings

During 1-on-1, semistructured interviews, 80% (n = 20) of participants reported frustration when trying to find information in the EHR. Consistent with the survey data, men expressed more frustration than women: 11 of 12 male physicians vs 9 of 13 female physicians reported difficulty locating information. When asked about relationships between EHRs and burnout, 68% believed the EHR contributes to burnout (75% of male vs 62% of female physicians; Table 4).

Table 4.

Quantitative results from 1-on-1, semistructured interviews

All (N = 25) Men (n = 12) Women (n = 13)
Difficulty finding information in EHR
 Yes 20 (80) 11 (92) 9 (69)
 No 5 (20) 1 (8.3) 4 (31)
EHR contributes to burnout
 Yes 17 (68) 9 (75) 8 (62)
 No 8 (32) 3 (25) 5 (38)

Values are n (%).

EHR, electronic health record.

For both men and women, the major complaint focused on time spent looking through patient information, alert fatigue, and documentation (Table 5). Participants repeatedly said they became doctors to work with people, not computers.

Table 5.

Qualitative results from 1-on-1, semistructured interviews

Frustrated by EHR “I’ve definitely missed important features on a patient because I was unable to find correct culture data based on how it was reported in different tabs. I can say, in many cases, that if there is a positive result, that I don’t feel comfortable unless I check 3 different places to see if there isn’t a result, and that’s a big deal for me.” – Internal Medicine Resident “I have trouble looking at … medications, and comparing between current medications and … recent medications and figuring out when a medication ended, so that I can figure out when to start it. I have a challenging time interpreting.” – ICU Fellow
Not frustrated by EHR “Umm. Could be easier. You know, experience matters so I know where to look for most everything now. So, yes and no.” – ICU Fellow “In general, I’d say, after 3, almost 3 years now using Epic, I’m … fairly proficient.” – Internal Medicine Resident
EHR contributes to burnout “The (healthcare) system has used the EHR as a tool to decrease overhead costs on clerical personnel by shifting the clerical duties over to the physician. I definitely know ordering meds requires more clicks than it should. My two biggest issues are having to go to too many different places to get my key elements. Not being able to find half of what’s in the EMR cause they just don’t make it, like all those nurses notes, they just don’t make it [needed information] accessible.” – Attending Physician “I feel like you have to spend so much time on it, which is just the nature of what medicine has become. I don’t know if it’s so much the EHR itself, it’s just that we have so much information now and we have to review it all and there so much documentation that has to be done that’s required.”– Internal Medicine Resident
EHR does not contribute to burnout “I think the EHR has made things easier for us. I like that I can go to any computer station wherever I am and be able to sign off on a couple of notes and things like that.” – Attending Physician


EHR, electronic health record; ICU, intensive care unit.

Different for new physicians: The participants who did not see a relationship between the EHR and burnout (n = 8, 32%) noted that they had only ever practiced with EHRs and therefore could not compare their work with pre-EHR eras. By contrast, participants who had practiced in the pre-EHR era were grateful not to be chasing paper charts or reading illegible handwriting.

EHR features: favorites and frustrations

We asked participants to name their top 3 favorite and most frustrating features of Epic (Table 6). For favorite EHR features, male and female physicians both cited the summary features in the flowsheet screen, which organize and present patient information, and the Results Review and active medication screens (including medication interactions, allergy warnings, and longitudinal trends). The third favorite feature differed between genders. Men preferred the “Care Everywhere” function, which allows viewing records from different institutions using the same EHR system. In contrast, the majority of women favored the general search function, analogous to a “Google search bar” within a patient’s chart, and the filter option in chart review, as these features expedite information seeking and retrieval.

Table 6.

Interview responses, and most favorite and frustrating EHR features

Semistructured interviews All Men Women
(N = 25) (n = 12) (n = 13)
Favorite EHR features
 All:
  1. Summary features (tabs that summarize patient information, such as flowsheets)
  2. Search function
  3. Results/medication management
 Men:
  1. Summary features
  2. Integrated system
  3. Results/medication management + customizability
 Women:
  1. Results/medication management
  2. Search function
  3. Summary features

Most frustrating EHR features
 All:
  1. Poorly designed interface
  2. System functionality problems (how well the system functions)
  3. Information redundancies
 Men:
  1. Poor interface design
  2. System functionality
  3. Information overload
 Women:
  1. Poor interface design
  2. System functionality
  3. Information redundancies in results/medication management + customizability

EHR, electronic health record.

EHR usability: Poorly designed interfaces and functionality problems were among the top frustrations noted by both women and men. Participants noted poor interface designs and numerous error messages associated with orders (leading to “alert fatigue”). They cited problems arising from variability between inpatient and outpatient formats and inconsistent display of lab results. Physicians also reported difficulty navigating blood product administration, difficulty determining the total volume and timing of intravenous fluid administration, and the fact that the microbiology tab shows data only from the current hospitalization, without old microbiology data juxtaposed.

Male physicians reported high frustration with information overload, explicitly complaining of too much information in the EHR (eg, the need to visit multiple screens because of poor design, too many pathways to access the same information, non-objective organization of information, and jumbled, dense formatting of the medication administration record). Women were frustrated by redundancy (specifically, in the Results Review and active medication tabs) and discordance in data presentation (eg, outdated and conflicting information in the Vital Signs tab compared with the flowsheet). As with the men, women also reported poor medication administration record formats for reviewing active and current medications.

DISCUSSION

To our knowledge, this is the first study to employ multiple methods to investigate physician differences in use of EHRs—including performance measures, observations, surveys, and attitudes. Our study leveraged an extensive mixed-methods approach to better understand physician-EHR behavior and perceptions. By including 4 EHR simulation patient cases, 4 surveys, and 1-on-1, face-to-face interviews, our data collectively offer a comprehensive comparison of physicians’ performance, efficiency, and attitudes toward EHRs.

We report profound gender-based differences in EHR use and perceptions. Female physicians had significantly lower perceived workload stress during EHR tasks, higher satisfaction with the EHR interface, and greater perceived usability of the EHR. Women were also consistently and significantly more satisfied than men with the amount of effort required to find information and complete EHR tasks, and they reported higher satisfaction with the EHR’s usability, complexity, and cumbersomeness. These findings correspond with trends toward greater EHR efficiency (clicks, time) for female physicians despite similar task performance. Although the differences in efficiency were not statistically significant in our study, even small amounts of time and clicks saved per patient may be clinically significant, especially when scaled to a typical physician workload of 10-20 patients.

A possible explanation for the observed gender differences is found in the interview responses. Female physicians expressed higher satisfaction and lower frustration, and they explicitly identified their use of the general search bar and filters in the EHR, which may explain their greater efficiency and satisfaction. When retrospectively reviewing each participant’s screen-capture video, we confirmed that female physicians did indeed use the search function and the filter feature far more than male physicians did, leading to a better search process, quicker review of information, and a less frustrating experience. If these features represent a more efficient and user-friendly way to navigate patient charts for some clinical tasks, then women’s differential use of them may help explain the observed correlation with efficiency and satisfaction, and may also represent an opportunity for improved EHR training.

A large majority of subjects experienced major difficulties finding important information in EHRs, arguing that EHRs contribute to professional burnout.

Previous studies have shown that physician gender may influence quality of care.34–36 However, none of these studies investigated whether EHR challenges contributed to these patient outcomes. The consistent association between gender and both the objective and subjective outcomes observed in this study warrants further investigation. Potential explanations include gender-specific differences in attention to detail.34 We also note that our findings contrast with large-scale studies demonstrating higher rates of burnout among female physicians37; they add nuance to the growing body of work examining physician burnout, and the inverse relationship between burnout and EHR satisfaction among women in our study suggests a possible protective effect that warrants further study.

Previous studies have investigated gender-related factors in technology acceptance and found them important for promoting technology acceptance among physicians. Although the differences were not statistically significant, this research noted a difference in acceptance between genders, with female providers showing higher levels of acceptance of the technologies.38,39 For example, one study reported that gender plays a role in how providers integrate mobile technologies into practice.40 Moreover, weak evidence suggested a positive correlation between young male physicians and EHR use during primary care consultations.41 Our results support previous findings that gender factors influence technology acceptance and use, and they build on this work by demonstrating gender differences in EHR use, efficiency, and satisfaction.

Strengths, limitations, and future directions

Strengths of this study include its mixed-methods approach, the high fidelity of the simulated EHR cases to actual patient cases, and the diversity of physician roles. Another strength is the focus on medical intensivists, whose EHR experience combines data-intense patient records with complex coordination of care plans, allowing for a holistic assessment of EHR usability.

Limitations include the single setting and single EHR system, which reduce generalizability; however, Epic is the most commonly used EHR among acute care hospitals in the United States.42 The breadth of our mixed-methods testing limited the overall sample size, and the relatively small sample could have led to chance findings or sampling bias. The slight gender imbalance in the attending physician group may also have introduced bias. Nevertheless, the literature indicates that 85%-97% of usability issues emerge with a sample of 5-15 participants31–33; our sample (n = 25) exceeds conventional usability study standards, which adds credibility to our findings.

The focus on the MICU, while possibly comparable to other ICUs and similarly complex inpatient settings, may limit the generalizability of the findings to other medical areas and specialties; additional research in other settings is needed to explore wider generalizability. Moreover, although consistent gender differences arose, EHR satisfaction may have been influenced by the nature of the medical specialty examined in this study.

Future work should investigate the potential ramifications of such gender-based usability patterns for patient outcomes. This study detected gender-based differences in EHR usability patterns, suggesting distinguishable satisfaction and burnout levels. Although differences in performance were slight, more research is needed to investigate the effect of these gender-based usability differences on patient safety and the long-term well-being of physicians. Further studies are also needed to assess the impact of EHR use on other professional roles, such as nursing, pharmacy, and other health professions.

CONCLUSION

Among ICU physicians, we observed significant gender-based differences in perceived EHR workload, satisfaction, and usability, which corresponded with similar, objective patterns in EHR efficiency. We also found consensus around the challenges of finding information in EHRs. These findings suggest that EHR redesign and tailored EHR training may improve EHR use. Understanding the reasons for these differences may provide insights to improve physician performance, efficiency, and satisfaction with EHRs, as well as to combat physician burnout.

FUNDING

This work was supported under grant number 1T15LM012500-01 and through the University of North Carolina at Chapel Hill Office of the Provost Junior Faculty Award.

CONTRIBUTIONS

Study concept and design involved SK and CC. Acquisition, analysis, or interpretation of data involved SK, CC, and TB. Drafting of the manuscript involved SK, CC, PO, RK, and SSC. Critical revision of the manuscript for important intellectual content involved SK, RK, and SSC. Statistical analysis involved PO and SK. Administrative, technical, or material support involved TB and SK. Study supervision involved SK and SSC.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

None declared.


REFERENCES

1. Shanafelt TD, Hasan O, Dyrbye LN, et al. Changes in burnout and satisfaction with work-life balance in physicians and the general US working population between 2011 and 2014. Mayo Clin Proc 2015; 90 (12): 1600–13.
2. Downing N, Bates DW, Longhurst CA. Physician burnout in the electronic health record era: are we ignoring the real cause? Ann Intern Med 2018; 169 (1): 50–1.
3. Gardner RL, et al. Physician stress and burnout: the impact of health information technology. J Am Med Inform Assoc 2019; 26 (2): 106–14.
4. Carayon P, Wetterneck TB, Alyousef B, et al. Impact of electronic health record technology on the work and workflow of physicians in the intensive care unit. Int J Med Inform 2015; 84 (8): 578–94.
5. Arndt BG, Beasley JW, Watkinson MD, et al. Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Ann Fam Med 2017; 15 (5): 419–26.
6. Neri PM, Redden L, Poole S, et al. Emergency medicine resident physicians' perceptions of electronic documentation and workflow: a mixed methods study. Appl Clin Inform 2015; 6 (1): 27–41.
7. Grabenbauer L, Skinner A, Windle J. Electronic health record adoption - maybe it's not about the money: physician super-users, electronic health records and patient care. Appl Clin Inform 2011; 2 (4): 460–71.
8. Tawfik DS, Profit J, Morgenthaler TI, et al. Physician burnout, well-being, and work unit safety grades in relationship to reported medical errors. Mayo Clin Proc 2018; 93 (11): 1571–80.
9. Singh H, Spitzmueller C, Petersen NJ, et al. Information overload and missed test results in EHR-based settings. JAMA Intern Med 2013; 173 (8): 702–4.
10. Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc 2013; 20 (e1): e2–8.
11. Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. J Biomed Inform 2005; 38 (1): 75–87.
12. Zhang J. Human-centered computing in health information systems. Part 1: analysis and design. J Biomed Inform 2005; 38 (1): 1–3.
13. Hultman G, Marquard J, Arsoniadis E, et al. Usability testing of two ambulatory EHR navigators. Appl Clin Inform 2016; 7 (2): 502–15.
14. Ratwani RM, Hodgkins M, Bates DW. Improving electronic health record usability and safety requires transparency. JAMA 2018; 320 (24): 2533.
15. Khairat S, Coleman GC, Russomagno S, et al. Assessing the status quo of EHR accessibility, usability, and knowledge dissemination. EGEMS (Wash DC) 2018; 6 (1): 9.
16. Morris A. Computer applications. In: Hall JB, Schmidt GA, Wood LDH, eds. Principles of Critical Care. New York: McGraw-Hill Professional; 1992: 500–14.
17. Manor-Shulman O, Beyene J, Frndova H, et al. Quantifying the volume of documented clinical information in critical illness. J Crit Care 2008; 23 (2): 245–50.
18. Brixey JJ, Tang Z, Robinson DJ, et al. Interruptions in a level one trauma center: a case study. Int J Med Inform 2008; 77 (4): 235–41.
19. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012; 21 (7): 535–57.
20. Roter DL, Hall JA, Aoki Y. Physician gender effects in medical communication: a meta-analytic review. JAMA 2002; 288 (6): 756–64.
21. Mauksch LB. Questioning a taboo: physicians' interruptions during interactions with patients. JAMA 2017; 317: 1021–2.
22. Cooper-Patrick L, et al. Race, gender, and partnership in the patient-physician relationship. JAMA 1999; 282 (6): 583–9.
23. Barr DA. Gender differences in medicine—from medical school to Medicare. Mayo Clin Proc 2017; 92 (6): 855–7.
24. Khairat S, Coleman C, Newlin T, et al. A mixed-methods evaluation framework for EHR usability studies. J Biomed Inform 2019; 94: 103175.
25. Brooke J. SUS: a quick and dirty usability scale. In: Jordan PW, Thomas B, McLelland I, Weerdmeester BA, eds. Usability Evaluation in Industry. London: Taylor & Francis; 1996: 189–94.
26. Lewis JR, Sauro J. The factor structure of the system usability scale. In: Kurosu M, ed. HCD 2009: Human Centered Design. Berlin: Springer; 2009: 94–103.
27. Wright A, Neri PM, Aaron S, et al. Development and evaluation of a novel user interface for reviewing clinical microbiology results. J Am Med Inform Assoc 2018; 25 (8): 1064–8.
28. Belden JL, Koopman RJ, Patil SJ, et al. Dynamic electronic health record note prototype: seeing more by showing less. J Am Board Fam Med 2017; 30 (6): 691–700.
29. Sauro J. Measuring Usability with the System Usability Scale (SUS). 2011. https://measuringu.com/sus/. Accessed May 1, 2019.
30. Chin JP, Diehl VA, Norman KL. Development of an instrument measuring user satisfaction of the human-computer interface. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Washington, DC: ACM; 1988: 213–18.
31. Pickering BW, Herasevich V, Ahmed A, Gajic O. Novel representation of clinical information in the ICU: developing user interfaces which reduce information overload. Appl Clin Inform 2010; 1 (2): 116–31.
32. Hart SG, Staveland LE. Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Advances in Psychology. Amsterdam: North-Holland; 1988: 139–83.
33. Chera BS, Mazur L, Jackson M, et al. Quantification of the impact of multifaceted initiatives intended to improve operational efficiency and the safety culture: a case study from an academic medical center radiation oncology department. Pract Radiat Oncol 2014; 4 (2): e101–8.
34. Tsugawa Y, Jena AB, Figueroa JF, et al. Comparison of hospital mortality and readmission rates for Medicare patients treated by male vs female physicians. JAMA Intern Med 2017; 177 (2): 206–13.
35. Kim C, McEwen LN, Gerzoff RB, et al. Is physician gender associated with the quality of diabetes care? Diabetes Care 2005; 28 (7): 1594–8.
36. Berthold HK, Gouni-Berthold I, Bestehorn KP, et al. Physician gender is associated with the quality of type 2 diabetes care. J Intern Med 2008; 264 (4): 340–50.
37. Kane L. Medscape National Physician Burnout, Depression & Suicide Report 2019. 2019. https://www.medscape.com/slideshow/2019-lifestyle-burnout-depression-6011056?src=WNL_physrep_190116_burnout2019&uac=303966MG&impID=1861588&faf=1#4. Accessed May 1, 2019.
38. Gagnon M-P, Ghandour EK, Talla PK, et al. Electronic health record acceptance by physicians: testing an integrated theoretical model. J Biomed Inform 2014; 48: 17–27.
39. Tubaishat A. Perceived usefulness and perceived ease of use of electronic health records among nurses: application of technology acceptance model. Inform Health Soc Care 2018; 43 (4): 379–89.
40. Schooley B, Walczak S, Hikmet N, et al. Impacts of mobile tablet computing on provider productivity, communications, and the process of care. Int J Med Inform 2016; 88: 62–70.
41. Lanier C, Cerutti B, Dao MD, et al. What factors influence the use of electronic health records during the first 10 minutes of the clinical encounter? Int J Gen Med 2018; 11: 393–8.
42. Bermudez E, Warburton P. US Hospital EMR Market Share 2018: Small Hospitals Hungry for New Technology. 2018. https://klasresearch.com/report/us-hospital-emr-market-share-2018/1279. Accessed May 1, 2019.
