
People First: Types of AI and How to Put Them to Work on Clinician Burnout and Health Disparities

Kevin Johnson, MD

Kevin Johnson, MD, MS, University Professor of Biomedical Informatics, Computer Science, Pediatrics, and Science Communication at the University of Pennsylvania; Vice President of Applied Clinical Informatics in the University of Pennsylvania Health System

Artificial intelligence (AI) isn’t new: The first neural network computer appeared in the early 1950s. Enthusiasm for AI revved up in the 1970s, but early iterations of AI stalled out through a combination of too much hype, too much complexity, and too little attention to the human element.

We can learn from prior AI failures, not to mention failures of EHR implementation, to successfully integrate certain types of AI into daily healthcare operations. We will not solve the clinician burnout crisis or achieve equity in healthcare using AI alone, but strategic applications of AI can assist us in both areas.

The cornerstones of successful AI implementation will be people: those who envision, lead, and train, and those who reskill, adapt, and integrate AI tools into frontline care, whether for administrative or patient-facing purposes.

Four Main Types of AI

Right now, generally speaking, AI works in one of four ways:

  1. Diagnostic: AI can sort information into categories to answer questions about a patient’s status, playing a detection role. For instance: Does this patient have prostate cancer?
  2. Predictive: AI can assist us with pharmacogenomics, identifying patients who are likely to respond to a particular therapy. It can also aid in predicting length of hospital stay, risk of readmission, and other scenarios in which hospital operations and patient safety overlap.
  3. Prognostic: AI can help us anticipate the probable course of a disease and personalize patients’ care, such as when constellations of genes or variants make cancers harder to treat.
  4. Generative: Generative AI produces new tokens from existing ones: text from text, audio from audio, pictures from pictures, and so on. It can assist us with drafting responses to routine patient questions and other at-desk tasks. But because gen-AI has a knack for confidently expressed inaccuracies, clinicians must review any information it provides.

Six Key Actions for AI

In medicine, we do six main things with information: (1) generate messages (a.k.a. inbox), (2) order, (3) document, (4) search, (5) summarize, and (6) guide (a.k.a. clinical decision support). AI-related tools already exist for each of those domains. Soon more organizations will have access to tools that give us better data insights and manage revenue cycles. We’ll be able to summarize population health data, improve patient experiences, and get help with back-office functions such as capacity management. Because of the risk-benefit ratios, these sorts of applications will likely mature more quickly than most diagnostic applications.

In terms of streamlining back-office functions, here’s some of what you can expect:

  • Scheduling: Ideally, patients should be able to specify a need rather than a specific practitioner. For large organizations that have, say, 30 different types of orthopedic doctors, not all of whom are in the Department of Orthopedic Surgery, AI scheduling tools can streamline processes for patients and organizations.
  • Patient portal messages: AI tools for responding to patient portal messages are already here, and publications regarding their results are emerging. When patients ask typical questions like, “What does this lab result mean?” a generative pretrained transformer, or GPT, can generate a draft answer (which requires editing); a minimal sketch of this workflow appears after this list. One study found that this sort of patient portal message response tool did not reduce clinicians’ work but did increase their satisfaction, presumably by relieving the cognitive load of crafting a message de novo.
  • Clinical documentation: What if, instead of spending appointment time documenting a clinical encounter, we could consistently spend that time engaging with the patient and family? In the University of Pennsylvania Health System, we have experienced promising early results with ambient scribe technology.
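
To make the draft-then-review workflow concrete, here is a minimal sketch in Python. It is an illustration only, not a description of any production tool: the model choice, the system prompt, and the draft_reply helper are all assumptions. The one firm requirement it encodes is the rule stated above, that a clinician reviews every draft before anything reaches a patient.

```python
# Minimal sketch of a draft-first, clinician-review-always workflow for
# patient portal messages. Uses the openai Python package; the model name,
# system prompt, and review flag are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You draft replies to routine patient portal questions for a clinician "
    "to review. Be accurate, plain-spoken, and brief. Do not make a "
    "diagnosis or change a treatment plan; flag anything urgent."
)

def draft_reply(patient_question: str) -> dict:
    """Return a draft reply that a clinician must edit and approve."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_question},
        ],
    )
    return {
        "draft": response.choices[0].message.content,
        "requires_clinician_review": True,  # drafts are never auto-sent
    }

# Example: the typical question quoted above.
print(draft_reply("What does this lab result mean?")["draft"])
```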

Burnout Is Prevalent

Burnout rates continue to hover at around 50 percent for physicians; the proportion among nurses may be higher. Walk around any hospital and you’ll see it: People are exhausted, and it just never ends.

Burnout is a systems issue. Weights piled onto the shoulders of clinicians include overscheduling, understaffing, the perpetually increasing length of clinical notes, our famously unusable EHRs, and rising RVU targets.

Burnout may be amplified or diminished by individual factors, including social and demographic circumstances, such as belonging to the sandwich generation.

Health Disparities Call for New Screening Requirements

CMS recently finalized rules that require reporting of, and thus screening for, five domains of social need: (1) food insecurity, (2) interpersonal safety, (3) housing insecurity, (4) transportation insecurity, and (5) utility difficulties. Screening for the social determinants of health (SDOH), though it may be a step in the right direction for equity, will increase workload, which is not a step in the right direction for burnout. Clinician leaders will need time, space, and personnel to implement SDOH screening. That means adjustments to workflows, staffing, expectations for patient encounters, and methods of data collection.
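
As one sketch of what structured data collection for these five domains might look like, consider the record below. The field names and the positive_screens helper are illustrative assumptions; a real implementation would map to the organization’s EHR fields and CMS reporting specifications.

```python
# Minimal sketch of a structured record for the five reportable screening
# domains listed above. Field names are illustrative assumptions, not CMS
# specifications.
from dataclasses import dataclass, fields

@dataclass
class SDOHScreening:
    patient_id: str
    food_insecurity: bool
    interpersonal_safety: bool
    housing_insecurity: bool
    transportation_insecurity: bool
    utility_difficulty: bool

    def positive_screens(self) -> list[str]:
        """Names of the domains in which the patient screened positive."""
        return [
            f.name for f in fields(self)
            if f.name != "patient_id" and getattr(self, f.name)
        ]

# Example: a patient screening positive for food and transportation needs.
screen = SDOHScreening(
    patient_id="pt-001",
    food_insecurity=True,
    interpersonal_safety=False,
    housing_insecurity=False,
    transportation_insecurity=True,
    utility_difficulty=False,
)
print(screen.positive_screens())  # ['food_insecurity', 'transportation_insecurity']
```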

AI Can Help With Disparities and Burnout

AI may give us fresh insights and lift some burdens:

  • Searching for disparities data: When information related to health disparities is present in EHRs, it is often buried or scattered. AI can help with searching and analyzing this unstructured data.
  • Screening for SDOH: It can be challenging for those on the front lines of care delivery to ask patients very personal screening questions, and some patients may feel startled, then embarrassed, defensive, or even angry. To set the stage, some initial screening questions could potentially be completed with a chatbot from the patient’s home.
  • Documentation: As ambient scribe technologies become more widely available, physicians, nurses, and advanced practice clinicians will be able to apply their training while focusing on the patient, not a computer. We’ve reaped some rewards already in a trial currently underway at Penn. Around the country, technologies like DAX are reducing clinicians’ pajama time.
  • Patient portal messages: As mentioned, ChatGPT can draft messages to answer common questions, and practitioners can engineer their prompts to suit any level of health literacy before editing the result (see the prompt sketch after this list). This tech-assisted drafting reduces practitioners’ inbox burden, a major contributor to burnout. By addressing health literacy gaps, it battles health disparities. And by promoting clear communication, it improves patient safety.
  • Other patient communications: There are chatbot projects running all around the country. In some areas, like genetic counseling, incorporating chatbots into some patient communication could increase access to human professionals.
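
As promised in the portal-messages item above, here is a sketch of how a prompt might be tailored to a patient’s health literacy before the clinician edits the draft. The reading levels, the wording, and the build_prompt helper are illustrative assumptions, not a recommended clinical script.

```python
# Minimal sketch of tailoring a draft-message prompt to a patient's health
# literacy. The levels and wording are illustrative assumptions.
STYLE_BY_LITERACY = {
    "basic": "Explain at about a 5th-grade reading level, with no jargon.",
    "intermediate": ("Explain at about an 8th-grade reading level; define "
                     "any medical terms you use."),
    "advanced": "You may use standard medical terminology.",
}

def build_prompt(patient_question: str, literacy: str = "intermediate") -> str:
    """Compose the instruction an LLM would receive; a clinician still edits the draft."""
    style = STYLE_BY_LITERACY[literacy]
    return (
        "Draft a reply to the following patient portal question. "
        f"{style} Keep it under 150 words, and note that a clinician "
        "will review it before it is sent.\n\n"
        f"Patient question: {patient_question}"
    )

print(build_prompt("What does this lab result mean?", literacy="basic"))
```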

People Before Technology

To implement AI-powered tools in healthcare, we need to account for three layers of challenges: technology, process, and people.

  • Technology is the smallest piece and the easiest to solve. If the question of how to implement AI-powered tools in healthcare were an iceberg, technology would be at the surface. The real work lies below the waterline: process issues first, and then, at the base and much bigger, people issues.
  • Process issues should be familiar to us from precedent: As veterans of the EHR-conversion wars know, it’s easy to magnify the inefficiencies of an existing organizational structure through a misguided technology install. In integrating AI tools, we need to recognize and reduce our existing complexities.
  • People issues will be the gorilla in the room—and also key to our success or failure. We need to address willingness, skill, comfort, and cost. Willingness comes first because people may fear that they are being asked to learn the very thing that is about to take their job. Before we can train anyone, we have to persuade people to get on board with training.

Stanford’s Curtis Langlotz, MD, PhD, a leader in radiology, has famously said, “Artificial intelligence will not replace radiologists . . . but radiologists who use AI will replace radiologists who don’t.” That goes for all of us.


The guidelines suggested here are not rules, do not constitute legal advice, and do not ensure a successful outcome. The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.


The opinions expressed here do not necessarily reflect the views of The Doctors Company. We provide a platform for diverse perspectives and healthcare information, and the opinions expressed are solely those of the author.

