AI and Healthcare: Past, Present, Promise, and Peril
Past
The concept of thinking machines, first conceived as robots, has been around since the 1940s. Alan Turing recognized the value of distinguishing human communication from machine interaction and in 1950 proposed the Imitation Game, now known as the Turing Test. Computer science advanced quickly: by 1959 a computer had learned to play checkers better than its programmer. A conversation program designed to entertain hospitalized patients came close to passing the Turing Test in 1965. The first significant application of artificial intelligence (AI) in medicine began in 1972, when Stanford initiated work on MYCIN, a clinical decision support program that evaluated diagnostic test data to recommend treatment.
Present
The science of artificial, or augmented, intelligence has progressed to the point where "AI" is an umbrella term, with machine learning, adaptive learning, and generative learning as subsets. In machine and deep learning, programming is based on algorithms. The algorithms may be locked, which limits output to defined rule sets, or adaptive, in which case the program is trained with large data sets to differentiate information, recognize patterns, and provide interpretations of the data. The clinical decision support associated with electronic medical record medication order entry is an example of a locked algorithm, whereas adaptive learning has proved helpful in education and gaming. Advances in machine learning technology and programming have led to neural networks that more closely mimic how humans think. Neural networks can process auditory and visual data and have proved useful in natural language processing and in augmented review of diagnostic imaging in radiology, dentistry, pathology, and cardiology. The effectiveness of AI in healthcare depends on access to sufficient amounts of diverse, unbiased data for training and maintenance. A 2022 report from the Government Accountability Office (GAO) highlighted obstacles to obtaining training data, citing the hesitance of healthcare organizations to share data and the difficulty of negotiating data-sharing contracts. The old computing adage, "garbage in, garbage out," takes on new meaning and urgency when the output may determine human well-being.
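To make the distinction concrete, the sketch below contrasts a locked, rule-based check with an adaptive model trained on example data. It is a minimal illustration only; the drug names, dose limits, and training examples are hypothetical, and real clinical systems are considerably more involved.

```python
# Minimal sketch contrasting a "locked" algorithm with an "adaptive" one.
# All drug names, dose limits, and training data are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Locked algorithm: output is limited to a fixed, human-authored rule set,
# like a dose-range check at medication order entry.
MAX_DAILY_DOSE_MG = {"drug_a": 4000, "drug_b": 100}  # hypothetical limits

def locked_dose_check(drug: str, daily_dose_mg: float) -> str:
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is None:
        return "no rule defined"
    return "alert: exceeds maximum" if daily_dose_mg > limit else "ok"

# Adaptive algorithm: behavior is learned from labeled examples rather than
# written as explicit rules, so output quality depends on the training data.
# Features: [age, systolic_bp, heart_rate]; label: 1 = flagged for review.
training_features = [[72, 95, 118], [35, 120, 72], [80, 88, 130], [50, 130, 80]]
training_labels = [1, 0, 1, 0]

model = DecisionTreeClassifier().fit(training_features, training_labels)

print(locked_dose_check("drug_a", 5000))   # alert: exceeds maximum
print(model.predict([[68, 90, 125]]))      # pattern learned from the examples
```

The locked rule behaves the same way for every organization that installs it; the adaptive model's behavior, and its blind spots, are inherited from whatever data it was trained on.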
Generative AI is the latest and most advanced iteration. The term "generative" refers to the human-like ability to review characteristics, patterns, and trends in training information and generate work products in written, visual, and auditory form. Large language models (LLMs), such as ChatGPT, are generative. According to the creators of ChatGPT, "the dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests." Many other LLMs are in use, including Google's Bard and Meta's Llama. Although the programming behind ChatGPT is not open source, OpenAI makes the underlying models available through an application programming interface (API), creating opportunities for anyone with the skills to build applications on the technology.
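As a rough illustration of how a developer might build on an LLM through such an API, the following sketch uses the OpenAI Python client to hold a short, multi-turn exchange. The model name, prompts, and system instructions are illustrative assumptions, not recommendations for clinical use.

```python
# Minimal sketch of a dialogue with an LLM via the OpenAI Python client
# (openai package, v1+). Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The dialogue format is simply a running list of messages; keeping the
# history lets the model answer follow-up questions in context.
messages = [
    {"role": "system", "content": "You answer general wellness questions and "
                                  "remind users to consult a clinician."},
    {"role": "user", "content": "What does a hemoglobin A1c of 6.0% mean?"},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# A follow-up question is appended to the same history and resent.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Should I be worried about that number?"})
followup = client.chat.completions.create(model="gpt-4o", messages=messages)
print(followup.choices[0].message.content)
```

Because the full message history is resent with each request, the model can interpret the follow-up question in the context of the earlier exchange, which is what gives these applications their conversational feel.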
Promise
Artificial intelligence has gained a strong foothold in healthcare. Specialties such as cardiology and radiology use it widely for collecting and screening imaging data. Delayed, wrong, and missed diagnoses are a leading source of patient harm and a common contributing factor in medical professional liability claims. Machine learning–enabled diagnostic decision support (DDS) programs have been available for nearly a decade; consider the advantages of a DDS that can converse with the end user.
With the advent of generative AI and programs such as GPT-4, we are beginning to see a flood of uses in healthcare. AI has already shown promise in customer relationship management, and some of those successes translate to healthcare. Applications such as chatbots, automated reminders with a chat option, and interactive scheduling are already facilitating the patient care journey in healthcare organizations. Researchers and vendors are also exploring AI-generated care summaries, interactive patient education, and patient follow-up management.
Generative AI applications may help reduce the administrative burdens that frustrate physicians and other clinicians. LLMs have shown promise in activities such as coding and summarizing data, which could simplify billing processes such as prior authorizations and appeals. AI has also proved effective in improving healthcare workflows such as scheduling, inventory management, and clinical documentation. Real-time analysis of patient data can improve safety and mitigate morbidity by searching for specific indications of sepsis or patient decline, triggering rapid response, and alerting the appropriate personnel. As telehealth-based interventions such as remote patient monitoring gain traction in acute home care, the constant data streams can overwhelm clinicians. Artificial intelligence is a potential solution for converting that flood of data into prioritized, actionable information and communicating it to the appropriate care team members.
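As a highly simplified illustration of turning a raw data stream into prioritized, actionable alerts, the sketch below screens a feed of vital signs against the standard SIRS (systemic inflammatory response syndrome) criteria and flags readings that meet two or more. The data format and alert routing are hypothetical; this is an illustration, not a clinical decision tool.

```python
# Toy sketch: screen streaming vital signs against the SIRS criteria and raise
# an alert when two or more are met. Data format and routing are hypothetical.
from dataclasses import dataclass

@dataclass
class Vitals:
    patient_id: str
    temp_c: float
    heart_rate: int       # beats per minute
    resp_rate: int        # breaths per minute
    wbc_k_per_ul: float   # white blood cells, thousands per microliter

def sirs_criteria_met(v: Vitals) -> int:
    """Count how many of the four SIRS criteria the reading meets."""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,
        v.heart_rate > 90,
        v.resp_rate > 20,
        v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0,
    ])

def screen(stream):
    """Yield prioritized alerts for readings meeting two or more criteria."""
    for v in stream:
        score = sirs_criteria_met(v)
        if score >= 2:
            yield f"ALERT ({score}/4 criteria): review patient {v.patient_id}"

readings = [
    Vitals("A-101", 38.6, 112, 24, 13.5),   # meets all four criteria
    Vitals("A-102", 37.0, 78, 16, 7.2),     # meets none
]
for alert in screen(readings):
    print(alert)
```

The value of such a filter is that clinicians see only the readings that merit attention rather than the entire stream; the risk, as discussed below, is that the thresholds and training data determine what never reaches them.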
Peril
Known weaknesses associated with AI models include bias, confabulation and bad advice, overestimation of and overreliance on the technology's abilities, and co-option for criminal use. Early examples of bias include a COVID-19 vaccine distribution algorithm that prioritized university trustees over medical residents and a racially biased healthcare access algorithm. The latter prioritized enrollment by healthcare utilization rather than by underlying conditions; utilization was driven by insurance status, and sicker Black patients were less likely to have insurance.
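A simplified, invented example shows how ranking by a utilization proxy rather than by underlying illness can reproduce that bias:

```python
# Invented numbers illustrating proxy bias: ranking by prior utilization
# (spending) instead of underlying illness pushes equally sick but
# under-insured patients down the enrollment list.
patients = [
    {"name": "Patient 1", "chronic_conditions": 5, "prior_spending": 2000},  # uninsured, low utilization
    {"name": "Patient 2", "chronic_conditions": 3, "prior_spending": 9000},  # insured, high utilization
]

by_spending = sorted(patients, key=lambda p: p["prior_spending"], reverse=True)
by_illness = sorted(patients, key=lambda p: p["chronic_conditions"], reverse=True)

print("Ranked by utilization proxy:", [p["name"] for p in by_spending])   # Patient 2 first
print("Ranked by underlying illness:", [p["name"] for p in by_illness])   # Patient 1 first
```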
Generative LLMs are prone to confabulation: they make things up. Researchers discovered that while the applications could generate comprehensive answers to medical questions and provide evidence-based citations, some of the answers, including the citations, were convincing fakes. Problems may also occur on the user's end, such as overreliance on the information provided. AI is an adjunct, not the decision maker.
Scientists, critics, and regulatory agencies have all expressed concerns about the criminal use of AI. Computer security experts identified "catastrophic forgetting" in deep neural networks while trying to develop defenses against increasingly sophisticated "deepfakes": a program would learn to identify the new fakes and forget how to identify earlier versions. How can healthcare organizations and users be certain that the AI-enhanced programs in the medical devices they use are safe?
Organizations that develop complex artificial intelligence programs, such as neural networks and LLMs, are developing system cards to identify opportunities for improvement and to devise standards for program development and maintenance. Meta and OpenAI have made their system cards available.
Regulating AI is proving difficult. The Food and Drug Administration (FDA) has issued three recent documents establishing a regulatory framework, an action plan, and good machine learning practices; to date, however, LLMs have not been subject to FDA review. The Federal Trade Commission (FTC) addresses inappropriate use of AI in advertising under the FTC Act and in financing under the Fair Credit Reporting Act and the Equal Credit Opportunity Act. At the state level, 11 states have enacted legislation addressing augmented learning, and 13 states had proposed legislation as of April 2023. However, industry and regulatory leaders are concerned that the lightning pace of innovation will render traditional regulation irrelevant and are starting to advocate for self-regulation.
Currently, there are no standard insurance products for AI. Bespoke products are available for those who can afford them, but no off-the-shelf riders or policies address artificial intelligence. Healthcare organizations must therefore review their current risk financing programs to determine whether they provide protection for a significant event involving AI. Careful evaluation of AI product contracts and terms of use will be necessary, with specific attention to indemnification language and risk sharing.
As of this printing, there have been no liability claims based on patient harm related to reliance on AI-generated information. For providers, negligence risk may arise under the doctrine of the informed intermediary: providers must understand the responsibilities of acting, or not acting, on AI-provided advice. Medical record documentation of human judgment in response to AI guidance will be essential to patient safety and to mitigating the risk of allegations of negligence. At the organizational level, there may be vicarious liability for introducing an AI product without due diligence and a solid implementation process. On the vendor side, the question is who bears responsibility for product liability; in the absence of case law, the answer is unclear, hence the need to insure the risk. Vendors have a duty to warn if their products are found to be defective and must also recommend appropriate use in healthcare. Healthcare leaders must pay attention to both.
The promise of AI in healthcare is tremendous, and so are the perils. Extreme caution and diligence will be required in the current environment of limited regulatory oversight and proposed self-regulation by developers. Healthcare leaders would be wise to review any self-regulation information made available by the developers of AI products they have planned or implemented, including the system cards. The World Health Organization and the White House have released guidelines for the ethical use of AI; the leading criterion in both is the protection of human autonomy.
In July 2023, OpenAI, the creators of ChatGPT, GPT-4, and other generative programs, released the following statement: "We need scientific and technical breakthroughs to steer and control AI systems much smarter than us."
The guidelines suggested here are not rules, do not constitute legal advice, and do not ensure a successful outcome. The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.