ChatGPT in Healthcare: Everything You Need to Know About Generative AI

With the November 2022 release of OpenAI’s artificial intelligence-powered chatbot ChatGPT, interest in generative AI is at an all-time high.

Healthcare organizations, from technology vendors to health systems, are beginning to use generative AI to address some of their most fundamental challenges, and researchers are using large datasets to draw more complex conclusions.

Here’s what you need to know about generative AI in medicine.

Related: Epic, Microsoft Bring GPT-4 to EHR

What is Generative AI?

Generative AI refers to algorithms that automatically generate content, such as text, video, and images, in response to user queries.

ChatGPT is a general-purpose generative AI text application from OpenAI. OpenAI offers other generative AI applications to paying customers and is working closely with Microsoft, which reportedly invested $10 billion in the company. Other big tech companies, such as Google and Meta, are launching generative AI tools of their own.

How does it work?

Generative AI works by learning from raw datasets to produce statistically probable outputs. Such models have been used for many years on numeric data but have only been applied to text, images, and audio in the last decade. Large language models like OpenAI's ChatGPT can converse with humans, summarize articles, and write copy.
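The phrase "statistically probable outputs" can be made concrete with a toy next-word predictor. The sketch below is a deliberately minimal illustration: real large language models use neural networks trained on vastly more data, and the tiny training corpus here is invented purely for demonstration.

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        counts[current_word][next_word] += 1
    return counts

def most_probable_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Toy corpus, invented for illustration only.
corpus = ("the patient reports mild pain the patient reports no fever "
          "the clinician reviews the chart")
model = train_bigram_model(corpus)
print(most_probable_next(model, "patient"))  # "reports" follows "patient" most often
```

The same principle — predict the most likely continuation given what came before — scales up, with far more context and learned structure, to models that can draft whole paragraphs.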

Why is the medical industry so excited?

The deployment of ChatGPT is rocking the healthcare world. John Brownstein, chief innovation officer at Boston Children's Hospital, said he quickly realized the technology would have a major impact on healthcare.

“I can say right out of the box, I don’t think I’ve seen anything quite as revolutionary since the iPhone and Google,” said Brownstein.

Related: Why Boston Children’s Center Wants to Hire ChatGPT Experts

Experts say the most immediately useful applications in healthcare are administrative tasks that still require clinician or human oversight, such as billing, post-appointment clinical notes, and patient communication.

Michael Hasselberg, chief digital health officer at the University of Rochester Medical Center in New York, said he believes in the technology’s power. ChatGPT’s underlying large language model is “light years ahead” of anything the startups automating healthcare administrative and revenue cycle processes have brought to market, Hasselberg said.

Why are some people afraid of it?

There are two schools of thought on ChatGPT’s potential in healthcare, summarized at ViVE 2023 in March by Micky Tripathi, head of the Office of the National Coordinator for Health Information Technology.

“I think we’re all feeling tremendous excitement, and we should feel tremendous fear as well,” Tripathi said.

Why the fear? Tripathi said health equity and quality problems could be perpetuated if the algorithms are used improperly. Another concern with ChatGPT in healthcare is the accuracy of the medical information AI solutions generate.

Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, recently co-authored the book “The AI Revolution in Medicine: GPT-4 and Beyond.” He wrote that ChatGPT still tends to make up facts. A March study by Stanford Medicine researchers found that 6% of the sources ChatGPT cited when answering medical questions were fabricated, a potential harm to clinical care.

Is ChatGPT protected by HIPAA?

There are also concerns about ChatGPT exposing confidential patient information. The public version of ChatGPT is not covered by the Health Insurance Portability and Accountability Act of 1996 (HIPAA). To protect patient privacy, organizations such as the Cleveland Clinic and Baptist Health are testing ChatGPT within secured, actively monitored private data repositories. Generative AI deployments require rigorous testing, experts say.
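The privacy concern above comes down to keeping protected health information (PHI) out of tools that are not HIPAA-covered. The sketch below illustrates the general idea of scrubbing identifiers before text leaves a secured environment; the patterns and the sample note are invented for demonstration and fall far short of a compliant de-identification method, which must cover all 18 HIPAA identifier categories and typically relies on dedicated tooling.

```python
import re

# Illustrative patterns only, not a complete or compliant PHI filter.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-like numbers
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "[PHONE]"),        # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\bMRN[:#]?\s*\d+\b"), "[MRN]"),             # medical record numbers
]

def redact_phi(text):
    """Replace matches of each pattern with a placeholder before the text
    leaves the organization's secured environment."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical note, invented for illustration.
note = "Patient (MRN: 44821) called from 555-867-5309; email jane.doe@example.com."
print(redact_phi(note))  # Patient ([MRN]) called from [PHONE]; email [EMAIL].
```

Secured private repositories of the kind Cleveland Clinic and Baptist Health describe make this kind of filtering a perimeter control: the raw note never reaches a public model at all.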

Related: The Future of ChatGPT in Healthcare

Which vendors are piggybacking on the hype?

Even before the ChatGPT craze, healthcare AI investment totaled $4.4 billion in 2022, according to data from Rock Health, a digital health research and venture capital firm. Funding levels have generally declined in the tough economic climate, but experts say interest in AI investment will continue. Vendors are rushing to sell their solutions and capture market share.

Microsoft will bring OpenAI’s GPT-4 model to Epic’s electronic medical records. The first use case will focus on patient communication and data visualization.

Microsoft subsidiary Nuance Communications, a clinical documentation software company, has separately integrated GPT-4 capabilities into its documentation software. In this case, GPT-4 is used to summarize conversations between clinicians and patients and enter the summaries directly into the EHR.

Who else?

Google announced it is training a generative AI solution to summarize insights from a wide variety of dense medical documents. Digital health unicorn Innovaccer is adding conversational AI tools to its provider platform. Abridge, a medical AI company, uses generative AI to summarize clinical conversations from audio recorded during patient visits.

Which healthcare systems are using generative AI?

Epic has already found three partners for its GPT-4 integration. Madison, Wis.-based UW Health and UC San Diego Health have signed on, and California-based Stanford Health Care will soon add the feature as well.

Jacksonville, Fla.-based Baptist Health and the Cleveland Clinic have begun experimenting with generative AI. They are working with Microsoft on a proof of concept that uses ChatGPT for clinical and administrative functions, including summarizing data for quality registry reviews and surfacing relevant diagnostic information.

Both systems plan to implement ChatGPT into clinical workflows later this year.

How else is the healthcare system using it?

Brownstein of Boston Children’s Hospital is a ChatGPT proponent and is hiring an “AI prompt engineer”: a person who designs and develops prompts for large language models like ChatGPT.

“The skill set of the next decade will be prompt engineers, people who know how to interface with large language models,” Brownstein said.
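Much of that prompt-engineering work is careful, repeatable text construction. The sketch below shows one hypothetical shape a reusable prompt template for patient-facing communication might take; the role description, instructions, and sample summary are all invented for illustration, not a clinical standard.

```python
def build_patient_message_prompt(clinical_summary, reading_level="8th grade"):
    """Assemble a structured prompt for a large language model.
    The instructions and structure here are illustrative only."""
    return "\n".join([
        "You are a healthcare communication assistant.",
        f"Rewrite the clinical summary below for a patient at a {reading_level} reading level.",
        "Do not add diagnoses or medical advice that is not in the summary.",
        "Flag any part you are unsure about for clinician review.",
        "",
        "Clinical summary:",
        clinical_summary,
    ])

prompt = build_patient_message_prompt(
    "Hypertension well controlled; continue lisinopril 10 mg daily.")
print(prompt)
```

Encoding guardrails such as "do not add medical advice" and "flag uncertainty for clinician review" directly into the template is one way prompt engineers keep a human in the loop by design.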

What are some medical areas where it could be used?

Researchers and experts are beginning to explore what impact it might have on patient care. Communication with patients appears to be one of the most promising areas. A recent study led by researchers at the University of California, San Diego found that patients preferred ChatGPT’s answers to medical questions over those written by physicians.

Related: Unraveling ChatGPT’s Early Uses in Healthcare

Researchers are also looking at how generative AI could improve cancer care and reduce the roughly $200 billion the US spends on it. Researchers at Cedars-Sinai in Los Angeles found that ChatGPT may improve outcomes for patients with cirrhosis and liver cancer by providing easy-to-understand information about their conditions. Another study found that generative AI solutions can diagnose rare diseases with surprising accuracy.

ChatGPT also made headlines in March when it performed at or near the passing threshold of roughly 60% accuracy on the United States Medical Licensing Examination. Researchers led by Dr. Tiffany Kung of AnsibleHealth, a virtual pulmonary rehabilitation provider, say the technology has significant potential in medical education.

Does the hype match reality?

The development of AI in healthcare and other fields has far outpaced fledgling government efforts to regulate it, and the surge of interest in generative AI will only widen that gap. While some major health systems have their own guardrails in place, smaller systems may struggle to find the technical talent needed to oversee a largely self-regulated AI sector.

Despite the excitement, few in the medical community say generative AI will replace humans in the near future. Experts say generative AI is still in its relatively early stages, and human oversight remains critical when incorporating these solutions into clinical care.

“AI is far from perfect, and I think it will still need human involvement for some time,” said Erik Brynjolfsson, director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI. “It can’t do it all.”
