Marvin Slepian, M.D., J.D., is a Regents Professor at the University of Arizona College of Medicine – Tucson.
Today’s artificial intelligence is really a pattern-matching program, not unlike text or email autocomplete. It doesn’t reason about, or truly understand, what it’s saying.
Still, the pattern detection capabilities of generative AI like GPT have the potential to transform data-intensive fields such as healthcare. But with persistent questions about safety, ethics, privacy, trust, and quality of care, some see AI in medicine as a tough pill to swallow.
A new paper published in PLOS Digital Health seeks to gauge public confidence in the technology.
“They’re concerned,” says lead author Dr. Marvin Slepian, Regents Professor of Medicine at the UA College of Medicine – Tucson and a member of UA’s BIO5 Institute. “Who will look after us in the future? A doctor? Will a doctor be assisted by AI? Or will it be some kind of standalone, computer-based AI system, like a robot?”
To find out how comfortable people are with AI diagnosis and treatment, UA researchers surveyed about 2,500 people.
Slepian said about 53% of participants weren’t sure AI diagnoses were reliable.
“There was a large group of people who thought, ‘Well, this might not be so bad,’ and then there was a wide range of people who felt, ‘This might be a little bit dangerous,’” he said.
Slepian added that the genie is out of the bottle. The challenge ahead is for doctors and engineers to make AI accurate.