Three experts discussed the importance of AI regulation and combating misinformation in “AI and Healthcare,” the fourth event of the “Conversations on AI and our Data-Driven Society” series. The series is hosted by the Office of the Provost in partnership with Brown’s Data Science Institute.
Moderator Sohini Ramachandran, director of the Data Science Institute and a professor of biology, data science and computer science, opened the event by asking panelists about their “dream scenario for the integration of AI into healthcare from the perspective of (their) research.”
Hamish Fraser, an associate professor of medical science and health services, policy and practice, pointed to the progress health apps have made in the past 10 years and said he hopes systems will improve enough to provide “an accurate diagnosis and advice about what to do in a home setting.”
Claire Wardle, a professor of health services, policy and practice and co-director of the Information Futures Lab, said that she is excited about “how AI will improve people’s access to personalized care regimes.”
Panelists then shared concerns about AI’s potential to spread health misinformation.
Citing his current research, Fraser said the quality of ChatGPT’s diagnoses varies from case to case and is hard to predict. He added that ChatGPT’s diagnosis matched a physician’s only about 60% of the time.
One major issue, panelists agreed, was that people were more likely to trust AI, even when it was incorrect, than their own doctors, a phenomenon Wardle attributed to AI systems sounding more “empathetic” and “authoritative.”
“Sometimes we hear people say, ‘I love asking Alexa because she sounds like she knows what she’s talking about,’” Wardle said.
Lorin Crawford, an associate professor of biostatistics, noted that an AI model sometimes has to make an inaccurate, “outlandish claim” in order to learn from its mistakes. Crawford also advocated for healthcare literacy programs alongside AI literacy efforts.
Fraser noted that continuously updating and releasing new AI models without regulation can spread misinformation. “We really don’t know, ‘are we getting a better version or worse version, or is it going to be different next week?’” he said. “That is a very big trust issue.”
The panelists emphasized the importance of teaching all communities about AI and healthcare while gathering data for AI models.
Crawford warned against “predatory inclusion” of communities in such research. “You don’t want to include people in the study just for the sake of including them as a number,” he said. “You want to think about the communities you’re giving back to.”
As the event concluded, panelists were asked what Brown students could do to deepen their understanding of the intersection of AI and health. They encouraged students to explore their unique interests.
“To me, any skill is useful in this space,” Crawford said. “I don’t think it should be exclusive.”