A team of researchers at the University of California – San Diego has recently published a study highlighting some of the perils and challenges of regulating artificial intelligence (AI) in healthcare settings, pointing to the need for more patient-centric ways to regulate and evaluate these systems. The team's work appears in a recent article in JAMA.
AI is one of the hottest topics in healthcare, with considerable attention being paid to the numerous ways it is changing the field as a whole: from improving diagnostic capabilities to uncovering faster ways to discover and develop new therapeutics, AI holds a lot of promise. With this growth, understandably, comes increased scrutiny; a recent Executive Order from the White House directed the Department of Health and Human Services to develop guidelines and potential regulatory frameworks for responsibly deploying AI in healthcare.
Missing from this order, however, was any explicit mention of patient outcomes, a metric used in nearly every other healthcare setting.
To highlight the importance of patient-outcome-focused regulation of healthcare AI and the pitfalls of ignoring this metric, the researchers decided to test an actual AI system. Specifically, they evaluated an alert system designed to help identify individuals at risk of developing sepsis, a condition that affects almost 2 million hospitalized individuals every year.
Using third-party validation, the team found that the alert system failed to recognize about two-thirds of the individuals who developed sepsis, highlighting a significant limitation of the AI system.
As a result, the team is calling on the federal government to think critically about what AI regulation will look like. Specifically, regulation should focus on patient-oriented outcomes. Just as agencies like the US Food and Drug Administration ensure that pharmaceuticals are safe and effective for patients, regulations should ask whether AI is safe and effective for patients as well.
Sources: Science Daily; JAMA; WhiteHouse.gov