‘Data is at the center of AI regulation’

April 25, 2024

Artificial intelligence has the potential to revolutionize the medical field, but questions about regulation remain

Meta CEO Mark Zuckerberg stated his goal for artificial intelligence to “cure, prevent, or manage all disease by the end of the century” at an event last year announcing his organization’s funding for AI science research.  

While mainstream AI tools such as ChatGPT have grown rapidly over the past year, the medical field has used AI in some capacity for decades, from decision support to radiology. Over the past 10 years, however, the Food and Drug Administration's approvals of AI medical devices have surged: the agency approved 100 devices in 2020, compared with just one in 2010. Most of those approvals have been in radiology.

With the increase in submissions for AI device approvals come more questions about how to regulate AI. And, according to Aldo Badano, director of the Division of Imaging, Diagnostics and Software Reliability (DIDSR) at the Center for Devices and Radiological Health at the FDA, "Data is at the center of AI regulation."

“Most of the devices we’ve been talking about are trained on data,” Badano said during his keynote address at an event titled “Shaping the Future of AI Medical Devices” at the Johns Hopkins University Bloomberg Center on April 18. “Yes, we can have knowledge into the algorithms, we can have regularization techniques, but in the end, unless you have good data, you’re going to fail.”

Badano pointed, as an example, to the opportunities created by synthetic data (data created with AI), which can be used to test and train medical AI devices. Because synthetic data can be generated in effectively unlimited quantities, it can be used to test devices on rare or diverse populations, or to simulate longitudinal studies that would otherwise take years to complete. Addressing concerns about the reliability of synthetic data, Badano shared two side-by-side images of breast tissue, one real and one artificially generated, which he said most medical professionals could not tell apart.
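To make that idea concrete, here is a minimal, purely illustrative Python sketch, not a method described at the event, of how synthetic samples might supplement a handful of real cases when evaluating a model on a rare subgroup. Everything here is a stand-in: the data is simulated with numpy, the "generator" is approximated by sampling a shifted distribution, and a simple scikit-learn logistic regression plays the role of the medical AI model.

# Illustrative sketch only: evaluating a trained model on a rare subgroup
# where real test cases are scarce, supplemented with synthetic samples.
# All data is simulated; no real medical data or FDA method is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated "real" training data: two features, binary label.
X_train = rng.normal(size=(500, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A rare subgroup: shifted feature distribution, only a handful of real cases.
X_rare_real = rng.normal(loc=[2.0, -1.0], size=(10, 2))
y_rare_real = (X_rare_real[:, 0] + 0.5 * X_rare_real[:, 1] > 0).astype(int)

# Hypothetical synthetic samples standing in for generator output
# fit to the same subgroup, available in much larger quantities.
X_rare_synth = rng.normal(loc=[2.0, -1.0], size=(2000, 2))
y_rare_synth = (X_rare_synth[:, 0] + 0.5 * X_rare_synth[:, 1] > 0).astype(int)

print("Accuracy on 10 real rare-subgroup cases:",
      accuracy_score(y_rare_real, model.predict(X_rare_real)))
print("Accuracy on 2,000 synthetic rare-subgroup cases:",
      accuracy_score(y_rare_synth, model.predict(X_rare_synth)))

The point of the sketch is simply that the synthetic set yields a much larger sample, and therefore a far less noisy performance estimate, for a subgroup where real test cases are hard to come by, which is the opportunity Badano described.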

But even with the benefits AI brings to the medical field, doctors and scientists at the symposium cautioned that it should never be the sole decision-maker or replace the role of physicians.

“I am not a physician, but there’s a component of the physician being in the loop and being essential to the delivery of care and medicine and not being removed from that,” said Jennifer Kuskowski, vice president of government affairs at Siemens Healthineers. “I think there has to be a balance … . We’re not trying to replace the physician with AI—they’re essential to this system.”

Pat Baird, head of global software standards at Philips, added that, to be effective, data for AI needs to be combined with human knowledge and context. He pointed to smartwatches sometimes counting the jolts from potholes as “steps” as an example of the problems that come with overreliance on the technology.

The panelists spoke about the difficulty of regulating AI but said guardrails are critical as the technology advances.

“I have never met a perfect clinician, I have never met a perfect regulator, and I have never met a perfect AI,” said Jana Delfino, deputy director of the DIDSR at the FDA. “I think the datasets you need to make sure that your device, algorithm, or model is representative and generalizable across a large population are larger than anybody thought.”