Incorporating Reliable and Ethical AI into Medical Diagnosis and Treatment

Aug. 12, 2024
Illustrations by Denis Freitas

Artificial intelligence (AI) is not new, but the emergence of generative AI has unlocked new potential—and ethical considerations—for the technology. Clinicians, computer scientists, and ethicists are working across the University of Rochester to incorporate reliable and ethical AI into medical diagnosis and treatment.

Caroline Easton, PhD, professor of Psychiatry at the University of Rochester Medical Center (URMC), has leveraged AI to fine-tune an app that uses avatar coaches to guide patients through cognitive behavioral therapy. Used as a complement to clinician-centered therapy, the app allows users to customize their avatar coaches to respond to their particular needs.

AI tools can also act as a second set of eyes for radiologists, but URMC Imaging Sciences Chair Jennifer Harvey, MD, says the technology can't replace them.

“Radiologists are still much better at synthesizing the findings in a way that AI tools cannot,” said Harvey, who is also the Dr. Stanley M. Rogoff and Dr. Raymond Gramiak Professor in Radiology. “For example, a chest CT may have one or two findings flagged by AI, but the radiologist must put all of the findings together to generate likely diagnoses.”

To rely on algorithms for disease detection and treatment, clinicians need to have high confidence in the algorithms' accuracy. But generative AI can sometimes "get it wrong," according to Michael Hasselberg, NP, MS, PhD, an associate professor of Psychiatry, Clinical Nursing, and Data Science, and the University's inaugural chief digital health officer.

AI is only as reliable as the data it is trained on. Jonathan Herington, PhD, a nationally recognized AI ethics expert and assistant professor of Philosophy and Bioethics, warns that AI can perpetuate social and cultural biases. One way to remediate bias is to be more deliberate about the data used to train the system.

Another way is to “always have a human in the loop”—whether that’s a radiologist synthesizing AI’s findings on a CT scan, or an FDA regulator evaluating whether an AI tool is safe and effective.

While FDA certification is not currently required for AI tools, Chris Kanan, PhD, associate professor of Computer Science, can attest to the benefit of taking that extra step. Kanan worked with Paige.AI to develop Paige Prostate, the first FDA-cleared, AI-assisted pathology tool. According to Kanan, FDA certification increases the trust hospitals and clinics place in a product and the likelihood that it will be covered by insurance.

Want to know more? Read the full story in the University News Center.