The mad dash accelerated as quickly as the pandemic. Researchers sprinted to see whether artificial intelligence could unravel Covid-19’s many secrets — and for good reason. There was a shortage of tests and treatments for a skyrocketing number of patients. Maybe AI could detect the illness earlier on lung images and predict which patients were most likely to become severely ill.

Hundreds of studies flooded onto preprint servers and into medical journals claiming to demonstrate AI’s ability to perform those tasks with high accuracy. It wasn’t until many months later that a research team from the University of Cambridge in England began examining the models — more than 400 in total — and reached a much different conclusion: Every single one was fatally flawed. 

“It was a real eye-opener and quite surprising how many methodological flaws there have been,” said Ian Selby, a radiologist and member of the research team. The review found the algorithms were often trained on small, single-origin data samples with limited diversity; some even reused the same data for training and testing, a cardinal sin that can lead to misleadingly impressive performance. Selby, a believer in AI’s long-term potential, said the pervasiveness of errors and ambiguities makes it hard to have faith in published claims.
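
Why is reusing data so misleading? A minimal sketch makes the flaw concrete. The dataset, model choice, and names below are illustrative assumptions, not drawn from any of the reviewed studies: a classifier fit to purely random labels looks nearly flawless when scored on its own training data, then collapses to chance on data it has never seen.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for an imaging dataset: 200 "patients", 50 noise
    # features, and labels assigned at random, so there is no real signal.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = rng.integers(0, 2, size=200)

    # Flawed evaluation: score the model on the same data it was trained on.
    leaky = RandomForestClassifier(random_state=0).fit(X, y)
    print("training-data accuracy:", accuracy_score(y, leaky.predict(X)))

    # Sound evaluation: hold out data the model never sees during training.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    honest = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, honest.predict(X_te)))

Run on these random labels, the first score lands near 100 percent while the second hovers near 50 percent, the coin-flip baseline. That gap is the inflation that testing on training data can smuggle into a published accuracy claim.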
