Health care has AI fever.

According to a report from CB Insights, health care AI companies raised a record $2.5 billion across 111 deals in the first quarter of 2021, a 140 percent increase over the first quarter of 2020. Moreover, a survey of health care leaders from Intel found that 84 percent say their organization is currently using AI or will be, up from 37 percent in 2018. The survey found that the top potential uses of AI include predictive analytics for early intervention, clinical decision support and collaboration across multiple specialties.

It’s not just providers who are interested in AI. Payers are increasingly using the technology to reduce expenses and identify members whose costs surpass $250,000 in a given year. In a 2020 Deloitte survey of life science companies, more than 50 percent of respondents said their investments in AI would increase. The technology is expected to have “a transformational impact on biopharma research and development (R&D),” Deloitte notes.

Moreover, experts say the COVID-19 pandemic only sharpened health care executives’ appetite for AI solutions. A report from KPMG found that health care business leaders are overwhelmingly confident in AI’s ability to monitor the spread of COVID-19 (91 percent), help with vaccine development (94 percent) and support vaccine distribution (88 percent).

“We’ve seen the near elimination of competitive angst,” says John Halamka, MD, President, Mayo Clinic Platform. “With COVID, we discovered we needed to come together as a coalition, as a society to deal with COVID response. You saw a whole lot of non-obvious partnerships, collaborations and joint ventures happen during COVID.”

The best example of those kinds of partnerships, Halamka notes, is that Google, Microsoft and Apple came together to create the COVID exposure notification system. These kinds of collaborations, he says, will spur the industry forward in developing and adopting AI.

Of course, Halamka and others acknowledge that AI adoption in health care is still nascent, in particular on the clinical side. Concerns about the ability to integrate into the clinical workflow, data biases and integrity, a lack of an industrywide ethics framework and regulation, and costs and return on investment (ROI), all remain significant barriers to increasing AI adoption.

In part one of a two-part series, Health Evolution will look in-depth at a number of the barriers preventing wider adoption of AI in clinical settings. In part two, we will examine the most promising clinical areas for AI usage.

Barriers with clinical usage of AI

Clinical workflow/poor use cases

Michael Matheny, MD, Co-Director Center for Improving the Public’s Health through Informatics, and Associate Professor in the Departments of Biomedical Informatics, Medicine, and Biostatistics at Vanderbilt University Medical Center, is fairly blunt when it comes to the challenges that are preventing wider adoption of AI.

“Trust in AI from front line clinical communities is really low,” Matheny said. “From the end user perspective, we want to see tools that are relevant and can be integrated into the workflow to help reduce our cognitive burden of the tasks we have to do. We want them to be highly accurate, thus safe to use where there’s not a lot of error when using its judgements and we want them to be unobtrusive.”

Suchi Saria, Founder and CEO of Bayesian Health, an AI-based clinical decision support platform and John C. Malone Endowed Chair and Director of Machine Learning and Healthcare Lab at Johns Hopkins, agrees that one of the big issues that has to be solved is trust. “How do we get them to adopt and trust it? That means many things, but a big part of that is having a research-first approach, infrastructure to do rigorous evaluations, and scaling up high-quality, validated ideas,” she says.

The data science and developer communities have yet to find common working ground with frontline clinicians, Matheny says, and that disconnect is fueling the lack of trust. Related to this challenge is the fact that many AI use cases are poorly defined, says Steven Lin, MD, Founder and Executive Director of the Stanford Healthcare AI Applied Research Team (HEA3RT). Too often, he says, developers and data scientists build models opportunistically rather than starting from a problem that needs to be solved.

“We have developers coming to us who are really excited and they tell us their model can do X,Y and Z, only for us to tell them, ‘That’s actually not a problem we have in health care right now.’ They didn’t start with an articulated problem that is aligned with the pressing challenges of clinicians, patients and health systems today,” Lin says. 

Greg Albers, MD, co-founder of the Stanford Stroke Center and Chairman and Scientific Lead of RapidAI, an AI company that specializes in stroke care and complex diseases, says that physicians can get inundated with an abundance of clinical alerts related to different AI modules and programs. “It’s important to get the AI to work together so rather than the physician getting blasted with a whole bunch of messages, it sends them a tailored message that makes more sense for an individual patient,” Albers says. “And then figure out how to get that information to them in the most seamless way on an interface that allows them to have optimal workflow.”

Data integrity and biases

There is a reason that clinicians do not fully trust clinical AI yet: the algorithms are not foolproof. Researchers from the University of Cambridge in the U.K. found that not a single AI model claiming to detect COVID-19 was “of potential clinical use due to methodological flaws and/or underlying biases.” Such credibility problems are pervasive across AI models.

“Everyone wants to use these tools, but the literature, the clinical trial data and the bedrock foundation of success is much less solid,” Matheny says. He notes that there have been successes in imaging informatics, particularly with X-rays, CT scans and eye examinations, which have made other clinical specialties understand the potential power of the technology. But he adds, “You don’t see that level of accuracy in some of the other applications of AI yet and so I think it sort of inflates expectations when you see it get knocked out of the park in a couple of specific areas.”