
Editor’s note: This article is based on the February meeting of the Health Evolution Roundtable on Next Generation IT in Health Care. The Health Evolution Forum is underwritten by Leadership Partners AmeriHealth Caritas, Change Healthcare and Roundtable Partner League.

To understand the current state of regulation around artificial intelligence — and why CEOs leading health systems, health plans and life sciences organizations should feel a sense of urgency to prepare for what’s coming — it helps to look back two decades at the way health care organizations approached cybersecurity.

“I often analogize where we are with AI to where we were with cyber 15 or 20 years ago. Most companies didn’t realize it was a problem until it became their problem. And then they were caught flat-footed and had to repair a lot of damage,” said Miriam Vogel, President & CEO of EqualAI and a former White House advisor on regulatory affairs and justice matters, during a meeting of the Health Evolution Roundtable on Next Generation IT in Health Care. “The regulations are coming. There’s a lot of economic incentive to avoid being on the front page above the fold or the defendant in a large lawsuit.”

As happened with cybersecurity, a number of governmental, public and private organizations are developing regulations, best practices and frameworks to guide, limit and advance how health care organizations leverage AI. A key issue for AI in health care is identifying and reducing bias in every step of the AI lifecycle, from the data sets used to train the algorithms to the questions organizations are asking the algorithms to consider.

“This idea of regulation coming is either a fear factor or an opportunity to engage,” said Aneesh Chopra, President, CareJourney and a former U.S. Chief Technology Officer.

Forthcoming regulations
Many state[1] and federal[2] regulations are already being developed and proposed in the U.S. Agencies and bodies including the Commerce Department, Federal Trade Commission, Food & Drug Administration, National Security Commission on Artificial Intelligence, and the White House have all proposed various AI-related guidelines.

The National Defense Authorization Act of 2021[3] includes provisions related to the ethical and responsible development of artificial intelligence technology and national AI research institutes, along with a mandate that the National Institute of Standards and Technology — the agency that delivers cybersecurity frameworks critical to health care and other industries — “expand its mission to include advancing collaborative frameworks, standards, guidelines for AI, supporting the development of a risk-mitigation framework for AI systems.” NIST, in fact, will hold a workshop to develop the AI Risk Management Framework in late March 2022.

In early February, Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) and Representative Yvette Clarke (D-N.Y.) introduced the Algorithmic Accountability Act of 2022,[4] which, as they explain, aims “to bring new transparency and oversight of software, algorithms and other automated systems that are used to make critical decisions about nearly every aspect of Americans’ lives.”

Notable regulatory efforts in other nations discussed during the Roundtable include Australia, where the Human Rights Commission has made reducing risks in AI part of its human rights agenda,[5] and the European Union, where the European Commission published a legal framework for addressing AI-specific risk, titled Proposal for a regulation laying down harmonized rules on artificial intelligence.[6]

“Whether it’s the EU regs, the NIST framework or the lawyers getting up to speed and litigating under laws currently on the books, liability from AI is something we all need to start planning to avoid through responsible AI governance,” Vogel said.

Priority: Reducing bias
Perhaps the steepest obstacle and most pressing matter to address now is identifying, understanding and reducing bias so that algorithms do not perpetuate and exacerbate discrimination.

Early examples are already underway. The state of Pennsylvania’s Medicaid program, for instance, is evaluating health plan algorithms to ensure they are free of bias, as part of the Pennsylvania Department of Human Services’ broader diversity, equity and inclusion strategy.[7]

Fellows noted that if every state in the country required health systems or health plans to test their algorithms, doing so would quickly become expensive and time-consuming.

In another example, the Mayo Clinic Platform will include a product called Validate,[8] which is intended to enable people or organizations to validate AI algorithms against large data sets to evaluate efficacy and susceptibility to bias, according to Cris Ross, CIO, Mayo Clinic.
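To make that kind of validation concrete, the sketch below shows one common form it can take: comparing a model’s error rates across demographic subgroups in a labeled, held-out data set. It is an illustration only, not the Validate product’s interface; the data frame, column names and function name are assumptions.

```python
# Illustrative sketch only -- not the Mayo Clinic Platform Validate product or its API.
# Assumes a pandas DataFrame holding a model's predictions, the true outcomes,
# and a demographic column; all column and function names are hypothetical.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         pred_col: str = "prediction") -> pd.DataFrame:
    """Compare false negative and false positive rates across demographic subgroups."""
    rows = []
    for group, sub in df.groupby(group_col):
        positives = sub[sub[label_col] == 1]
        negatives = sub[sub[label_col] == 0]
        rows.append({
            group_col: group,
            "n": len(sub),
            # share of true positives the model misses in this subgroup
            "false_negative_rate": (positives[pred_col] == 0).mean() if len(positives) else None,
            # share of true negatives the model flags in this subgroup
            "false_positive_rate": (negatives[pred_col] == 1).mean() if len(negatives) else None,
        })
    return pd.DataFrame(rows)
```

Large, unexplained gaps in these rates between subgroups are one signal that an algorithm may be susceptible to the kind of bias regulators are beginning to scrutinize.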

Vogel added that such private validation engines will play an important role moving forward and predicted a cottage industry of certified “algorithmic auditors” and companies that can evaluate algorithms and identify bias that could ultimately be harmful if not addressed.

“We aim to help by establishing responsible AI governance systems on the front end so that your company will have a higher likelihood of success on the backend,” Vogel added.

Vogel noted that while very few health care organizations have established a robust AI governance structure, there is added urgency: putting one in place now, before the rules take shape, will be far easier than untangling multiple intertwined AI systems that are deeply embedded in everyday processes once the regulations become clear in a few years.

“It is important that we’re thinking about this as proactively as possible now because you don’t want to wait for states or the federal government to drop the hammer,” said Niall Brennan, Chief Analytics and Privacy Officer at Clarify Health and the former Chief Data Officer at CMS.

Conclusion: Time to take action on responsible AI  
CEOs leading health care organizations have the opportunity, and the responsibility, to undertake the work needed to reduce bias in algorithms, both to serve patient populations more effectively and equitably and to protect their organizations from negative publicity and litigation under forthcoming regulations.

First steps include understanding potential bias in the data sets you are using, assembling diverse teams to identify potential risks, and recognizing that algorithms will need to be re-examined on a regular basis to reduce bias as AI systems continue to learn and apply new patterns.
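As a starting point for the first of those steps, the sketch below illustrates a minimal audit of a training data set: how well is each demographic group represented, and how is the labeled outcome distributed within each group? It is a hypothetical example under assumed column names and a binary outcome, not a prescribed method.

```python
# A minimal, hypothetical first-pass audit of a training data set.
# Column names and the binary 0/1 outcome label are assumptions for illustration.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str,
                        label_col: str = "outcome") -> pd.DataFrame:
    """Summarize representation and outcome base rates by demographic group."""
    summary = df.groupby(group_col).agg(
        records=(label_col, "size"),
        positive_rate=(label_col, "mean"),  # assumes a binary 0/1 outcome label
    )
    summary["share_of_data"] = summary["records"] / len(df)
    return summary.reset_index()
```

Because AI systems continue to learn and apply new patterns, an audit like this is worth re-running on a schedule rather than treating it as a one-time check.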

Right now, the window is open to begin such work before forthcoming regulations mandate various forms of compliance.

The Health Evolution Roundtable on Next Generation IT in Health Care, along with the Roundtable on Community Health and Advancing Health Equity and the Roundtable on New Models of Care Delivery, will convene at the upcoming 2022 Summit* in sessions addressing how CEOs can move upstream to reduce bias in data, as part of a broader set of guidelines currently in development.

“Like in cybersecurity, you really can’t act early enough to protect yourself from this unknown. With AI, when the litigation comes, when the regulation comes, it will be far too late to make sure that you have these systems in place,” Vogel said. “If we don’t reverse course at this critical juncture, we will be not only reinforcing past discrimination and biases, but we will also be spreading it at scale under cover of a black box.” 

*The Health Evolution Summit will take place April 6-8, 2022, at The Ritz-Carlton Laguna Niguel and Waldorf Astoria Monarch Beach Resort & Club hotels in Dana Point, CA. View the agenda or apply to attend.

Sources & Citations:
1. National Conference of State Legislatures: Legislation related to artificial intelligence
2. Orrick: US artificial intelligence regulation takes shape
3. Congress.gov: William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021
4. Senate.gov: Algorithmic Accountability Act of 2022
5. Australian Human Rights Commission: Human Rights and Technology Report
6. European Commission: Proposal for a regulation laying down harmonized rules on artificial intelligence
7. Pennsylvania Department of Human Services: Racial equity report 2021
8. Mayo Clinic: Mayo Clinic Platform Validate

Tom Sullivan

Tom Sullivan brings more than two decades of editing and journalism experience to Health Evolution. Sullivan most recently served as Editor-in-Chief at HIMSS Media, leading Healthcare IT News, Healthcare Finance News and MobiHealthNews. Prior to HIMSS Media, Sullivan was News Editor of IDG’s InfoWorld, directing a dozen reporters’ coverage for the weekly print publication and daily website.
