Heather Jordan Cartwright | January 13, 2021
In my role at Microsoft, I talk to a lot of people across the health industry. Recently, one shared a story about an oncology nurse who was preparing to administer a drug to a child who had a type of cancer that primarily affects adults. The dosage was generated by an AI algorithm. The nurse’s first reaction was that the dose seemed very wrong—too much for a child. Her second reaction was that she must be wrong because her hospital wouldn’t use technology without putting it through testing and validation. Should she trust the algorithm and administer the drug even when her gut told her there was risk this child would end up in the ER?
Some doctors may point to this story as proof that AI isn’t ready for health care. They say machine learning requires decades more research across controlled studies before we can implement it at scale. However, by this logic, we would have to wait for every possible data point to be available before we could start using AI in health care. What’s the reality of that happening? One only has to look at studies like the NIH Whole Genome Sequencing Project to see that while we’ve made remarkable progress in medical research, our knowledge of human biology remains nascent. There is no question the data we use to create algorithms that support diagnosis and treatment will continue to evolve for some time.
Even if we had access to comprehensive, evidence-based datasets when creating algorithms for health care, a much larger challenge exists which has nothing to do with the science of AI. In the story above, the oncology nurse was put in the difficult position of either ignoring her training and credentials as a nurse in order to follow directions, or disregarding hospital process. This shouldn’t have been the case, but this scenario happens too often in today’s health systems. Medicine operates in a hierarchical, top-down world in which the cultural norm is for physicians to write orders and others to carry them out. Care teams need and want to help, and may have valuable feedback, yet are often understaffed and overworked, with little incentive and limited ability to engage in policy and care delivery decisions.
Input from frontline workers and care teams is critical to ensure AI for health care is safe and effective. So rather than wait for AI to transform the frontlines of health care, what if we could empower the frontlines of health care to transform AI?
The good news is, a model already exists. I learned it first-hand when I began my career in the automotive industry. It’s called the Andon Cord.
What is an Andon Cord?
In the early twentieth century, Henry Ford used automation to transform automotive assembly: instead of workers moving from vehicle to vehicle, they stayed in one place and vehicles came to them. During the middle of the twentieth century, Toyota took the assembly line even further. Toyota realized that assembly line workers—who repeat the same task day after day—not only become experts in their craft but are also the first to detect minor discrepancies and variances in the parts they install. In the emerging world of mass production and scale, Toyota took the revolutionary step to empower assembly line workers to intervene when they saw a problem by pulling the Andon Cord.
The original Andon Cord was literally a rope stretched across the assembly line in a manufacturing plant. But this simple rope gave assembly line workers unprecedented power, because when they pulled it the assembly line stopped. The concept? Discover and correct small problems at the source before they scale to become even greater problems downstream. Under the Toyota Production System, employees weren’t just encouraged to pull the Andon Cord if they encountered a problem, it became their primary obligation. Every time someone pulled the Andon Cord, they were thanked by the responding supervisor. No defect was considered to be too small. No one was punished for mistakenly pulling the Andon Cord. And the assembly line never restarted until the problem was fixed. In a book about the Toyota Production System called Toyota Kata, author Mike Rother tells a story from a Toyota plant where the average number of times the Andon Cord was pulled per shift fell from 1,000 to 700. Instead of seeing that as a sign of improvement, the CEO called a meeting to remind them to be more vigilant.
The Andon Cord for AI: Do health systems have the right feedback loops in place for success?
Over the decades, the Andon Cord concept has evolved and it is now used in a wide range of industries. In technology, the Andon Cord concept has been translated to a culture of continuous learning and improvement. Instead of an Andon Cord, engineers file “bugs” when code needs to be fixed and customer service representatives create “tickets” that get assigned to engineers or product managers. But in successful tech companies, every employee at every level is empowered to stop, investigate, and iterate whenever they encounter a problem.
Across the health industry, there are myriad verticals and subunits that will leverage AI. From finance to food services, from the supply chain to the pharmacy or the operating room, from a clinic to the intensive care unit, there are innumerable Andon Cord opportunities. Rather than think about a single Andon Cord, future success will require inputs from multiple interfaces, data streams, and users. It’s for this reason that the concept of the Andon Cord for AI becomes critical: at each interface where algorithms interact with a user, we need feedback loops. Implementing the Andon Cord for AI is not about initiating sporadic process improvement efforts, but rather creating a culture and overall ecosystem to capture and measure data for continuous learning. If we want AI to be a successful contributor to the future of health care, we need both the Andon Cord itself and a shift in culture to embrace it.
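In software terms, a per-interface feedback loop can be as simple as a gate that keeps an algorithm’s output from being acted on automatically once any user flags it, mirroring how pulling the cord stops the line until the problem is fixed. Here is a minimal, purely illustrative sketch in Python; the function and field names are invented for this example, not drawn from any real system:

```python
def recommend_with_andon(model_output, flagged_by_user: bool) -> dict:
    """Gate an algorithm's output behind a human "Andon Cord" (illustrative sketch).

    If a user at the interface flags the output, it is not acted on
    automatically; instead it is routed to review, and nothing proceeds
    until the problem is investigated.
    """
    if flagged_by_user:
        # The "cord" has been pulled: halt and escalate for human review.
        return {"status": "halted", "action": "route_to_review", "output": model_output}
    # No flag raised: the recommendation can flow to the normal workflow.
    return {"status": "ok", "output": model_output}
```

The design point is that the flag is honored unconditionally: like the original Andon Cord, the user never has to justify the pull before the line stops.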
Take, for example, a typical hospital environment. Chances are the procurement and finance teams have already identified ways to improve supply chain efficiency, but when is the last time someone asked the janitorial team about what supplies they see wasted during their shift? Most discharge nurses can identify telltale signs that a patient is unlikely to pursue follow-up care and will end up back in the ER, but is their feedback documented anywhere? And if it is, is it used to implement measurable changes in the discharge process and follow-up procedures?
Shifting from operational efficiency to technology implementation, feedback loops should be used to continuously improve the technology we’re using and do so in a systematic manner. Most hospitals have alert systems in place to flag when something has gone awry, but if the phrase “alert fatigue” is at all familiar to the people on your care teams, it’s an indicator that they have been trained to ignore problems instead of being empowered to fix them.
As a glimpse into the future, consider AI in clinical decision support. In the next several years, physicians may be overloaded with predictive models alongside the myriad of tools already available for clinical decision support. Rather than force them to manually navigate that toolbox, start with the Andon Cord for AI concept. Don’t launch a new algorithm for clinical decision support without first ensuring you can track how often a physician does (or doesn’t) follow the recommendations, and why.
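One way to make that tracking concrete is to log, for each recommendation an algorithm produces, whether the clinician followed it and, if not, why. The sketch below is a hypothetical illustration, assuming nothing about any real hospital system; the model name `sepsis-risk-v2` and all class names are invented:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RecommendationEvent:
    """One interaction between a clinician and an AI recommendation."""
    algorithm_id: str          # which model produced the recommendation
    accepted: bool             # did the clinician follow it?
    override_reason: str = ""  # free-text reason when they did not

class FeedbackLog:
    """Aggregates accept/override events per algorithm (illustrative only)."""
    def __init__(self):
        self.events = []

    def record(self, event: RecommendationEvent) -> None:
        self.events.append(event)

    def override_rate(self, algorithm_id: str) -> float:
        """Fraction of this algorithm's recommendations that were overridden."""
        relevant = [e for e in self.events if e.algorithm_id == algorithm_id]
        if not relevant:
            return 0.0
        return sum(not e.accepted for e in relevant) / len(relevant)

    def top_override_reasons(self, algorithm_id: str, n: int = 3):
        """Most common free-text reasons clinicians gave for overriding."""
        reasons = Counter(
            e.override_reason for e in self.events
            if e.algorithm_id == algorithm_id and not e.accepted and e.override_reason
        )
        return reasons.most_common(n)

# Example: a hypothetical model is followed once and overridden twice
log = FeedbackLog()
log.record(RecommendationEvent("sepsis-risk-v2", accepted=True))
log.record(RecommendationEvent("sepsis-risk-v2", accepted=False,
                               override_reason="dose too high for pediatric patient"))
log.record(RecommendationEvent("sepsis-risk-v2", accepted=False,
                               override_reason="dose too high for pediatric patient"))
print(round(log.override_rate("sepsis-risk-v2"), 2))  # 0.67
```

A rising override rate, or a recurring override reason, is the software equivalent of the Andon Cord being pulled repeatedly on the same station: a signal to stop and investigate the algorithm rather than retrain the users to ignore it.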
We are now at the beginning of a new era that will see AI become one of the health care industry’s most important and powerful tools for improving health and wellness. But in the future, the best health care delivery systems won’t be the ones that were the first to implement AI. Rather, it will be those that made workers at every level and on every team part of the AI development process by empowering them to pull the Andon Cord.
Implementation of AI is our future, but is your team ready for it?
Machine-learning-based algorithms are powerful because they can process far more data than the human brain can consume. But humans must make the final decisions about how to use the guidance that algorithms deliver. As good as even the best algorithms are, they don’t know everything. More importantly, when they don’t work and frontline workers don’t say something, small problems scale to big ones. Frontline workers are the most important piece of the AI-enabled health care ecosystem we are about to build, and we need to empower them.
Ultimately, the success of AI in health care will depend not just on how well algorithms work, but on the degree of trust and control that health care providers and patients feel they have in using them. This will require the companies that create AI solutions for health care to weigh carefully the opportunities AI provides against the risks and challenges it raises, and to ensure that the people who work in the environments where AI is used are prepared to take advantage of it.
This is central to responsible AI, built on human-centered thinking and a belief that we must be transparent about why and how we build AI solutions, accountable for how they work, and committed to ensuring that they are safe and reliable. These principles all depend on a continuous feedback loop with those who use our AI tools. And they are particularly important in AI solutions for health care, where people’s health and lives are always at stake.
Who’s using the Andon Cord in health care today?
The Andon Cord concept is still emerging in health care, but it has been used very successfully by a handful of health care systems. One of the pioneers is Virginia Mason in Seattle, which under Gary Kaplan’s leadership has traveled to Japan for more than a decade with various teams to study continuous learning. With nearly 500 doctors and two hospitals, Virginia Mason was the first medical center in the country to integrate the Toyota management philosophy throughout its entire system, a process that started in 2002.
At Virginia Mason, the Andon Cord isn’t a rope, of course, but a process that enables—requires—employees to report anything they observe that can harm a patient. Instead of pulling a cord or pushing a button, employees submit a Patient Safety Alert by phone or online. According to a case study published by Virginia Mason in 2018, by the end of 2014, nearly 900 Patient Safety Alerts were being submitted each month. One measure of the success of the Patient Safety Alert System is that over a 10-year period, professional liability claims fell by 74 percent.
The key to the success of the Patient Safety Alert System at Virginia Mason wasn’t the underlying Andon Cord concept itself, but rather the way it drives the culture of the organization and the behavior of frontline employees who are empowered to call “stop” when they see something that seems wrong. It works because employees know they can submit a patient safety alert without fear of negative consequences and they are supported when they submit an alert. Simply put, the Patient Safety Alert System empowers every staff member to be an essential part of the ongoing improvement process.
Responsible AI means we are all in this together.
Artificial intelligence can synthesize, analyze, and predict with levels of speed and accuracy that almost feel magical. But it’s not magic—it’s a tool to accelerate and augment the work of clinicians and caregivers so they can focus more on the human side of health care.
Delivering health care is most certainly not the same as manufacturing a car. But I encourage health care leaders to pause for just a moment and think about the metaphorical Andon Cord. As your teams start incorporating AI into their workflow, it is critical to recognize the importance of human judgment at all levels of your organization. No matter how advanced machine learning techniques are, and no matter how much data we have to train the algorithms, it will always be essential that health care workers on the frontline are an integral part of the process. They are the only ones who can judge in the moment what makes sense or what seems dangerous to patients.
For developers of this technology, this feedback is an essential link in ensuring that we know what works, what needs to be improved, and what needs to be shut down to protect patient safety. Building this feedback into development processes is foundational to responsible AI for health care.
For health care leaders, responsible AI means recognizing that culture will be even more important than tools. As clinics, hospitals, and labs incorporate more and more machine-learning-based solutions, the safety of your patients will require that you not accept the promise of AI at face value, and instead, create an environment in which your frontline staff knows that you want their feedback all of the time, and that you honor and trust their judgment.
We’re all moving forward together to transform the future of our health industry. It’s up to all of us to identify the best ways to implement AI responsibly. When the next oncology nurse has doubts about the drug dosage an AI algorithm recommends, my mission is to make sure she has an Andon Cord to pull. Where are you going to apply your first Andon Cord?