Harnessing the Power of AI to Promote Safety and Quality for Patients
Industry experts weigh the patient safety, quality, regulatory and community considerations that will define AI’s future in healthcare.
There’s no question that AI is already playing a key role in improving operational efficiencies and augmenting patient-centered decision-making today. With the sudden growth in public awareness of AI’s potential, many are creating visions of a “hospital of the future,” backed by massive investments in AI that will shape how it is used for years to come. As AI innovation continues to expand, so does the number of considerations around its development and deployment into patient care.
To further this conversation, Philips and The Washington Post convened industry leaders for an in-person event, “Health Innovation.” This included a fireside chat where Nicole Taylor Smith, Global Head of Regulatory Science and Policy, Philips, sat down with I. Glenn Cohen, Deputy Dean and Professor of Law at Harvard Law School, to discuss the fundamental patient safety, quality, regulatory and community concerns that will define AI’s future role in improving personalized care.
The promise of AI and its impact on patient safety
AI’s application in healthcare is not new – there are tangible ways AI is improving quality and safety in the patient experience today. For example, AI is already helping in the early detection of lung cancer. By analyzing patients’ imaging exams, AI can catch incidental findings of lung cancer that might otherwise have been overlooked, helping activate treatment plans much sooner than if a patient waited until they were symptomatic. This is just the tip of the iceberg – AI’s ability to democratize expertise across healthcare has the potential to enhance patient safety for populations around the world.
“It's important to maintain sight of the fact that we're developing products in service of patients, specifically when discussing medical artificial intelligence. If we don't strike the right balance between innovation and promoting and protecting public health, and making sure these products are safe, we won't get life-changing products to patients.”
Nicole Taylor Smith
Global Head of Regulatory Science and Policy, Philips
As AI becomes more embedded into everyday care, it is critical that the road from AI development to clinical practice prioritizes patient safety and ensures algorithms are purpose-built to serve the needs of patients and the communities in which they will be deployed. This mindset will help ensure these tools are reliable and make a tangible impact on the lives of patients and on the most pressing challenges health systems are facing.
AI’s evolving regulatory landscape
Because much of medical AI is not static but learns and improves over time, regulating its development and deployment is an ongoing challenge. The FDA, alongside industry partners, is working to create a regulatory framework that accounts for AI’s ever-evolving nature and encourages innovation while safeguarding patient safety, especially for tools used to guide diagnostic or treatment decisions. The FDA’s latest AI guidance is evidence of this balancing act: it allows developers to outline, at the time of product submission, a plan for how an algorithm will be monitored, validated and modified. These plans support AI’s iterative improvement without the need to resubmit to the FDA each time a product is updated. This not only aims to spur innovation, but also to enhance the safety and efficacy of AI, especially for underrepresented populations, by reducing the barriers to incorporating diverse data.
Collaboration among industry players and the FDA will be critical to defining the best path forward for AI regulations. By working together to determine the right mix of guidance and boundaries, we can facilitate the development of ethical and effective AI technology that can truly move the needle on personalized care.
Cohen acknowledged that AI in healthcare is inherently more complicated than AI in many other industries, with more than a dozen players involved in the use of a single AI-based technology.
“Even being very well meaning and trying to adjust the liability for a physician if there's an adverse event due to artificial intelligence, for example – if you only focus on that liability piece, you could create worse problems at other levels. So, it's good that we have collaboration in the federal government already going on, and I'd love to see more of that,” said Cohen.
The critical role of community engagement
AI innovation should ultimately serve the needs of the population whose data helped build it, and the community in which it will be deployed. Engaging with communities will be critical to AI’s success and will ensure AI builds on the relationships healthcare organizations have worked diligently to create within their communities. This includes addressing historic distrust and bias in AI among minority populations. Instilling confidence in AI, and therefore in the use of a community’s data, requires clearly articulating the value proposition and positioning patients as partners in a project rather than subjects of a study.
Cohen raised the issue that AI is only as strong as the data that has been collected. Bias in the healthcare system has led to inherently flawed data.
“Are we not designing products to meet the needs of certain populations because of bias in the data? How do we fill the data gap? That's the kind of thing I think we have to be very thoughtful and forward-looking on. But it starts at the very beginning of the design process, not the back end where you are trying to solve for bias last minute. It's the question of what are you trying to do, and how is it serving these communities,” said Cohen.