Philips is already deploying AI-based solutions that empower people to live healthy lifestyles and help healthcare providers to deliver improved outcomes at lower cost, together with better staff and patient experiences. They range from solutions for diagnostic imaging, image-guided minimally invasive therapy, radiology workflow enhancement, clinical decision support, and patient monitoring, to remote connected care, sleep quality improvement and consumer products – all designed to help clinicians do their jobs better and consumers to stay healthy.
At the upcoming all-digital CES 2021 (January 11 – 14, 2021), Pat Baird, Philips’ Head of Global Software Standards, will take part in a panel discussion about the future of AI in healthcare – the opportunities, the benefits, and the barriers to its more widespread adoption. Ahead of the CES panel session, we sat down with Pat to get his views on the adoption of AI in healthcare.
What can we do to enhance trust in AI?
I think that explainability is one of the issues the industry needs to address to accelerate the adoption of AI in healthcare. It’s one of the things we addressed in a Consumer Technology Association (CTA) paper I helped to write, in which we recognized that different approaches would be needed to gain the trust of different populations. Trust and transparency in AI are a critical area of focus for Philips. When we design AI-enabled solutions, we strive to complement and benefit our customers, patients, and society as a whole. In addition to the Philips Data Principles on privacy, security, and the beneficial use of data, we embrace a set of AI Principles as well.
We also recognized three different categories of trust that need to be addressed. The first is technical trust – how accurate and representative was the data used to train the AI, from a technical execution point of view? The second is human trust – the usability of the system. For example, if the user interface is difficult to use, people will assume the AI isn’t very good and won’t trust it. And the third is regulatory trust, some of which relates to technical execution and usability but which also encompasses the ethical, legal and social implications of AI.
What other challenges stand in the way of the widespread adoption of AI in healthcare?
I think one of the biggest challenges is getting all the necessary stakeholders around the table – clinicians, patient representatives, insurers, regulators etc. – to address some of the administrative issues. For example, there are concerns about liability. If a doctor believes what the software says and something bad happens, is it the software's fault, or is it the doctor's fault? Who is legally liable? Then there’s the problem of reimbursement codes, because if there is nothing set up to pay the doctors for using the AI, they may not be incentivized to use it. There are also a lot of myths and misunderstandings about what AI can and can’t do that need to be addressed.
Has the COVID-19 pandemic accelerated AI adoption?
It has certainly raised awareness and prompted discussion about how AI can help in triage and diagnosis, as well as in public health management, such as disease outbreak prediction and tracking. Several projects in these areas had started before COVID-19 arrived, but the pandemic definitely accelerated them and attracted more funding. One of the AI projects I am currently working on is associated with the World Health Organization, which has a separate project specifically around COVID-19 outbreak detection.
What do you see as the main driver of AI in healthcare over the next few years?
The main driver will be the global shortage of clinicians – and by that I don’t mean AI will replace clinicians, quite the reverse. What AI can do is help clinicians do their existing jobs more efficiently, especially on the administrative side, so they can spend more time treating patients. AI will not only help reduce the time spent on administrative tasks, but also automate certain routine tasks and, in the future, provide quality control for clinical decision support.
Two years ago, California declared a physician shortage because they only had one physician per 500 people. Yet I was talking to someone from Ghana who told me they only have one physician for every 11,000 people. AI can help to amplify clinician performance by taking care of tasks that don't need specialist involvement.
Perhaps it’s best summed up by a nurse I spoke to, when I asked her what’s different about her job compared to ten years ago. She said she became a nurse to take care of people but now spends all her time at the computer taking care of the technology, and very little time with her patients. What we can do with AI, if we do it right, is deliver patient-centric healthcare. That is something Philips is very much committed to, placing the patient, clinician and consumer experience at the center of everything it does.