Which factors influence human trust in artificial intelligence (AI)? And how can trust in AI be optimized to improve medical decision making and enhance patient outcomes? These are questions that need to be answered before AI can realize its potential in healthcare, experts explained during a conference session at Virtual CES 2021.
AI is a rapidly developing technology with the potential to disrupt healthcare on a massive scale. Machine learning algorithms are increasingly capable of performing tasks with greater accuracy, efficiency and effectiveness than healthcare professionals, including everything from triaging patients for medical attention to identifying trends in huge quantities of clinical data. On the flip side, AI-powered systems still lack “human” qualities that are perceived to be important in the provision of healthcare, such as trustworthiness, as well as an ability to express empathy and compassion. This has made many clinicians cautious about the use of AI in medical diagnosis.
Pat Baird, Regulatory Head of Global Software Standards for Philips and one of the thought leaders taking part in the CES webinar, said he believed three different categories of trust needed to be addressed:
>> The first was technical trust, relating to the data used to train the AI.
>> The second was human trust, relating to the usability of the system.
>> The third was regulatory trust, relating to frameworks and standards, as well as the ethical, legal and social implications of AI.
According to Baird, those developing AI systems need to eliminate bad data as much as possible and ensure their algorithms are trained on non-biased data samples. Such systems should also be user-friendly, with an intuitive interface that helps to overcome human-machine barriers. Leveraging input from medical professionals during system development can be critical in helping to foster trust at an early stage.
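One concrete, if simplified, way to act on that advice is to audit a training set for imbalance before any model sees it. The sketch below (not from the talk; the function name, threshold, and labels are illustrative assumptions) flags a dataset whose class labels are skewed beyond an acceptable ratio:

```python
from collections import Counter

def check_label_balance(labels, max_ratio=3.0):
    """Flag a training set whose class labels are badly imbalanced.

    Returns (is_balanced, counts). `max_ratio` is the largest
    acceptable ratio between the most and least common class.
    """
    counts = Counter(labels)
    if len(counts) < 2:
        # A single class cannot train a useful classifier.
        return False, counts
    most = max(counts.values())
    least = min(counts.values())
    return (most / least) <= max_ratio, counts

# A sample skewed 90/10 toward benign findings exceeds the 3:1
# threshold, so it would be flagged for rebalancing or resampling.
ok, counts = check_label_balance(["benign"] * 90 + ["malignant"] * 10)
```

The same idea extends to demographic attributes (age bands, sex, site of care): counting subgroup representation before training is a cheap first check against the biased samples Baird warns about, though it is no substitute for a full fairness evaluation.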
A clear set of regulations and standards is also important when it comes to establishing trust in AI. Baird said:
“Standards can help set the expectation of what ‘good’ looks like. There is so much hype and so many questionable claims about AI products and applications right now that we need standards to help differentiate between the good and the bad. We know how to do quality controls—period—regardless of the product or the type, and I think we can reuse a lot of that. The details are different, but I think overall we have a good headstart.”
Regulation of AI is complicated by the fact that not all AI tools are considered medical devices (and therefore aren’t regulated by bodies such as the United States Food and Drug Administration). Companies aren’t obliged to share details about the role of specific software within AI systems either. Nevertheless, providing data and information relating to performance, intended use and input requirements can help to increase trust, as can regular software evaluation.
“The key to trustworthy AI is for manufacturers to build AI that deserves trust.”