3 pernicious myths of responsible AI
02/05/2024

 Responsible AI isn’t really about principles, or ethics, or explainability. It can be the key to unlocking AI value at scale, but we need to shatter some myths first

Responsible AI (RAI) is needed now more than ever. It is the key to driving everything from trust and adoption to managing LLM hallucinations and eliminating toxic generative AI content. With effective RAI, companies can innovate faster, transform more parts of the business, comply with future AI regulation, and prevent fines, reputational damage, and competitive stagnation.

Unfortunately, confusion reigns as to what RAI actually is, what it delivers, and how to achieve it, with potentially catastrophic effects. Done poorly, RAI initiatives stymie innovation, creating hurdles that add delays and costs without actually improving safety. Well-meaning but misguided myths abound regarding the very definition and purpose of RAI. Organizations must shatter these myths to turn RAI into a force for AI-driven value creation instead of a costly, ineffectual time sink.

So what are the most pernicious RAI myths? And how should we best define RAI in order to put our initiatives on a sustainable path to success? Allow me to share my thoughts.

Myth 1: Responsible AI is about principles

Go to any tech giant and you will find RAI principles—like explainability, fairness, privacy, inclusiveness, and transparency. They are so prevalent that you would be forgiven for thinking that principles are at the core of RAI. After all, these sound like exactly the kinds of principles that we would hope for in a responsible human, so surely they are key to ensuring responsible AI, right?

Wrong. All organizations already have principles. Usually, they are exactly the same principles that are promulgated for RAI. After all, how many organizations would say that they are against fairness, transparency, and inclusiveness? And, if they were, could you truly sustain one set of principles for AI and a different set of principles for the rest of the organization?

Further, principles are no more effective at engendering trust in AI than they are for people and organizations. Do you trust that a discount airline will deliver you safely to your destination because of their principles? Or do you trust them because of the trained pilots, technicians, and air traffic controllers who follow rigorously enforced processes, using carefully tested and regularly inspected equipment? 

Much like air travel, it is the people, processes, and technology that enable and enforce your principles that are at the heart of RAI. Odds are, you already have the right principles. It’s putting those principles into practice that is the challenge. 

Myth 2: Responsible AI is about ethics

Surely RAI is about using AI ethically—making sure that models are fair and do not cause harmful discrimination, right? Yes, but it is also about so much more. 

Only a tiny subset of AI use cases actually has ethical and fairness considerations, such as models used for credit scoring, models that screen résumés, or models whose outputs could lead to job losses. Naturally, we need RAI to ensure that these use cases are tackled responsibly, but we also need RAI to ensure that all of our other AI solutions are developed and used safely and reliably, and meet the performance and financial requirements of the organization.

The tools you use to provide explainability, check for bias, and ensure privacy are the same ones you use to ensure accuracy, reliability, and data protection. RAI helps ensure AI is used ethically when there are fairness considerations at stake, but it is just as critical for every other AI use case as well.

Myth 3: Responsible AI is about explainability 

It is a common refrain that we need explainability, aka interpretability, in order to be able to trust AI and use it responsibly. We do not. Explainability is no more necessary for trusting AI than knowing how a plane works is necessary for trusting air travel. 

Human decisions are a case in point. We can almost always explain our decisions, but there is copious evidence that these are ex-post stories we make up that have little to do with the actual drivers of our decision-making behavior. 

Instead, AI explainability—the use of “white box” models that can be easily understood and of methods like LIME and SHAP—is important largely for testing that your models are working correctly. These methods help identify spurious correlations and potential unfair discrimination. In simple use cases, where patterns are easy to detect and explain, they can be a shortcut to greater trust. However, if those patterns are sufficiently complex, any explanation will at best provide indications of how a decision was made and at worst be complete gibberish.
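To ground this, here is a minimal sketch of the kind of model check described above, assuming a fitted tree-based scikit-learn model and the shap library; the variable names (model, X_test) are placeholders, not a prescribed workflow.

# Minimal sketch: using SHAP to check what a model has actually learned.
# Assumes a fitted scikit-learn tree ensemble (`model`) and a DataFrame of
# held-out features (`X_test`); both names are hypothetical placeholders.
import shap

explainer = shap.TreeExplainer(model)        # explainer for tree-based models
shap_values = explainer.shap_values(X_test)  # per-feature contribution to each prediction

# Global view: which features drive the model's predictions overall?
shap.summary_plot(shap_values, X_test)

# If a proxy feature (say, a postal code) dominates the plot, that is a flag
# for spurious correlations or potential unfair discrimination to investigate.

Used this way, explainability is a diagnostic for the model builder rather than a trust mechanism for the end user.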

In short, explainability is a nice-to-have, but it’s often impossible to deliver in ways that meaningfully drive trust with stakeholders. RAI is about ensuring trust for all AI use cases, which means providing trust through the people, processes, and technology (especially platforms) used to develop and operationalize them.

Responsible AI is about managing risk

At the end of the day, RAI is the practice of managing risk when developing and using AI and machine learning models. This involves managing business risks (such as poorly performing or unreliable models), legal risks (such as regulatory fines and customer or employee lawsuits), and even societal risks (such as discrimination or environmental damage).

The way we manage that risk is through a multi-layered strategy that builds RAI capabilities in the form of people, processes, and technology. In terms of people, it is about empowering the leaders who are responsible for RAI (e.g., chief data analytics officers, chief AI officers, heads of data science, VPs of ML) and training practitioners and users to develop, manage, and use AI responsibly. In terms of process, it is about governing and controlling the end-to-end life cycle, from data access and model training to model deployment, monitoring, and retraining. And in terms of technology, platforms are especially important because they support and enable the people and processes at scale. They democratize access to RAI methods—e.g., for explainability, bias detection, bias mitigation, fairness evaluation, and drift monitoring—and they enforce governance of AI artifacts, track lineage, automate documentation, orchestrate approval workflows, and secure data, along with a myriad of other features that streamline RAI processes.
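As an illustration of that technology layer, below is a minimal sketch of two of the checks mentioned above: a demographic parity gap (bias detection) and a simple distribution-drift test between training and production data. It uses only NumPy and SciPy; the thresholds, variable names, and the trigger_review hook are illustrative assumptions, not a standard.

# Minimal sketch of two RAI checks: a fairness metric and a drift test.
# Data, variable names, and thresholds are hypothetical placeholders.
import numpy as np
from scipy.stats import ks_2samp

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates across groups (0 = parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

def has_drifted(train_feature, prod_feature, alpha=0.05):
    """Kolmogorov-Smirnov test: True if the feature's distribution has shifted."""
    _, p_value = ks_2samp(train_feature, prod_feature)
    return p_value < alpha

# Illustrative wiring inside a monitoring job (threshold and hook are hypothetical):
# if demographic_parity_gap(preds, groups) > 0.10 or has_drifted(train_income, prod_income):
#     trigger_review()  # e.g., open a ticket or pause automated retraining

The point is not these particular metrics, but that such checks run continuously as part of a governed life cycle rather than as one-off reviews.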

