
Trust is a must: why business leaders should embrace explainable AI

The new EU regulation aims to make artificial intelligence more trustworthy

Margrethe Vestager, the European Commission’s executive vice-president for digital policy, neatly summed up the founding philosophy of the EU’s draft legal framework for AI when it was published in April.

“Trust is a must,” she said. “The EU is leading the way in developing new global standards to ensure that AI can be trusted. By setting standards, we can pave the way to ethical technology around the world.”

Any fast-moving technology is likely to inspire suspicion, but Vestager and her colleagues have decided that those in power should do more to tame AI, in part by using such systems more responsibly and being clearer about how they work.

The groundbreaking legislation, which “aims to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation”, encourages companies to engage with so-called explainable AI.

If we want AI to play a role in decision-making, we have the right to understand how AI came to a decision, regardless of its complexity

Most business leaders have welcomed the initiative, understanding that the goal is to build public confidence in AI by encouraging the use of more transparent systems.

Peter van der Putten is director of AI solutions at the cloud software company Pegasystems and assistant professor of AI at Leiden University in the Netherlands. He believes that the EU has put in place a “reasonable, risk-based framework” that distinguishes “prohibited, high-risk and low-risk” AI applications.

“This is a significant step forward for both EU consumers and businesses looking to reap the benefits of AI in truly responsible ways,” he says.

The end of “Computer Says No”

Given that many organizations use opaque algorithms to make critical decisions – sometimes with disastrous results – the creation of a legal framework that would encourage them to adopt explainable AI is to be welcomed. So says Matt Armstrong-Barnes, chief technologist at Hewlett Packard Enterprise.

“If we want AI – constructed from complex mathematics – to play a role in decision-making, we as citizens have the right to understand how the AI came to a decision, regardless of its complexity,” he argues. “Explainable AI can answer the fundamental question: why? Once we know this, the decision can be evaluated to ensure that it was made without prejudice. ‘Computer says no’ is no longer acceptable or desirable.”

Pip White, MD of Google Cloud in the UK and Ireland, agrees. “Your ability to fully understand your AI and machine learning models is key to your ability to adopt the technology safely, especially in regulated industries where trust is vital,” she says. “It’s also of paramount importance in removing biases and other gaps in data or models. The better informed you are about the ‘why’ of AI-driven decisions, the more useful and responsible your AI deployments will be.”
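What answering that “why” looks like in practice depends on the model, but one simple, model-agnostic illustration is permutation importance: shuffle a single input feature and measure how much the model’s accuracy degrades, revealing which inputs actually drive its decisions. A minimal sketch, using a synthetic “loan approval” dataset in which the feature names and data are invented for illustration:

```python
# Permutation importance: score each feature by the accuracy lost
# when that feature's values are randomly shuffled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy "loan approval" data: income (in £k) strongly drives the label,
# while the noise feature is irrelevant by construction.
n = 500
income = rng.normal(50, 15, n)
noise = rng.normal(0, 1, n)
X = np.column_stack([income, noise])
y = (income + rng.normal(0, 5, n) > 50).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature 10 times and record the mean accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Running this shows a large accuracy drop when income is shuffled and essentially none for the noise feature: exactly the kind of evidence an organisation could offer when asked why its model said no.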

But not all experts believe that the bill, which provides for fines of up to 6% of a company’s global turnover for the most serious violations, will have a sufficiently positive impact if it is passed in its current form.

By setting standards, we can pave the way to ethical technology around the world

“You have to admire the EU for turning up late to the party and telling everyone to turn the music down,” says Mark K Smith, founder and CEO of ContactEngine, a conversational AI company. “I agree that AI needs regulation, but regulation that stifles innovation would not be helpful and would only encourage developments elsewhere.”

A well-timed reset

Van der Putten, who insists that AI should never replace human intelligence, believes the proposed law will serve as a “reset moment” for the technology and its advocates, since it will help to rebuild confidence.

EU intervention is timely, agrees Joe Baguley, EMEA vice president and chief technology officer of enterprise software company VMware. A survey by his company earlier this year found that only 43% of Britons trust AI.

“This lack of trust can be traced back to AI’s perceived lack of transparency, which must be a key concern for executives,” says Baguley. “There’s no doubt that AI has the potential to revolutionize the workplace and society, but the need for explainable AI becomes more pressing as fears about the technology remain high.”

He continues: “If even the developers don’t know why and how an AI reaches its conclusions, we’re on a slippery slope as algorithms become more complex. The more insight the public is given into the way AI makes decisions, the more confidence people will have in the organizations using the technology.”

Kasia Borowska, managing director of AI consulting firm Brainpool, believes the rest of the world needs to keep up with the EU in regulating technology.

“The next step must be to make these regulations international, because unequal laws between different blocs could have catastrophic consequences in the long term,” she warns. “International leaders should look into this urgently. We know that AI will bring unprecedented advantages to those in less regulated countries.”

How should companies in the UK respond to the lead that Brussels is taking? “Be more of a guide dog than a watchdog,” advises Caroline Gorski, head of R² Data Labs at Rolls-Royce. “Create your own simple framework that meets the EU’s requirements. Focus on defining what can be done rather than what can’t, break that down into steps with verifiable standards at each stage, and bring everyone along as you build the process.”

Simon Bullmore, co-founder and CEO of data literacy consulting firm Mission Drive, suggests that companies seeking guidance on explainable AI should consider the Open Data Institute, the Alan Turing Institute, and the Office for Artificial Intelligence.

He urges business leaders to view the EU’s initiative as an opportunity to invest in explainable AI – and train themselves and their employees in the technology.

“Regulators intervene when they lose confidence in the market’s competence and willingness to self-regulate,” says Bullmore. “Part of the challenge with AI is the gap between what executives know about the technology and what their organizations are doing with it.”

Now that the rules of the game are changing, it is the proactive leaders who will gain a competitive edge by getting back to basics with AI.