GDPR 2.0: What do Europe’s new AI rules mean for businesses

This article originally appeared in the May issue of IT Pro 20/20, which is available here. To subscribe to receive every new issue in your inbox, click here.

In April, the European Commission (EC) presented plans to regulate Artificial Intelligence (AI). The first-of-its-kind proposal includes bans on practices that “manipulate people beyond their consciousness through subliminal techniques”, as well as on the use of AI for mass law enforcement surveillance and for government social scoring of the kind currently used in China.

While the move has been widely welcomed by data protection officers and is largely seen as a step in the right direction, the new rules could also bring far-reaching changes that seriously disrupt companies’ businesses. Given that in 2019 nearly 40% of companies were using some type of AI or machine learning technology – a number that has likely grown thanks to the digital transformation sparked by the pandemic – large numbers of businesses could be forced to conduct risk assessments or to continuously review their AI systems.

Failure to do so will carry consequences similar to those of the General Data Protection Regulation (GDPR), which became the de facto data protection standard for many of the world’s largest companies when it came into force in May 2018. Anyone who violates the AI rules can be fined up to 6% of their worldwide turnover or €30 million, whichever is higher.
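To make the “whichever is higher” mechanics concrete, here is a quick illustrative calculation (the turnover figure is invented, not taken from any real case):

```python
def max_ai_fine(worldwide_turnover_eur: float) -> float:
    """Upper bound of a fine under the proposed rules: 6% of worldwide
    turnover or EUR 30 million, whichever is higher."""
    return max(0.06 * worldwide_turnover_eur, 30_000_000)

# A hypothetical company with EUR 1bn turnover: 6% (EUR 60m) exceeds
# the EUR 30m floor, so the higher figure applies.
print(max_ai_fine(1_000_000_000))  # 60000000.0
```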

A risky endeavor

At the macro level, the new EU rules target “high risk” AI systems such as facial recognition, self-driving cars and AI systems used in the financial industry. In these areas, those who use AI systems must carry out a risk assessment and take measures to mitigate hazards; use high-quality data sets to train the system; log activities so that AI decisions can be recorded and traced (one way of doing this is sketched below); keep detailed documentation about the system and its purpose to demonstrate compliance with the law to regulators; provide clear and appropriate information to the user; have “adequate human oversight”; and guarantee a “high degree of robustness, security and accuracy”.
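As a minimal sketch of the logging and traceability requirement – our own illustration, not something prescribed by the proposal – a company might wrap its model calls in an audit trail. The model name, version and toy scoring logic below are all hypothetical; the point is that every decision is recorded alongside the exact inputs and model version that produced it:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def logged_decision(model_name, model_version, predict_fn, features):
    """Run a prediction and write an audit record so the decision can
    later be traced to the exact model version and inputs used."""
    decision = predict_fn(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "inputs": features,
        "decision": decision,
    }))
    return decision

# Hypothetical credit model, purely for illustration
def toy_credit_model(features):
    return "approve" if features["income"] >= 0.2 * features["loan"] else "refer"

logged_decision("credit-scorer", "1.4.2", toy_credit_model,
                {"income": 48000, "loan": 150000})
```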

While this list of steps has been welcomed by those with a keen eye for privacy, it is unlikely to be so graciously received by those who must ensure the measures are actually in place. Ilia Kolochenko, CEO of ImmuniWeb, a global application security company that develops AI and ML technologies for its SaaS-based security solutions, believes the stringent requirements “will be tedious to put into practice.”

“For example, evaluating high-risk AI systems will be a tedious and costly task that could also compromise many of the trade secrets of European companies,” he told IT Pro. “Also, most AI systems are not static and are constantly being improved, so the new regulations offer no guarantee that a system will remain compliant after the audit.

“In addition, the required explainability and traceability of AI output is often not technically possible. Finally, regulating AI in isolation leaves the door wide open to traditional software that offers the same capabilities in high-risk operational areas. In short, this timely idea certainly deserves further discussion and elaboration; however, practicality will be the key to its success or failure.”

Guillaume Couneson, partner at Linklaters law firm in Brussels, believes there could be other ramifications for companies operating in high-risk areas and for artificial intelligence itself.

“If not properly calibrated, this approach could stifle innovation and create barriers to the adoption of AI in the European Union,” he says.

“It will be important to remain flexible with the categorization, in order not only to add new high-risk uses in the future, but also to remove certain uses that are no longer considered high-risk – for example, because the use of an AI system in practice has shown that certain expected risks did not materialize, or because individuals have become accustomed to a certain use of AI systems.”

Black box game

While the EU rules are primarily aimed at those involved in the development of high-risk AI systems, it is unlikely that only these companies will be affected. With the increasing use of AI and ML technologies, it’s hard to find a company that doesn’t rely on an algorithm to make important decisions: whether we qualify for a mortgage, how much we pay for our flights, what type of advertisements we’re shown and even the quality of customer support we receive.

Many of these organizations will use a black box solution – a system that can be observed only in terms of its inputs and outputs, without knowledge of its internal workings – which often means companies are unable to explain how these machines reach particular decisions.
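As a minimal illustration (our own, not from the article), the black box problem looks like this in code: the business sees what goes in and what comes out, but the decision logic – here, hypothetical hidden weights – is invisible, so there is nothing to show a regulator who asks why an applicant was scored a certain way:

```python
def black_box_score(applicant: dict) -> float:
    """Stand-in for a vendor-supplied model: the caller never sees inside."""
    # Hidden weights the deploying company cannot inspect or explain
    weights = {"income": 0.6, "age": -0.1, "postcode_risk": -0.3}
    return sum(w * applicant[k] for k, w in weights.items())

applicant = {"income": 0.8, "age": 0.4, "postcode_risk": 0.9}
print(black_box_score(applicant))  # ~0.17 -- but why? No trace, no explanation
```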

Andre Franca, director of applied data science at deep tech company causaLens, warns, “Failure to understand how the machine makes a decision can lead to catastrophic and unfair results. The regulation is forcing companies to take more responsibility for the AI machines they provide, which is crucial as AI becomes more prevalent across all industries.

“With fines of up to 4% of a company’s worldwide annual turnover, it is imperative that companies understand how their AI models work in order to demonstrate compliance to regulators and the board of directors. The criteria set out by the European AI Board will mean companies need to review their existing AI models and establish what needs to change to meet the requirements.

“Companies using causal AI – which is by its nature a glass box solution that allows them to look under the hood and have full control, visibility and transparency over their model – can breathe a sigh of relief that they already have the necessary information when the board or a regulator comes knocking.”

Alert AI

Of course, some industries are undoubtedly being scrutinized more closely than others – and not just those developing the AI systems in question. This is largely determined by the amount of personal and sensitive data they process, which means sectors like healthcare, transport and basic public services could be hardest hit.

Franki Hackett, Head of Audit and Ethics at AI and data specialist Engine B, told IT Pro, “The amount of personal and sensitive data used will often determine the impact of these proposals, and much has already been said about how financial services companies or healthcare providers could be affected.

“For example, companies whose entire business model relies on scraping and reusing photos from social media without express permission to develop facial recognition algorithms,” she added. “Others who use personal data to offer financial services, for example, may need more robust governance structures, and still others will find that they are not affected at all.”


Camilla Winlo, Advisory Director at DQM GRC, also believes a variety of sectors will be affected by AI regulations, from education and emergency services to hardware manufacturers and border controls. She advises that while the rules are a long way from going into effect, companies need to start taking the appropriate steps now.

“I would recommend that organizations likely to be affected by the regulation review their Data Protection Impact Assessments (DPIAs), accompanying documents and controls in light of the requirements, and see if there are any gaps that need to be filled,” she tells IT Pro. “This should be welcomed by the organizations concerned, as it will improve their products. The rules are about ensuring that products are attractive to potential customers, that they are socially useful, and that the risk of undesirable outcomes is minimized – all things that any company, with or without regulation, should want.”

GDPR 2.0?

It remains to be seen whether companies will embrace these changes, as many are likely to fear the legislation will prove to be a GDPR 2.0 – forcing them to grapple with unfamiliar rules and regulations, under threat of a hefty fine if they fail to adhere to them.

However, the experience of implementing the GDPR could make the new rules easier to handle, especially for companies that already have strict data protection practices in place.

Emma Erskine-Fox, Associate at UK law firm TLT, said, “Companies that already have robust governance frameworks for GDPR compliance are likely to have an advantage over those who ‘start from scratch’, but nonetheless the new requirements will add another layer of complexity for all organizations dealing with AI technology.”

Similarly, others don’t believe the AI rulebook will be as big a disruption for businesses as the GDPR.

Henrik Nordmark, Director of Science, Data and Innovation at the data science company Profusion, says: “These changes are not asking too much. One of the reasons GDPR is such a huge challenge is that every business, small or large, needs to handle data and do so securely.

“But even at the risk of oversimplifying, these new rules for AI mainly affect those who create new AI systems. Most companies will have to exercise due diligence in selecting AI products and services from reputable providers, but will not necessarily be involved in developing their own AI. And if a company is AI-centric, then it should definitely be thinking about the risks its development poses to society and how those risks can be mitigated.

“This is akin to the safety laws we have enacted for pharmaceutical companies, to ensure that their inventions, too, are safe.”
