EU Debates AI Act to Protect Human Rights, Define High-Risk Uses
The European Commission argues that legislative action is needed to ensure a well-functioning market for AI systems that balances benefits and risks.
The European Commission (EC) is currently debating new rules and actions for trust and accountability in artificial intelligence (AI) technology through a legal framework called the EU AI Act. Its aim is to promote the development and uptake of AI while addressing potential risks some AI systems can pose to safety and fundamental rights.
While most AI systems will pose low to no risk, the EU says, some create dangers that must be addressed. For example, the opacity of many algorithms may create uncertainty and hamper effective enforcement of existing safety and rights laws.
The EC argues that legislative action is needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately addressed.
“The EU AI Act aims to be a human-centric legal-ethical framework that intends to safeguard and protect human rights and fundamental freedoms from violations of these rights and freedoms by algorithms and smart machines,” says Mauritz Kop, Transatlantic Technology Law Forum Fellow at Stanford Law School and strategic intellectual property lawyer at AIRecht.
The right to know whether you are dealing with a human or a machine, which is becoming increasingly difficult as AI becomes more sophisticated, is part of that vision, he explains.
Kop notes that AI is now mostly unregulated, except for a few sector-specific rules. The act aims to address the legal gaps and loopholes by introducing a product safety regime for AI.
“The risks are too high for nonbinding self-regulation by companies alone,” he says.
Effects on AI Innovation
Kop admits that regulatory conformity and legal compliance will be a burden, especially for early-stage AI startups developing high-risk AI systems. Empirical research shows that the GDPR, while preserving privacy, data protection, and data security, had a negative effect on innovation, he notes.
Risk classification for AI is based on the intended purpose of the system, in line with existing EU product safety legislation. Classification depends on the function the AI system performs and on the specific purpose and modalities for which the system is used.
“The legal uncertainty surrounding [regulation] and the lack of budget to hire specialized lawyers or multidisciplinary teams still are significant barriers for a flourishing AI startup and scale-up ecosystem,” Kop says. “The question remains whether the AI Act will improve or worsen the startup climate in the EU.”
The EC will determine which AI gets classified as “high risk” using criteria that are still under debate, creating a list of examples of high-risk systems to help guide judgment.
“It will be a dynamic list that contains various types of AI applications used in certain high-risk industries, which means the rules get stricter for riskier AI in healthcare and defense than they are for AI apps in tourism,” Kop says. “For instance, medical AI is [classified as] high risk to prevent direct harm to patients due to AI errors.”
He notes there is still controversy about the criteria and definition of AI that the draft uses. Some commentators argue it should be more technology-specific, aimed at certain riskier types of machine learning, such as deep unsupervised learning or deep reinforcement learning.
“Others focus more on the intent of the system, such as social credit scoring, instead of potentially harmful results, such as neuro-influencing,” Kop adds. “A more detailed classification of what ‘risk’ entails would thus be welcome in the final version of the act.”
Facial Recognition as a High-Risk Technology
Joseph Carson, chief security scientist and advisory CISO at Delinea, participated in several of the talks around the act, serving as a subject matter expert on the use of AI in law enforcement and articulating concerns around security and privacy.
The EU AI Act, he says, will mainly affect organizations that already collect and process personally identifiable information. It will therefore affect how they use advanced algorithms in processing that data.
“It is important to understand the risks if no regulation or act is in place and what the possible impact [is] if organizations abuse the combination of sensitive data and algorithms,” Carson says. “The future of the Internet is a scary place, and the enforcement of the EU AI Act allows us to embrace the future of the Internet using AI with both responsibility and accountability.”
Regarding facial recognition, he says the technology should be regulated and controlled.
“It has many amazing uses in society, but it must be something you opt in and agree to use; citizens must have a choice,” he says. “If no act is in place, we will see a significant increase in deepfakes that will spiral out of control.”
Malin Strandell-Jansson, senior knowledge expert at McKinsey & Co., says facial recognition is one of the most debated issues in the draft act, and the final outcome is not yet clear.
In its draft format, the AI Act strictly prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, as it poses particular risks for fundamental rights – notably human dignity, respect for private and family life, protection of personal data, and nondiscrimination.
Strandell-Jansson points out a few exceptions, including law enforcement use for the targeted search for specific potential victims of crime, including missing children; the response to an imminent threat of a terror attack; or the detection and identification of perpetrators of serious crimes.
“Regarding private businesses, the AI Act considers all emotion recognition and biometric categorization systems to be high-risk applications if they fall under the use cases identified as such — for example, in the areas of employment, education, law enforcement, migration, and border control,” she explains.
As such, potential providers would have to subject such AI systems to transparency and conformity obligations before putting them on the market in Europe.
The Time to Act on AI Is Now
Dr. Sohrob Kazerounian, AI research lead at Vectra, an AI cybersecurity company, says the need to create a regulatory framework for AI has never been more pressing.
“AI systems are rapidly being integrated into products and services across wide-ranging markets,” he says. “Yet the trustworthiness and interpretability of these systems can be rather opaque, with poorly understood risks to users and society more broadly.”
While some existing legal frameworks and consumer protections may be relevant, applications that use AI are sufficiently different from traditional consumer products that they necessitate fundamentally new legal mechanisms, he adds.
The overarching goal of the bill is to anticipate and mitigate the most critical risks resulting from the use and failure of AI, with actions ranging from banning systems deemed to have “unacceptable risk” altogether to heavy regulation of “high-risk” systems. Another, albeit less-noted, consequence of the framework is that it could provide clarity and certainty to markets about what regulations will exist and how they will be applied.
“As such, the regulatory framework may in fact result in increased investment and market participation in the AI sector,” Kazerounian says.
Limits for Deepfakes and Biometric Recognition
By addressing specific AI use cases, such as deepfakes and biometric or emotion recognition, the AI Act hopes to mitigate the heightened risks such technologies pose, including violation of privacy, indiscriminate or mass surveillance, profiling and scoring of citizens, and manipulation, Strandell-Jansson says.
“Biometrics for categorization and emotion recognition have the potential to lead to infringements of people’s privacy and their right to the protection of personal data as well as to their manipulation,” she says. “In addition, there are serious doubts as to the scientific nature and reliability of such systems.”
The bill would require people to be notified when they encounter deepfakes, biometric recognition systems, or AI applications that claim to be able to read their emotions. Although that’s a promising step, it raises a couple of potential issues.
Overall, Kazerounian says it is “undoubtedly” a good start to require increased visibility for consumers when they are being classified by biometric data and when they are interacting with AI-generated content rather than real humans or real content.
“Unfortunately, the AI Act specifies a set of application areas within which the use of AI would be considered high-risk, without necessarily discussing the risk-based criteria that could be used to determine the status of future applications of AI,” he says. “As such, the seemingly ad hoc decisions about which application areas are considered high-risk simultaneously appear to be too specific and too vague.”
Current high-risk areas include certain types of biometric identification, operation of critical infrastructure, employment decisions, and some law enforcement activities, he explains.
“Yet it isn’t clear why only these areas were considered high-risk, and furthermore [the act] doesn’t delineate which applications of statistical models and machine-learning systems within these areas should receive heavy regulatory oversight,” he adds.
Possible Groundwork for Similar US Law
It is unclear what this act could mean for a similar law in the US, Kazerounian says, noting that it has now been more than half a decade since the passage of GDPR, the EU's data privacy law, without any similar federal law yet following in the US.
“Nevertheless, GDPR has undoubtedly influenced the behavior of multinational corporations, which have either had to fracture their policies around data protections for EU and non-EU environments or simply have a single policy based on GDPR applied globally,” he says. “In any case, if the US decides to propose legislation on the regulation of AI, at a minimum it will be influenced by the EU act.”