On April 21, 2021, the European Commission released a highly anticipated proposal for a regulation governing artificial intelligence (AI). The proposal was drafted by the Commission and its advisers and plays a central role in the Commission's ambitious European Strategy for Data.
While the regulation faces a long road to finalization, businesses should prepare for significant regulation in this space. Through this Alert, Foley Hoag intends to provide you with the basics of the proposal, including steps that your business can take now to prepare for the types of changes that this regulation will require.
Who is impacted by the regulation?
As written, the proposed rules will cover:
- providers that place on the market or put into service AI systems, irrespective of whether those providers are established in the European Union or in a third country;
- users of AI systems in the EU; and
- providers and users of AI systems that are located in a third country where the output produced by the system is used in the EU.
How does the Commission define AI?
The term “AI system” has a broad definition: “software that…can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing environments they interact with.”
Are there any types of actions prohibited by the proposal?
Yes, the proposal lists a number of AI practices that are prohibited:
- Placing on the market, putting into service or using an AI system that deploys subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior in a manner that causes that person or another person physical or psychological harm.
- Placing on the market, putting into service or using an AI system by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons, where the resulting social score leads to detrimental or unfavorable treatment that is either unrelated to the context in which the data was originally generated or collected, or unjustified or disproportionate.
- Use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes. This prohibition admits narrow exceptions subject to specific requirements, including prior judicial or administrative authorization for each individual use.
Is facial recognition covered by the proposal?
Yes, certain law enforcement uses of facial recognition systems intended for public spaces are covered under prohibited AI practices.
What areas are the main focus of the proposal?
The proposal focuses on “high risk” areas for AI. This includes:
- AI systems that are products, or safety components of products, covered by EU product safety legislation, such as legislation on machinery, toys, pressure equipment or medical devices.
- AI systems used for biometric identification and categorization of natural persons, management and operation of critical infrastructure, education and vocational training, employment, law enforcement, migration, asylum and border control, and administration of justice and democratic processes.
How should businesses approach self-regulation of AI?
The draft regulation adopts a “development through deployment” approach. This means that high-risk AI systems are subject to scrutiny before they are placed on the market or put into service as well as throughout their life cycle.
What protocols or systems should businesses consider for their AI governance to meet regulatory benchmarks?
Businesses can start implementing internal governance processes now. According to the draft regulation, businesses should consider establishing a mandatory risk management system, strict data use and data governance requirements, technical documentation and record-keeping requirements, and post-market monitoring and incident-reporting requirements.
How can businesses meet these requirements?
Businesses should consider external assessments for AI compliance. The draft regulation requires a conformity assessment performed either by a third party or by the provider itself. The obligations for compliance under the proposal may affect all parties involved: the provider, importer, distributor and user of the AI system.
What about transparency?
AI systems must be designed and developed in such a way that human oversight is guaranteed.
Additionally, the regulation contains special transparency provisions to ensure that people know when they are interacting with an AI system rather than a human decision-maker.
Who is responsible for enforcement?
Preexisting regulation touching AI, like France’s facial recognition statutes, will need to be harmonized with the EU regulation when it is adopted.
If adopted, the regulation will be enforced by the EU member states. But in the future, the proposal foresees the establishment of a European AI Board that will be responsible for: assisting the national supervisory authorities and Commission to ensure the consistent application of the regulation; issuing opinions and recommendations; and collecting and sharing best practices among member states.
The regulation, once adopted, will come into force 20 days after its publication in the Official Journal. The provisions will become enforceable 24 months after that date, a long grace period that allows businesses time to comply and accounts for innovation that may occur between drafting and adoption.
What happens next?
The draft regulation will likely not be adopted for some time, given the complexity of AI and the number of nations and stakeholders involved. The European Parliament and Council will engage in further consideration and debate, and consider amendments.
However, this draft regulation is likely to set the benchmark for ethical AI. In the absence of other guidance, companies should consider it carefully as an indication of where national legislation and international consensus are moving on issues surrounding AI.
This article was originally published in the Foley Hoag newsletter.