The European Union is moving ahead with the AI Act, the world’s first comprehensive AI law. On June 14th, 2023, the European Parliament approved the AI Act draft and moved it to the next stage of the legislative process.
Let’s take a closer look at what new rules the AI Act contains and what the next steps are.
Goals of the EU AI Act
The primary aim of the EU AI Act is to ensure the safety, transparency, and traceability of AI systems used in the European Union. It also seeks to prevent the violation of fundamental rights, including non-discrimination, freedom of expression, human dignity, personal data protection, and privacy.
The legal definition of AI
In order for the AI Act to be effectively implemented, the EU first needed to define what artificial intelligence is. The European Parliament aimed to create a technology-neutral and uniform definition for AI that could be applied to future AI systems.
In the context of European law, an “artificial intelligence system” (AI system) is defined as:
software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with
The AI techniques and approaches listed in Annex I are:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
As new AI technologies emerge, the European Commission will update Annex I to maintain an up-to-date definition of AI systems.
Risk levels explained
The EU AI Act introduces a risk-based approach to AI safety and regulation. AI systems will be classified into one of three risk categories:
Unacceptable risk
High risk
Low or minimal risk
The amount of regulation depends on which risk level an AI application falls into, as the illustrative sketch below summarizes.
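To make the tiering concrete, here is a minimal sketch of how a compliance tool might encode these levels. It is purely illustrative: the names and obligation summaries are my own shorthand, not terms from the Act.

```python
from enum import Enum

# Hypothetical encoding of the AI Act's risk tiers -- illustrative only,
# not an official or legally meaningful classification.
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"      # banned outright in the EU
    HIGH = "high"                      # allowed, subject to strict requirements
    LOW_OR_MINIMAL = "low_or_minimal"  # no additional legal obligations

def obligations(level: RiskLevel) -> str:
    if level is RiskLevel.UNACCEPTABLE:
        return "prohibited -- cannot be placed on the EU market"
    if level is RiskLevel.HIGH:
        return "registration, conformity assessment, ongoing monitoring"
    return "no additional obligations (voluntary codes of conduct encouraged)"

print(obligations(RiskLevel.HIGH))
```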
Unacceptable risk
Any AI system that reaches the Unacceptable risk level is banned in the EU.
According to the proposed AI Act, AI systems are deemed unacceptable if they:
deploy harmful, manipulative “subliminal techniques” beyond a person’s consciousness to distort their behaviour, or are designed to exploit specific vulnerable groups (based on age, or physical or mental disability)
are used by public authorities, or on their behalf, for social scoring purposes
perform real-time remote biometric identification (primarily facial recognition systems) in publicly accessible spaces for law enforcement purposes (except in limited cases, e.g., targeted searches for potential crime victims or tracking suspected criminals or terrorists). This point mainly targets the use of facial recognition for mass surveillance.
These systems are considered to be a clear threat to people's safety, livelihoods and rights, and will be banned in the EU.
High risk
An AI system is considered high risk if it adversely impacts people's safety or fundamental rights.
An AI system is deemed to be high risk if it falls under one of two categories:
AI systems used as a safety component of a product or as a product falling under EU’s health and safety legislation, such as toys, cars, medical devices, lifts, or aviation
AI systems deployed in one of the eight specific areas listed in Annex III:
Biometric identification and categorisation of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Assistance in legal interpretation and application of the law
The European Commission has the authority to update the list of high-risk AI systems at any time.
Providers of high-risk AI systems must register their systems in an EU-wide database managed by the European Commission before introducing them to the market or putting them into service. Providers of AI systems not currently governed by EU legislation must conduct a conformity self-assessment to demonstrate compliance with the new requirements for high-risk AI systems and may utilize CE marking.
High-risk AI systems must adhere to several requirements, including risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity. In short, providers of high-risk AI systems will be required to explain in detail how their system works, its intended purpose, possible risks, and its safety assessment, and how the system will be monitored throughout its lifecycle to ensure it complies with EU regulations.
Low or minimal risk
All other AI systems presenting only low or minimal risk could be developed and used in the EU without conforming to any additional legal obligations.
The European Commission says the vast majority of AI systems used in the EU fall into this category. Examples of low or minimal-risk AI systems are AI used in video games, spam filters, chatbots or image generators.
However, the proposed AI Act envisions the European Commission and EU member states creating codes of conduct encouraging providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems. Additionally, these codes of conduct may incorporate voluntary commitments related to environmental sustainability, accessibility for individuals with disabilities, stakeholder participation in the design and development of AI systems, and diversity among development teams.
Transparency obligations
All AI systems designed to interact with people must be clearly labelled as such unless it is obvious from the circumstances and context of use that they are not human. For instance, when someone interacts with a chatbot, it should be clear that it is a chatbot and not a human.
Systems generating image, audio, or video content (like text-to-image generators) must label their output as AI-generated.
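What compliance with this labelling duty looks like in practice is not prescribed by the Act. As a purely hypothetical sketch, a provider might attach a machine-readable disclosure to every generated artifact:

```python
# Hypothetical disclosure wrapper -- the AI Act requires that generated
# content be labelled, but does not prescribe any particular format.
def label_generated_output(content: bytes, media_type: str) -> dict:
    return {
        "media_type": media_type,  # e.g. "image/png", "audio/wav"
        "ai_generated": True,      # explicit, machine-readable label
        "content": content,
    }

labelled = label_generated_output(b"...", "image/png")
assert labelled["ai_generated"]
```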
What about disclosing copyrighted material used in training AI systems?
The issue of disclosing copyrighted material used in training AI systems is one of the more controversial points of discussion surrounding the EU AI Act, and has even prompted speculation that companies like OpenAI could withdraw their services from the EU.
The proposed regulations do not explicitly state that all companies must disclose the copyrighted material used in training. Instead, the rules specify that providers of high-risk AI systems must include, as part of their technical documentation:
where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection)
This means that providers of high-risk AI systems are required to disclose the source of the training data, provide a description of the data sets, and explain the methods used to acquire and prepare the data. However, providers of low-risk AI systems will be encouraged to voluntarily adhere to the mandatory requirements for high-risk AI systems, even if their system is not classified as high-risk.
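To illustrate what such documentation might contain, here is a hypothetical datasheet for a training data set. The field names are my own; the Act describes the required information but does not prescribe a format.

```python
# Hypothetical training-data datasheet -- field names are illustrative,
# not prescribed by the AI Act.
training_data_datasheet = {
    "dataset_name": "example-corpus-v1",        # placeholder name
    "provenance": "licensed publisher archive", # where the data came from
    "scope_and_characteristics": "news articles, 2010-2020, English",
    "collection_and_selection": "bulk licence; deduplicated, language-filtered",
    "labelling_procedure": "human annotation, two raters per item",
    "data_cleaning": "outlier detection, removal of personal data",
}
```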
Who will the rules apply to?
The new rules would apply primarily to providers of AI systems established within the EU or in a third country placing AI systems on the EU market or putting them into service in the EU, as well as to users of AI systems located in the EU.
To prevent circumvention of the regulation, the new rules would also apply to providers and users of AI systems located in a third country where the output produced by those systems is used in the EU.
In other words, if an AI system touches the EU, the rules defined in the EU AI Act will apply to it.
The proposed regulations will not apply:
to AI systems developed or used exclusively for military purposes
to public authorities in a third country, nor to international organisations, or authorities using AI systems in the framework of international agreements for law enforcement and judicial cooperation
What if someone breaks the rules?
Those found to be in breach of the AI Act may face fines of up to €30 million or 6% of global annual turnover, whichever is higher.
For a company like Microsoft, which heavily supports OpenAI and uses its services, the potential consequences of violating the rules could include a fine of over $11 billion.
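As a rough back-of-the-envelope check of that figure (assuming an annual turnover of about $198 billion, roughly Microsoft’s reported revenue for fiscal year 2022):

```python
# The fine is capped at the higher of EUR 30 million or 6% of global annual
# turnover; for a company of this size, the 6% figure dwarfs the flat cap.
flat_cap_eur = 30_000_000
annual_turnover_usd = 198_000_000_000  # approximate Microsoft FY2022 revenue

print(f"6% of turnover: ${0.06 * annual_turnover_usd / 1e9:.2f} billion")
# -> 6% of turnover: $11.88 billion
```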
What about innovation?
One of the concerns around the EU AI Act is how it will impact the research and development of AI in Europe, and the competitiveness of European AI startups and companies.
The AI Act proposes the creation of AI regulatory sandboxes. These sandboxes are designed to provide an environment that facilitates the development, testing, and validation of innovative AI systems (for a limited period of time) before they are put on the market.
Small-scale providers and start-ups will have priority access to the AI regulatory sandboxes.
European Artificial Intelligence Board
The AI Act will also create a European Artificial Intelligence Board. The Board will be tasked with advising and assisting the European Commission in implementing the rules outlined in the AI Act. It will collect and share expertise and best practices, and ensure uniform application of the AI regulations across EU member states.
What’s next?
The proposed AI Act was approved by the European Parliament on June 14th, 2023. Talks with EU countries in the Council on the final form of the law will now begin.
The negotiations will begin in earnest once Spain takes over the rotating presidency of the Council in July. The aim is to reach an agreement by the end of this year.
Once an agreement has been reached, the AI Act will be formally adopted by the EU. As an EU regulation, it will then apply directly across member states after a transition period, which may take up to two years.
Disclaimer: This post does not constitute legal advice. The information provided is for general purposes only and should not be relied upon for specific legal situations. It is advisable to consult with a qualified legal professional for personalized advice. The author and publisher of this post are not responsible for any errors or actions taken based on the information provided.
H+ Weekly sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers and to my Patreons: whmr, Florian, dux, Eric and Andrew. Thank you for the support!