The Articles of the EU Artificial Intelligence Act (25.11.2022)



Preamble 1 to 10, Artificial Intelligence Act (Proposal 25.11.2022)


(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.


(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross-border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations.

Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU).

To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.


(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.


(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.


(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law.

To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services.

By laying down those rules and building on the work of the High-level Expert Group on Artificial Intelligence as reflected in the Guidelines for Trustworthy Artificial Intelligence in the EU, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.


(5a) The harmonised rules on the placing on the market, putting into service and use of AI systems laid down in this Regulation should apply across sectors and, in line with its New Legislative Framework approach, should be without prejudice to existing Union law, notably on data protection, consumer protection, fundamental rights, employment and product safety, to which this Regulation is complementary.

As a consequence, all rights and remedies afforded by such Union law to consumers and other persons who may be negatively impacted by AI systems, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, remain unaffected and fully applicable. On top of that, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of transparency, technical documentation and record-keeping of AI systems.

Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national laws, in compliance with Union law, having the effect of limiting the use of certain AI systems where such laws fall outside the scope of this Regulation or pursue other legitimate public interest objectives than those pursued by this Regulation. For example, national labour law and the laws on the protection of minors (i.e. persons below the age of 18) taking into account the United Nations General Comment No 25 (2021) on children’s rights, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.


(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on key functional characteristics of artificial intelligence such as its learning, reasoning or modelling capabilities, distinguishing it from simpler software systems and programming approaches.

In particular, for the purposes of this Regulation AI systems should have the ability, on the basis of machine and/or human-based data and inputs, to infer the way to achieve a set of final objectives given to them by humans, using machine learning and/or logic- and knowledge-based approaches, and to produce outputs such as content for generative AI systems (e.g. text, video or images), predictions, recommendations or decisions, influencing the environment with which the system interacts, be it in a physical or digital dimension. A system that uses rules defined solely by natural persons to automatically execute operations should not be considered an AI system.

AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The concept of the autonomy of an AI system relates to the degree to which such a system functions without human involvement.
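The boundary drawn above, between a system that merely executes rules defined solely by natural persons and one that infers how to meet its objectives from data and inputs, can be sketched in a short, purely hypothetical Python illustration; the functions, data and threshold below are invented for this note and carry no legal meaning.

# Hypothetical contrast (illustration only, not part of the Regulation):
# a routine whose behaviour is fixed entirely by a human-written rule, versus
# a routine that derives its decision parameter from data.

def rule_based_discount(order_total: float) -> float:
    """Rule defined solely by a natural person: a fixed 10% discount above 100."""
    return order_total * 0.9 if order_total > 100.0 else order_total

def inferred_threshold(history: list[tuple[float, bool]]) -> float:
    """Derive a threshold from past observations instead of hard-coding it."""
    converted = [total for total, did_convert in history if did_convert]
    return min(converted) if converted else float("inf")

if __name__ == "__main__":
    past_orders = [(80.0, False), (120.0, True), (150.0, True)]
    print(rule_based_discount(130.0))       # behaviour fully specified by the rule
    print(inferred_threshold(past_orders))  # behaviour derived from data (120.0)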


(6a) Machine learning approaches focus on the development of systems capable of learning and inferring from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions from input to output. Learning refers to the computational process of optimising the parameters of the model from data; the model is a mathematical construct that generates an output based on input data.

The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks, statistical techniques for learning and inference (including for instance logistic regression, Bayesian estimation) and search and optimisation methods.
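Recital (6a) describes learning as optimising the parameters of a model from data; a minimal, hypothetical sketch of supervised learning with one of the methods the recital names (logistic regression, fitted by gradient descent) is given below. The example data and learning settings are invented for illustration.

import math

# Minimal sketch of "learning" in the sense of recital (6a): the parameters
# (weight w and bias b) of a logistic-regression model are optimised from
# example data rather than programmed step by step.

def train_logistic(examples, epochs=2000, learning_rate=0.1):
    """Fit w and b by gradient descent on the logistic loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # model output for input x
            w -= learning_rate * (p - y) * x          # gradient step on the weight
            b -= learning_rate * (p - y)              # gradient step on the bias
    return w, b

def predict(w, b, x):
    """Produce an output (a prediction) for new input data."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5

if __name__ == "__main__":
    data = [(0.2, 0), (0.4, 0), (1.6, 1), (1.8, 1)]  # (input, label) pairs
    w, b = train_logistic(data)
    print(predict(w, b, 1.7), predict(w, b, 0.3))    # learned behaviour: True False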


(6b) Logic- and knowledge-based approaches focus on the development of systems with logical reasoning capabilities on knowledge to solve an application problem. Such systems typically involve a knowledge base and an inference engine that generates outputs by reasoning on the knowledge base. The knowledge base, which is usually encoded by human experts, represents entities and logical relationships relevant for the application problem through formalisms based on rules, ontologies, or knowledge graphs.

The inference engine acts on the knowledge base and extracts new information through operations such as sorting, searching, matching or chaining. Logic- and knowledge-based approaches include for instance knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems and search and optimisation methods.
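The knowledge base and inference engine described in recital (6b) can be illustrated, purely hypothetically, as a set of human-encoded if-then rules and a simple forward-chaining loop; the rule and fact names below are invented for this sketch and have no legal significance.

# Hypothetical sketch of a logic- and knowledge-based approach: a human-encoded
# knowledge base of if-then rules and an inference engine that derives new
# facts by forward chaining until nothing more can be concluded.

KNOWLEDGE_BASE = [
    ({"has_sensor_input", "matches_reference_data"}, "performs_identification"),
    ({"performs_identification", "operates_at_distance"}, "remote_identification"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules whose conditions all hold, adding their conclusions."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # chaining: a conclusion becomes a new fact
                changed = True
    return derived

if __name__ == "__main__":
    observed = {"has_sensor_input", "matches_reference_data", "operates_at_distance"}
    print(forward_chain(observed, KNOWLEDGE_BASE))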


(6c) In order to ensure uniform conditions for the implementation of this Regulation as regards machine learning approaches and logic- and knowledge-based approaches and to take account of market and technological developments, implementing powers should be conferred on the Commission.


(6d) The notion of ‘user’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, under whose authority the AI system is used. Depending on the type of AI system, the use of the system may affect persons other than the user.


(7) The notion of biometric data used in this Regulation should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council, Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council.


(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons typically at a distance, without their active involvement, through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, irrespective of the particular technology, processes or types of biometric data used.

Such remote biometric identification systems are typically used to perceive (scan) multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of a number of persons without their active involvement. Such a definition excludes verification/authentication systems whose sole purpose would be to confirm that a specific natural person is the person he or she claims to be, as well as systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.

This exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays.

‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
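The functional distinction drawn in recital (8), one-to-many identification against a reference data repository as opposed to one-to-one verification of a claimed identity, can be sketched as follows; the 'templates', names and threshold are toy values invented for illustration and do not describe any real biometric technique.

import math

# Hypothetical sketch: identification searches a whole reference repository
# (one-to-many), whereas verification only checks a single claimed identity
# (one-to-one). The "biometric templates" here are toy vectors.

REFERENCE_REPOSITORY = {
    "person_a": [0.1, 0.9, 0.3],
    "person_b": [0.8, 0.2, 0.5],
}

def template_distance(a, b):
    """Euclidean distance between two templates."""
    return math.dist(a, b)

def identify(captured, threshold=0.3):
    """One-to-many: find the closest identity in the repository, if close enough."""
    name, reference = min(REFERENCE_REPOSITORY.items(),
                          key=lambda item: template_distance(captured, item[1]))
    return name if template_distance(captured, reference) <= threshold else None

def verify(claimed_name, captured, threshold=0.3):
    """One-to-one: confirm whether the captured template matches the claimed identity."""
    return template_distance(captured, REFERENCE_REPOSITORY[claimed_name]) <= threshold

if __name__ == "__main__":
    sample = [0.12, 0.88, 0.31]
    print(identify(sample))            # identification, no identity claimed beforehand
    print(verify("person_a", sample))  # verification of a specific claim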


(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to an undetermined number of natural persons, irrespective of whether the place in question is privately or publicly owned and irrespective of the activity for which the place may be used, such as commerce (for instance, shops, restaurants, cafés), services (for instance, banks, professional activities, hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance, bus, metro and railway stations, airports, means of transport), entertainment (for instance, cinemas, theatres, museums, concert and conference halls), leisure or otherwise (for instance, public roads and squares, parks, forests, playgrounds).

A place should be classified as publicly accessible also if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions, which can be fulfilled by an undetermined number of persons, such as purchase of a ticket or title of transport, prior registration or having a certain age. By contrast, a place should not be considered publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority on the place.

The factual possibility of access alone (e.g. an unlocked door, an open gate in a fence) does not imply that the place is publicly accessible in the presence of indications or circumstances suggesting the contrary (e.g. signs prohibiting or restricting access). Company and factory premises as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers are places that are not publicly accessible. Publicly accessible spaces should not include prisons or border control areas.

Some other areas may be composed of both not publicly accessible and publicly accessible areas, such as the hallway of a private residential building necessary to access a doctor's office, or an airport. Online spaces are not covered either, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.


(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.


Important note: This is not the final text of the Artificial Intelligence Act. This is the text of the proposal from the Council of the European Union (25.11.2022).


The Articles of the EU Artificial Intelligence Act, proposal from the Council of the European Union (25.11.2022):

https://www.artificial-intelligence-act.com/Artificial_Intelligence_Act_Articles_(Proposal_25.11.2022).html