
Preamble 61 to 70, Artificial Intelligence Act (Proposal 25.11.2022)


(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council, which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation.

However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, common specifications for certain requirements under this Regulation as an exceptional fallback solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation process is blocked or when there are delays in the establishment of an appropriate harmonised standard.

If such delay is due to the technical complexity of the standard in question, this should be considered by the Commission before contemplating the establishment of common specifications. An appropriate involvement of small and medium-sized enterprises in the elaboration of standards supporting the implementation of this Regulation is essential to promote innovation and competitiveness in the field of artificial intelligence within the Union. Such involvement should be appropriately ensured in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012.


(61a) It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers benefit from a presumption of conformity with the relevant requirement on data when their high-risk AI system has been trained and tested on data reflecting the specific geographical, behavioural or functional setting within which the AI system is intended to be used. Similarly, in line with Article 54(3) of Regulation (EU) 2019/881 of the European Parliament and of the Council, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to be in compliance with the cybersecurity requirement of this Regulation. This remains without prejudice to the voluntary nature of that cybersecurity scheme.


(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.


(63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation].

While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. With regard to high-risk AI systems related to products covered by Regulations (EU) 2017/745 and (EU) 2017/746 on medical devices, the applicability of the requirements of this Regulation should be without prejudice to, and take into account, the risk management logic and benefit-risk assessment performed under the medical device framework.


(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products.

Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.


(65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be notified under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interest. Notification of those bodies should be sent by national competent authorities to the Commission and the other Member States by means of the electronic notification tool developed and managed by the Commission pursuant to Article R23 of Decision No 768/2008/EC.


(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that whenever a change occurs which may affect the compliance of a high-risk AI system with this Regulation (e.g. a change of operating system or software architecture), or when the intended purpose of the system changes, that AI system should be considered a new AI system which should undergo a new conformity assessment. However, changes occurring to the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. automatically adapting how functions are carried out) should not constitute a substantial modification, provided that those changes have been pre-determined by the provider and assessed at the moment of the conformity assessment.


(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.


(68) Under certain conditions, rapid availability of innovative technologies may be crucial for the health and safety of persons and for society as a whole. It is thus appropriate that, for exceptional reasons of public security or the protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.


(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation should be required to register themselves and information about their high-risk AI system in an EU database, to be established and managed by the Commission.

Before using a high-risk AI system listed in Annex III, users of high-risk AI systems that are public authorities, agencies or bodies, with the exception of law enforcement, border control, immigration or asylum authorities, and authorities that are users of high-risk AI systems in the area of critical infrastructure, shall also register themselves in such database and select the system that they envisage to use. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report.


(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use.

When implementing such an obligation, the characteristics of individuals belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when they are exposed to systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests, or to other aspects such as sexual or political orientation. Such information and notifications should be provided in accessible formats for persons with disabilities.

Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Compliance with the information obligations referred to above should not be interpreted as indicating that the use of the system or its output is lawful under this Regulation or other Union and Member State law, and should be without prejudice to other transparency obligations for users of AI systems laid down in Union or national law.

Furthermore, it should also not be interpreted as indicating that the use of the system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, in particular where the content is part of an evidently creative, satirical, artistic or fictional work or programme, subject to appropriate safeguards for the rights and freedoms of third parties.


Important note: This is not the final text of the Artificial Intelligence Act. This is the text of the proposal from the Council of the European Union (25.11.2022).


The Articles of the EU Artificial Intelligence Act, proposal from the Council of the European Union (25.11.2022):

https://www.artificial-intelligence-act.com/Artificial_Intelligence_Act_Articles_(Proposal_25.11.2022).html