The Articles of the EU Artificial Intelligence Act (25.11.2022)



Article 3, Definitions, Artificial Intelligence Act (Proposal 25.11.2022)


For the purpose of this Regulation, the following definitions apply:


(1) ‘artificial intelligence system’ (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts;


(1a) ‘life cycle of an AI system’ means the duration of an AI system, from design through retirement. Without prejudice to the powers of the market surveillance authorities, such retirement may happen at any point in time during the post-market monitoring phase upon the decision of the provider and implies that the system may not be used further. An AI system lifecycle is also ended by a substantial modification to the AI system made by the provider or any other natural or legal person, in which case the substantially modified AI system shall be considered as a new AI system.


(1b) ‘general purpose AI system’ means an AI system that - irrespective of how it is placed on the market or put into service, including as open source software - is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems;


(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;


(3) [deleted];


(3a) ‘small and medium-sized enterprise’ (SME) means an enterprise as defined in the Annex of Commission Recommendation 2003/361/EC concerning the definition of micro, small and medium-sized enterprises;


(4) ‘user’ means any natural or legal person, including a public authority, agency or other body, under whose authority the system is used;


(5) ‘authorised representative’ means any natural or legal person physically present or established in the Union who has received and accepted a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;


(5a) ‘product manufacturer’ means a manufacturer within the meaning of any of the Union harmonisation legislation listed in Annex II;


(6) ‘importer’ means any natural or legal person physically present or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union;


(7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market;


(8) ‘operator’ means the provider, the product manufacturer, the user, the authorised representative, the importer or the distributor;


(9) ‘placing on the market’ means the first making available of an AI system on the Union market;


(10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;


(11) ‘putting into service’ means the supply of an AI system for first use directly to the user or for own use in the Union for its intended purpose;


(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;


(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems;


(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;


(15) ‘instructions for use’ means the information provided by the provider to inform the user of, in particular, an AI system’s intended purpose and proper use;


(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to users, or at taking it out of service or disabling its use;


(17) ‘withdrawal of an AI system’ means any measure aimed at preventing an AI system in the supply chain being made available on the market;


(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;


(19) ‘conformity assessment’ means the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been fulfilled;


(20) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;


(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;


(22) ‘notified body’ means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation;


(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, or a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.


(24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 or in Article 4b of this Regulation and other applicable Union legal acts harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;


(25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;


(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;


(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012;


(28) ‘common specification’ means a set of technical specifications, as defined in point 4 of Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain requirements established under this Regulation;


(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;


(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split;


(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;
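Definitions (29) to (31) describe the standard machine-learning practice of partitioning data into training, validation and testing sets. As an editorial illustration only (not part of the Regulation's text), the distinction can be sketched as a simple three-way split; the function name and split fractions below are illustrative assumptions:

```python
import random

def split_dataset(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Illustrative three-way split matching definitions (29)-(31):
    training data fits the learnable parameters, validation data
    tunes non-learnable parameters (e.g. to prevent overfitting),
    and testing data provides an independent final evaluation."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)  # fixed seed: reproducible split
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]                   # training data (29)
    val = shuffled[n_train:n_train + n_val]      # validation data (30)
    test = shuffled[n_train + n_val:]            # testing data (31)
    return train, val, test

train, val, test = split_dataset(range(100))
```

Note that definition (30) also permits the validation set to be a fixed or variable part of the training dataset (e.g. cross-validation), whereas (31) requires the testing data to be an independent evaluation.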


(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;


(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;


(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring psychological states, emotions or intentions of natural persons on the basis of their biometric data;


(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data;


(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons typically at a distance, without their active involvement, through the comparison of a person’s biometric data with the biometric data contained in a reference data repository;


(37) ‘real-time remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur instantaneously or near instantaneously;


(38) [deleted]


(39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons regardless of whether certain conditions or circumstances for access have been predetermined, and regardless of the potential capacity restrictions;


(40) ‘law enforcement authority’ means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;


(41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;


(42) [deleted]


(43) ‘national competent authority’ means any of the following: the notifying authority and the market surveillance authority. As regards AI systems put into service or used by EU institutions, agencies, offices and bodies, the European Data Protection Supervisor shall fulfil the responsibilities that in the Member States are entrusted to the national competent authority and, as relevant, any reference to national competent authorities or market surveillance authorities in this Regulation shall be understood as referring to the European Data Protection Supervisor;


(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person or serious damage to a person’s health;

(b) a serious and irreversible disruption of the management and operation of critical infrastructure;

(c) breach of obligations under Union law intended to protect fundamental rights;

(d) serious damage to property or the environment.


(45) ‘critical infrastructure’ means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive …../….. on the resilience of critical entities;


(46) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;


(47) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;


(48) ‘testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled;


(49) ‘real world testing plan’ means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real world conditions;


(50) ‘subject’ for the purpose of real world testing means a natural person who participates in testing in real world conditions;


(51) ‘informed consent’ means a subject's free and voluntary expression of his or her willingness to participate in a particular testing in real world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate; in the case of minors and of incapacitated subjects, the informed consent shall be given by their legally designated representative;


(52) ‘AI regulatory sandbox’ means a concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision.


Important note: This is not the final text of the Artificial Intelligence Act. This is the text of the proposal from the Council of the European Union (25.11.2022).


The Articles of the EU Artificial Intelligence Act, proposal from the Council of the European Union (25.11.2022):

https://www.artificial-intelligence-act.com/Artificial_Intelligence_Act_Articles_(Proposal_25.11.2022).html