Article 29, Obligations of users of high-risk AI systems, Artificial Intelligence Act (Proposal 25.11.2022)
1. Users of high-risk AI systems shall use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article.
1a. Users shall assign human oversight to natural persons who have the necessary competence, training and authority.
2. The obligations in paragraphs 1 and 1a are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
3. Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system.
4. Users shall implement human oversight and monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of users of AI systems which are law enforcement authorities.
For users that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service legislation.
5. Users of high-risk AI systems shall keep the logs, referred to in Article 12(1), automatically generated by that high-risk AI system, to the extent such logs are under their control. They shall keep them for a period of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data. Users that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service legislation.
5a. Users of high-risk AI systems that are public authorities, agencies or bodies, with the exception of law enforcement, border control, immigration or asylum authorities, shall comply with the registration obligations referred to in Article 51. When they find that the system that they envisage to use has not been registered in the EU database referred to in Article 60 they shall not use that system and shall inform the provider or the distributor.
6. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable.
6a. Users shall cooperate with national competent authorities on any action those authorities take in relation to an AI system of which they are the user.
Important note: This is not the final text of the Artificial Intelligence Act. This is the text of the proposal from the Council of the European Union (25.11.2022).