Article 9, Risk management system, Artificial Intelligence Act (Proposal 25.11.2022)
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:
(a) identification and analysis of the known and foreseeable risks most likely to occur to health, safety and fundamental rights in view of the intended purpose of the high-risk AI system;
(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
(d) adoption of suitable risk management measures in accordance with the provisions of the following paragraphs.
The risks referred to in this paragraph shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
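Purely as an illustration (and not part of the legal text), the iterative process in paragraph 2 could be modelled by a provider as a repeating loop over risk identification, post-market evaluation, and measure adoption, with the final subparagraph acting as a filter on which risks are in scope. All names and structures below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    # A known or foreseeable risk to health, safety or fundamental rights
    # (paragraph 2, point (a)); this class is illustrative only.
    description: str
    mitigable_by_design: bool  # scope limit in the final subparagraph of paragraph 2
    measures: list = field(default_factory=list)

def risk_management_iteration(known_risks, post_market_findings):
    """One pass of the continuous, iterative process sketched in paragraph 2."""
    # (a) identify and analyse known and foreseeable risks; only risks that can
    # reasonably be mitigated through design, development or technical
    # information are in scope (final subparagraph of paragraph 2)
    risks = [r for r in known_risks if r.mitigable_by_design]
    # (c) evaluate other risks arising from post-market monitoring (Article 61)
    risks += [Risk(f, True) for f in post_market_findings]
    # (d) adopt suitable risk management measures (paragraphs 3 and 4)
    for r in risks:
        r.measures.append("design change or adequate technical information")
    return risks
```

The loop would be re-run regularly over the system's lifecycle, reflecting the requirement for "regular systematic updating".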
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable.
In identifying the most appropriate risk management measures, the following shall be ensured:
(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 as far as possible through adequate design and development of the high-risk AI system;
(b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated;
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users. With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used.
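As an illustrative sketch only (not part of the legal text), points (a) to (c) of paragraph 4 describe a priority order for selecting measures: design out the risk where possible, mitigate and control what remains, and inform and train users. A hypothetical encoding of that ordering:

```python
def select_measures(risk):
    """Illustrative priority order from paragraph 4: (a) eliminate or reduce
    through design, (b) mitigate and control residual risks, (c) provide
    information and, where appropriate, training under Article 13.
    The dict key used here is hypothetical."""
    measures = []
    if risk.get("eliminable_by_design"):
        # point (a): adequate design and development
        measures.append("redesign to eliminate or reduce the risk")
    else:
        # point (b): risks that cannot be eliminated
        measures.append("mitigation and control measures")
    # point (c): always accompany measures with adequate information
    measures.append("user information and, where appropriate, training")
    return measures
```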
5. High-risk AI systems shall be tested in order to ensure that they perform in a manner that is consistent with their intended purpose and that they are in compliance with the requirements set out in this Chapter.
6. Testing procedures may include testing in real world conditions in accordance with Article 54a.
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
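By way of illustration only, testing "against preliminarily defined metrics and probabilistic thresholds" could take the form of checking a measured metric against a threshold fixed before testing begins. The accuracy metric and the 0.95 value below are hypothetical examples, not values prescribed by the Act.

```python
def passes_threshold(predictions, labels, metric_threshold=0.95):
    """Check a system against a preliminarily defined metric (accuracy here)
    and a probabilistic threshold, in the spirit of paragraph 7.
    The threshold of 0.95 is a hypothetical example, not a legal value."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    # the threshold must be defined before testing and be appropriate
    # to the intended purpose of the high-risk AI system
    return accuracy >= metric_threshold
```

Such checks would be run at appropriate points during development and, in any event, before the system is placed on the market or put into service.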
8. The risk management system described in paragraphs 1 to 7 shall give specific consideration to whether the high-risk AI system is likely to be accessed by or have an impact on persons under the age of 18.
9. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under relevant sectorial Union law, the aspects described in paragraphs 1 to 8 may be part of the risk management procedures established pursuant to that law.
Important note: This is not the final text of the Artificial Intelligence Act. This is the text of the proposal from the Council of the European Union (25.11.2022).