Preparing for the EU Artificial Intelligence Act, for EU and non-EU firms (tailor-made training).

Overview

The Artificial Intelligence Act is dramatically changing the market rules for AI systems and services for EU and non-EU manufacturers, importers, distributors, authorised representatives, operators, and users.

One main objective of the Act is to ensure that AI systems placed on the EU market and used in the EU are safe and respect existing EU law on fundamental rights. The Act also aims to ensure legal certainty, to facilitate investment in AI, and to facilitate the development of a single market for lawful, safe and trustworthy AI applications.

AI systems fall within the scope of the Artificial Intelligence Act even when they are neither placed on the market, nor put into service, nor used in the EU. To prevent the circumvention of the Artificial Intelligence Act and to ensure an effective protection of natural persons located in the EU, the Act also applies to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the EU.

Aside from the many beneficial uses of artificial intelligence, the technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and are prohibited, because they contradict EU values of respect for human dignity, freedom, equality and democracy, and fundamental rights, including the right to non-discrimination, data protection, privacy and the rights of the child.

The Artificial Intelligence Act deals with AI-enabled manipulative techniques that can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The Act prohibits the placing on the market and putting into service of certain AI systems that materially distort human behaviour in ways that are likely to cause physical or psychological harm. Such AI systems deploy subliminal components, such as audio, image or video stimuli that persons cannot perceive because those stimuli are beyond human perception, or other subliminal techniques that subvert or impair a person's autonomy, decision-making or free choices in ways that people are not consciously aware of, or, even if aware, cannot control or resist, for example in the case of machine-brain interfaces or virtual reality.

Entities covered by the Act must understand the compliance challenges for AI and high-risk AI systems, and the terminology (notified bodies, digital innovation hubs, testing and experimentation facilities, conformity assessments, presumption of conformity, CE marking of conformity, AI regulatory sandboxes, post-market monitoring plan, etc.). They must also understand the interaction of the Artificial Intelligence Act with other EU initiatives, such as the European Health Data Space, which facilitates non-discriminatory access to health data and covers the training of artificial intelligence algorithms in a privacy-preserving, secure, timely, transparent and trustworthy manner, with appropriate institutional governance.


Possible modules of the tailor-made training program

Introduction to the Artificial Intelligence Act.
- A fast-evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities.
- A uniform legal framework for the development, marketing and use of artificial intelligence.
- Also, a source of risks and of harm to public interests and rights that are protected by EU law.

Subject matter and scope.
- Understanding the important definitions.
- What is ‘artificial intelligence system’, ‘general purpose AI system’, ‘intended purpose’, ‘reasonably foreseeable misuse’, ‘post-market monitoring system’, ‘emotion recognition system’, ‘serious incident’?

Compliance of general purpose AI systems.
- Requirements and obligations for providers of such systems.

Prohibited artificial intelligence practices.

Classification rules for high-risk AI systems.
- Requirements for high-risk AI systems.
- Compliance with the requirements.
- Risk management system.
- Data and data governance.
- Technical documentation.
- Record-keeping.
- Transparency and provision of information to users.
- Human oversight.
- Accuracy, robustness and cybersecurity.

Obligations of providers of high-risk AI systems.
- Quality management system.
- Documentation keeping.
- Conformity assessment.
- Automatically generated logs.
- Corrective actions.
- Duty of information.
- Cooperation with competent authorities.

Authorised representatives.
- Obligations of importers.
- Obligations of distributors.
- Obligations of users of high-risk AI systems.

Notifying authorities.
- Application of a conformity assessment body for notification.
- Notification procedure.
- Requirements relating to notified bodies.
- Presumption of conformity with requirements relating to notified bodies.
- Subsidiaries of and subcontracting by notified bodies.
- Operational obligations of notified bodies.
- Changes to notifications.
- Conformity assessment bodies of third countries.

Standards, conformity assessment, certificates, registration.
- Harmonised standards.
- Common specifications.
- Presumption of conformity with certain requirements.
- Conformity assessment.
- Certificates.
- Appeal against decisions of notified bodies.
- Information obligations of notified bodies.
- Derogation from conformity assessment procedure.
- EU declaration of conformity.
- CE marking of conformity.
- Registration of relevant operators and of high-risk AI systems.

Transparency obligations for providers and users of certain AI systems.

AI regulatory sandboxes.
- Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox.
- Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes.
- Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes.
- Support measures for operators, in particular SMEs, including start-ups.
- Derogations for specific operators.

Governance.
- Establishment and structure of the European Artificial Intelligence Board.
- Tasks of the Board.

EU database for high-risk AI systems.

Post-market monitoring by providers.
- Post-market monitoring plan for high-risk AI systems.

Reporting of serious incidents.

Market surveillance and control of AI systems in the Union market.
- Supervision of testing in real world conditions by market surveillance authorities.
- Powers of authorities protecting fundamental rights.
- Procedure for dealing with AI systems presenting a risk at national level.
- Union safeguard procedure.
- Compliant high-risk or general purpose AI systems which present a risk.
- Formal non-compliance.
- Union testing facilities in the area of artificial intelligence.
- Central pool of independent experts.

Codes of conduct for voluntary application of specific requirements.

Confidentiality and penalties.
- Penalties.
- Administrative fines on Union institutions, agencies and bodies.

AI systems already placed on the market or put into service.
- Evaluation and review.

Extraterritorial application of EU law - the application of EU provisions outside the territory of the EU, resulting from EU unilateral legislative and regulatory action.

Entry into force and application.

Master plan and list of immediate actions, for EU and non-EU entities.

Other new EU directives and regulations that introduce compliance challenges to EU and non-EU entities.

Closing remarks.


Target audience, duration.

We offer a 60-minute overview for the board of directors and senior management of EU and non-EU firms, tailored to their needs. We also offer four-hour to one-day training for risk and compliance teams responsible for the implementation of EU directives and regulations.


Instructor.

Our instructors are working professionals who have the necessary knowledge and experience in the fields in which they teach. They can lead full-time, part-time, and short-form programs tailored to your needs. You will always know up front who the instructor of the training program will be.

George Lekatis, General Manager of Cyber Risk GmbH, can also lead these training sessions. His background and some testimonials: https://www.cyber-risk-gmbh.com/George_Lekatis_Testimonials.pdf


Terms and conditions.

You may visit: https://www.cyber-risk-gmbh.com/Terms.html



Contact us

Cyber Risk GmbH
Dammstrasse 16
8810 Horgen
Tel: +41 79 505 89 60
Email: george.lekatis@cyber-risk-gmbh.com
Web: https://www.cyber-risk-gmbh.com

We process and store data in compliance with both the Swiss Federal Act on Data Protection (FADP) and the EU General Data Protection Regulation (GDPR). The service provider is Hostpoint. The servers are located in the Interxion data center in Zürich, the data is stored exclusively in Switzerland, and the support, development and administration activities are also based entirely in Switzerland.


Understanding Cybersecurity in the European Union.

1. The NIS 2 Directive

2. The European Cyber Resilience Act

3. The Digital Operational Resilience Act (DORA)

4. The Critical Entities Resilience Directive (CER)

5. The Digital Services Act (DSA)

6. The Digital Markets Act (DMA)

7. The European Health Data Space (EHDS)

8. The European Chips Act

9. The European Data Act

10. The European Data Governance Act (DGA)

11. The Artificial Intelligence Act

12. The European ePrivacy Regulation

13. The European Cyber Defence Policy

14. The Strategic Compass of the European Union

15. The EU Cyber Diplomacy Toolbox