The EU Artificial Intelligence Act



What is the Artificial Intelligence Act?

The Artificial Intelligence Act sets harmonised rules for the development, placement on the market and use of AI systems in the European Union, following a proportionate risk-based approach.

The Act lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health, safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI, and follow conformity assessment procedures before those systems can be placed on the EU market.

Clear obligations are placed on providers of AI systems, to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems’ lifecycle.

The rules will be enforced through a governance system at Member States level, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.

Measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures, to reduce the regulatory burden and to support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.

A very important development: the placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, where physical or psychological harm is likely to occur, is forbidden. Such AI systems deploy subliminal techniques that individuals cannot perceive, or exploit the vulnerabilities of children and other persons arising from their age or physical or mental incapacity. They do so with the intention of materially distorting a person’s behaviour in a manner that causes or is likely to cause harm to that person or to another person.


13 March 2024 - The European Parliament approved the Artificial Intelligence Act.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations.

“Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law).

Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.

Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

What is next: The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.


February 2, 2024 - EU Member States unanimously endorsed the political agreement.

According to Thierry Breton, European Commissioner for Internal Market:

"We reached two important milestones in our endeavour to turn Europe into the global hub for trustworthy AI:

- Today, EU Member States unanimously endorsed the political agreement that we reached in December on the AI Act. The agreement resulted in a balanced and futureproof text, promoting trust and innovation in trustworthy AI.

- Last week, we adopted a wide range of measures to support Europe’s AI start-ups, complementing the regulatory framework.

Both milestones are equally important for European innovators in AI. They reflect our comprehensive approach to AI: promoting both trust and excellence in AI.

Our vision: a thriving European ecosystem of AI start-ups with talented researchers and engineers, developing large language models in all European languages, based on large amounts of easily accessible high-quality data, training them on the world’s fastest supercomputers, and working with industrial partners to turn them into innovative applications, with access to a large Single Market of 450 million people."


December 9, 2023 - The Council and Parliament reach a provisional agreement on the Artificial Intelligence Act.

Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:

1. Rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems

2. A revised system of governance with some enforcement powers at EU level

3. Extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards

4. Better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

Do we have the final text of the Artificial Intelligence Act?

No. Following the provisional agreement, work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives for endorsement once this work has been concluded.

The agreed text will have to be formally adopted by both Parliament and Council to become EU law.

The Artificial Intelligence Act will enter into force 20 days after its publication in the Official Journal of the European Union (the official publication for EU legal acts, other acts and official information from EU institutions, bodies, offices and agencies). In our opinion this will happen during the summer of 2024.


June 14, 2023 - The European Parliament has approved its negotiating position on the proposed Artificial Intelligence Act.

The European Parliament adopted its negotiating position with 499 votes in favor, 28 against, and 93 abstentions. It also amended the list of intrusive and discriminatory uses of AI systems. The list now includes:

- “Real-time” remote biometric identification systems in publicly accessible spaces;

- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;

- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);

- Predictive policing systems (based on profiling, location or past criminal behaviour);

- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and

- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).


Do we have the final text of the Artificial Intelligence Act?

No, this is not the final text.


What is next?

The Parliament will negotiate with the EU Council and the European Commission, in the trilogue process. The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators. The Commission acts as a mediator, facilitating an agreement between the co-legislators. This provisional agreement must then be adopted by each of those institutions’ formal procedures.


25 November 2022 - The Council of the EU approved a compromise version of the proposed Artificial Intelligence Act.

There are still disagreements over the definition of AI systems. The Council believes that the definition must not include certain types of existing software. There are also difficulties in defining autonomy.

Prohibited AI practices - the text of the proposed Artificial Intelligence Act now treats as prohibited AI practices the use of AI for social scoring by private actors, as well as AI systems that exploit the vulnerabilities of a specific group of persons, including persons who are vulnerable due to their social or economic situation.

What about the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities? The text of the proposed Artificial Intelligence Act clarifies the objectives for which such use is considered strictly necessary and for which law enforcement authorities should therefore be exceptionally allowed to use such systems.

Next step: The European Parliament is scheduled to vote by end of March 2023. The final EU Artificial Intelligence Act is expected to be adopted near the end of 2023.


Article 1, Subject matter.

This Regulation lays down:

(a1) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

(a2) prohibitions of certain artificial intelligence practices;

(b) specific requirements for high-risk AI systems and obligations for operators of such systems;

(c) harmonised transparency rules for certain AI systems;

(d) rules on market monitoring, market surveillance and governance;

(e) measures in support of innovation.


Article 2, Scope.

1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are physically present or established within the Union or in a third country;

(b) users of AI systems who are physically present or established within the Union;

(c) providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the Union;

(d) importers and distributors of AI systems;

(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

(f) authorised representatives of providers, which are established in the Union.


A new Title IA has been added to account for situations where AI systems can be used for many different purposes (general purpose AI), and where there may be circumstances where general purpose AI technology gets integrated into another system which may become high-risk. The compromise text specifies in Article 4b(1) that certain requirements for high-risk AI systems would also apply to general purpose AI systems.

However, instead of direct application of these requirements, an implementing act would specify how they should be applied in relation to general purpose AI systems, based on a consultation and detailed impact assessment and taking into account the specific characteristics of these systems and the related value chain, technical feasibility, and market and technological developments. The use of an implementing act will ensure that the Member States will be properly involved and will keep the final say on how the requirements will be applied in this context.

Moreover, the compromise text of Article 4b(5) also includes a possibility to adopt further implementing acts which would lay down the modalities of cooperation between providers of general purpose AI systems and other providers intending to put into service or place such systems on the Union market as high-risk AI systems, in particular as regards the provision of information.

In Article 2 an explicit reference has been made to the exclusion of national security, defence and military purposes from the scope of the AI Act. Similarly, it has been clarified that the AI Act should not apply to AI systems and their outputs used for the sole purpose of research and development and to obligations of people using AI for non-professional purposes, which would fall outside the scope of the AI Act, except for the transparency obligations.

In order to take into account the particular specificities of law enforcement authorities, a number of changes have been made to provisions relating to the use of AI systems for law enforcement purposes. Notably, some of the related definitions in Article 3, such as ‘remote biometric identification system’ and ‘real-time remote biometric identification system’, have been fine-tuned in order to clarify what situations would fall under the related prohibition and high-risk use case and what situations would not.

The compromise proposal also contains other modifications that are, subject to appropriate safeguards, meant to ensure an appropriate level of flexibility in the use of high-risk AI systems by law enforcement authorities, or to reflect the need to respect the confidentiality of sensitive operational data in relation to their activities.

In order to simplify the compliance framework for the AI Act, the compromise text contains a number of clarifications and simplifications to the provisions on the conformity assessment procedures. The provisions related to market surveillance have also been clarified and simplified in order to make them more effective and easier to implement, taking into account the need for a proportionate approach in this respect. Moreover, Article 41 has been thoroughly reviewed in order to limit the Commission’s discretion with regard to the adoption of implementing acts establishing common technical specifications for the requirements for high-risk AI systems and general purpose AI systems.

The compromise text also substantially modifies the provisions concerning the AI Board ('the Board'), with the objectives to ensure its greater autonomy and to strengthen its role in the governance architecture for the AIA. In this context, Articles 56 and 58 have been revised in order to strengthen the role of the Board in such a way that it should be in a better position to provide support to the Member States in the implementation and enforcement of the AI Act. More specifically, the tasks of the Board have been extended and its composition has been specified.

In order to ensure the involvement of stakeholders in relation to all issues related to the implementation of the AI Act, including the preparation of implementing and delegated acts, a new requirement has been added for the Board to create a permanent subgroup serving as a platform for a wide range of stakeholders. Two other standing subgroups for market surveillance authorities and notifying authorities should also be established to reinforce the consistency of governance and enforcement of the AI Act across the Union.

With the objective of creating a legal framework that is more innovation-friendly and in order to promote evidence-based regulatory learning, the provisions concerning measures in support of innovation in Article 53 have been substantially modified in the compromise text. Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems under the direct supervision and guidance of the national competent authorities, should also allow for testing of innovative AI systems in real world conditions.

Furthermore, new provisions in Articles 54a and 54b have been added allowing unsupervised real world testing of AI systems, under specific conditions and safeguards. In both cases the compromise text clarifies how these new rules are to be interpreted in relation to other existing, sectoral legislation on regulatory sandboxes.


15 July 2022 - Council of EU: Compromise text on the AI Act.

The Commission adopted the proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act, AIA) on 21 April 2021.

In order to address concerns of many Member States that consider that the current definition of an AI system is ambiguous and too broad, and that it fails to provide sufficiently clear criteria for distinguishing AI from more classical software systems, the Czech Presidency has proposed a new version of the definition in Article 3(1), which narrows it down to systems developed through machine learning techniques and knowledge-based approaches.

The basic concepts from the OECD definition of an AI system have been kept, and additionally the concept of autonomy has been included in the definition, as per the specific request of a number of delegations. Furthermore, Recital 6 has been updated accordingly.

The harmonised rules laid down in this Regulation should apply across sectors without prejudice to existing Union law, and in particular without prejudice to Union law on data protection, consumer protection, product safety and employment. This Regulation is intended to regulate AI systems that are to be placed on the market and put into service in the Union and it should complement such existing Union law.

Machine learning approaches focus on the development of systems capable of learning from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions from input to output. Learning refers to the computational process of optimizing from data the parameters of the model, which is a mathematical construct generating an output based on input data.

The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning, statistical techniques for learning and inference (including Bayesian estimation) and search and optimisation methods.
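The notion of “learning” described above, optimising a model’s parameters from data rather than programming explicit input-to-output instructions, can be made concrete with a minimal sketch. This example is purely illustrative and is not part of the Act or its annexes; the one-parameter linear model and the training data are hypothetical.

```python
# Illustrative sketch: "learning" as optimising model parameters from data.
# A one-parameter linear model y = w * x is fitted by gradient descent on
# mean squared error; no explicit rule from input to output is programmed.

def fit(xs, ys, steps=1000, lr=0.01):
    w = 0.0          # the model parameter, to be optimised from data
    n = len(xs)
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Training data generated by a rule (y = 3x) that the learner never sees
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = fit(xs, ys)
# w converges towards 3, so the fitted model generalises to new inputs
```

The same optimisation-from-data pattern underlies the supervised, unsupervised and reinforcement learning methods listed above, at far larger scale.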

Logic- and knowledge-based approaches focus on the development of systems with logical reasoning capabilities on knowledge to solve an application problem. Such systems typically involve a knowledge base and an inference engine that generates outputs by reasoning on the knowledge base.

The knowledge base, which is usually encoded by human experts, represents entities and logical relationships relevant for the application problem through formalisms based on rules, ontologies, or knowledge graphs. The inference engine acts on the knowledge base and extracts new information through operations such as sorting, searching, matching or chaining. Logic- and knowledge-based approaches include for instance knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems and search and optimisation methods.
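The knowledge-base-plus-inference-engine pattern can likewise be sketched in a few lines. This is an illustrative toy, not taken from the Act; the facts and rules are hypothetical, and the engine is a simple forward-chaining loop of the kind used in expert systems.

```python
# Illustrative sketch: a human-encoded knowledge base (rules and facts)
# and a forward-chaining inference engine that derives new information.

facts = {"bird(tweety)"}
rules = [
    # (premises, conclusion): if every premise is known, add the conclusion
    (["bird(tweety)"], "has_wings(tweety)"),
    (["has_wings(tweety)"], "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:       # keep applying rules until no new fact appears
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

result = forward_chain(facts, rules)
# result now also contains has_wings(tweety) and can_fly(tweety),
# new information extracted by chaining over the knowledge base
```

Unlike the machine learning sketch, nothing here is optimised from data: all behaviour follows from the explicitly encoded rules, which is the distinction the two definitions above are drawing.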

‘Artificial intelligence system’ (AI system) means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.


21 April 2021 - Proposal for a Regulation laying down harmonised rules on artificial intelligence.

The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems.

A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU).

To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.

Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.

At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.

A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law.

To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.


8.4.2019 - European Commission, Building Trust in Human-Centric Artificial Intelligence.

The European AI strategy and the coordinated plan make clear that trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.

To achieve this, the trustworthiness of AI should be ensured. The values on which our societies are based need to be fully integrated in the way AI develops. The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the societies of all Member States in which pluralism, non-discrimination, tolerance, justice, solidarity and equality prevail. In addition, the EU Charter of Fundamental Rights brings together – in a single text – the personal, civic, political, economic and social rights enjoyed by people within the EU.

The EU has a strong regulatory framework that will set the global standard for human-centric AI. The General Data Protection Regulation ensures a high standard of protection of personal data, and requires the implementation of measures to ensure data protection by design and by default. The Free Flow of Non-Personal Data Regulation removes barriers to the free movement of non-personal data and ensures the processing of all categories of data anywhere in Europe. The recently adopted Cybersecurity Act will help to strengthen trust in the online world, and the proposed ePrivacy Regulation also aims at this goal.

Nevertheless, AI brings new challenges because it enables machines to “learn” and to take and implement decisions without human intervention. Before long, this kind of functionality will become standard in many types of goods and services, from smart phones to automated cars, robots and online applications. Yet, decisions taken by algorithms could result from data that is incomplete and therefore not reliable, they may be tampered with by cyber-attackers, or they may be biased or simply mistaken. Unreflectively applying the technology as it develops would therefore lead to problematic outcomes as well as reluctance by citizens to accept or use it.

Instead, AI technology should be developed in a way that puts people at its centre and is thus worthy of the public’s trust. This implies that AI applications should not only be consistent with the law, but also adhere to ethical principles and ensure that their implementations avoid unintended harm. Diversity in terms of gender, racial or ethnic origin, religion or belief, disability and age should be ensured at every stage of AI development. AI applications should empower citizens and respect their fundamental rights.

They should aim to enhance people’s abilities, not replace them, and also enable access by people with disabilities. Therefore, there is a need for ethics guidelines that build on the existing regulatory framework and that should be applied by developers, suppliers and users of AI in the internal market, establishing an ethical level playing field across all Member States. This is why the Commission has set up a high-level expert group on AI representing a wide range of stakeholders and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. At the same time, the European AI Alliance, an open multi-stakeholder platform with over 2700 members, was set up to provide broader input for the work of the AI high-level expert group.


7.12.2018 - European Commission, Coordinated Plan on Artificial Intelligence.

This plan brings together a set of concrete and complementary actions at EU, national and regional level in view of:

- Boosting investments and reinforcing excellence in AI technologies and applications which are trustworthy and “ethical and secure by design”. Investments shall take place in a stable regulatory context which enables experimentation and supports disruptive innovation across the EU, ensuring the widest and best use of AI by the European economy and society.

- Building on Europe’s strengths, to develop and implement in partnership with industry and Member States shared agendas for industry-academia collaborative Research and Development (R&D) and innovation.

- Adapting learning and skilling programmes and systems to prepare Europe’s society and its future generations for AI.

- Building up essential capacities in Europe underpinning AI such as data spaces and world-class reference sites for testing and experimentation.

- Making public administrations in Europe frontrunners in the use of AI.

- Implementing, on the basis of expert work, clear ethics guidelines for the development and the use of AI in full respect of fundamental rights, with a view to set global ethical standards and be a world leader in ethical, trusted AI.

- Where needed, reviewing the existing national and European legal frameworks to better adapt them to specific challenges.

This digital transformation requires in many cases a significant upgrading of the currently available infrastructure. The effective implementation of AI will require the completion of the Digital Single Market and its regulatory framework including the swift adoption of the Commission proposal for a European Cybersecurity Industrial, Technology and Research Competence Centre and the Network of National Coordination Centres, reinforced connectivity through spectrum coordination, very fast 5G mobile networks and optical fibres, next generation clouds, as well as satellite technologies.

High-performance computing and AI will increasingly intertwine as we transition to a future using new computing, storage and communication technologies. Furthermore, infrastructures should be both accessible and affordable to ensure an inclusive AI adoption across Europe, particularly by small and medium-sized enterprises (SMEs).

Industry, and in particular small and young companies, will need to be in a position to be aware and able to integrate these technologies in new products, services and related production processes and technologies, including by upskilling and reskilling their workforce. Standardisation will also be essential for the development of AI in the Digital Single Market, helping notably to ensure interoperability.


June 2018 - The European AI Alliance.

The European AI Alliance is an initiative of the European Commission to establish an open policy dialogue on Artificial Intelligence. Since its launch in 2018, the AI Alliance has engaged around 6000 stakeholders through regular events, public consultations and online forum exchanges.

The AI Alliance was initially created to steer the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG).

The group’s Ethics Guidelines as well as its Policy and Investment Recommendations were important documents that shaped the concept of Trustworthy AI, contributing to the Commission’s approach to AI. This work was based on a mix of expert input and community driven feedback.


25 April 2018 - The European Commission outlines a European approach to boost investment and set ethical guidelines.

The European Commission is presenting a series of measures to put artificial intelligence (AI) at the service of Europeans and boost Europe's competitiveness in this field.

The Commission is proposing a three-pronged approach to increase public and private investment in AI, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework. This follows European leaders' call for a European approach on AI.

Europe has world-class researchers, laboratories and start-ups in the field of AI. The EU is also strong in robotics and has world-leading transport, healthcare and manufacturing sectors that should adopt AI to remain competitive. However, fierce international competition requires coordinated action for the EU to be at the forefront of AI development.

The EU (public and private sectors) should increase investments in AI research and innovation by at least €20 billion between now and the end of 2020. To support these efforts, the Commission is increasing its investment to €1.5 billion for the period 2018-2020 under the Horizon 2020 research and innovation programme. This investment is expected to trigger an additional €2.5 billion of funding from existing public-private partnerships, for example on big data and robotics.

This funding will support the development of AI in key sectors, from transport to health; it will connect and strengthen AI research centres across Europe, and encourage testing and experimentation. The Commission will also support the development of an "AI-on-demand platform" that will provide access to relevant AI resources in the EU for all users.

Additionally, the European Fund for Strategic Investments will be mobilised to provide companies and start-ups with further support to invest in AI, with the aim of mobilising more than €500 million in total investments by 2020 across a range of key sectors.

The Commission will also continue to create an environment that stimulates investment. As data is the raw material for most AI technologies, the Commission is proposing legislation to open up more data for re-use and measures to make data sharing easier. This covers data from public utilities and the environment as well as research and health data.

With the dawn of artificial intelligence, many jobs will be created, but others will disappear and most will be transformed. This is why the Commission is encouraging Member States to modernise their education and training systems and support labour market transitions, building on the European Pillar of Social Rights. The Commission will support business-education partnerships to attract and keep more AI talent in Europe, set up dedicated training schemes with financial support from the European Social Fund, and support digital skills, competencies in science, technology, engineering and mathematics (STEM), entrepreneurship and creativity. Proposals under the EU's next multiannual financial framework (2021-2027) will include strengthened support for training in advanced digital skills, including AI-specific expertise.

As with any transformative technology, artificial intelligence may raise new ethical and legal questions, related to liability or potentially biased decision-making. New technologies should not mean new values. The Commission will present ethical guidelines on AI development by the end of 2018, based on the EU's Charter of Fundamental Rights, taking into account principles such as data protection and transparency, and building on the work of the European Group on Ethics in Science and New Technologies.

To help develop these guidelines, the Commission will bring together all relevant stakeholders in a European AI Alliance. By mid-2019 the Commission will also issue guidance on the interpretation of the Product Liability Directive in the light of technological developments, to ensure legal clarity for consumers and producers in case of defective products.


9 March 2018 - The European Commission kicks off work on marrying cutting-edge technology and ethical standards.

The European Commission is setting up a group on artificial intelligence to gather expert input and rally a broad alliance of diverse stakeholders.

The expert group will draw up a proposal for guidelines on AI ethics, building on today's statement by the European Group on Ethics in Science and New Technologies.

From better healthcare to safer transport and more sustainable farming, artificial intelligence (AI) can bring major benefits to our society and economy. And yet, questions are raised about the impact of AI on the future of work and on existing legislation. This calls for a wide, open and inclusive discussion on how to use and develop artificial intelligence in a way that is both successful and ethically sound.


Objectives of the High-Level Expert Group on Artificial Intelligence.

The general objective of the group shall be to support the implementation of the European strategy on AI. This will include the elaboration of recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.

In particular, the group will be tasked to:

1. Advise the Commission on next steps addressing AI-related mid to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.

2. Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group's and the Commission's work.

3. Propose to the Commission AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and, more broadly, the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination. These guidelines will build on the work of the European Group on Ethics in Science and New Technologies (EGE), an independent advisory body established by the President of the European Commission, and of the EU Fundamental Rights Agency, which is assessing the challenges that producers and users of new technology face with respect to fundamental rights compliance (project "Big Data and Fundamental Rights").