
ARTIFICIAL INTELLIGENCE (AI) POLICY

BAEHF's position on the practical application of artificial intelligence is grounded in the generally accepted principle of human dignity and respect for the individual and their personality, while recognising the massive penetration of AI into everyday life and its growing presence and influence at many levels, a development that is both visible to and understood by society.
The ever-expanding presence of AI continually tests legal, moral and ethical norms and rules, a challenge that European society addresses in the following way (https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence):

 European approach to artificial intelligence

The EU’s approach to artificial intelligence centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

The way we approach Artificial Intelligence (AI) will shape the future world we live in. To help build a resilient Europe for the Digital Decade, people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.

The European AI Strategy aims at making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy. Such an objective translates into the European approach to excellence and trust through concrete rules and actions.

In April 2021, the Commission presented its AI package, including:

  1. its Communication on fostering a European approach to AI;

  2. an updated Coordinated Plan on AI;

  3. its proposal for a regulation laying down harmonised rules on AI, together with the relevant impact assessment.

A European approach to excellence in AI

Fostering excellence in AI will strengthen Europe’s potential to compete globally.

The EU will achieve this by:

  1. enabling the development and uptake of AI in the EU;

  2. making the EU the place where AI thrives from the lab to the market;

  3. ensuring that AI works for people and is a force for good in society;

  4. building strategic leadership in high-impact sectors.

The Commission and Member States agreed to boost excellence in AI by joining forces on policy and investments. The 2021 review of the Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action.

Maximising resources and coordinating investments is a critical component of AI excellence. Through the Horizon Europe and Digital Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investments from the private sector and the Member States in order to reach an annual investment volume of €20 billion over the course of the digital decade.

The Recovery and Resilience Facility makes €134 billion available for digital. This will be a game-changer, allowing Europe to amplify its ambitions and become a global leader in developing cutting-edge, trustworthy AI.

Access to high quality data is an essential factor in building high performance, robust AI systems. Initiatives such as the EU Cybersecurity Strategy, the Digital Services Act and the Digital Markets Act, and the Data Governance Act provide the right infrastructure for building such systems.

A European approach to trust in AI

Building trustworthy AI will create a safe and innovation-friendly environment for users, developers and deployers.

The Commission has proposed 3 inter-related legal initiatives that will contribute to building trustworthy AI:

  1. a European legal framework for AI to address fundamental rights and safety risks specific to AI systems;

  2. civil liability framework - adapting liability rules to the digital age and AI;

  3. a revision of sectoral safety legislation (e.g. Machinery Regulation, General Product Safety Directive).

European proposal for a legal framework on AI

The Commission aims to address the risks generated by specific uses of AI through a set of complementary, proportionate and flexible rules. These rules will also provide Europe with a leading role in setting the global gold standard.

This framework gives AI developers, deployers and users the clarity they need by intervening only in those cases that existing national and EU legislation does not cover. The legal framework for AI proposes a clear, easy-to-understand approach based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.

Important milestones


  1. September 2022
    Proposal for an AI liability directive

  2. July 2022
    Council of the EU: FR Presidency compromise text on the AI Act
    European Parliament, TRAN opinion
    European Parliament, ITRE opinion

  3. June 2022
    Launch of first AI regulatory sandbox in Spain: Bringing the AI Regulation forward

  4. April 2022
    European Parliament, ENVI opinion

  5. December 2021
    Committee of the Regions, Opinion on the AI Act
    European Central Bank, Opinion on the AI Act

  6. November 2021
    Council of the EU: SI Presidency compromise text on the AI Act
    High-Level Conference on AI: From Ambition to Action (3rd European AI Alliance Assembly)
    European Economic and Social Committee, Opinion on the AI Act

  7. June 2021
    Public consultation on Civil liability – adapting liability rules to the digital age and artificial intelligence
    European Commission: Proposal for a Regulation on Product Safety

  8. April 2021
    European Commission: Communication on Fostering a European approach to AI
    European Commission: Proposal for a regulation laying down harmonised rules on AI
    European Commission: Updated Coordinated Plan on AI
    European Commission: Impact assessment of an AI regulation

  9. October 2020
    2nd European AI Alliance Assembly

  10. July 2020
    Inception impact assessment: Ethical and legal requirements on AI
    High-Level Expert Group on AI: Final Assessment List for Trustworthy AI (ALTAI)
    High-Level Expert Group on AI: Sectoral recommendations for trustworthy AI

  11. February 2020
    European Commission: White paper on AI: a European approach to excellence and trust
    Public consultation on a European approach to excellence and trust in AI

  12. December 2019
    High-Level Expert Group on AI: Piloting of the assessment list for trustworthy AI

  13. June 2019
    First European AI Alliance Assembly
    High-Level Expert Group on AI: Policy and investment recommendations for trustworthy AI

  14. April 2019
    European Commission Communication: Building trust in human-centric artificial intelligence
    High-Level Expert Group on AI: Ethics guidelines for trustworthy AI

  15. December 2018
    European Commission: Coordinated Plan on AI
    European Commission (press release): AI made in Europe
    European Commission Communication: AI made in Europe
    Stakeholder consultation on draft ethics guidelines for trustworthy AI

  16. June 2018
    Launch of the European AI Alliance
    Set-up of the High-Level Expert Group on AI

  17. April 2018
    Press release: Artificial intelligence for Europe
    Communication: Artificial intelligence for Europe
    Staff working document: Liability for emerging digital technologies
    Declaration of cooperation on artificial intelligence

  18. March 2018
    Press release: AI expert group and European AI alliance