    EU Commission proposes new rules for excellence and trust in AI

    The Commission today proposes new rules and actions aimed at turning Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.

    Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

    Commissioner for Internal Market Thierry Breton said: “AI is a means, not an end. It has been around for decades but has reached new capacities fueled by computing power. This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

    The new AI regulation will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide. The Coordinated Plan outlines the necessary policy changes and investment at Member States level to strengthen Europe’s leading position in the development of human-centric, sustainable, secure, inclusive and trustworthy AI.

    The European approach to trustworthy AI

    The new rules will be applied directly in the same way across all Member States based on a future-proof definition of AI. They follow a risk-based approach:

    Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.

    High-risk: AI systems identified as high-risk include AI technology used in:

    • Critical infrastructures (e.g. transport), which could put the life and health of citizens at risk;
    • Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
    • Safety components of products (e.g. AI applications in robot-assisted surgery);
    • Employment, workers’ management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
    • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
    • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
    • Migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
    • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

    High-risk AI systems will be subject to strict obligations before they can be put on the market:

    • Adequate risk assessment and mitigation systems;
    • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
    • Logging of activity to ensure traceability of results;
    • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • Clear and adequate information to the user;
    • Appropriate human oversight measures to minimise risk;
    • High level of robustness, security and accuracy.

    In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence). Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

    Limited risk, i.e. AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

    Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens’ rights or safety.
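
    To make the tiered structure above concrete, here is a minimal, purely illustrative sketch (in Python) of how the proposal’s four risk categories and some of the examples named above could be recorded as a simple data model. The tier names and examples come from this article; the enum, mapping and function are hypothetical illustrations with no legal standing.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the Commission's proposal."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright (e.g. social scoring by governments)
    HIGH = "high-risk"                  # permitted only if strict obligations are met
    LIMITED = "limited risk"            # specific transparency obligations (e.g. chatbots)
    MINIMAL = "minimal risk"            # free use (e.g. spam filters, AI-enabled video games)

# Illustrative mapping of examples mentioned in the article to their tiers.
EXAMPLE_SYSTEMS = {
    "government 'social scoring' system": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "remote biometric identification system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def may_be_placed_on_market(tier: RiskTier) -> bool:
    """Illustrative only: unacceptable-risk systems are banned; all other tiers
    may reach the market subject to their tier-specific obligations."""
    return tier is not RiskTier.UNACCEPTABLE

if __name__ == "__main__":
    for system, tier in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {tier.value} (may be placed on market: {may_be_placed_on_market(tier)})")
```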

    In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules, while the creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.

    The European approach to excellence in AI

    Coordination will strengthen Europe’s leading position in human-centric, sustainable, secure, inclusive and trustworthy AI. To remain globally competitive, the Commission is committed to fostering innovation in AI technology development and use across all industries, in all Member States.

    First published in 2018 to define actions and funding instruments for the development and uptake of AI, the Coordinated Plan on AI enabled a vibrant landscape of national strategies and EU funding for public-private partnerships and research and innovation networks. The comprehensive update of the Coordinated Plan proposes concrete joint actions for collaboration to ensure all efforts are aligned with the European Strategy on AI and the European Green Deal, while taking into account new challenges brought by the coronavirus pandemic. It puts forward a vision to accelerate investments in AI, which can benefit the recovery. It also aims to spur the implementation of national AI strategies, remove fragmentation, and address global challenges.

    The updated Coordinated Plan will use funding allocated through the Digital Europe and Horizon Europe programmes, as well as the Recovery and Resilience Facility that foresees a 20% digital expenditure target, and Cohesion Policy programmes, to:

    • Create enabling conditions for AI’s development and uptake through the exchange of policy insights, data sharing and investment in critical computing capacities;
    • Foster AI excellence ‘from the lab to the market’ by setting up a public-private partnership, building and mobilising research, development and innovation capacities, and making testing and experimentation facilities as well as digital innovation hubs available to SMEs and public administrations;
    • Ensure that AI works for people and is a force for good in society by being at the forefront of the development and deployment of trustworthy AI, nurturing talent and skills through support for traineeships, doctoral networks and postdoctoral fellowships in digital areas, integrating trust into AI policies and promoting the European vision of sustainable and trustworthy AI globally;
    • Build strategic leadership in high-impact sectors and technologies, including the environment (by focusing on AI’s contribution to sustainable production) and health (by expanding the cross-border exchange of information), as well as the public sector, mobility, home affairs, agriculture and robotics.

    The European approach to new machinery products

    Machinery products cover an extensive range of consumer and professional products, from robots to lawnmowers, 3D printers, construction machines and industrial production lines. The Machinery Directive, which the new Machinery Regulation will replace, defines health and safety requirements for machinery. The new Machinery Regulation will ensure that the next generation of machinery guarantees the safety of users and consumers and encourages innovation. While the AI Regulation will address the safety risks of AI systems, the new Machinery Regulation will ensure the safe integration of AI systems into the overall machinery. Businesses will need to perform only a single conformity assessment.

    Additionally, the new Machinery Regulation will respond to market needs by bringing greater legal clarity to the current provisions, reducing the administrative burden and costs for companies by allowing digital formats for documentation and adapting conformity assessment fees for SMEs, while ensuring coherence with the EU legislative framework for products.

    Next steps

    The European Parliament and the Member States will need to adopt the Commission’s proposals on a European approach for Artificial Intelligence and on Machinery Products in the ordinary legislative procedure. Once adopted, the Regulations will be directly applicable across the EU. In parallel, the Commission will continue to collaborate with Member States to implement the actions announced in the Coordinated Plan.

    Background

    For years, the Commission has been facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values.

    Following the publication of the European Strategy on AI in 2018 and after extensive stakeholder consultation, the High-Level Expert Group on Artificial Intelligence (HLEG) developed Guidelines for Trustworthy AI in 2019, and an Assessment List for Trustworthy AI in 2020. In parallel, the first Coordinated Plan on AI was published in December 2018 as a joint commitment with Member States.

    The Commission’s White Paper on AI, published in 2020, set out a clear vision for AI in Europe: an ecosystem of excellence and trust, setting the scene for today’s proposal. The public consultation on the White Paper elicited widespread participation from across the world. The White Paper was accompanied by a ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’, which concluded that the current product safety legislation contains a number of gaps that need to be addressed, notably in the Machinery Directive.

    ELE Times Report