What’s the EU AI Act?

The European Union’s AI Act stands as a pioneering piece of legislation, the first of its kind globally, creating a comprehensive legal framework for Artificial Intelligence (AI). This Act is a testament to the EU’s commitment to balancing technological advancement with ethical and societal values. 

Context and Background: The EU’s Digital Strategy 

The EU AI Act forms a core pillar of the EU’s broader digital strategy, which acknowledges the dual nature of AI – as a driver of innovation and as a potential source of risk. The European Union (EU) has been actively working on regulations and frameworks to address the challenges posed by General Purpose AI (GPAI) models. These efforts are part of a broader initiative to ensure that AI technologies are developed and deployed in a way that is ethical, transparent, and respectful of fundamental rights. Here’s a general overview of how the EU approaches the regulation of AI, including GPAI: 

Risk-Based Approach

The EU’s strategy for AI regulation often involves a risk-based approach. This means that AI systems are assessed based on the level of risk they pose to individuals’ rights and safety. GPAI models, due to their broad applicability and potential impact, might be considered high-risk in many applications. 

Ethical Guidelines

The EU has established ethical guidelines for trustworthy AI. Released in April 2019, these guidelines set out a framework for achieving trustworthy AI built on core ethical principles and values: respect for human autonomy, prevention of harm, fairness, and transparency. GPAI models, given their complexity and potential impact, would be expected to adhere strictly to these principles. 

Data Governance and Privacy

Under the General Data Protection Regulation (GDPR), in force since May 2018, any AI system that processes personal data, including GPAI, must ensure data protection and privacy. This includes requirements for consent, data minimization, and the right to explanation, especially in cases of automated decision-making – which gives the GDPR significant implications for AI systems. 

Transparency and Accountability

The EU emphasizes the importance of transparency in AI systems. For GPAI, this could mean requirements to disclose how the model works, the data it was trained on, and the logic behind its decisions, especially when used in critical applications. 
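
To make the disclosure idea more concrete, here is a minimal sketch of the kind of “model card” record a GPAI provider might maintain internally to summarize how a model works and what it was trained on. The field names and values are assumptions chosen for illustration, not terminology defined by the AI Act, and real documentation obligations are considerably more detailed.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative "model card" record a provider might keep to support
# transparency obligations. Field names are assumptions for this sketch,
# not terms defined by the AI Act.
@dataclass
class ModelCard:
    model_name: str
    intended_purpose: str
    training_data_summary: str          # high-level description of training data sources
    known_limitations: List[str] = field(default_factory=list)
    decision_logic_summary: str = ""    # plain-language explanation of how outputs are produced

card = ModelCard(
    model_name="example-gpai-v1",
    intended_purpose="General-purpose text generation for customer support",
    training_data_summary="Publicly available web text and licensed corpora (illustrative)",
    known_limitations=["May produce inaccurate answers", "Not suitable for medical advice"],
    decision_logic_summary="Generates the most probable next tokens given the conversation context",
)
print(card)
```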

Human Oversight

Ensuring human oversight in AI systems is another key aspect. For GPAI models, this might involve mechanisms to ensure human control over the system, particularly in sensitive or high-risk scenarios. 

Ongoing Monitoring and Reporting

Given the evolving nature of GPAI, the EU might require continuous monitoring and regular reporting on the performance, impact, and compliance of these systems. 

The EU recognizes AI’s transformative power and aims to harness this technology for societal good. The AI Act is designed to ensure AI development and deployment are aligned with European values and principles, particularly in high-impact sectors. 

Key Provisions of the EU AI Act

  • Risk-Based Approach: the Act categorizes AI applications into risk levels: prohibited practices, high-risk, limited-risk, and minimal-risk applications. This classification determines the degree of regulatory scrutiny and the compliance requirements for each category. 
  • High Risk: Special Focus: high-risk AI systems, such as those used in medical devices, are subject to stringent regulatory requirements and compliance measures to ensure their trustworthiness and safety. 
  • General Purpose and Generative AI: the AI Act requires transparency in general-purpose and generative AI applications. Companies must inform users when they interact with these AI systems, to promote an environment of trust and awareness.
  • Limited Risk: Ensuring Transparency: AI systems categorized as limited risk, like chatbots or deepfakes, must disclose their artificial nature. This is crucial in preventing deception and manipulation in digital interactions; a minimal illustration of such a disclosure follows below. 
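
As a simple illustration of that disclosure duty, the sketch below prepends a “you are chatting with an AI” notice to a chatbot’s first reply. The wording, placement, and the with_disclosure helper are hypothetical choices for this example, not requirements spelled out in the Act.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically."
)

def with_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend an AI disclosure to the first reply of a conversation.

    A minimal illustration of the limited-risk transparency idea; the exact
    wording and placement of a real disclosure are a design and legal decision.
    """
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(with_disclosure("Hi! How can I help you today?", first_turn=True))
```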

Implications for Businesses 

  • Safety and Rights: the Act emphasizes the safety, transparency, and non-discrimination of high-risk AI systems, while aiming to keep the regulatory burden proportionate and ensure the responsible development and use of AI applications across various sectors. Businesses need to ensure they meet the following requirements: 
    • Draw up the technical documentation in accordance with the regulation and keep it up to date 
    • Keep the automatically generated logs (see the logging sketch after this list) 
    • Have a quality management system in place 
    • Ensure that the high-risk AI (HRAI) system undergoes the relevant conformity assessment procedure 
    • Comply with registration obligations 
    • Take corrective action if needed and inform the authorities 
    • Mark the solution to indicate conformity with the EU AI Act 
    • Demonstrate conformity if requested by authorities 
  • Compliance and Oversight: the Act outlines specific obligations for AI providers, importers, distributors, and users, promoting accountability across the AI system’s lifecycle. 
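
To illustrate the log-keeping item above, here is a minimal sketch of how a provider might capture automatically generated, timestamped decision records that could later be produced to demonstrate conformity. The schema, file name, and log_decision helper are assumptions made for this example, not formats prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for a high-risk AI system: each decision is written
# as a structured, timestamped record so the logs can later support a
# conformity check. The schema is an assumption for this sketch.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def log_decision(system_id: str, model_version: str, input_summary: str, output_summary: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,    # avoid logging raw personal data (GDPR data minimization)
        "output_summary": output_summary,
    }
    logging.info(json.dumps(record))

log_decision("loan-screening-bot", "2.3.1", "application #1042 (features only)", "referred to human reviewer")
```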

Global Context and Integration 

The EU AI Act is a significant step in global AI regulation. Its influence extends beyond Europe, setting a precedent for other regions to develop their AI legislation. This global perspective is crucial for international businesses and developers operating in multiple jurisdictions. 

Sector-Specific Implications 

Different sectors will experience varied impacts from the AI Act. Industries like healthcare, automotive, and financial services, where AI plays a critical role, need to pay particular attention to compliance requirements, since many of their AI use cases are likely to fall into the high-risk category and its stricter obligations. 

Future Evolution of AI Regulation 

The AI Act is likely just the beginning of a broader regulatory trend. Further developments in AI legislation can be expected, both in the EU and globally, and businesses need to prepare for these changes. Apart from the AI Act itself, other frameworks preceded it, including: 

  • OECD Principles on AI (2019): The Organization for Economic Co-operation and Development (OECD) established principles promoting AI that is innovative, trustworthy, and respects human rights and democratic values. https://www.oecd.org/going-digital/ai/principles/  

Teneo AI’s Commitment 

Teneo AI’s commitment to compliance with the EU AI Act and broader international AI regulation is unwavering. Our team is actively pursuing ISO 42001 certification to develop our AI systems with transparency, traceability, and reliability.

Using Teneo for AI Safety 

Teneo is designed not only to understand and interpret human language but also to support AI safety. Users can utilize Teneo as a safety tool by incorporating a filtering mechanism on the data sent to the LLMs: the filtering process actively screens the data for inappropriate or harmful content, so the AI processes and responds only to safe and suitable information, while unsuitable content is caught and excluded. This enhances the security and reliability of the AI system, allowing users to interact with it confidently, without concerns about harmful or misleading content. Teneo’s inbuilt controls also support privacy compliance, making it a robust tool for AI safety and EU AI Act compliance. 
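
Conceptually, such a filter is a gate placed between the user’s input and the LLM call. The sketch below shows the general pattern with a simple deny-list; the patterns, the is_safe check, and the call_llm stub are illustrative assumptions only, not Teneo’s actual implementation or API.

```python
import re

# Hypothetical deny-list applied to user input before it is forwarded to an LLM.
# A real deployment would typically use a dedicated moderation model or service.
BLOCKED_PATTERNS = [
    r"\b\d{16}\b",                                 # bare 16-digit numbers that may be card numbers
    r"(?i)\b(ssn|social security number)\b",
]

def is_safe(text: str) -> bool:
    """Return True if no blocked pattern is found in the input."""
    return not any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

def call_llm(prompt: str) -> str:
    # Stand-in for the actual LLM request in this sketch.
    return f"LLM response to: {prompt}"

def safe_llm_request(user_input: str) -> str:
    if not is_safe(user_input):
        return "Your message appears to contain sensitive or disallowed content and was not sent to the AI."
    return call_llm(user_input)

print(safe_llm_request("What are your opening hours?"))
```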

Our goal is to be at the forefront of compliant AI development, serving as a trusted partner in this evolving regulatory landscape, which you can continue reading about in our Security Center.  
