AI Act Governance: Best Practices for Implementing the EU AI Act

How to tackle challenges while operationalizing the EU AI Act

Complying with the AI Act will enable companies to build more trustworthy AI products and services. However, companies face several challenges in implementing it. appliedAI and its partners identified three key challenges:

1. Lack of actionable guidelines: companies lack clear guidelines to operationalize the AI Act, particularly how best to orchestrate tasks across the enterprise.

2. Ongoing harmonized standards publication: the harmonized standards are still being published, which discourages enterprises from exploring high-risk AI systems and prevents technical and legal stakeholders from developing a shared understanding of compliance.

3. Uncertainty around roles and skills: companies are unsure about the necessary skill profiles and the first steps they can already take to operationalize the AI Act.

To support the entire AI community in moving from theory to practice, appliedAI and its partners have published this practitioner-driven whitepaper sharing best practices and lessons learned in implementing the requirements for high-risk AI systems.

In this whitepaper, we introduce the AI Act Governance Pyramid framework, a structured approach for operationalizing the AI Act by orchestrating stakeholders across enterprise layers. We then compile technical and governance best practices for implementing the AI Act’s requirements for high-risk AI systems from a practitioner's perspective, including references to available international standards. Finally, we update appliedAI's ML Skill Profiles framework to take the EU AI Act into consideration and provide a guide to what companies can start doing today to prepare to operationalize the AI Act.

The report is the result of the appliedAI working groups and is based on the experience of leading experts from appliedAI partner companies.

Authors and contributors

Authors of the Whitepaper:

  • Alexander Machado, Head of Trustworthy AI CoE and former Head of MLOps Processes
  • Manuel Jiménez Mérida, Senior AI Governance Strategist and Trustworthy AI Expert
  • Akhil Deo, Senior AI regulatory expert
  • Anish Pathak, ML Engineer

We thank the following experts for their contributions:

  • Simone Oldekop, Former Head of Responsible AI Office, Carl Zeiss AG
  • Dirk Wacker, AI Lead, Giesecke+Devrient GmbH
  • Steffen Herterich, Lead Principal Engineer - Data Protection and Privacy, Infineon Technologies AG
  • Geoffroy Pavillet, Data Protection Counsel, Linde GmbH
  • Cecilia Carbonelli, Senior Principal - Head of Algorithm Concept & Modeling - Responsible AI Tech Lead, Infineon Technologies AG
  • Christiane Miethge, Senior Manager AI Communication and Policy, Infineon Technologies AG
  • Eljalill Tauschinsky, Consultant Data Protection and Data Law, EnBW Energie Baden-Württemberg AG
  • Heinrich Dold, Senior Transformation Manager, EnBW Energie Baden-Württemberg AG
  • Alexandra Wander, Program Manager - Responsible AI, Carl Zeiss AG
  • Simone Heitzer, AI Strategist, MTU Aero Engines AG
  • Asad Preuss-Dodhy, Sr. Principal - Data Anonymisation and Privacy Technologies Expert, Roche Diagnostics GmbH (Information Solutions)
  • Sona Jose, Responsible AI Consultant, Carl Zeiss AG
  • Araceli Alcala, RA Manager | RA SME for AI, Carl Zeiss Meditec AG
  • Philippe Coution, Head of Digital Interaction & Lead AI Quality, TÜV SÜD AG
  • Sebastian Hallensleben, Chair of Joint Technical Committee (JTC) 21 on AI, CEN and CENELEC