Complying with the AI Act will enable companies to build more trustworthy AI products and services. However, companies face several challenges when it comes to implementing it. appliedAI and its partners identified three key ones:
1. Lack of actionable guidelines: companies lack clear guidelines for operationalizing the AI Act, particularly on how best to orchestrate tasks across the enterprise.
2. Ongoing publication of harmonized standards: because the harmonized standards are still being published, enterprises are discouraged from exploring high-risk AI systems, and technical and legal stakeholders struggle to build a shared understanding of compliance.
3. Uncertainty around roles and skills: companies are unsure about the necessary skill profiles and the first steps they can already take to operationalize the AI Act.
To support the entire AI community in moving from theory to practice, appliedAI and its partners have published this practitioner-driven whitepaper sharing best practices and lessons learned for implementing the requirements for high-risk AI systems.
In this whitepaper, we introduce the AI Act Governance Pyramid framework, a structured approach for operationalizing the AI Act by orchestrating stakeholders across enterprise layers. We then compile technical and governance best practices for implementing the AI Act’s requirements for high-risk AI systems from a practitioner’s perspective, including references to available international standards. Finally, we update appliedAI’s ML Skill Profiles framework to take the EU AI Act into consideration, and provide a guide on what companies can start doing today to prepare to operationalize the AI Act.
This report is the result of the appliedAI working groups and is based on the experience of leading experts from appliedAI partner companies.