Trustworthy AI - Tackling the AI Act with appliedAI


The AI Act poses a challenge for companies

The AI Act demands that companies address AI governance to comply with its regulations at both the organizational and use-case levels. A major challenge for practitioners is that the necessary tools and standards are still being developed, making it difficult for companies to prepare.

As a result, companies need to find answers to a variety of questions:

  • Which of our use cases fall into the high-risk category, and how do we bring them into compliance?
  • How do we achieve organizational compliance as quickly and cost-efficiently as possible, and how do we maintain it?
  • How do we upskill our AI experts, users, and employees on the new requirements?
  • How do we translate AI Act requirements to other regulations around the world?

appliedAI works on AI Act compliance in all relevant dimensions

More detailed information about our concrete offerings will be available shortly.

First Steps Checklist: How to deal with the AI Act

  1. Establish a basic understanding of the AI Act
  2. Assess your AI use case portfolio in terms of risk categories and create a plan for use cases that fall under “high risk” or “prohibited”
  3. Determine your organizational and MLOps AI Act readiness through the MLOps Assessment
  4. Decide on the maximum risk level you allow per use case cluster in your organization
  5. Initiate a basic governance program for AI Act compliance
  6. Set up MLOps processes to fulfill the most basic AI Act requirements
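The portfolio assessment in step 2 can be sketched as a simple triage over the AI Act's four risk tiers (prohibited, high risk, limited risk, minimal risk). The sketch below is illustrative only: the tag names and the mapping from tags to tiers are hypothetical placeholders, not legal advice or an official classification tool.

```python
# Minimal sketch of an AI Act risk triage for a use-case portfolio.
# The four tiers follow the AI Act's risk-based approach; the trigger
# tags and their tier assignments here are illustrative assumptions.

RISK_TIERS = ["prohibited", "high", "limited", "minimal"]  # strictest first

# Hypothetical mapping from use-case tags to risk tiers.
TAG_TO_TIER = {
    "social_scoring": "prohibited",  # banned practice under the Act
    "recruitment": "high",           # Annex III area (employment)
    "credit_scoring": "high",        # Annex III area (essential services)
    "chatbot": "limited",            # transparency obligations apply
}

def classify(tags):
    """Return the strictest tier triggered by a use case's tags."""
    tiers = [TAG_TO_TIER.get(tag, "minimal") for tag in tags]
    return min(tiers, key=RISK_TIERS.index)

# Example portfolio: each use case carries a set of descriptive tags.
portfolio = {
    "CV screening assistant": ["recruitment", "chatbot"],
    "Demand forecasting": ["forecasting"],
}

for name, tags in portfolio.items():
    print(f"{name}: {classify(tags)}")
```

A real assessment would replace the keyword mapping with the Act's actual criteria (e.g. the Annex III high-risk areas) and involve legal review, but the pattern of triaging each use case to its strictest applicable tier carries over.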

Our participation in global AI expert networks

To ensure expert support for our customers, appliedAI experts work in and with institutions such as the following:


Dr. Andreas Liebl

Global Partnership on AI (GPAI)

Dr. Andreas Liebl, Managing Director and Founder of appliedAI, is part of the expert group for innovation and commercialization within the Global Partnership on AI (GPAI), a global initiative to promote the responsible and human-centric development and use of artificial intelligence.


Dr. Till Klein

OECD Working Party on Artificial Intelligence Governance (AIGO)

Dr. Till Klein, Head of Trustworthy AI at the appliedAI Institute for Europe gGmbH, is a member of the OECD Network of Experts on AI and supports trustworthy AI within the OECD Working Party on Artificial Intelligence Governance (AIGO). The working party oversees and gives direction to the DPC work programme on AI policy and governance.

Related Whitepapers and Articles

Whitepaper

AI Act: Risk Classification of AI Systems from a Practical Perspective

Whitepaper

How to find and prioritize AI Use Cases

Survey

AI Act Impact Survey

Die Zeit - Article

Überraschende Allianzen im Kampf um KI-Regulierung (Surprising alliances in the battle over AI regulation)

Talk with Germany's top AI Act operationalization experts

Are you looking for more information about how your company can become compliant with the AI Act? Are you interested in learning more about how appliedAI can support you? Contact us now!