This study examines the impact of the AI Act's risk classification criteria on AI innovation in companies, based on the drafts of the EU Commission, the EU Parliament, and the EU Council, and identifies the questions that must be addressed to provide greater clarity and planning certainty. The study explicitly focuses on interpreting the criteria from a practical perspective. At the time of this study's publication (March 2023), negotiations in Brussels are still ongoing, and we hope that our proposals for more precise classification will be taken up by the negotiators.
Of the AI systems examined, 18% fall into the high-risk category and 42% into the low-risk category; for the remaining 40%, it is unclear whether they fall into the high-risk category. Depending on how these unclear cases are ultimately classified, the proportion of high-risk systems in this sample ranges from 18% to 58%. One of the AI systems could be banned outright.
Unclear risk classifications slow down investment and innovation. They occur primarily in the areas of critical infrastructure, employment, law enforcement, and product safety (Annex II).
The analysis of the causes of this uncertainty concludes with concrete recommendations for policymakers and companies on how to promote responsible AI innovation.