RegImpact
EU AI Act · Published 8/22/2024

The AI Act: Responsibilities of the European Commission (AI Office)

If you are unsure who is implementing and enforcing the new digital law, and what the specific time frames are, you might find this post—and our post on the responsibilities of the EU Member States—very helpful. The tables below provide a comprehensive list of all obligations and tasks that the AI Act places upon the […]

What this rule actually says

The EU AI Act is now in effect, and the European Commission has set up an "AI Office" to enforce it. This office decides which AI systems get flagged as high-risk, oversees compliance, and coordinates enforcement across EU countries. It's not a rule about what founders must build—it's about who's in charge of checking whether founders are following the rules.

Who it applies to

  • Geography: If you sell or operate AI in the EU, or target EU users, this matters, even if your headquarters is outside the EU.
  • AI use cases that trigger scrutiny: Medical scribes (healthcare = high-risk), hiring assistants (employment decisions = high-risk), support chatbots (general-purpose = lower risk, unless they make autonomous decisions).
  • User data: Systems processing personal data of EU residents—name, health info, employment records—are in scope. Anonymous, aggregated data is less of a concern.
  • Company size: No exemption for solo founders or small teams. The Act applies to all AI providers.
  • Decision impact: If your system makes or significantly influences decisions about healthcare, hiring, credit, or law enforcement, expect higher scrutiny.

What founders need to do

  1. Classify your AI's risk level (2–3 days). Review the EU AI Act categories: is it high-risk (medical, hiring) or general-purpose? The AI Office publishes guidance; check their website for your use case.
  2. Document what data you collect and how (3–5 days). Write down what personal info the system processes, where it's stored, and who accesses it (one way to keep this record is sketched after this list). High-risk systems need detailed risk assessments.
  3. Monitor AI Office enforcement actions (ongoing, ~30 min/month). The AI Office publishes compliance timelines and targeted sectors. Sign up for their updates so you're not blindsided by new guidance specific to your use case.
  4. If high-risk, prepare a compliance plan (1–2 weeks). This means testing for bias, documenting the decisions the AI makes, and setting up audit trails (see the second sketch after this list). The timeline depends on what you're building; medical and hiring tools face stricter deadlines than chatbots.
  5. Get legal counsel if you're unsure (budget €500–2k). An EU-based lawyer familiar with AI Act compliance can review your product in 1–2 hours and tell you exactly what applies.
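For step 2, here is a minimal sketch of what "document what data you collect" can look like in practice, assuming you keep a simple Python record alongside your codebase. The field names, categories, and values are illustrative choices, not anything prescribed by the Act or the AI Office.

```python
from dataclasses import dataclass, field

@dataclass
class DataInventoryEntry:
    """One category of personal data your system processes, kept as a living record."""
    category: str            # e.g. "health info", "employment records"
    source: str              # where the data comes from
    storage_location: str    # where it is stored (region matters for EU data)
    retention_days: int      # how long it is kept
    access_roles: list = field(default_factory=list)  # who can access it

# Illustrative entry -- the values are examples, not a required template
inventory = [
    DataInventoryEntry(
        category="patient visit notes",
        source="medical scribe transcription",
        storage_location="EU-hosted object storage",
        retention_days=365,
        access_roles=["clinician", "support-admin"],
    ),
]

for entry in inventory:
    print(f"{entry.category}: stored in {entry.storage_location}, "
          f"retained {entry.retention_days} days, access: {', '.join(entry.access_roles)}")
```

Even a record this small answers the questions a risk assessment starts with: what is processed, where it lives, how long it stays, and who can see it.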
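For step 4, one way to start on "documenting decisions the AI makes" is an append-only decision log. This is a minimal sketch assuming a JSON-lines file and a hypothetical `log_decision` helper; it is not a format the AI Office mandates, just a starting point you can adapt.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_decision_audit.jsonl")  # illustrative filename

def log_decision(system: str, input_summary: str, output_summary: str,
                 human_reviewed: bool) -> None:
    """Append one AI decision to a JSON-lines audit trail."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system": system,
        "input_summary": input_summary,    # summarize; avoid logging raw personal data
        "output_summary": output_summary,
        "human_reviewed": human_reviewed,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a hiring assistant recording a screening recommendation
log_decision(
    system="resume-screening-v2",
    input_summary="candidate #4821, role: backend engineer",
    output_summary="recommended for interview",
    human_reviewed=True,
)
```

A plain file like this won't satisfy every high-risk requirement on its own, but it gives you something concrete to show when you sit down with counsel or an auditor.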

Bottom line

Monitor now, and act within the next 2–3 months if you're building high-risk AI (medical, hiring) targeting the EU; chatbot founders can move more slowly but should still get clarity on whether they're in scope.