RegImpact
EU AI Act · Published 2/19/2025

Small Businesses’ Guide to the AI Act

Everything you need to know about the AI Act, for small and medium-sized enterprises (SMEs) in the EU and beyond.

What this rule actually says

The EU AI Act is Europe's rulebook for AI systems. It sorts AI tools into risk categories—from low-risk (most chatbots) to high-risk (medical diagnostics, hiring decisions)—and requires different oversight depending on which bucket a product lands in. High-risk systems need documentation, testing, and human oversight. Low-risk ones mostly just need transparency.

Who it applies to

  • Geography: If selling to anyone in the EU, or if the system's output is used inside the EU, this applies, even if the company is based elsewhere.
  • Hiring & employment AI: If building a resume screener, interview assistant, or job-matching tool, this is high-risk. You'll need impact assessments and compliance documentation.
  • Medical AI: Medical scribes, diagnostic suggestions, and patient data analysis are high-risk. Standard chatbots answering general health questions are lower-risk.
  • Support & customer service chatbots: General-purpose support bots are usually low-risk. Chatbots making individual decisions (e.g., loan denial, benefits eligibility) are high-risk.
  • Data scope: The rule covers systems using personal data *and* systems that could affect fundamental rights, so even anonymized medical analysis or hiring recommendations trigger it if they shape real outcomes. A rough classification sketch follows this list.
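
To make that classification concrete, here is a minimal Python sketch of a first-pass inventory check built from the categories above. The use-case labels and the mapping are illustrative assumptions, not the Act's legal definitions (those live in Annex III), so treat any result as a starting point for legal review rather than a conclusion.

```python
# Minimal sketch: a first-pass risk classifier for your own product inventory.
# The labels and the mapping mirror the bullet list above; they are NOT the
# Act's legal categories, which should be confirmed against Annex III.

HIGH_RISK_USE_CASES = {
    "resume_screening",
    "interview_scoring",
    "job_matching",
    "medical_diagnosis_support",
    "patient_data_analysis",
    "loan_decision",
    "benefits_eligibility",
}

LOW_RISK_USE_CASES = {
    "general_support_chatbot",
    "general_health_qa",
    "internal_drafting_assistant",
}

def classify(use_case: str) -> str:
    """Return a rough risk tier for a given use-case label."""
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk"
    if use_case in LOW_RISK_USE_CASES:
        return "low-risk"
    return "unclassified - review against Annex III"

if __name__ == "__main__":
    for product in ["resume_screening", "general_support_chatbot", "fraud_scoring"]:
        print(product, "->", classify(product))
```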

What founders need to do

  1. Classify your AI (1–2 days). Map your product to the risk categories above. Hiring tools, medical systems, and decision-making AI are almost always high-risk; chat-based Q&A is usually low-risk.
  2. For low-risk products (ongoing). Document what your AI does, note any limitations, and be transparent with users about AI involvement. This is mostly legal paperwork and a disclosure statement.
  3. For high-risk products (1–2 weeks initially, then ongoing). Commission or run an impact assessment, log training and test results, set up human review workflows for edge cases, and maintain audit trails (see the audit-log sketch after this list).
  4. Audit your training data (1 week). Check that datasets aren't biased in ways that would harm protected groups; this is especially critical for hiring and medical tools (see the bias-check sketch after this list).
  5. Monitor compliance as you scale (ongoing). EU regulators will audit high-risk AI. Keep documentation current and watch for guidance updates; the Act is new and enforcement rules are still being written.
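
For the audit-trail and human-review requirement in step 3, a minimal sketch is below. It assumes a hypothetical hiring screener that produces a score per applicant; the file name, field names, and the 0.80 review threshold are illustrative choices, not anything the Act prescribes.

```python
# Minimal sketch of an audit trail for a high-risk AI decision. The decide()
# call it would wrap, the field names, and the threshold are hypothetical.
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"
CONFIDENCE_FLOOR = 0.80  # below this, route the case to a human reviewer (example threshold)

def record_decision(applicant_id: str, score: float, outcome: str) -> dict:
    """Append one decision record; flag low-confidence cases for human review."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_score": score,
        "outcome": outcome,
        "needs_human_review": score < CONFIDENCE_FLOOR,
        "reviewed_by": None,  # filled in once a human signs off
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a borderline score gets flagged for the human-review queue.
print(record_decision("applicant-042", score=0.71, outcome="shortlist"))
```

The point is simply that every automated decision leaves a timestamped record, and that borderline cases are routed to a person before anything becomes final.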
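
For the dataset audit in step 4, one common heuristic is to compare selection rates across protected groups (the "four-fifths rule" familiar from US hiring audits). The sketch below assumes you can label each record with a group and an outcome; the 0.8 threshold is a convention from that heuristic, not an AI Act requirement.

```python
# Minimal sketch of a selection-rate check across groups. Group labels, the
# toy data, and the 0.8 threshold are illustrative; the Act does not
# prescribe this specific test.

def selection_rates(records):
    """records: iterable of (group, selected) pairs -> {group: selection rate}."""
    totals, hits = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if selected else 0)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Lowest group selection rate divided by the highest; < 0.8 is a red flag."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()), rates

# Toy data: (protected-group label, was the candidate shortlisted?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio, rates = disparate_impact_ratio(sample)
print(rates, "ratio:", round(ratio, 2), "flag:", ratio < 0.8)
```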

Bottom line

If you're building anything for hiring, medical, or high-stakes decisions, start compliance work now; for general chatbots, document your transparency practices and move on.