Experts Comment: The EU AI Act Comes Into Force This August – Will It Help Or Hinder European Startups? – TechRound

The EU AI Act is already changing how European tech companies operate, and the most demanding part has not arrived yet.

General obligations around prohibited AI systems and transparency for limited-risk tools have been in force since February 2025. The high-risk provisions – the ones that will require quality management systems, conformity assessments, human oversight mechanisms and ongoing monitoring – kick in from August 2027.

The debate is no longer about whether AI should be regulated. Most founders accept that it should be. The argument is about whether this particular framework is calibrated for the companies it is meant to help. Critics point to a structural problem: compliance obligations are set by product category, not company size. A three-person startup building an AI-powered CV screening tool faces the same Annex III high-risk requirements as a 10,000-person enterprise building an identical product.

Conformity assessment for a high-risk system runs to six figures in legal, technical and monitoring costs. For a bootstrapped team burning through runway, that is a fundamentally different burden than it is for a company with a standing compliance department.

The February 2026 ACT Online survey found that 58% of EU and UK developers already report regulation-driven launch delays, with average annual losses of between €94,000 and €322,000 per firm attributed to legal uncertainty. The Commission missed its own 2 February 2026 deadline to clarify which use cases qualify as high-risk under Article 6, which has pushed legal teams into ultra-conservative interpretations that delay development even further.

The EU has also been explicit: the AI Act sits on top of GDPR, the Product Liability Directive, CSRD and 27 national enforcement regimes with no guarantee of consistent interpretation across member states.

 

The Asymmetry Problem

 

The case that the Act disproportionately affects smaller firms rests on a straightforward observation: Big Tech has already built the compliance infrastructure.

Companies that spent years absorbing GDPR now have legal teams, risk management processes and vendor assessment frameworks that can absorb a new regulatory layer without disrupting product development. For a seed-stage founder in Berlin or Warsaw, the same requirements can consume engineering capacity, divert runway and delay launches.

There are mitigations built into the Act. Article 62 provides for regulatory sandboxes, reduced fees and simplified procedures specifically for SMEs, and member states are required to establish at least one sandbox per country by August 2026.

For startups using third-party AI models rather than building their own, the compliance exposure is also significantly lower: deployers face lighter obligations than providers, and most API-first startups sit in the deployer category unless they fine-tune models on proprietary data. That status shift – from deployer to provider – is one of the least understood aspects of the law and one of the most consequential for startups scaling into enterprise markets.

The more optimistic read is that the Act creates something Europe has historically struggled to produce: a shared trust framework that compliant builders can use as a commercial differentiator. In B2B and public-sector procurement, being able to demonstrate AI Act compliance may become a contract precondition long before any regulator levies a fine. Enterprise vendor risk assessments are already demanding documentation that mirrors the Act’s requirements.

For founders who treat compliance as a product investment rather than a legal obligation, that could represent a genuine advantage over US competitors operating under lighter-touch regimes.

We asked experts what they think.


Our Experts:

 

  • Collin Hogue-Spears, Senior Director, Black Duck Software
  • Maria Mohonea, Trademark and Design Attorney, Barnes Law
  • Matias Rodsevich, CEO and Founder, PRLab
  • Anatoly Kvitnitsky, CEO and Founder, AI or Not
  • Magnus Wretblad, Co-Founder, Multiply.co
  • Srividya Narayanan, Regulatory Affairs Specialist and Co-Founder, ReguTron

 

Collin Hogue-Spears, Senior Director, Black Duck Software

 

“Yes, and the asymmetry is legal, not financial. Big Tech absorbed the provider and deployer liability lesson with GDPR. Most startups have not. A five-person SaaS team integrating OpenAI or Mistral APIs sits in the deployer category with total compliance exposure under €50,000. The moment that team fine-tunes the model on proprietary customer data to win an HR screening or credit scoring deal, Article 3 elevates the company to provider status. The bill jumps to €200,000 to €500,000 upfront and €80,000 to €150,000 annually, plus legal liability for the fundamental safety of the architecture. Most founders do not realise their status shifted until the first enterprise Vendor Risk Assessment asks for the Quality Management System documentation.

“The Act is calibrated by product category, not company scale, and it stacks on top of GDPR, CSRD and 27 national enforcement authorities with no guarantee of consistent interpretation. The Commission missed its own 2 February 2026 legal deadline under Article 6 to clarify which use cases qualify as high-risk, which pushes legal departments into ultra-conservative interpretations that block development. Founders are being asked to comply with rules that do not yet exist in workable form.

“Enterprise procurement is enforcing the AI Act before any regulator levies a fine. In 2026, Vendor Risk Assessments demand technical documentation, risk-tier classifications, human oversight statements and CSRD Scope 3 disclosures as contract preconditions. That dynamic converts compliance from cost centre to procurement differentiator, but only for founders who build the portfolio early. The EU’s InvestAI facility mobilises €20 billion for supercomputing infrastructure. Imposing the world’s strictest regulatory standards without corresponding capital access is the structural trap European founders have been warning about since the Draghi Report.”

 

Maria Mohonea, Trademark and Design Attorney, Barnes Law

 

“The Act applies across the value chain and attaches obligations based on function, not company size. A solo founder building an AI-driven hiring tool faces identical Annex III high-risk obligations to a global enterprise deploying the same product. The framework itself is broadly the right one. If AI is making decisions about jobs, credit or healthcare, then documentation, traceability and accountability are justified.

“The difficulty is how those requirements land in practice. The Act assumes a level of internal structure that most startups simply do not have yet, and the compliance stack sits on top of GDPR and existing product liability rules. The burden is also front-loaded, with costs arriving long before revenue does. Larger companies are simply better set up to deal with this. They already have compliance teams, internal processes and the ability to spread costs across multiple products. The Act does not create that imbalance, but it does reinforce it. And the SME concessions in Article 62, while genuine, reduce fees rather than the underlying workload.

“For founders, the sandboxes are worth engaging with seriously. They offer direct regulator contact that no consultant can replicate. But the wider infrastructure needs to improve: clearer templates, faster guidance and sandboxes that give practical answers rather than just controlled environments. The direction is right. The execution is uneven. And unless the support layer catches up, the Act risks slowing exactly the companies it is trying to enable.”

 

Matias Rodsevich, CEO and Founder, PRLab

 

“Regulation that applies equally to everyone is not, in practice, equal at all. The EU AI Act creates a compliance architecture that Big Tech absorbs as overhead while startups absorb as existential cost. A company like Google can dedicate a legal and compliance team entirely to this. A ten-person startup in Berlin or Tallinn cannot. When compliance costs the same regardless of company size, it is not a level playing field. It is a filter that keeps smaller players out. What costs a large company two per cent of its budget can cost a startup twenty.

“The timing problem makes it worse. You cannot ask founders to comply with rules that have not been written yet. The Code of Practice was published only weeks before certain obligations became enforceable, and for founders planning hiring and product roadmaps months in advance, that gap has real consequences.

“The good news is that most founders are worrying about the wrong thing. For startups using third-party AI models rather than building their own, the main obligation is due diligence on vendor relationships, not a direct compliance burden. Classify your systems accurately. Document everything. Then build. Founders are not against responsible AI. They are against making irreversible decisions in a regulatory vacuum. Europe has the talent. It now needs the regulatory clarity to match.”

 

Anatoly Kvitnitsky, CEO and Founder, AI or Not

 

“When most people think about the EU AI Act, they picture high-stakes use cases: AI in hospitals, law enforcement or government infrastructure. But the Act casts a much wider net, and that is where it gets challenging for startups. Take hiring. If your product uses AI to screen CVs or rank job applicants, you could already be operating in high-risk territory. That is not a niche use case. Thousands of companies are building exactly these kinds of tools.

“The same goes for credit scoring and insurance. There is a generation of fintech and insurtech startups using AI to make lending and underwriting decisions. Under the Act, they will face the same compliance requirements as a global bank. Biometrics, KYC verification, facial recognition and age verification are also firmly in scope, and they attract significant venture investment.

“The core problem is simple: the compliance burden does not shrink because your company does. A ten-person startup faces essentially the same obligations as a 10,000-person enterprise. That is the reality policymakers urgently need to address.”

 

Magnus Wretblad, Co-Founder, Multiply.co

 

“The EU AI Act’s compliance burden will disproportionately disadvantage smaller AI companies, and that should concern anyone who wants Europe to remain competitive. At Multiply, we build AI-powered tools for marketing teams. We are not training frontier models or making autonomous decisions about credit or healthcare. But navigating the Act’s requirements still demands legal resources, documentation overhead and compliance expertise that Big Tech can absorb without blinking. For a startup, that same overhead means slower shipping, diverted focus and real competitive disadvantage against US or Asian players operating under lighter-touch regimes.

“The regulation’s risk-tiered framework is conceptually sound, but the execution often misses the texture of how AI products actually get built. In practice, you are iterating constantly. Your product six months from now looks nothing like today’s. Static compliance documentation does not map cleanly onto that reality.

“What I wish policymakers understood: speed is a safety mechanism in this space. Fast iteration means faster detection of problems. Slowing startups down does not make AI safer. It just makes European startups less competitive while the real frontier moves elsewhere.”

 

Srividya Narayanan, Regulatory Affairs Specialist and Co-Founder, ReguTron

 

“The EU AI Act is well-intentioned, but its compliance architecture was not designed with startups in mind. As both a regulatory affairs specialist and co-founder of an AI-powered platform in the life sciences space, I see this tension daily. The documentation burden, risk classification logic and conformity assessment requirements demand legal, technical and compliance infrastructure that early-stage companies simply do not have. Big Tech can absorb this. A startup with a three-person team cannot, at least not without diverting runway that should be going into product.

“That said, I do not think the answer is deregulation. The Act creates something valuable: a common language for AI trust. In regulated industries like medtech and pharma, we already know that clear frameworks, however demanding, ultimately benefit responsible builders. The companies that treat compliance as infrastructure rather than a last-minute checkbox will be the ones that scale.

“What I wish policymakers understood is that high-risk classification needs to move faster with the technology. By the time conformity assessment guidance catches up to where AI systems actually are, the market has already moved on. For European founders right now: map your risk classification early, build transparency documentation into your development cycle from day one, and do not wait for 2027 to start.”

 
