Policy in Motion

What the AI Act Means in Practice for SMEs

A Shifting Policy Landscape

The EU’s AI Act, agreed in 2024 and now moving into implementation, is the world’s first comprehensive framework for regulating artificial intelligence. Alongside it, the UK has taken a lighter, sector-led approach, with guidance framed by principles rather than prescriptive rules. For SMEs operating in or trading with Europe, these policy shifts may feel abstract, yet their impact will be very real.

For smaller businesses, the key challenge is knowing what matters now, and what can safely be treated as “noise.” The AI Act sets ambitious goals: safeguarding citizens’ rights, ensuring transparency, and managing risks. But how should SMEs interpret these requirements in practical terms?


Risk Tiers and What They Mean for You

At the heart of the AI Act is a risk-based classification system:

  • Unacceptable Risk: AI uses that manipulate behaviour, exploit vulnerabilities, or conduct mass surveillance. These are outright banned.

  • High Risk: AI systems used in critical areas like recruitment, credit scoring, education, healthcare, or law enforcement. These face strict requirements around transparency, testing, and human oversight.

  • Limited Risk: Systems like chatbots or recommendation engines, which must be clearly labelled as AI but are otherwise lightly regulated.

  • Minimal Risk: Everyday tools, such as AI-enabled spellcheckers, with no additional obligations.

It is worth noting that machine learning techniques have been used for decades in many applications that are not the target of the AI Act. These use cases are embedded within procedures that already have well-established quality checks, such as infrastructure surveys.

Most SMEs will find their use of AI falls into the limited or minimal categories, especially if they are using off-the-shelf generative AI tools for marketing, admin, or customer service. But if your business operates in a regulated area, say as a training provider using AI for assessments or a finance firm deploying scoring systems, you may cross into high-risk territory.

The first step is to map your AI use cases against this framework. Understanding where your tools fall helps separate urgent compliance priorities from background noise; the sketch below shows one lightweight way to record that mapping.
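For teams that prefer something concrete, here is a minimal, illustrative Python sketch of an internal AI register. The example use cases, tier assignments, and notes are hypothetical assumptions for illustration, not legal determinations; classifying a real system should follow the Act's actual criteria.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict transparency, testing, and oversight duties
    LIMITED = "limited"            # disclosure/labelling obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical register of AI use cases for a small business.
# Each tier assignment here is an assumption for illustration only.
ai_register = [
    {"use_case": "Generative AI for marketing copy", "tier": RiskTier.MINIMAL,
     "note": "Off-the-shelf tool; no customer-facing decisions."},
    {"use_case": "Chatbot for customer support", "tier": RiskTier.LIMITED,
     "note": "Must be clearly labelled as AI."},
    {"use_case": "CV screening for recruitment", "tier": RiskTier.HIGH,
     "note": "Recruitment is a high-risk area; human review required."},
]

# Surface the entries that need urgent compliance attention.
for entry in ai_register:
    if entry["tier"] in (RiskTier.HIGH, RiskTier.UNACCEPTABLE):
        print(f"PRIORITY: {entry['use_case']} ({entry['tier'].value}) - {entry['note']}")
```

Even a register this simple makes the separation visible: the high-risk entry surfaces as a compliance priority, while the minimal and limited entries can be handled through routine labelling and disclosure.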


Practical Priorities for SMEs

  1. Transparency and Disclosure
    Even for low-risk systems, SMEs will increasingly be expected to disclose when AI is being used. This means clear labelling in customer-facing tools and being ready to explain what role AI plays in your processes.

  2. Data Governance
    The Act places strong emphasis on data quality, bias mitigation, and documentation. SMEs should start small: document the sources of training data (where possible), ensure data is stored securely, and put in place basic checks to avoid discriminatory outcomes (a minimal register sketch follows this list).

  3. Human Oversight
    AI should support, not replace, human judgment in critical decisions. SMEs should be explicit about when human review takes place, especially in sensitive processes like hiring or financial decisions.

  4. Vendor Management
    Many SMEs rely on third-party AI platforms. While the Act primarily targets developers of high-risk systems, deployers (the Act's term for business users) still share responsibility. SMEs should request transparency from vendors: How is the model trained? What safeguards are in place? Are outputs explainable?

  5. Skills and Culture
    Compliance is not just a legal task. Building AI literacy within your organisation, helping staff understand risks, responsibilities, and ethical implications, is key to embedding responsible practice.
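On the data governance point above, "starting small" can be as simple as a structured record per data source. The sketch below is an assumption about what such a record might contain; all field names and the example entry are illustrative and should be adapted to your own processes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataSourceRecord:
    """One entry in a lightweight data-governance register.

    All fields are illustrative; adapt them to your own processes.
    """
    name: str                     # e.g. vendor dataset, CRM export
    origin: str                   # where the data comes from
    contains_personal_data: bool  # triggers GDPR duties alongside the AI Act
    bias_check_done: bool         # basic check for discriminatory outcomes
    last_reviewed: date
    notes: str = ""

# Hypothetical example entry.
record = DataSourceRecord(
    name="Customer enquiry logs",
    origin="In-house support inbox, 2022-2024",
    contains_personal_data=True,
    bias_check_done=False,
    last_reviewed=date(2025, 1, 15),
    notes="Personal data present: review retention policy before reuse.",
)

# Flag records that still need a bias check.
if not record.bias_check_done:
    print(f"TODO: run bias check on '{record.name}'")
```

A register like this doubles as evidence of due diligence: if a regulator, customer, or vendor asks what data underpins your AI use, the answer is already written down.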


Avoiding the Noise

The AI Act has generated headlines about sweeping new rules and heavy fines. For SMEs, it’s important to separate signal from noise. Not every requirement applies to every business, and not every risk is relevant.

Much of the burden of compliance falls on AI developers, not end users. If you’re adopting rather than building systems, your focus should be on due diligence, transparency with stakeholders, and aligning with existing data protection and consumer rights obligations.

Similarly, while debates continue about national compute investments or regulatory sandboxes, these are primarily relevant for large-scale innovators and infrastructure providers. SMEs should keep informed but avoid being distracted by frameworks that don’t directly impact their day-to-day operations.


The UK Approach: Principles Over Prescriptions

Unlike the EU, the UK has opted for a principles-based model of regulation, empowering sector regulators (such as the ICO or FCA) to oversee AI use. This means SMEs working solely within the UK face fewer immediate legal obligations, but should still expect scrutiny around fairness, safety, and transparency.

For cross-border businesses, the EU framework will likely set the global benchmark. Even if you’re UK-based, aligning your practices with the EU’s requirements can provide future-proofing and competitive advantage.


Final Thought

For SMEs, the AI Act is not a reason to panic; it is a chance to embed trust and resilience into how you use technology. By focusing on transparency, data governance, and responsible vendor management, small businesses can meet the spirit of the law without being paralysed by its detail.

The real risk isn't regulation; it's treating AI as a black box. SMEs that cultivate literacy, demand clarity from suppliers, and keep humans in the loop will be best placed to navigate not only today's rules but tomorrow's evolving landscape.
