AI Regulation Watch
What to Expect in 2026
The regulatory landscape for AI is shifting. As 2026 approaches, UK and EU frameworks are moving from consultation to implementation, bringing both clarity and complexity. For SMEs and heritage organisations already navigating tight budgets and evolving technologies, the question isn't whether to prepare but how.
The Regulatory Horizon
The EU AI Act, which entered into force in 2024 and applies in phases through 2027, establishes a risk-based framework that categorises AI systems by their potential harm. High-risk applications (those affecting safety, fundamental rights, or critical infrastructure) face stringent requirements around transparency, data governance, and human oversight. Meanwhile, the UK is pursuing a sector-specific approach, with regulators such as the ICO, CMA, and Ofcom developing AI guidance within their existing remits.
For many organisations, the immediate concern is classification: where does your AI use fall on the risk spectrum? A chatbot offering museum information sits in a different category from an AI system making decisions about grant allocations or access to services. Understanding these distinctions is the first step toward proportionate compliance.
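As a thinking aid, the Act's broad tiers can be sketched as a simple triage helper. The tier names below mirror the Act's general structure (prohibited, high-risk, limited-risk transparency duties, minimal risk), but the example use cases and their assigned tiers are illustrative assumptions only, not a legal determination:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"    # e.g. social scoring by public authorities
    HIGH = "high-risk"           # e.g. decisions on access to essential services
    LIMITED = "limited-risk"     # transparency duties, e.g. chatbots
    MINIMAL = "minimal-risk"     # e.g. internal cataloguing aids

# Hypothetical mapping of heritage-sector use cases to tiers
# (illustrative assumptions, not legal advice).
EXAMPLE_CLASSIFICATION = {
    "visitor information chatbot": RiskTier.LIMITED,
    "grant allocation scoring": RiskTier.HIGH,
    "collection metadata tagging": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unlisted cases default to MINIMAL but should be reviewed."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)

print(classify("grant allocation scoring").value)  # high-risk
```

Even a toy mapping like this makes the key point visible: the same organisation can operate tools in different tiers at once, and compliance effort should follow the tier, not the organisation's size.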
What This Means for Heritage and Cultural Organisations
Heritage organisations occupy an interesting position in this regulatory landscape. Many are already using AI for digitisation, cataloguing, and public engagement, applications generally considered lower risk. Yet, as these tools become more sophisticated, questions of accountability emerge. When an AI system recommends which artefacts to prioritise for conservation, or how to interpret community feedback, who bears responsibility for those decisions?
Transparency will be central to the new frameworks. Visitors and stakeholders have the right to know when they're interacting with AI systems, and organisations must be able to explain how decisions are reached. This doesn't mean exposing proprietary algorithms; it means documenting processes, maintaining human oversight, and ensuring that AI augments rather than replaces professional judgment.
Data governance presents another challenge. Heritage organisations often work with sensitive cultural materials, historical records, and community knowledge. The intersection of AI regulation with existing data protection laws means thinking carefully about consent, provenance, and appropriate use. Training an AI model on archival photographs, for instance, requires consideration of copyright, privacy rights, and cultural protocols, particularly when working with materials from Indigenous or marginalised communities.
Actionable Steps for SMEs
Small and medium-sized enterprises face a particular challenge: they're expected to comply with the same fundamental principles as large corporations, but with far fewer resources. The key is to start with foundations rather than attempting comprehensive compliance overnight.
Begin with an AI audit.
Map where and how you're currently using AI, even in seemingly minor ways. This includes third-party tools, automated systems, and any processes involving machine learning. Understanding your current footprint is essential before you can assess risk or plan compliance measures.
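In practice, the audit can begin as a simple register. The sketch below shows one minimal way to structure it; the field names and example entries are assumptions for illustration, not a mandated regulatory schema:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an AI-use register; fields are illustrative, not a required schema."""
    name: str
    purpose: str
    provider: str              # in-house or third party
    involves_personal_data: bool
    human_oversight: str       # who reviews outputs, and when

# Hypothetical register entries for a small heritage organisation.
register = [
    AIToolRecord("CatalogueBot", "auto-tagging collection images",
                 "third-party SaaS", False, "curator reviews weekly"),
    AIToolRecord("Visitor chatbot", "answering opening-hours queries",
                 "third-party SaaS", True, "escalates to staff on request"),
]

# Flag entries that need closer review before planning compliance measures.
needs_review = [r.name for r in register if r.involves_personal_data]
print(needs_review)  # ['Visitor chatbot']
```

A spreadsheet serves the same purpose; what matters is that every AI touchpoint, including third-party tools, appears somewhere with a named purpose and a named overseer.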
Document your processes.
Regulators will expect organisations to demonstrate due diligence. This doesn't require elaborate systems; clear, accessible documentation of how AI tools are selected, implemented, and monitored is often sufficient. Record the rationale behind AI deployment decisions, the safeguards in place, and how human oversight functions.
Establish governance structures.
Even small organisations benefit from clarity about who oversees AI use, how concerns are raised, and what processes exist for reviewing AI-driven decisions. This might be as simple as designating responsibility within existing roles rather than creating new positions.
Invest in skills, not just tools.
Regulatory compliance isn't purely technical; it's organisational. Staff need to understand AI capabilities and limitations, recognise potential risks, and know when to escalate concerns. Training doesn't have to be extensive; awareness and critical thinking matter more than technical expertise.
Engage with sector-specific guidance.
As regulators develop their approaches, sector-specific guidance will emerge. Heritage organisations should watch for materials from bodies like The National Archives, Arts Council England, and relevant professional associations. These resources will translate general principles into practical, context-appropriate advice.
Anxiety and Opportunity
There's understandable anxiety around compliance, particularly the fear that regulation will stifle innovation or impose impossible burdens on smaller organisations. Yet, well-designed regulation can create opportunities. Clear frameworks build public trust, making people more comfortable engaging with AI-enhanced services. Compliance standards can level the playing field, ensuring that responsible organisations aren't undercut by those cutting corners. And the process of preparing for regulation often surfaces valuable insights about how AI actually functions within your organisation.
The heritage sector, in particular, has much to contribute to these conversations. Museums, archives, and cultural organisations have centuries of experience balancing preservation with access, navigating sensitive materials, and serving diverse communities. These skills translate directly to the ethical challenges AI regulation seeks to address.
Final Thought
Regulation should not be seen as an obstacle but as a framework for responsible innovation. The organisations that thrive in 2026 and beyond will be those that view compliance not as a checkbox exercise but as an opportunity to strengthen practice, build trust, and ensure their AI use genuinely serves their mission. Start now, start small, but start with intention.