What Can Go Wrong?
Learning from AI Failures in 2025
A practical review of the year's most telling AI missteps and what SMEs and heritage organisations should carry into 2026.
2025 was the year AI enthusiasm met reality.
Across sectors, from creative studios to local authorities to heritage bodies, organisations began deploying AI more boldly. But as adoption scaled, so did the mistakes. Some were small and recoverable. Others were reputationally or financially damaging. All illuminated a clear truth:
👉 AI failure isn’t usually about bad technology; it’s about weak process, poor assumptions, or misaligned expectations.
This year-end review distils the most significant AI failures and “near-misses” affecting UK SMEs and cultural organisations and offers grounded guidance for 2026 investment decisions.
1. Over-automation Backfires in Creative SMEs
A common pattern emerged across design studios, media production companies, and content teams:
AI tools were brought in to “speed up” workflows, only to slow them down again.
Typical failure modes included:
AI-generated assets needing more human revision than manual production
Loss of brand coherence due to inconsistent outputs
Misuse of models without adequate prompt records, leading to unrepeatable results
Confidential client data accidentally fed into public models
The most damaging cases involved SMEs promising “AI-powered rapid delivery” and then failing to meet deadlines because the tools were unpredictable, or because staff lacked the expertise to control and evaluate outputs.
Lesson for 2026:
AI should support creative work, not replace it.
Pilot first, measure objectively, and never promise a deliverable that depends on model behaviour you can’t control.
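One of the failure modes above, "models without adequate prompt records", has a cheap fix: log every generation call so results can be traced and, where the model allows it, reproduced. The sketch below is illustrative, not a specific tool's API; the function name, fields, and file path are all assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, model: str, params: dict, output: str,
                   log_path: str = "prompt_log.jsonl") -> dict:
    """Append one reproducibility record per generation call (hypothetical helper)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "params": params,  # e.g. temperature, seed - whatever the provider exposes
        "prompt": prompt,
        # Hashes let you verify later runs against the record without storing huge outputs twice
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Even this minimal record is enough to answer a client's "how was this asset made?" question, which many of the studios above could not.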
2. Heritage Organisations Misusing Generative Models
Generative reconstruction still brings significant risk, and 2025 exposed the consequences.
Several high-profile cases saw museums or archives share AI-enhanced images that:
fabricated missing artefact details
imposed inappropriate cultural features
failed to distinguish clearly between authentic and reconstructed elements
unintentionally reinforced colonial perspectives embedded in training datasets
In one “near miss,” a major UK institution paused publication of a reconstruction after community partners flagged that the AI-generated elements contradicted oral histories.
Lesson for 2026:
Heritage AI must be transparent, participatory, and provenance-first.
Always label reconstructed elements, track model contributions, and involve cultural stakeholders early.
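"Label reconstructed elements and track model contributions" can be made concrete with a simple provenance record that distinguishes authentic from AI-generated elements and flags anything a community stakeholder has not yet reviewed. This is a minimal sketch of the idea, not a heritage-sector standard; all class and field names are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class ImageElement:
    label: str                      # e.g. "left handle", "glaze pattern"
    source: str                     # "authentic" or "ai_reconstructed"
    model: Optional[str] = None     # which model produced it, if reconstructed
    reviewed_by: list = field(default_factory=list)  # stakeholders who signed off

@dataclass
class ReconstructionRecord:
    artefact_id: str
    elements: list = field(default_factory=list)

    def unverified_reconstructions(self):
        """AI-generated elements not yet reviewed by a community partner."""
        return [e for e in self.elements
                if e.source == "ai_reconstructed" and not e.reviewed_by]

    def publishable(self) -> bool:
        """Block publication until every reconstructed element has a reviewer."""
        return not self.unverified_reconstructions()
```

A `publishable()` gate like this is exactly the check that turned the incident above into a near miss rather than a published error.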
3. SMEs Burned by Expensive AI Subscriptions
Many businesses rushed into costly cloud AI services believing they would deliver immediate ROI. Instead, they discovered:
unpredictable usage-based costs
difficulty exporting data or switching providers
models that were too generalised to solve their niche challenges
hidden energy and compute fees that escalated month by month
Some organisations cancelled projects mid-year because running costs quietly doubled.
Lesson for 2026:
Avoid lock-in.
Use hybrid or cloud-agnostic AI infrastructure and run small-scale trials before committing to long-term contracts.
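Two of the practices above, avoiding lock-in and capping trial spend, can be combined by wrapping any provider behind a common interface with a hard budget. The sketch below assumes nothing about a real vendor SDK; `complete` is whatever callable your chosen provider gives you, and the per-call cost figure is an estimate you supply.

```python
from typing import Callable

class BudgetedClient:
    """Provider-agnostic wrapper: swap vendors by passing a different `complete`
    callable, and stop spending when the trial budget is reached (illustrative)."""

    def __init__(self, complete: Callable[[str], str],
                 cost_per_call: float, monthly_cap: float):
        self.complete = complete
        self.cost_per_call = cost_per_call
        self.monthly_cap = monthly_cap
        self.spent = 0.0

    def run(self, prompt: str) -> str:
        if self.spent + self.cost_per_call > self.monthly_cap:
            raise RuntimeError("Trial budget cap reached - review costs before continuing")
        self.spent += self.cost_per_call
        return self.complete(prompt)
```

Because the wrapper only depends on a plain function, switching providers mid-trial is a one-line change rather than a migration project, which is the whole point of staying cloud-agnostic.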
4. Local Authorities Facing Compliance Gaps
Councils accelerating digital transformation faced a new pressure in 2025: aligning with the EU AI Act (for cross-border services) and the UK’s emerging regulatory frameworks.
Failed or stalled projects often shared features such as:
unexplainable automated decisions
missing human oversight
unclear data governance
procurement choices made without risk classification
vendors unable to demonstrate model transparency
Lesson for 2026:
Public services require explainability, traceability, and auditability.
Buy from suppliers who provide documentation, dashboards, and ongoing support, not mystery models.
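For councils, "traceability and auditability" ultimately means being able to show, for any automated decision, what went in, which model version produced it, and which human signed it off. A minimal in-memory sketch of such an audit trail follows; the class and field names are assumptions, and a real deployment would persist to a database.

```python
from datetime import datetime, timezone

class DecisionAudit:
    """Record every automated decision with its inputs, model version, and
    human reviewer, so none slips through without oversight (illustrative)."""

    def __init__(self):
        self.entries = []

    def record(self, case_id: str, model_version: str,
               inputs: dict, outcome: str, reviewer: str = None) -> dict:
        entry = {
            "when": datetime.now(timezone.utc).isoformat(),
            "case_id": case_id,
            "model_version": model_version,
            "inputs": inputs,
            "outcome": outcome,
            "reviewer": reviewer,  # None = not yet human-checked
        }
        self.entries.append(entry)
        return entry

    def awaiting_review(self):
        """Decisions still lacking the human oversight regulators expect."""
        return [e for e in self.entries if e["reviewer"] is None]
```

A daily report on `awaiting_review()` is a simple, demonstrable control, which is precisely the kind of evidence the stalled projects above could not produce.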
5. XR Pilots That Didn’t Scale
Several heritage XR pilots launched with enthusiasm but struggled to move beyond prototype stage.
Common causes:
lack of integration with existing visitor journeys
insufficient content strategies to sustain long-term use
underestimating hardware and accessibility constraints
impressive visuals, but unclear purpose or impact metrics
Lesson for 2026:
Pilot with a plan for continuity.
XR succeeds when it solves a real operational or educational problem, not when it exists for novelty alone.
Final Thought: Fail Fast, Learn Carefully
2025 taught us that AI failure isn’t a sign to avoid the technology.
It’s a sign to adopt it more intelligently.
The most resilient organisations this year were the ones that:
started small
measured clearly
invested in staff skills
insisted on transparency
kept humans at the centre of creative and cultural work
If SMEs and heritage organisations carry these lessons into 2026, they will make smarter, safer, more strategic AI investments, and build lasting capability rather than hype-driven risk.