What Happens When AI Gets It Wrong?

Designing for Failure in Cultural Contexts

The Risks of Error in Cultural AI

AI is often celebrated for its ability to accelerate processes, unlock insights, and create new forms of engagement. But in sectors like heritage, education, and the creative industries, accuracy matters. An error isn’t just a wrong answer: it can distort historical narratives, misrepresent communities, or erode public trust.

When we deploy AI in cultural contexts, the stakes are higher than in many commercial settings. A flawed product recommendation is one thing; a flawed representation of history, or a misattribution of cultural heritage, is another.

Common Failure Modes

AI systems in these sectors face a distinctive set of failure modes:

  • Misrepresentation of heritage
    Generative models may “fill gaps” with invented detail, presenting conjecture as fact.

  • Geometric inaccuracy
    3D reconstruction methods such as Gaussian Splatting can produce visually appealing models whose underlying geometry is unreliable (a simple accuracy check is sketched after this list).

  • Bias in datasets
    Training data often reflects existing inequalities, leading to skewed or incomplete narratives.

  • Overconfidence
    Outputs are often presented without error bounds, which greatly reduces their value for conservation work.

  • Lack of transparency
    Users may not know whether they are engaging with fact, inference, or fabrication.
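
To make the geometric point concrete, here is a minimal sketch of an accuracy check: it compares a reconstructed point cloud against independently surveyed reference points using nearest-neighbour distances. The function name geometric_error, the stand-in arrays, and the 2 cm tolerance are illustrative assumptions, not an established workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_error(reconstructed: np.ndarray, reference: np.ndarray) -> dict:
    """One-sided nearest-neighbour error from survey points to the model.

    reconstructed: (N, 3) points sampled from the 3D model (e.g. a splat export).
    reference:     (M, 3) independently surveyed ground-truth points.
    """
    tree = cKDTree(reconstructed)
    distances, _ = tree.query(reference)  # distance (in model units) per survey point
    return {
        "mean_error": float(distances.mean()),
        "p95_error": float(np.percentile(distances, 95)),
        "max_error": float(distances.max()),
    }

# Illustrative use with random stand-in data: flag models whose 95th-percentile
# error exceeds a (hypothetical) 2 cm survey tolerance, assuming units of metres.
model = np.random.rand(10_000, 3)   # stand-in for a reconstructed point cloud
survey = np.random.rand(500, 3)     # stand-in for surveyed control points
report = geometric_error(model, survey)
if report["p95_error"] > 0.02:
    print("Model exceeds survey tolerance:", report)
```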


Mitigating the Risks

Rather than discarding AI, cultural organisations can design for failure by anticipating and addressing these weaknesses.

  • Human-in-the-loop validation
    Subject experts should review outputs before they are shared with the public.

  • Clear provenance
    Systems should distinguish between evidence-based reconstructions and AI-generated extrapolations (a labelling sketch follows this list).

  • Error bounds
    Quantitative outputs should be published with explicit error bounds, so audiences and conservators can judge how much weight to place on them (a bootstrap sketch follows this list).

  • Bias audits
    Regular testing can identify where datasets fail to reflect diverse perspectives (a simple representation check is sketched below).

  • Contextual communication
    Making limitations visible helps audiences understand what AI can and cannot do.
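
One way to make the provenance distinction machine-readable is a simple label attached to every element of a reconstruction. The sketch below assumes a three-way split (evidence, expert inference, AI extrapolation); the class and field names are hypothetical, not an established metadata standard.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Provenance(Enum):
    EVIDENCE = "evidence-based"          # directly supported by records or survey data
    INFERENCE = "expert inference"       # interpreted by a subject specialist
    AI_EXTRAPOLATION = "AI-generated"    # model output with no direct evidence

@dataclass(frozen=True)
class ReconstructionElement:
    description: str
    provenance: Provenance
    source: Optional[str] = None         # citation, scan ID, or model name

elements = [
    ReconstructionElement("North wall masonry", Provenance.EVIDENCE, "scan-2023-014"),
    ReconstructionElement("Roof timber layout", Provenance.INFERENCE, "site report, 1998"),
    ReconstructionElement("Painted interior scheme", Provenance.AI_EXTRAPOLATION,
                          "generative infill, unverified"),
]

# Anything not evidence-based can then be surfaced to audiences as conjecture.
for e in elements:
    print(f"{e.description}: {e.provenance.value} ({e.source})")
```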
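
For error bounds, a percentile bootstrap is one simple, general way to attach a confidence interval to a measurement. The sketch below is a minimal example; the wall-length measurements are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bootstrap_ci(samples: np.ndarray, n_boot: int = 10_000, level: float = 0.95):
    """Percentile-bootstrap confidence interval for the mean of `samples`."""
    means = np.array([
        rng.choice(samples, size=samples.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(means, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return samples.mean(), (lo, hi)

# e.g. repeated measurements of the same wall length taken from a 3D model (metres)
measurements = np.array([4.512, 4.498, 4.530, 4.505, 4.521, 4.493])
estimate, (lo, hi) = bootstrap_ci(measurements)
print(f"Wall length: {estimate:.3f} m (95% CI {lo:.3f} to {hi:.3f} m)")
```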
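
And for bias audits, even a crude representation check can reveal gaps. This sketch compares how often each region appears in a collection against a reference share; the records, reference shares, and ten-percentage-point threshold are all illustrative assumptions.

```python
from collections import Counter

# Stand-in catalogue records; in practice these would come from a collection database.
records = [
    {"id": 1, "region": "Europe"}, {"id": 2, "region": "Europe"},
    {"id": 3, "region": "Europe"}, {"id": 4, "region": "West Africa"},
    {"id": 5, "region": "South Asia"},
]
# Hypothetical target shares, e.g. derived from the communities an organisation serves.
reference_share = {"Europe": 0.4, "West Africa": 0.3, "South Asia": 0.3}

counts = Counter(r["region"] for r in records)
total = sum(counts.values())
for region, target in reference_share.items():
    actual = counts.get(region, 0) / total
    flag = "UNDER-REPRESENTED" if actual - target < -0.10 else "ok"
    print(f"{region}: {actual:.0%} of records vs {target:.0%} target [{flag}]")
```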


Why It Matters

Heritage and education thrive on trust. Missteps can damage credibility and risk alienating the very communities that cultural bodies aim to serve. At the same time, careful design can turn AI into a powerful enabler, bringing complex stories to life without compromising integrity.

By building resilience into the use of AI, we move from asking “what if it fails?” to “how do we learn from when it fails?”.
