Copyright Reform in the Age of AI
What Should Come Next?
As generative AI systems become more powerful, and more commercially dominant, copyright law is once again under pressure to adapt. In the UK and across Europe, policymakers are now grappling with the legal, ethical, and economic implications of models trained on vast amounts of existing creative work.
The stakes are high. What’s decided now will shape how artists, researchers, educators, and technology developers interact with AI for years to come.
At Aralia, we’ve worked at the intersection of AI and creative IP for over a decade. Our view? The future of copyright should support innovation, yes, but not at the expense of creators, small enterprises, or cultural integrity.
Here’s where we think copyright reform needs to go next.
1. Opt-Out Alone Isn’t Enough
Recent proposals, such as those from Meta and Google, have leaned heavily on “opt-out” frameworks, giving rights holders the ability to request that their work not be used to train AI models.
But opt-out is backwards. It assumes consent unless explicitly withdrawn, and in a world where content is scraped at scale, that’s nearly impossible to manage.
We believe opt-in should be the default for commercial training datasets, particularly when used to create monetizable outputs. If that’s not technically feasible, platforms must at least:
Offer transparent audit trails of training data (sketched below)
Support verified takedowns
Remunerate creators where their work contributes meaningfully to outputs
The alternative is an AI economy built on unlicensed value extraction, something copyright was designed to prevent.
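To make the first of those obligations concrete, here is a minimal sketch of what a per-item audit record for a commercial training corpus could look like. The schema and the helper function are our own illustrative assumptions, written in Python for readability; they are not an existing standard or any platform’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TrainingItemRecord:
    """One auditable entry per work ingested into a training corpus (illustrative only)."""
    source_url: str                   # where the work was obtained
    rights_holder: Optional[str]      # identified owner, if known
    licence: Optional[str]            # e.g. "CC-BY-4.0", "all-rights-reserved", or None if unknown
    opt_in: bool                      # explicit consent recorded for commercial training
    ingested_at: datetime             # when the item entered the corpus
    takedown_requested: bool = False  # set once a verified takedown request is received

def eligible_for_commercial_training(records: list[TrainingItemRecord]) -> list[TrainingItemRecord]:
    """Keep only items with explicit opt-in consent and no outstanding takedown request."""
    return [r for r in records if r.opt_in and not r.takedown_requested]
```

Even a record this simple would let a platform answer the two questions rights holders ask most often: was my work used, and did I agree to that?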
2. Not All Use Is “Fair Use”
Some AI advocates argue that training on publicly available data falls under fair use (or text and data mining exemptions in the EU). But access ≠ permission.
In reality:
Public availability is not the same as public domain
Many datasets include copyrighted, commercially valuable content
Small creators and heritage institutions often lack the legal firepower to challenge misuse
We believe copyright frameworks must clarify where the line is between legitimate analysis and automated appropriation, especially when training leads to commercial gain.
3. The Role of Attribution and Provenance
Today’s AI systems rarely show their working. Text, images, or 3D assets can be generated with no reference to original sources, even if those sources heavily influenced the outcome.
But attribution matters. In creative industries, academia, and heritage, knowing where content comes from is critical to trust.
Future copyright reform should require:
Clear metadata trails for AI-generated content (see the sketch below)
Disclosures of data sources and training frameworks
Technical mechanisms to trace derivative works
This isn’t about stopping AI.
It’s about protecting authorship, creative identity, and the cultural value of provenance.
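Existing initiatives, such as the C2PA content-credentials work, already point in this direction. To show the shape of the idea, here is a hypothetical provenance manifest that could travel with a generated asset. The field names and the attach_manifest helper are assumptions made for illustration; this is not the C2PA schema or any model provider’s API.

```python
from dataclasses import dataclass, field, asdict
import hashlib
import json

@dataclass
class ProvenanceManifest:
    """Hypothetical provenance record for a generated asset (illustrative, not a real standard)."""
    asset_sha256: str              # fingerprint of the generated file
    generator: str                 # model or tool that produced it
    training_data_disclosure: str  # link to a published summary of training sources
    source_works: list[str] = field(default_factory=list)  # works known to have influenced the output

def attach_manifest(asset_bytes: bytes, generator: str, disclosure_url: str,
                    source_works: list[str]) -> str:
    """Return a JSON manifest that could be embedded in, or shipped alongside, the asset."""
    manifest = ProvenanceManifest(
        asset_sha256=hashlib.sha256(asset_bytes).hexdigest(),
        generator=generator,
        training_data_disclosure=disclosure_url,
        source_works=source_works,
    )
    return json.dumps(asdict(manifest), indent=2)
```

In practice a manifest like this would need to be cryptographically signed to be trustworthy; the point here is simply that the information can be captured at the moment of generation rather than reconstructed after the fact.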
4. Reform Must Work for SMEs and the Public Sector
Too much IP policy is written with global tech giants in mind. But small businesses, local archives, museums, and educators are also content owners, and they’re often the most vulnerable to unlicensed scraping.
Any new copyright regime should:
Be enforceable at a small scale
Avoid litigation as the only recourse
Provide accessible rights registration or usage dashboards
AI innovation doesn’t just belong to Big Tech. We need frameworks that support responsible development across all levels of the creative economy.
5. Licensing Frameworks for a Mixed Economy
Some form of collective licensing, similar to how music royalties are managed, may be part of the solution. If AI companies benefit from content, they should pay into systems that return value to rights holders.
This could include:
Tiered licensing models for commercial vs. non-commercial AI
Opt-in creator registries
Revenue-sharing schemes based on dataset contribution (illustrated below)
It’s not a perfect solution, but it moves us closer to fair compensation and sustainable innovation.
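To illustrate the last of those ideas, here is a toy pro-rata split of a licensing pool by dataset contribution. Weighting by the number of licensed works contributed is an assumption made for the example; a real scheme would need a far more careful measure of contribution.

```python
def pro_rata_shares(contributions: dict[str, float], licensing_pool: float) -> dict[str, float]:
    """Split a licensing pool across rights holders in proportion to their dataset contribution.

    `contributions` maps each rights holder to a non-negative weight, e.g. the number of
    licensed works they contributed; the weighting scheme itself is a policy choice.
    """
    total = sum(contributions.values())
    if total == 0:
        return {holder: 0.0 for holder in contributions}
    return {holder: licensing_pool * weight / total for holder, weight in contributions.items()}

# Example: a £1m pool split across three fictional contributors
print(pro_rata_shares({"archive_a": 120_000, "label_b": 60_000, "creator_c": 20_000}, 1_000_000))
```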
Final Thoughts: Creativity Needs Guardrails, Not Handcuffs
Copyright reform is not about stopping AI. It’s about making sure we build this next chapter of technology with creators, educators, and cultural institutions still in the room.
The goal should be clear: a system that supports innovation without erasing the authors who make it possible.
We don’t need to dismantle copyright to make AI work. We just need to update it thoughtfully, transparently, and with fairness at its core.