Rebuttal: Why Sam Altman's Vision for AGI Risks Undermining Human Intelligence

In recent weeks, OpenAI CEO Sam Altman has doubled down on the transformative promise of Artificial General Intelligence (AGI). In two public statements, an interview with the Marketing AI Institute and his own blog post "Three Observations", Altman presents AGI as both inevitable and urgently near.

“AGI will have a much bigger economic impact than the internet.”
“The world will change much more quickly than people think.”
“AI that is smarter than humans will be a force multiplier.”
— Sam Altman

At Aralia, we are pro-AI. We build with it. We innovate around it. But we also believe these kinds of statements, untethered from clear definitions or accountability, risk distorting how people, especially young people, understand what AI is and where it’s headed.


AGI: A Moving Target

Altman refers to AGI as if it were an engineering milestone we are just about to cross. But what constitutes AGI is still a matter of debate, even among the most respected AI researchers.

There’s no agreed-upon definition, no reproducible benchmark, and no shared test that distinguishes AGI from the highly capable but narrow systems we currently have.

While OpenAI may be aiming to build “AI systems that are smarter than humans,” that is not the same as achieving general intelligence, which implies common-sense reasoning, adaptive creativity, ethical context, and real-world problem solving. None of these exists in today’s models in any reliable, explainable form.


“Faster Than You Think”: A Convenient Timeline

Altman warns that the world is changing faster than people realise, and that AI adoption will outpace government regulation, industry understanding, and public readiness.

This may well be true. But it raises a deeper question: whose responsibility is that?

Tech leaders are not simply observers. They are agents of that change. To suggest that society needs to “adapt quickly” to technologies being released with limited oversight or public understanding is to shift the burden of caution from developers to everyone else.

In reality, accelerationist narratives often serve corporate goals. They create a sense of urgency that makes slower, more democratic processes, such as regulation, ethics consultation, and public debate, look obsolete.

We disagree. Speed does not justify opacity.


Creativity Isn’t Just Output

In both statements, Altman singles out creativity and marketing as sectors “most impacted” by AGI, with the implication that they will be significantly replaced or augmented by smarter AI systems.

This view reduces creativity to efficiency and productivity, ignoring that much of its value lies in insight, cultural context, emotional depth, and human nuance.

AI can generate. But it does not know why. It cannot interpret its output in a way that is consistent, self-aware, or culturally grounded. It mimics form, not intent. That’s not creativity. It’s computation.

The assumption that generative AI can replace these functions misunderstands what makes them valuable in the first place.


Let’s Ground the Conversation

Altman’s vision of AGI might one day be partially realised, but today, it’s still speculation backed by market share, not scientific consensus.

We need to talk about:

  • The actual capabilities and limits of current AI

  • The risk of placing too much creative, economic, or ethical weight on a system that doesn’t think

  • How to empower young people to learn about AI, not just from it

We don’t believe that creativity, critical thinking, or the humanities are at risk of becoming obsolete. If anything, they will become even more vital in a world awash with synthetic content and AI-generated noise.


Our Message to the Next Generation

Don’t buy the myth that you’ll be left behind by a programme that is smarter than you. Focus instead on mastering how these tools work, why they matter, and when to challenge them.

We should teach young people not just to keep up with AI, but to lead it—ethically, critically, and creatively.


📖 Missed Parts 1 and 2 of our series on AI and the next generation? Read them here.

💬 Coming soon: our guide to understanding real vs. speculative AI progress—and how to navigate the difference.
