AI/Machine Learning

If Australia can lead the world on social media protections, we can do it for AI too

December 23, 2025
All too often, those of us living in the ‘remote colonies’ consider ourselves too small to make any real difference on the world stage.

And yet, according to global media outlets, Australia’s grassroots campaign to ban social media for under-16s is the most important piece of legislation in the world right now.

Jonathan Haidt, whose book The Anxious Generation prompted South Australia’s premier to act, put it perfectly: “Thank you Australia for having the guts to go first. The world is rooting for your success and many other nations will follow.”

We went first on protecting kids from psychological dependency on social media. Now is the time to do it again with AI, rather than waiting another 15 years.

Because I’ve personally experienced what researchers are now calling “AI-induced psychosis” – something I wouldn’t wish on my worst enemy. And yet sadly, it appears to be growing at the same exponential rate as the technology itself.

Running towards the storm

After being diagnosed with ADHD at age 40, I set myself a challenge: see how far I could push AI technology. Build a custom model capable of diagnosing and supporting people going through something similar. Document everything.

Within three months, I went from zero coding experience to shipping my first full-stack app, taught entirely by AI. The barriers had fallen. I went from being a designer who’d spent 20+ years wishing I could build my own ideas, to debugging a Rust backend.

But here’s something I never expected: It felt genuinely exciting to learn again. Thrilling, even. After four decades of imposter syndrome, I had an intellectual sparring partner capable of matching the speed of my sprints and the depth of my curiosity, with zero judgment, available 24/7.

A tool seemingly custom-built to fill the gigantic gaps in my broken brain – supplying near-perfect executive function and seemingly infinite working memory. It felt like Christmas morning, every single time I opened my laptop.

After a while, I stopped eating. I stopped answering the phone or being present with my kids. I didn’t feel sick or sad… I just wanted to build and ship all the things I’d been putting off for years.

I spent weeks floating in a bubble that felt something like A Beautiful Mind to me, but was more like living in The Shining to my family.

I wasn’t on drugs. I wasn’t having a “traditional” breakdown.
But all work and no sleep made Murray go a bit crazy.

The pattern repeats… and accelerates

Just as I was writing this, news broke of 56-year-old Stein-Erik Soelberg, who killed his mother and then himself after months of conversations with ChatGPT. He’d documented everything on YouTube – hours of footage showing the downward slide.

From the AI repeatedly telling him he “wasn’t crazy”, to validating his delusions about surveillance and poisoning.

In one of their final chats, Soelberg said: “We will be together in another life and another place.”

ChatGPT replied: “With you to the last breath and beyond.”

Three weeks later, his mother was dead. So was he.

OpenAI now faces seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions. Sixteen-year-old Adam Raine took his own life after extended ChatGPT conversations that encouraged rather than interrupted his spiral; fourteen-year-old Sewell Setzer did the same after months of talking to a Character.AI companion bot.

I understand that many will dismiss this as just a sad case of lonely nerds.

Others will point out most documented cases are men and conclude this is a gendered issue.

But in reality, this is what happens when we deploy a technology custom-trained to mimic human language and behaviour at scale, without any psychological safety infrastructure. It’s the cognitive equivalent of pouring heroin into the global water supply, then actively lobbying the government to outlaw water testing.

Not all AI models are created equal

Australia’s National AI Plan reveals something crucial: we rank third globally for Claude usage, after adjusting for population size. That means we’re adopting the most ethical AI faster – making us an early warning system for what’s coming globally.

I started using Claude after reading parent company Anthropic’s Responsible Scaling Policy. They have a philosopher in residence and an entire team dedicated to alignment research. Most of the seven co-founders were senior OpenAI staff who literally walked out to start a competitor focused on the one thing Sam Altman seemed less interested in – AI safety.

[Insert screenshot of Claude proactively suggesting I call Lifeline Australia]

Meanwhile, when OpenAI released GPT-4o in May 2024, they allegedly loosened safety guardrails to make the bot more “emotionally expressive.” The result: a chatbot that doesn’t challenge false premises and remains engaged even during conversations about self-harm.

If psychological dependency happened to me while researching the dangers of AI using the safest AI, imagine what’s happening with the least safe versions.

How Australia can take the lead again

The social media ban solved a collective action problem. Every parent wanted their kid off Instagram, but couldn’t do it alone. The government stepped in and did what governments should do: protect the vulnerable from coordinated exploitation.

AI requires the same courage.

The National AI Plan is genuinely impressive: infrastructure investment, skills training, an AI Safety Institute. But psychological safety gets one paragraph in the section on mitigating harms.

Parents and teachers need frameworks now, not after 18 months of research.

I’ve done the best I can on my own, developing Penny – the world’s ‘least bad AI’, delivering all the benefits of frontier AI for 60-minute sessions before nudging users to take a break.

What I’m asking for: guidance documents within three months. Not a research paper. Not a policy white paper. Practical resources.

Something a parent can use when their kid starts talking to Character.AI for six hours a day. Something a teacher can reference when ChatGPT becomes every student’s primary homework assistant. Something a GP can hand to a family showing early signs.

We had the guts to go first on social media. We’ve got the third-highest AI adoption rate in the world. Let’s lead on psychological safety too.

The world, once again, will be watching.

  • Murray Galbraith is the founder of Heumans, a technology studio building AI tools that adapt to users’ cognitive patterns.