I’m personally pro-tech and use AI every day. I’m genuinely excited by progressive tools that expand what creators and businesses can do.
But the pace of what’s happening — and how it’s being rolled out — has been jarring: creators sidelined, copyright murky at best, implementation racing ahead of consent.
Right now it feels like a land-grab: who can move faster, get larger, learn more, train more — too often at society’s expense.
That’s the backdrop to the final keynote at SXSW Sydney 2025, a fireside chat in which OpenAI’s chief of global affairs (i.e. its geopolitical risk-management boss), Chris Lehane, was interviewed by the AFR’s Paul Smith.
Lehane offered a polished vision of “the age of intelligence” and a better world in 25 years. The subtext: history shows each revolution trumps the last, so adaptation is everyone’s responsibility — and if you don’t adapt, that’s on you.
Here are the biggest takeaways — and why I’m not fully buying the story when we could be designing a better world from the ground up.

OpenAI’s Chris Lehane
Lehane talked about “democratising AI,” yet OpenAI’s models are trained on the work of millions of creators without consent — writers, designers, journalists, musicians — whose skills are now quietly being automated, their livelihoods eroded. The company frames this as a mistake, or a matter of legal interpretation.
The pattern is clear: take first, apologise later.
He assured the audience that OpenAI will create more new jobs, not destroy them — that every technological leap has done so.
“Not to be Pollyannaish,” he added, before acknowledging that, yes, some people will be displaced. But not to worry, there will be new “marketplaces” for training, certifications, and “AI literacy.”
In other words, the same company that disrupted your industry will now sell you the toolkit to survive in it.
It’s a clever sleight of hand — a moral outsourcing. Governments must decide how to regulate. Societies must adapt. Workers must retrain.
OpenAI just builds the tools. And if the world struggles to catch up, well, that’s not their fault.
Lehane’s optimism stretched 25 years into the future.
By then, he said, AI will have cured cancer, solved education, and powered abundance.
Everyone will “live the good life.”
But there’s a darker question lurking beneath that vision: what happens when the next generation no longer learns how to do things? When every act of creation, repair, or discovery is reduced to a voice command or a prompt?
Will we still know how to build, teach, or write, or will those skills quietly vanish, outsourced to invisible systems we no longer understand? If AI becomes the hand that paints, the voice that sings, the mind that solves — what, then, becomes of ours?
Lehane spoke of human-led design, of putting people at the centre of this transformation. But so far, humanity hasn’t been leading; it’s been reacting.
The jobs are already disappearing. The skills gap is widening. The systems are learning faster than the people they’re supposed to serve.
As the lights dimmed, the crowd clapped and laughed, not out of awe, but irony. After Smith’s push for real answers on copyright and AI’s impact, all he got was hope.
When he closed with “we hope the world will get better,” the room knew that was all they’d been given.
OpenAI’s vision promises a better world — if we just trust them, if we just keep up. Yet every revolution has its cost. And before we let another wave of “innovation” wash over us, perhaps the real question is not what AI will create — but what it will take along the way.
- George Hedon is the founder of Pause Fest and the Pause Awards.


