US artificial intelligence giant OpenAI is looking to open a base in Sydney by the end of 2025.
The company behind ChatGPT, now worth around A$750 billion, has been busy signing local deals over the past 12 months, including with UNSW and, most recently, the Commonwealth Bank, as well as ramping up its lobbying efforts as the federal government ponders AI regulation.
Digital economy assistant minister Andrew Charlton recently headed to the US west coast to meet with OpenAI’s top brass and other AI companies to pitch Australia’s data centre potential, a key bottleneck for AI’s growth.
Former Tech Council of Australia CEO Kate Pounder also signed on as OpenAI’s local policy liaison.
“I’m particularly proud that it also reflects the amazing strength of the local developer community who are building new products on OpenAI’s core models. Fun fact – Australia is a top 10 market globally for OpenAI for developers,” Pounder said.
“It’s also recognition that there is growing momentum across the business, government and research sectors to define and grasp Australia’s unique opportunities in AI, and Australia deserves to be part of the new wave of global expansion, alongside countries like India.”
No decision has been made on a location for the Sydney site, and the business is currently looking to hire up to 10 people.
Former Google exec Will Snell joined OpenAI in February as its ANZ go-to-market lead.
Teen death
But not everything is going smoothly for the US tech giant. The family of a 16-year-old Californian school student is suing OpenAI and CEO Sam Altman after their son used ChatGPT to seek advice on how to take his own life. The conversations were published in The New York Times.
The chatbot encouraged the teen to call a helpline, writing “whatever’s behind the curiosity, we can talk about it. No judgement”, but it also advised him on how to bypass ChatGPT’s safeguards so it would provide information about suicide.
OpenAI published a post this week, titled ‘Helping people when they need it most’, saying “Our top priority is making sure ChatGPT doesn’t make a hard moment worse”.
Addressing the issue of self-harm, the company said “recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us”, adding: “If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help.”
OpenAI said it plans to introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT.
The company also revealed it is scanning user conversations for harmful content and, if a conversation is flagged, sending it for human review, which may lead to the police being called.
“When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts,” the post says.
“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
- If you need someone to talk to, call Lifeline on 13 11 14 or Beyond Blue on 1300 22 46 36.