The Albanese government has unveiled Australia’s first National AI Plan, opting for voluntary transparency measures rather than mandatory rules on how businesses disclose their use of artificial intelligence.
The 37-page plan sets out a broad roadmap for building what the government calls “an AI-enabled economy” while maintaining a “careful balance between innovation and protection from potential risks”.
The government also describes the strategy as a “key pillar” of its broader Future Made in Australia agenda, positioning AI as central to economic capability and long-term industry development. It is not the first time the government has drawn this connection.
The National AI Plan does not introduce new legal obligations for businesses using AI. Instead, it positions voluntary guidance, industry partnerships, and existing laws as the primary mechanisms for oversight.
The report states Australia’s approach will be “proportionate, targeted and responsive,” with regulation introduced only “where necessary”.
For businesses, including the SMEs the government repeatedly frames as central to Australia’s AI adoption challenge, the detail of how support will be delivered sits largely in the separate programs the plan draws together.
The plan encourages businesses, content creators, and AI developers to clearly label AI-generated material through visible notices, watermarking and metadata. However, it stops short of requiring this disclosure.
This mirrors guidance already released by the National AI Centre, including its ‘Being clear about AI-generated content’ resource, which outlines labelling, watermarking and metadata recording as voluntary best-practice measures.
No standalone AI Act, no new laws
The National AI Plan makes clear the government will not introduce a standalone AI Act, instead keeping regulation within Australia’s existing legal system. The report states “Australia’s robust existing legal and regulatory frameworks will remain the foundation for addressing and mitigating AI-related risks”.
Rather than pursuing a single overarching AI law, the government is committing to a sector-by-sector approach, reiterating that regulation will be “proportionate, targeted and responsive” and introduced only “where necessary”, based on specific harms.
The plan also reinforces that responsibility will stay with existing regulators: “Regulators will retain responsibility for identifying, assessing and addressing potential AI-related harms within their respective policy and regulatory domains,” the report reads.
In the report, the government contrasts this approach with overseas models, emphasising Australia’s legal system is already “largely technology-neutral” and well-suited to managing new risks without new legislation. This effectively rules out an EU-style AI Act and formalises a light-touch, incremental model of oversight.
Minister for Industry and Innovation Tim Ayres said the plan is intended to build public trust in how AI is deployed while keeping barriers to innovation low.
“This plan is focused on capturing the economic opportunities of AI, sharing the benefits broadly, and keeping Australians safe as technology evolves,” Minister Ayres said.
Assistant Minister for Science and Technology Andrew Charlton added the government’s agenda is designed to “attract positive investment, support Australian businesses to adopt and create new AI tools, and address the real risks faced by everyday Australians”.
The plan is anchored in three goals: capturing the opportunities, spreading the benefits, and keeping Australians safe.
Measures under these goals include improving digital and physical infrastructure; expanding AI skills across schools, TAFEs and the workforce; and establishing guardrails for responsible innovation.
The government has also outlined practical “next steps” flowing directly from the plan: standing up the AI Safety Institute and finalising a new GovAI Framework to guide secure and responsible adoption across the Australian Public Service.
As part of that framework, chief AI officers will be appointed in every government department.
AI Safety Institute
A centrepiece of the plan, announced last week, is the creation of the AI Safety Institute (AISI), backed by a modest $29.9 million and due to be established in early 2026.
The Institute will “monitor, test and share information on emerging AI capabilities, risks and harms” and provide advice to government, regulators, unions and industry. It will also support regulators to ensure AI companies comply with Australian law and “uphold legal standards of fairness and transparency”.
According to the plan, the AISI will focus on both “upstream AI risks”, relating to how frontier models are built and trained, and “downstream AI harms”, meaning real-world impacts on people. It will form part of Australia’s international engagement, working with partners including the new International Network of AI Safety Institutes.
Government officials present the strategy as a long-term, evolving framework. Ayres said the plan will be continuously refined: “As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe”.
Charlton framed it as a people-centred approach: “This is a plan that puts Australians first… promoting fairness and opportunity”.
The heavier financial commitments, including the pipeline of private data centre investment cited in the plan, sit largely outside the new measures. Instead, the government emphasises that it has “more than $460 million” already committed across existing AI programs, with the plan designed to coordinate and guide this work.
- This story first appeared on SmartCompany.