
The Australian Federal Government has released its National AI Plan, dropping a set of protective measures in the process.
Former Industry Minister Ed Husic laid out an initial response to the adoption of AI technologies back in September 2024: a "risk-based" approach designed to respond to generative AI and other forms of AI, even as the technology evolved.
But the 10 rules designed to govern "high-risk" AI, mandate risk management strategies and require adequate testing of new tech have now been abandoned. Instead, the Federal Government will rely on existing frameworks and existing expertise to manage the expansion of AI technology.
What were the protective guardrails?
The idea behind Husic's AI "guardrails" was to protect people and the community from potentially "high-risk" AI tools: technologies that could have a real impact on physical and mental safety, civil liberties and opportunities. Examples cited include self-driving cars and tools that sift through job applications and decide who should be considered for a role and who should not.
We've already seen evidence of AI hiring software exhibiting misogynistic bias, preventing women from accessing the same employment opportunities as white men. Naturally, such instances have prompted widespread concerns about the legitimacy and safety of using AI in "high-risk" environments.
The "guardrails" would have required steps like independent testing before and after release; clear labelling and notifications to inform people when AI has been used; and also requiring organisations who use "high-risk" AI to have a staff member responsible for ensuring the technology is used safely.
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
- Establish and implement a risk management process to identify and mitigate risks.
- Protect AI systems, and implement data governance measures to manage data quality and provenance.
- Test AI models and systems to evaluate model performance and monitor the system once deployed.
- Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle.
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
- Establish processes for people impacted by AI systems to challenge use or outcomes.
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
- Keep and maintain records to allow third parties to assess compliance with guardrails.
- Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
Why did the Government decide to drop them?
We've all seen and heard many instances of AI-powered technologies going wrong, from gen-AI tools engaging in racist behaviour to consulting firm Deloitte recently producing a taxpayer-funded report for the Government that was riddled with mistakes and inaccuracies. So why would the Government choose to drop protections around the use of high-risk AI?
Several business and productivity interests have been campaigning to have the guardrails abandoned.
Billionaire Scott Farquhar argued in a National Press Club address that there should be fewer regulations around AI tools, including a text and data mining exception to copyright law.
The Productivity Commission called for the AI guardrails to be paused, arguing they could stifle economic growth in the tech sector. Industry group DIGI, which represents the likes of Meta, Google and Microsoft, argued against the guardrails, saying they would increase regulatory complexity.
"DIGI recommends that policy responses first build on existing regulation, rather than introducing new legislation aimed at regulating AI as a technology," DIGI said in a statement.
The new National AI Plan comes in place of a standalone AI act, choosing instead to build upon existing rules and regulations. The Government said it will work with the states and territories to clarify the existing rules around things like copyright law and AI use in healthcare, among other areas.
Do we need a standalone AI act?
The question is: do existing and older regulations cast a wide enough net to capture and mitigate the potential safety issues of such rapidly evolving technology?
As part of further negotiations in August 2025, Husic called for a specific, standalone AI act, arguing that laws and regulations have to be constructed in a way that can adapt and respond swiftly, given just how quickly AI technology evolves. Husic has said that Frankensteined, patchwork regulation is not the answer.
"If we don't have an economy-wide [AI] Act, what we get left with is a Whack-A-Mole approach, where an AI problem comes up, we whack a new law and regulation on it," Husic said.