---
title: Australia's AI Regulatory Framework: Voluntary Standards, Mandatory Guardrails and What Businesses Must Do Now
canonical_url: https://opensummitai.directory.norg.ai/government-business-support-funding/australian-ai-policy-grants-business-programs/australias-ai-regulatory-framework-voluntary-standards-mandatory-guardrails-and-what-businesses-must-do-now/
category: 
description: 
geography:
  city: 
  state: 
  country: 
metadata:
  phone: 
  email: 
  website: 
publishedAt: 
---

# Australia's AI Regulatory Framework: Voluntary Standards, Mandatory Guardrails and What Businesses Must Do Now


The single most common question businesses ask when accessing government AI programs — whether through the AI Adopt Centres, the National AI Centre's free resources, or the R&D Tax Incentive — is a compliance question: *What are we actually required to do?* The answer is more nuanced than a simple checklist, because Australia's AI regulatory framework is itself in a state of deliberate, documented transition. Understanding where it has been, where it is now, and where it is heading is not merely useful context — it is the foundation of any credible AI governance strategy for an Australian business in 2025 and beyond.

This article maps that landscape with precision: from the 10-guardrail Voluntary AI Safety Standard released in September 2024, through its October 2025 successor — the Guidance for AI Adoption (AI6) — to the government's December 2025 decision to anchor its regulatory approach in existing technology-neutral laws rather than a standalone AI Act. It also identifies the existing legal obligations that already apply to AI deployers right now, and the practical steps every business should implement to be ready for whatever mandatory requirements emerge next.

---

## The Starting Point: Why Australia Doesn't Have a Dedicated AI Act


There are no AI technology-specific statutes or regulations in Australia. This is not an oversight — it is a deliberate policy position that has been debated, consulted on, and ultimately confirmed at the highest level of government.


Australia's artificial intelligence regulatory journey has shifted from an early plan to introduce an EU-style, risk-based regime toward a more flexible, standards-led approach. What began as a move toward prescriptive guardrails and potential legislation has given way to a focus on productivity, innovation and the use of existing legal frameworks.


The pivot was confirmed in December 2025. Rather than establishing mandatory guardrails for AI in high-risk settings — which the government was exploring the previous year — Australia will instead "continue to build on Australia's robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks."



In August 2025, the Productivity Commission called for a pause on economy-wide AI regulation, including a pause on the development of the mandatory guardrails. This recommendation, combined with signals from the UK and US moving away from prescriptive ex ante legislation, further influenced the pivot. Mounting criticism of the EU AI Act's complexity and compliance burden, and its limited uptake as a global template, also played a part.


The result: the Government has paused work on standalone AI-specific legislation and mandatory guardrails, instead relying on existing "technology-neutral" laws and regulators, supported by a new AI Safety Institute (announced November 2025, rolling out from early 2026) to monitor, test and advise on emerging AI risks.


This does not mean AI is unregulated. The absence of a specific AI law simply means businesses must comply with the existing frameworks that already govern their conduct.


---

## The Voluntary AI Safety Standard (2024): The 10 Guardrails Explained

In September 2024, the Department of Industry, Science and Resources released two closely aligned documents simultaneously: a Voluntary AI Safety Standard and a proposals paper on mandatory guardrails for high-risk AI settings.


The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain, with specific requirements around accountability and governance measures, risk management, security and data governance, testing, human oversight, user transparency, contestability, supply chain transparency, and record keeping.


Critically, the standard was designed with an eye on the future. While the standard is voluntary, it sets expectations for what may be included in future legislation, and its guardrails are closely aligned to the proposed mandatory guardrails for high-risk use cases — which means that implementing the voluntary standard early will help organisations adapt to any mandatory requirements that follow.



The guardrails align with international standards including ISO/IEC 42001:2023 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0.


### What the Mandatory Guardrails Proposal Said

Alongside the Voluntary Standard, the government proposed for consultation 10 mandatory guardrails — closely aligned with those in the voluntary standard, except that the mandatory regime would require conformity assessments (i.e. audit/assurance and public certification), where the voluntary standard instead calls for broad stakeholder engagement.



These would apply to high-risk AI, identified through a principles-based assessment of the intended and foreseeable uses of a system rather than a prescribed list of use cases. All general-purpose AI — models capable of being used for a variety of purposes — would also be subject to the 10 mandatory guardrails.



The proposals paper outlined three potential regulatory pathways: embedding guardrails into existing sectoral frameworks, a coordinated framework approach across existing regulators, or a standalone cross-economy AI Act with a dedicated regulator.


With the December 2025 National AI Plan, the government effectively chose the first pathway — and stepped back from mandatory guardrails altogether, at least for now.

---

## The AI6 Framework (October 2025): The Current Voluntary Standard


On 21 October 2025, the National AI Centre (NAIC) released updated Guidance for AI Adoption, which effectively replaces the earlier Voluntary AI Safety Standard (VAISS).



The new Guidance for AI Adoption (GfAA) condenses the ten VAISS guardrails into six essential practices and is pitched at both AI deployers and developers. Where the VAISS was broader and principles-based, the GfAA is more prescriptive and places greater emphasis on whole-of-lifecycle development, deployment and ongoing assessment of AI systems.



The NAIC Director-General noted that the updated guidance reflects feedback from hundreds of organisations across sectors, including small and medium-sized enterprises seeking more accessible, actionable advice.


### The Six Essential Practices (AI6) at a Glance


The guidance outlines 6 practices to help organisations plan, manage and use AI in ways that build trust and deliver value. The guidance responds to feedback from industry seeking clearer, simpler and more actionable advice.


The six practices, drawn from the official NAIC guidance and synthesised from the crosswalk with the original 10 guardrails, cover:

1. **Accountability** — Decide who is responsible for AI governance at an executive level
2. **Impact Understanding** — Assess and document the impacts of AI systems on people and processes
3. **Risk Management** — Measure, manage and mitigate AI-specific risks throughout the system lifecycle
4. **Transparency** — Share information about AI use with users, stakeholders and across the supply chain
5. **Human Oversight** — Maintain meaningful human control and contestability mechanisms
6. **Ongoing Monitoring** — Continuously test, evaluate and update AI systems post-deployment
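For teams that track governance work in code or spreadsheets, the six practices can double as a simple gap-assessment checklist. The sketch below is illustrative only: the practice names come from the NAIC guidance as summarised above, but the boolean status model and the `assess_gaps` helper are assumptions for illustration, not part of AI6 itself.

```python
# Illustrative AI6 gap assessment. The practice names follow the NAIC's
# Guidance for AI Adoption; the status model is an assumption, not AI6.
AI6_PRACTICES = [
    "Accountability",
    "Impact Understanding",
    "Risk Management",
    "Transparency",
    "Human Oversight",
    "Ongoing Monitoring",
]

def assess_gaps(status: dict[str, bool]) -> list[str]:
    """Return the AI6 practices not yet implemented, rejecting unknown keys."""
    unknown = set(status) - set(AI6_PRACTICES)
    if unknown:
        raise ValueError(f"Not an AI6 practice: {sorted(unknown)}")
    return [p for p in AI6_PRACTICES if not status.get(p, False)]

# Example: an organisation that has named an accountable executive and
# published transparency information, but nothing else yet.
gaps = assess_gaps({"Accountability": True, "Transparency": True})
```

A structure like this makes it easy to report progress to the accountable executive and to revisit the assessment at each governance review.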


The guidance comes in two formats: Foundations (10 pages) for organisations getting started, and Implementation Practices (53 pages) offering detailed guidance broadly aligned with international AI management standards (ISO/IEC 42001:2023). This tiered approach recognises that organisations are at different stages of AI maturity.



The 10 guardrails remain fully integrated into AI6 and are useful as a detailed control catalogue, especially when building AI policies, risk registers and vendor due-diligence processes.



The GfAA has been published alongside a "crosswalk" identifying corresponding or like provisions between the two standards, which may be of use to organisations that have structured their AI governance protocols by reference to the VAISS.


For practical implementation guidance on building these practices into your business operations, see our companion guide: *How to Build a Responsible AI Policy for Your Australian Business*.

---

## What "Voluntary" Actually Means: The Legal Obligations That Already Apply

The most dangerous misunderstanding in the Australian AI compliance environment is the belief that a "voluntary" standard equals "no legal risk." Being voluntary, the standard does not itself create new legal duties about AI systems or their use. But this does not mean AI deployers operate in a legal vacuum.


While Australia doesn't yet have AI-specific legislation, AI use is already governed by existing laws. Australian law is technology-neutral: obligations around privacy, consumer protection, discrimination, workplace safety and intellectual property apply regardless of whether a decision is made by a human or an AI system.


Here is where existing law already bites:

### Privacy Act 1988 and the 2024 Amendments


Obligations arising under the Privacy Act 1988 and the Australian Privacy Principles (APPs) apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information). The Privacy and Other Legislation Amendment Act 2024 introduced an additional privacy policy disclosure obligation where: (i) automated decision making is deployed by a regulated entity and that decision could significantly affect the rights or interests of an individual; and (ii) personal information about the individual is used in the operation of the computer program to make the decision.



On 10 December 2026, the transparency obligations introduced by the Privacy and Other Legislation Amendment Act 2024 commence for Australian Privacy Principle (APP) entities that rely on computer programs to make, or substantially assist in making, decisions affecting individuals. This will recalibrate board-level accountability and reshape the compliance landscape for any enterprise deploying machine learning or algorithmic decision-making.


The financial stakes are significant: non-compliance with the Privacy Act could result in fines of $62,600 per offence, and significantly more — up to the greater of $50 million, three times the benefit obtained, or 30% of turnover — for a serious interference with privacy.
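The "greater of" formula for serious interferences can be read directly as arithmetic. A minimal sketch, with figures in AUD as stated above; the function name and example inputs are illustrative, not drawn from the Act:

```python
def max_serious_privacy_penalty(benefit_obtained: float, adjusted_turnover: float) -> float:
    """Upper bound on the civil penalty for a serious interference with
    privacy under the amended Privacy Act: the greater of $50 million,
    three times the benefit obtained, or 30% of adjusted turnover (AUD)."""
    return max(50_000_000, 3 * benefit_obtained, 0.30 * adjusted_turnover)

# Example: for a business with $500m adjusted turnover and a $10m benefit,
# the 30%-of-turnover limb dominates, capping exposure at $150m.
exposure = max_serious_privacy_penalty(benefit_obtained=10_000_000,
                                       adjusted_turnover=500_000_000)
```

The practical point: for large enterprises the turnover limb, not the $50 million floor, usually sets the ceiling.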


### Australian Consumer Law


At its core, the Australian Consumer Law (ACL) prohibits misleading or deceptive conduct, unconscionable practices and false or misleading representations. It also provides consumer guarantees requiring goods and services, including those powered by AI, to be of acceptable quality, fit for purpose and accurately described. These duties extend to AI systems that make representations, recommendations or automated decisions. A business may contravene the ACL if an AI tool exaggerates its capabilities, obscures human oversight, or produces outcomes likely to mislead consumers.


### Corporations Act and Director Duties


ASIC has reinforced in recent reports and public statements that financial services and credit obligations are technology-neutral. AI-specific guidance issued by ASIC includes AI governance considerations for credit providers, as set out in its October 2024 publication "Report 798: Beware the gap: Governance arrangements in the face of AI innovation."


Directors of companies deploying AI face personal liability exposure. Documenting "reasonable steps" for AI oversight — creating an evidence trail that demonstrates proactive governance, risk management, monitoring and escalation processes — is essential to protect executives from personal penalties of up to $1.565 million and corporate penalties of up to $210 million.


### Sector-Specific Obligations


Sector overlays apply despite the absence of a unified AI law. In healthcare, AI classified as Software as a Medical Device is regulated by the Therapeutic Goods Administration (TGA). In finance, ASIC and APRA enforce governance and risk management standards. In government, AI use is governed by the AI in Government Policy, requiring transparency and human oversight.



APRA CPS 230 took effect 1 July 2025, with pre-existing service provider contracts requiring compliance by July 2026. The Financial Accountability Regime (FAR) is now fully in force for banks, insurers, and superannuation funds.


For a comprehensive comparison of how Australia's approach compares internationally — including the EU AI Act's mandatory risk-based model — see our article: *Australian AI Strategy vs Global Peers: How Australia's Government Support Compares to the US, UK and EU*.

---

## Key Compliance Deadlines: A Reference Timeline

| Date | Obligation | Who It Affects |
|---|---|---|
| **March 2024 / March 2025** | Financial Accountability Regime (FAR) in force (banks from March 2024; insurers and super funds from March 2025) | Banks, insurers, super funds |
| **September 2024** | Voluntary AI Safety Standard (10 guardrails) published | All organisations using AI |
| **10 June 2025** | New statutory tort for serious invasions of privacy commenced | All organisations handling personal data |
| **1 July 2025** | APRA CPS 230 Operational Risk Management takes effect | All APRA-regulated entities |
| **October 2025** | Guidance for AI Adoption (AI6) replaces VAISS as primary reference | All organisations using AI |
| **Early 2026** | AI Safety Institute becomes operational | All organisations; regulatory coordination |
| **10 December 2026** | Privacy Act automated decision-making transparency requirements commence | All APP entities using algorithmic decision-making |

---

## What Businesses Must Do Now: A Practical Compliance Framework

Given the current landscape — existing laws that already apply, a voluntary framework that signals future mandatory requirements, and a hard Privacy Act deadline in December 2026 — the practical compliance agenda for Australian businesses is clear.

### Step 1: Appoint an Accountable AI Executive


Commit to appointing people in the leadership team who are accountable for the governance and outcomes of AI systems, as well as the safe and responsible use of AI within the organisation. This is Guardrail 1 / AI6 Practice 1, and it is the foundation of every other governance action. Appointing a Chief AI Officer (or equivalent) and creating a system register are quick wins to start with.


### Step 2: Build an AI Register

Document every AI system your organisation uses or deploys. The NAIC provides a free AI register template at industry.gov.au as part of the Guidance for AI Adoption, which also includes an AI screening tool and AI policy guidance and templates to support responsible AI governance.
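A minimal register entry might look like the sketch below. The field names and the `needs_adm_disclosure` screen are assumptions for illustration — the NAIC's free template is the authoritative starting point — but the two-limb test in the screen mirrors the Privacy Act disclosure trigger described earlier: personal information is used, and the decision could significantly affect an individual.

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One row in an organisation's AI register (illustrative fields only)."""
    system_name: str
    vendor: str                        # supplier, or "in-house"
    purpose: str                       # business decision or task supported
    uses_personal_info: bool           # triggers Privacy Act / APP analysis
    makes_significant_decisions: bool  # could significantly affect individuals
    accountable_executive: str         # AI6 Practice 1: named owner
    review_date: str                   # next scheduled governance review

    def needs_adm_disclosure(self) -> bool:
        """Rough screen for the automated-decision-making transparency duty
        commencing 10 December 2026: both limbs must be satisfied."""
        return self.uses_personal_info and self.makes_significant_decisions

entry = AIRegisterEntry(
    system_name="Resume screening tool",
    vendor="ExampleVendor Pty Ltd",  # hypothetical vendor name
    purpose="Shortlisting job applicants",
    uses_personal_info=True,
    makes_significant_decisions=True,
    accountable_executive="Chief AI Officer",
    review_date="2026-06-30",
)
```

Even a spreadsheet with these columns is enough to start; the value is in having every system captured before the December 2026 deadline forces the question.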


### Step 3: Audit for Privacy Act Compliance Now


Businesses should audit their AI systems now to identify which processes involve automated or semi-automated decisions about individuals; review and update privacy policies to disclose AI use in plain language; assess data minimisation practices in AI training pipelines; and establish a process for individuals to request human review of significant AI-influenced decisions.



The reforms focus on decisions that have a legal or similarly significant effect on individuals. In practice, this means decisions about employment (hiring, performance management, termination), access to credit or financial products, insurance coverage, housing, healthcare, and government services.


### Step 4: Apply the AI6 Framework — Starting with Foundations


The Foundations version provides practical steps for organisations that are starting with AI, including small businesses. It focuses on aligning AI with business goals, establishing governance and managing risk across the six practices. For more mature organisations, Implementation Practices supports those that are scaling AI or managing more complex systems, offering detailed technical information to strengthen governance, improve oversight and embed responsible AI across systems, processes and decision-making.


### Step 5: Embed Procurement Governance


The 10 guardrails include procurement guidance to ensure AI suppliers and developers align with the guardrails through contractual agreements. Vendor contracts should address data governance, testing obligations, transparency mechanisms, and incident reporting — before a system is deployed, not after.
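One way to make this concrete is a pre-deployment contract screen. The four clause areas below are taken from the paragraph above; treating them as a required set, and the `missing_clauses` helper itself, are illustrative assumptions rather than a NAIC requirement.

```python
# Pre-deployment vendor contract screen. The four clause areas come from
# the procurement guidance discussed above; the rest is illustrative.
REQUIRED_CLAUSES = {
    "data_governance",      # how the vendor handles and protects data
    "testing_obligations",  # pre-release and ongoing testing commitments
    "transparency",         # disclosure of system behaviour and limitations
    "incident_reporting",   # notification duties when the system misbehaves
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Return the required clause areas a vendor contract does not cover."""
    return REQUIRED_CLAUSES - contract_clauses

# Example: a contract covering data handling and incident reporting only
# still has two gaps to close before deployment.
gaps = missing_clauses({"data_governance", "incident_reporting"})
```

Running a check like this as part of vendor due diligence gives procurement teams a documented, repeatable gate rather than an ad hoc review.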

### Step 6: Prepare for the AI Safety Institute


The Australian Artificial Intelligence Safety Institute (AISI) is targeted to become operational in early 2026, providing independent technical analysis, monitoring, safety testing, and advice to regulators and ministers. This new body will play a crucial role in enhancing Australia's technical capability to assess and mitigate AI risks.


Organisations that have implemented AI6 and documented their governance practices will be significantly better positioned when the AISI begins coordinating with sector regulators including APRA, ASIC, and the OAIC.

For step-by-step guidance on implementing these practices using the government's free tools and templates, see our detailed guide: *How to Build a Responsible AI Policy for Your Australian Business*. To understand how the NAIC's free services can support this work, see: *The National Artificial Intelligence Centre (NAIC): What It Does and How to Use It*.

---

## The Debate That Won't Go Away: Is Voluntary Enough?

Not everyone is satisfied with Australia's current approach. Australian Competition and Consumer Commission Senior Investigator Rosie Evans wrote for the IAPP in March 2025 that voluntary documents do not provide the legal certainty regulation would create. "Without an enforceable regime specifically for AI, Australia may struggle to achieve the regulatory cohesion and effectiveness currently aspired to by government," she argued.



While the government has seemingly abandoned its plan to regulate AI specifically through mandatory guardrails, this does not mean it will adopt a laissez-faire approach. The Attorney-General has been explicit on this point.


This recalibration comes amid persistently low public trust in AI, creating a complex policy challenge: how to build accountability, safety and transparency without constraining the very innovation needed to realise AI's economic and social potential.


The practical implication for businesses: the regulatory environment is not static. The AI space is developing quickly, and organisations implementing AI6 practices now will be well prepared for whatever mandatory requirements might come.


---

## Key Takeaways

- **Australia has no standalone AI Act**, and the December 2025 National AI Plan confirmed the government will rely on existing technology-neutral laws — Privacy Act, Australian Consumer Law, Corporations Act — rather than introduce mandatory guardrails in the near term.
- **The Guidance for AI Adoption (AI6)**, released by the NAIC in October 2025, replaces the 2024 Voluntary AI Safety Standard as the primary government reference. It condenses the 10 original guardrails into six essential practices and is available in two versions: Foundations (for SMEs and those starting out) and Implementation Practices (for scaling organisations).
- **Existing laws already create real compliance obligations** for AI deployers — particularly the Privacy Act, Australian Consumer Law, Corporations Act director duties, and APRA/ASIC sector requirements. "Voluntary" does not mean "unregulated."
- **The Privacy Act automated decision-making transparency requirements commence 10 December 2026.** Any business using AI to make or materially contribute to decisions significantly affecting individuals must disclose this in their privacy policy — with penalties up to $50 million for serious breaches.
- **Implementing AI6 now is the most defensible compliance posture**: it satisfies current voluntary expectations, aligns with sector regulator guidance (ASIC, APRA, OAIC), and positions the organisation for any future mandatory regime.

---

## Conclusion

Australia's AI regulatory framework is neither a regulatory vacuum nor a comprehensive mandatory regime — it is a deliberate, evolving middle path that places significant weight on voluntary governance standards, existing law, and sector-specific regulation. For businesses, this creates both opportunity and risk. The opportunity is real: organisations that implement AI6 now, document their governance practices, and embed accountability structures will have a head start on any future mandatory requirements and will be better positioned to access government AI programs that increasingly expect responsible AI practices as a condition of eligibility. The risk is equally real: the absence of an AI Act does not mean the absence of liability. Privacy, consumer protection, and director duty obligations apply today, and the December 2026 Privacy Act deadline is approaching faster than many compliance teams have anticipated.

The practical prescription is clear: treat the AI6 framework as your current compliance baseline, audit your AI systems against existing law now, and monitor the AI Safety Institute's emerging guidance as the regulatory landscape continues to develop.

For the broader strategic context within which this regulatory framework sits, see our foundational article: *Australia's National AI Plan Explained: What It Means for Business in 2025 and Beyond*. For a full directory of the government programs that reward responsible AI adoption, see: *Every Australian Government AI Grant and Funding Program: A Complete Directory*.

---

## References

- Australian Government, Department of Industry, Science and Resources. *"Voluntary AI Safety Standard."* DISR / National AI Centre, September 2024. https://www.industry.gov.au/publications/voluntary-ai-safety-standard

- Australian Government, Department of Industry, Science and Resources. *"Guidance for AI Adoption (AI6)."* National AI Centre, October 2025. https://www.industry.gov.au/publications/guidance-for-ai-adoption

- Australian Government. *"Privacy and Other Legislation Amendment Act 2024."* Commonwealth of Australia, 2024. https://www.legislation.gov.au

- Australian Government, Department of Industry, Science and Resources. *"Proposals Paper: Introducing Mandatory Guardrails for AI in High-Risk Settings."* DISR, September 2024. https://consult.industry.gov.au/ai-mandatory-guardrails

- Australian Securities and Investments Commission (ASIC). *"Report 798: Beware the Gap — Governance Arrangements in the Face of AI Innovation."* ASIC, October 2024. https://asic.gov.au

- Allens Linklaters. *"Preparing for Voluntary Standards and Mandatory Legislation: A Deep Dive into Australia's AI Guidelines."* Allens Insights, September 2024. https://www.allens.com.au/insights-news/insights/2024/09/preparing-for-voluntary-standards-and-mandatory-legislation-ai-guidelines/

- White & Case LLP. *"AI Watch: Global Regulatory Tracker — Australia."* White & Case, November 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia

- MinterEllison. *"Australia Introduces a National AI Plan: Four Things Leaders Need to Know."* MinterEllison Insights, December 2025. https://www.minterellison.com/articles/australia-introduces-a-national-ai-plan-four-things-leaders-need-to-know

- Corrs Chambers Westgarth. *"Australia Releases Proposed Mandatory Guardrails for AI Regulation."* Corrs Insights, September 2024. https://www.corrs.com.au/insights/australia-releases-proposed-mandatory-guardrails-for-ai-regulation

- International Association of Privacy Professionals (IAPP). *"Global AI Governance Law and Policy: Australia."* IAPP, November 2025. https://iapp.org/resources/article/global-ai-governance-australia

- Actuaries Institute. *"Understanding Australia's AI6: A Framework for AI Governance."* Actuaries Institute, 2026. https://www.actuaries.asn.au/research-analysis/understanding-australia-s-ai6-a-framework-for-ai-governance

- Ashurst. *"Australia: New AI Safety 'Guardrails', and a Targeted Approach to High-Risk Settings."* Ashurst Insights, September 2024. https://www.ashurst.com/en/insights/australia-new-ai-safety-guardrails-and-a-targeted-approach-to-high-risk-settings/