


Australia's AI Regulatory Landscape Explained: What the National AI Plan, NAIC Guidance, Privacy Act, and APRA Mean for Your Business

If you are deploying AI agents inside an Australian business in 2026, you are operating in one of the most consequential regulatory environments the country has produced in a generation — and one of the least understood. The common assumption is that because Australia has no dedicated AI Act, there are no binding compliance obligations. That assumption is wrong, and acting on it is a source of genuine legal and reputational risk.

The reality is more nuanced and, in some respects, more demanding than a single statute would create. Even before new AI laws are introduced, existing legislation creates clear compliance obligations. Across the Privacy Act 1988, the Australian Consumer Law, sector-specific prudential standards, and workplace safety law, Australian businesses deploying AI agents face a web of obligations that regulators are actively enforcing and expanding. Layered on top are rising government expectations for transparency, oversight, and auditability that — while not always legally mandated today — are rapidly becoming the standard against which organisations will be judged.

This article maps the current and emerging compliance landscape systematically, so you can assess your regulatory readiness as a distinct dimension of your overall AI readiness. Understanding this landscape is a prerequisite for the governance structures covered in our guide on [Building an AI Governance Framework for Your Australian Business], and directly informs the sector-specific compliance overlays explored in [AI Readiness by Industry: How Australian Healthcare, Financial Services, Retail, Agriculture, and Professional Services Compare].


The December 2025 National AI Plan: What It Signals for Business

On 2 December 2025, the Australian Government unveiled the National AI Plan, its most comprehensive statement to date on how it intends to help Australia shape and manage the rapid expansion of AI technologies.

The Plan is built around three goals:

  • Capture the opportunity: build smart infrastructure, back domestic AI capability, and attract global investment.
  • Spread the benefits: drive widespread AI adoption, support and train Australian workers, and improve public services.
  • Keep Australians safe: maintain legislative and regulatory frameworks that mitigate AI harms, while promoting widespread responsible practices and international engagement that upholds Australia's values.

For businesses, the most operationally significant element is what the Plan does not do. In 2024, the Government introduced voluntary guardrails for the safe and responsible adoption of AI and promised mandatory guardrails for "high-risk AI systems." With the release of the Plan, the Government officially abandoned that intention in favour of updating the existing legal and regulatory framework for AI.

The mandatory guardrails have been replaced with a two-pronged approach: uplifting and clarifying existing technology-neutral laws, and issuing more guidance to promote responsible practices. In practical terms, no economy-wide AI law is coming soon. The Government is instead likely to incrementally amend existing legislation, including the Privacy Act, the Australian Consumer Law, and possibly the Online Safety Act.

This does not mean the regulatory pressure has eased. Organisations should expect more public investment and procurement activity, alongside heightened expectations for responsible governance and transparency. Companies should expect regulators to ask not only whether AI is used, but how it is governed.

The Plan also establishes the AI Safety Institute (AISI), backed by a $29.9 million commitment to set it up in early 2026. The Institute will monitor, test and share information on emerging AI capabilities, risks and harms, helping to ensure the Government can monitor and respond to risks while supporting agencies and regulators.


The October 2025 NAIC Guidance for AI Adoption: The New Baseline for Responsible Practice

If the National AI Plan sets the policy direction, the NAIC Guidance for AI Adoption provides the operational framework businesses are expected to follow.

Released by the National Artificial Intelligence Centre (NAIC) on 21 October 2025, the guidance replaces the 2024 Voluntary AI Safety Standard and provides a roadmap for organisations at every stage of AI maturity. Developed in collaboration with CSIRO's Data61 Privacy Technology Group, the Guidance builds on the Voluntary AI Safety Standard (VAISS) by condensing its 10 guardrails into 6 essential practices and expanding the audience to developers as well as deployers.

The Six Essential Practices

The guidance outlines six practices to help organisations plan, manage and use AI in ways that build trust and deliver value:

  1. Governance and accountability — establishing clear ownership and decision-making authority for AI systems
  2. Impact assessment — evaluating risks to individuals, communities, and the organisation before deployment
  3. Risk management — identifying, monitoring, and mitigating AI-specific operational and ethical risks
  4. Transparency — disclosing how AI systems operate, particularly where they affect individuals
  5. Testing and monitoring — ongoing evaluation of AI system behaviour after deployment
  6. Human oversight — maintaining meaningful human control over consequential automated decisions

The NAIC structures the guidance across two tiers. Foundations helps businesses that are new to AI set up governance, align AI with business goals and manage risk; Implementation Practices provides more detailed advice for businesses looking to strengthen AI governance and oversight. To help businesses put responsible AI into action, the guidance also provides practical tools and templates, such as an AI policy template and an AI register template (a simple illustration of the register idea appears below).
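To make the register idea concrete, here is a minimal sketch of a single AI register entry as a plain data structure. The field names (system name, business owner, risk rating, and so on) are illustrative assumptions rather than the schema of the NAIC's actual template; they simply show the kind of information an AI use register typically captures.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Hypothetical AI register entry; field names are illustrative, not the NAIC template schema.
@dataclass
class AIRegisterEntry:
    system_name: str                      # e.g. "Claims triage assistant"
    business_owner: str                   # accountable person (governance and accountability)
    purpose: str                          # what the system is used for
    uses_personal_information: bool       # flags Privacy Act relevance
    makes_or_influences_decisions: bool   # flags APP 1 ADM transparency relevance
    third_party_providers: list[str] = field(default_factory=list)  # e.g. LLM API vendors
    risk_rating: str = "unassessed"       # e.g. low / medium / high
    last_impact_assessment: date | None = None
    human_oversight: str = ""             # how meaningful human control is maintained

# Example entry for a customer-facing AI agent.
entry = AIRegisterEntry(
    system_name="Claims triage assistant",
    business_owner="Head of Claims Operations",
    purpose="Prioritise incoming insurance claims for assessor review",
    uses_personal_information=True,
    makes_or_influences_decisions=True,
    third_party_providers=["Hosted LLM API (vendor X)"],
    risk_rating="high",
    last_impact_assessment=date(2026, 2, 1),
    human_oversight="An assessor reviews every triage recommendation before action",
)

# A register is then just a list of entries, exportable for audit or board reporting.
print(json.dumps(asdict(entry), indent=2, default=str))
```

Kept this simple, the same register can feed board reporting, impact assessment scheduling, and the privacy policy disclosures discussed below.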

Critically, the release of the Guidance affirms Australia's inclination toward a principles-led, advisory model for AI oversight, favouring practical guidance over immediate legislative intervention. Rather than introducing new laws, the framework complements existing regulatory instruments such as the Privacy Act 1988, Australian Consumer Law, and sector-specific regimes including those governing medical devices, critical infrastructure, financial services, and APRA prudential standards.

The Guidance is voluntary in the sense that non-compliance will not, by itself, trigger a penalty. But the data on actual practice reveals a significant gap that regulators are watching. The 2025 Responsible AI Index found that 12% of organisations are now in the Leading category for implementing responsible AI practices, up 4% from 2024. A 'saying-doing' gap remains: while 78% of respondents agreed with ethical AI performance statements, only 29% had implemented relevant responsible AI practices. Smaller organisations face challenges implementing more resource-intensive governance practices: confidence levels in responsible AI declined for those organisations with 20–99 employees.

This gap between stated intention and operational practice is precisely where regulatory risk accumulates. (For a deeper analysis of this data, see our article on [The State of AI Adoption in Australia: 2025–2026 Benchmarks, Industry Gaps, and What the Data Reveals].)


The Privacy Act and OAIC: Binding Obligations That Apply Now

While the NAIC Guidance is advisory, the Privacy Act 1988 creates binding obligations that apply to any AI deployment involving personal information — and AI agents almost always involve personal information.

What the OAIC Expects Today

The Office of the Australian Information Commissioner (OAIC) has published specific guidance on both the use of commercially available AI products and the development of generative AI models. Organisations subject to the Privacy Act have a number of transparency obligations. APP 1 requires entities to take reasonable steps to implement practices, procedures and systems that ensure compliance with the APPs, and to have a clearly expressed and up-to-date Privacy Policy. Transparency is critical to enabling individuals to understand how AI systems are used to produce outputs or make decisions that affect them. Without a clear understanding of how an AI product works, it is difficult for individuals to give meaningful consent to the handling of their personal and sensitive information, to understand or challenge decisions, or to request corrections to the personal information processed or generated by an AI system.

Entities must ensure that the output of AI systems, including any decisions made using AI, can be explained to individuals affected. This is not a future obligation — it applies to any AI system you are running today.
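One way to work toward that explainability obligation is to capture, at decision time, the inputs and plain-language reasoning needed to explain an outcome to an affected individual later. The sketch below illustrates that idea; the record fields and the logging approach are assumptions for illustration, not an OAIC-prescribed format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical decision record; fields are illustrative, not an OAIC-prescribed format.
@dataclass
class AIDecisionRecord:
    decision_id: str
    system_name: str            # which AI system produced the output
    model_version: str          # supports reconstructing why a given output was produced
    inputs_summary: dict        # the personal information relied on, summarised
    output: str                 # the decision or recommendation
    explanation: str            # plain-language reason an affected individual could be given
    human_reviewer: str | None  # who reviewed it, if anyone
    timestamp: str

def record_decision(system_name: str, model_version: str, inputs_summary: dict,
                    output: str, explanation: str,
                    human_reviewer: str | None = None) -> AIDecisionRecord:
    """Build an auditable record for a single AI-assisted decision."""
    now = datetime.now(timezone.utc)
    record = AIDecisionRecord(
        decision_id=f"{system_name}-{now.timestamp():.0f}",
        system_name=system_name,
        model_version=model_version,
        inputs_summary=inputs_summary,
        output=output,
        explanation=explanation,
        human_reviewer=human_reviewer,
        timestamp=now.isoformat(),
    )
    # In practice this would go to durable, access-controlled storage;
    # printing stands in for that here.
    print(json.dumps(asdict(record), indent=2))
    return record

record_decision(
    system_name="loan-pre-assessment-agent",
    model_version="2026-01-rc1",
    inputs_summary={"declared_income_band": "80-100k", "credit_file_flags": 0},
    output="Refer to manual assessment",
    explanation="Application referred because stated income could not be verified automatically.",
    human_reviewer="credit-ops-queue",
)
```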

The December 2026 Automated Decision-Making (ADM) Transparency Deadline

The most significant near-term compliance deadline for businesses deploying AI agents is the new automated decision-making transparency obligation introduced by the Privacy and Other Legislation Amendment Act 2024.

The OAIC is progressively updating its guidance to provide expanded information about the new APP 1 obligations for automated decisions, which commence on 10 December 2026. These APP 1 amendments will require an APP entity to include certain information in its APP Privacy Policy where the entity has arranged for a computer program to use personal information to make a decision that could reasonably be expected to significantly affect the rights or interests of an individual.

The APP Privacy Policy will need to contain information about:

  • the kinds of personal information used in the operation of the relevant computer programs;
  • the kinds of decisions made solely by the operation of those computer programs; and
  • the kinds of decisions for which something that is substantially and directly related to making the decision is done by the operation of those computer programs.

The scope of decisions covered is broad. The new ADM transparency rules apply to decisions which "could reasonably be expected to significantly affect the rights or interests of an individual," including decisions impacting rights under a contract, decisions impacting access to a significant service or support, and decisions about the granting or refusal of a benefit.

Penalties for breach of the Privacy Act can be significant, with the maximum penalty for a serious or repeated interference with privacy being $50 million or three times the benefit obtained, or 30% of adjusted turnover — whichever is greatest.

Practical implication: if your AI agents are making or substantially influencing decisions about customers, employees, or suppliers (approving credit, triaging service requests, generating performance assessments, or processing claims), you need to audit those systems against the December 2026 ADM transparency requirements now. The new rules commence on 10 December 2026, by which point organisations will have had two years to prepare. Because Privacy Policies are public-facing documents, they are easy for the OAIC to proactively check for compliance without your knowledge or involvement. Given that lengthy runway and the ease with which the OAIC can review Privacy Policies, organisations should expect non-compliance to attract enforcement action under the OAIC's new infringement notice regime. A simple starting point for that audit is sketched below.
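The sketch below walks a hypothetical inventory of automated decisions and sorts each one into the disclosure categories the amended APP 1 describes: decisions made solely by a computer program, and decisions where the program does something substantially and directly related to making the decision. The inventory structure and the simplified significance test are assumptions for illustration, not legal advice.

```python
# Hypothetical inventory of AI-assisted decisions; the structure and the simple
# "significant effect" test below are illustrative assumptions, not legal advice.
decisions = [
    {"name": "Credit limit increase approval", "automation": "solely_automated",
     "significantly_affects_individual": True},
    {"name": "Claims triage priority score", "automation": "substantially_assists",
     "significantly_affects_individual": True},
    {"name": "Internal document tagging", "automation": "solely_automated",
     "significantly_affects_individual": False},
]

def adm_disclosure_bucket(decision: dict) -> str:
    """Classify a decision against the APP 1 ADM transparency categories (simplified)."""
    if not decision["significantly_affects_individual"]:
        return "out of scope (no significant effect on rights or interests)"
    if decision["automation"] == "solely_automated":
        return "disclose: decision made solely by a computer program"
    return "disclose: program does something substantially and directly related to the decision"

for d in decisions:
    print(f'{d["name"]}: {adm_disclosure_bucket(d)}')
```

The output of an exercise like this feeds directly into the privacy policy updates that must be in place before 10 December 2026.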


APRA CPS 230: What Financial Services Businesses Must Understand

For organisations in banking, insurance, and superannuation — and for their technology service providers — APRA's Prudential Standard CPS 230 creates the most demanding AI governance obligations currently in force in Australia.

As of 1 July 2025, the Australian Prudential Regulation Authority's Prudential Standard CPS 230 is in force. CPS 230 brings a more structured, accountable, and forward-looking approach to managing operational risk, business continuity and service provider arrangements to those parts of Australia's financial services sector that are regulated by APRA.

APRA's CPS 230 seeks to enhance operational risk management for financial institutions and safeguard Australian entities' stability by ensuring robust systems to identify, assess, manage, and mitigate operational risks, including artificial intelligence (AI) and cybersecurity related risks.

What CPS 230 Requires in Practice

The key requirements of this Prudential Standard are that an APRA-regulated entity must: identify, assess and manage its operational risks, with effective internal controls, monitoring and remediation; be able to continue to deliver its critical operations within tolerance levels through severe disruptions, with a credible business continuity plan; and manage the risks arising from service providers.

The third-party risk dimension has direct implications for AI agent deployments. A significant focus of CPS 230 is managing risks related to external suppliers. A regulated entity must ensure that contracts with its service providers contain appropriate safeguards, particularly for services supporting critical operations. Regulated institutions are now required to keep a register of all material service providers and maintain a service provider management policy. For the first time, institutions must formally document their approach to managing the risks associated with fourth-party suppliers that material suppliers rely on, which can include a wide range of cloud, telecommunications and other IT industry suppliers.

This means that if your AI agents are built on third-party large language model (LLM) APIs, cloud-hosted automation platforms, or outsourced AI development services, those vendor relationships must be formally assessed and governed under CPS 230.
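One minimal way to represent such a register internally is sketched below. The class and field names are assumptions for illustration; CPS 230 specifies the obligations (a register, a management policy, contractual safeguards), not a data format.

```python
from dataclasses import dataclass, field

# Hypothetical material service provider register entry; CPS 230 defines the
# obligations but not this data format.
@dataclass
class ServiceProvider:
    name: str
    service: str                       # what the provider delivers
    supports_critical_operation: bool  # drives the depth of contractual safeguards
    contract_has_required_safeguards: bool
    fourth_parties: list[str] = field(default_factory=list)  # providers this supplier relies on

register = [
    ServiceProvider(
        name="Hosted LLM API vendor",
        service="Language model inference for customer-facing AI agents",
        supports_critical_operation=True,
        contract_has_required_safeguards=False,
        fourth_parties=["Cloud infrastructure provider", "GPU capacity supplier"],
    ),
    ServiceProvider(
        name="Workflow automation platform",
        service="Orchestration of claims-processing agents",
        supports_critical_operation=True,
        contract_has_required_safeguards=True,
        fourth_parties=["Cloud infrastructure provider"],
    ),
]

# Flag gaps that would need remediation where an arrangement supports a critical operation.
for p in register:
    if p.supports_critical_operation and not p.contract_has_required_safeguards:
        print(f"Review required: {p.name} supports a critical operation without the expected "
              f"contractual safeguards (fourth parties: {', '.join(p.fourth_parties)})")
```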

CPS 230 strengthens the role of boards and senior management in operational risk oversight. Directors and executives are now explicitly responsible for ensuring that operational resilience is embedded into their organisation's governance frameworks and decision-making processes.

Even for businesses that are not directly APRA-regulated, CPS 230 is increasingly the benchmark: clients, investors, and other stakeholders now expect CPS 230-level risk management from the organisations they deal with.


The Australian Consumer Law: An Often-Overlooked AI Obligation

The Australian Consumer Law (ACL) applies to AI deployments in ways many businesses have not fully mapped. The Commonwealth Government will consult with the States and Territories on opportunities to clarify existing rules and to progress what appear to be minor clarifying and technical changes identified by Treasury's Review of AI and the Australian Consumer Law. That review otherwise found that the Australian Consumer Law is broadly capable of adapting to AI products and services without significant amendment.

In practical terms, this means ACL prohibitions on misleading and deceptive conduct, unconscionable conduct, and false representations apply directly to AI-generated outputs, AI-assisted customer service, and automated pricing or recommendation systems. If your AI agent provides incorrect advice, generates misleading product descriptions, or applies discriminatory pricing — and you have not implemented adequate human oversight — you may already be in breach.
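A simple control here is a review gate that holds consequential AI-generated content for human sign-off before it reaches a customer. The sketch below shows that pattern; the content categories and confidence threshold are illustrative assumptions, not requirements drawn from the ACL.

```python
# Hypothetical human-review gate for AI-generated, customer-facing content.
# The categories and confidence threshold are illustrative assumptions only.
REQUIRES_REVIEW = {"pricing", "product_claims", "financial_advice"}
CONFIDENCE_THRESHOLD = 0.9

def release_decision(content_category: str, model_confidence: float) -> str:
    """Decide whether AI output can be published directly or needs human sign-off."""
    if content_category in REQUIRES_REVIEW:
        return "hold for human review"   # claims that could mislead consumers
    if model_confidence < CONFIDENCE_THRESHOLD:
        return "hold for human review"   # low-confidence output gets checked
    return "publish"

print(release_decision("product_claims", 0.97))  # hold for human review
print(release_decision("order_status", 0.95))    # publish
```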


A Compliance Readiness Snapshot: Where the Key Obligations Sit

| Framework | Who It Applies To | Current Status | Key AI Obligation |
| --- | --- | --- | --- |
| National AI Plan | All businesses | In effect (Dec 2025) | Sets expectations for responsible governance; no binding obligations |
| NAIC Guidance for AI Adoption (AI6) | All businesses | In effect (Oct 2025) | Voluntary, but the benchmark for regulatory assessment |
| Privacy Act — APP 1 ADM transparency | APP entities using AI in decisions | Commences 10 Dec 2026 | Disclose AI use in the Privacy Policy where decisions significantly affect individuals |
| Privacy Act — APPs 1–13 (existing) | APP entities | In effect now | Transparency, accuracy, security, and lawful use of personal information in AI systems |
| APRA CPS 230 | Banks, insurers, superannuation funds | In effect (1 Jul 2025) | Operational risk management, business continuity, third-party AI vendor governance |
| Australian Consumer Law | All businesses | In effect now | Prohibits misleading AI outputs, unfair conduct, false representations |
| Online Safety Act 2021 | Relevant platforms and services | In effect and evolving | Enforceable industry codes covering AI-generated harmful content |

Key Takeaways

  • No AI-specific law exists, but compliance obligations are real and enforced today. The Privacy Act, Australian Consumer Law, APRA CPS 230, and sector-specific frameworks create binding obligations for any business deploying AI that handles personal information, makes consequential decisions, or operates in regulated industries.

  • The December 2026 ADM transparency deadline is the most immediate statutory obligation for most businesses. From 10 December 2026, APP entities must disclose in their Privacy Policy how AI is used in decisions that significantly affect individuals' rights or interests. Preparation should begin now, given how easily the OAIC can proactively check public-facing Privacy Policies for compliance.

  • The NAIC Guidance for AI Adoption (AI6) is the practical governance benchmark. Released October 2025, it replaced the Voluntary AI Safety Standard and distilled 10 guardrails into 6 essential practices. While voluntary, alignment with AI6 is increasingly the standard against which regulators, clients, and auditors will assess your governance maturity.

  • APRA CPS 230 is in force and extends to AI vendor relationships. Financial services entities and their technology suppliers must treat AI systems — including third-party LLM APIs and automation platforms — as material operational risks requiring formal governance, continuity planning, and contractual safeguards.

  • The regulatory direction is toward more obligation, not less. The Government has signalled ongoing reviews of the Privacy Act (Tranche 2), the Australian Consumer Law, healthcare AI regulation, and online safety codes. Businesses that build governance infrastructure now are positioning for future mandatory requirements, not just current voluntary ones.


Conclusion

Australia's AI regulatory landscape in 2026 is best understood not as a vacuum, but as a set of interlocking obligations that are already enforceable and rapidly becoming more demanding. The absence of a single AI Act does not mean absence of accountability. It means accountability is distributed across multiple existing frameworks — and that the businesses most exposed to risk are those that have assumed compliance is someone else's problem.

For businesses assessing their AI readiness, regulatory compliance is not a checkbox to address after deployment. It is a dimension of readiness that must be evaluated before any AI agent is deployed into a production environment. The governance structures required to satisfy the OAIC's ADM transparency rules, APRA's CPS 230 third-party risk requirements, and the NAIC's six essential practices are substantially the same structures required to deploy AI agents safely and effectively.

To build those structures, see our companion guides: [Building an AI Governance Framework for Your Australian Business] covers the internal policies, oversight mechanisms, and audit trail requirements that map directly to AI6. [The 5 Pillars of AI Readiness] explains how regulatory compliance fits within the broader governance dimension of your readiness score. And [How to Conduct an AI Readiness Assessment for Your Australian Business] provides the step-by-step process for turning this regulatory map into an actionable compliance readiness plan.


References

  • Australian Government, Department of Industry, Science and Resources. "National AI Plan." Department of Industry, Science and Resources, 2 December 2025. https://www.industry.gov.au/publications/national-ai-plan

  • National AI Centre (NAIC), Department of Industry, Science and Resources. "Guidance for AI Adoption." Department of Industry, Science and Resources, October 2025. https://www.industry.gov.au/publications/guidance-for-ai-adoption

  • Office of the Australian Information Commissioner (OAIC). "Chapter 1: APP 1 Open and Transparent Management of Personal Information." OAIC, updated October 2025. https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines/chapter-1-app-1-open-and-transparent-management-of-personal-information

  • Office of the Australian Information Commissioner (OAIC). "Guidance on Privacy and the Use of Commercially Available AI Products." OAIC, 2025. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products

  • Australian Prudential Regulation Authority (APRA). "Prudential Standard CPS 230 Operational Risk Management." APRA, effective 1 July 2025. https://handbook.apra.gov.au/standard/cps-230

  • CSIRO Data61 Privacy Technology Group. "Collaboration with the National AI Centre (NAIC) on the Development of the Guidance for AI Adoption." CSIRO, October 2025. https://research.csiro.au/isp/research/privacy_mlai/collaboration-with-the-national-ai-centre-naic-on-the-development-of-the-guidance-for-ai-adoption/

  • White & Case LLP. "Australia's National AI Plan: Big Ambitions, but Light on Details." White & Case Insights, December 2025. https://www.whitecase.com/insight-alert/australias-national-ai-plan-big-ambitions-light-details

  • Bird & Bird. "A New Era for AI Governance in Australia: What the National AI Plan Means for Industry." Bird & Bird Insights, December 2025. https://www.twobirds.com/en/insights/2025/australia/a-new-era-for-ai-governance-in-australia-what-the-national-ai-plan-means-for-industry

  • Clifford Chance. "Navigating Operational Risks: CPS 230's Influence on AI and Cybersecurity Strategies." Clifford Chance Insights, April 2025. https://www.cliffordchance.com/insights/resources/blogs/regulatory-investigations-financial-crime-insights/2025/04/cps-230-influence-on-ai-and-cybersecurity-strategies.html

  • Herbert Smith Freehills Kramer. "Automated Decision Making." HSF Insights, November 2025. https://www.hsfkramer.com/insights/2025-11/automated-decision-making

  • Maddocks. "The New National Plan for Australia's AI-Enabled Future." Maddocks Insights, December 2025. https://www.maddocks.com.au/insights/the-new-national-plan-for-australias-ai-enabled-future

  • IAPP. "Global AI Governance Law and Policy: Australia." International Association of Privacy Professionals, 2025. https://iapp.org/resources/article/global-ai-governance-australia
