
Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and What Businesses Must Know

AI Summary

Product: Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and Business Compliance Guide
Brand: Australian Government / National AI Centre (NAIC)
Category: Regulatory Compliance & AI Governance
Primary Use: A structured reference mapping every layer of Australia's AI governance — from the eight AI Ethics Principles to Privacy Act obligations — for organisations deploying AI across regulated industries.

Quick Facts

  • Best For: Australian businesses in financial services, healthcare, legal services, mining, real estate, and marketing deploying or planning to deploy AI systems
  • Key Benefit: Consolidates distributed AI compliance obligations across multiple frameworks into a single actionable reference, clarifying what is enforceable today versus aspirational
  • Form Factor: Regulatory analysis and compliance guide
  • Application Method: Reference against existing AI deployments; implement NAIC AI6 framework tools (AI register, screening tool, policy guide) as governance baseline

Common Questions This Guide Answers

  1. Does Australia have a standalone AI Act? → No — the National AI Plan (December 2025) confirms no standalone AI Act will be introduced; compliance obligations are distributed across existing laws including the Privacy Act, APRA standards, and the Australian Consumer Law
  2. When do the automated decision-making transparency requirements under APP 1.7 take effect? → 10 December 2026, introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth), requiring APP entities to disclose AI-influenced decision-making in their privacy policies
  3. What is the current government-endorsed AI governance standard for Australian businesses? → The NAIC Guidance for AI Adoption (published 21 October 2025), which sets out the AI6 framework — six essential governance practices — and provides an AI register template, screening tool, and 12-page policy guide

Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and What Businesses Must Know

Every Australian organisation deploying AI — a bank running algorithmic credit scoring, a hospital using diagnostic imaging AI, a law firm automating contract review, a miner operating autonomous haul trucks — is navigating a regulatory environment that's simultaneously evolving and deliberately incomplete. Unlike the European Union, which passed its landmark AI Act in 2024, Australia has made a clear policy call: no standalone AI legislation, at least for now. Instead, the country is building a governance architecture from existing laws, voluntary frameworks, a new oversight institution, and sector-specific regulatory expectations.

Understanding this architecture isn't optional. The gap between "no AI Act" and "no AI regulation" is wide, and businesses that mistake one for the other face real compliance exposure today, with greater exposure coming as the framework tightens. This article maps every layer of Australia's AI governance, from the foundational Ethics Principles to the incoming Privacy Act transparency obligations, and breaks down what each layer means for organisations across real estate, healthcare, finance, mining, legal services, and marketing.


Australia's deliberate choice: standards-led, not legislation-led

The most important thing to understand about Australia's AI regulatory posture is that it reflects a considered policy decision, not a vacuum.

Australia's AI regulatory journey has shifted from an early plan to introduce an EU-style, risk-based regime toward a more flexible, standards-led approach. What began as a move toward prescriptive guardrails and potential legislation has given way to a focus on productivity, innovation, and the use of existing legal frameworks.

The National AI Plan, released in December 2025, confirms there will be no standalone AI Act and that the Proposed Mandatory Guardrails for AI in high-risk settings will not be mandated. Instead, Australia will continue to rely on existing laws — privacy, consumer protection, copyright, workplace law, sector-specific regulation, and online safety — meaning organisations must navigate a complex regulatory patchwork when assessing and managing AI risk.

This choice has both supporters and critics. In August 2025, the Productivity Commission cautioned that overly stringent AI regulation could stifle Australia's economic potential, estimated at AUD 116 billion over the next decade, and recommended reserving AI-specific regulation for where current laws genuinely fall short. On the other side, ACCC Senior Investigator Rosie Evans wrote that voluntary documents "do not provide the legal certainty regulation would create," arguing that "without an enforceable regime specifically for AI, Australia may struggle to achieve the regulatory cohesion and effectiveness currently aspired to by government."

The practical implication for businesses? Compliance obligations are real and active — they're just distributed across multiple frameworks rather than consolidated in a single statute.


The eight AI Ethics Principles: the foundation layer

Established in 2019, the Australian AI Ethics Principles comprise eight voluntary guidelines covering fairness, accountability, transparency, reliability, privacy and security, human-centred values, contestability, and human/social/environmental wellbeing. They align with the OECD AI Principles.

The principles were designed to guide businesses and governments to responsibly design, develop and implement AI, and formed part of the Australian Government's commitment to make Australia a global leader in responsible and inclusive AI.

The eight principles are:

  1. Human, societal and environmental wellbeing — AI systems should benefit individuals, society and the environment
  2. Human-centred values — Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals
  3. Fairness — AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups
  4. Privacy protection and security — AI systems should respect privacy and data protection, including proper data governance and management for all data used and generated throughout the system's lifecycle
  5. Reliability and safety — Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose
  6. Transparency and explainability — There should be transparency and responsible disclosure so people can know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them
  7. Contestability — When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system
  8. Accountability — People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled

While voluntary, these principles underpin every subsequent layer of the governance framework and are explicitly referenced in sector-specific guidance, the government's AI policy for the public sector, and the NAIC's Guidance for AI Adoption.


The NAIC's Guidance for AI Adoption: the "AI6" framework

On 21 October 2025, the Department of Industry, Science and Resources published the Guidance for AI Adoption, released through the National AI Centre (NAIC). It effectively replaces the earlier Voluntary AI Safety Standard (VAISS), streamlining that standard's 10 guardrails into the "AI6": six essential governance practices for AI developers and deployers, maintained in alignment with Australia's AI Ethics Principles and international standards.

The NAIC has also published a suite of practical implementation tools organisations can use right now:

  • An AI register template, an AI screening tool, a glossary, and mappings showing alignment to Australia's AI Ethics Principles and the superseded Voluntary AI Safety Standard — useful scaffolding for immediate adoption
  • A 12-page policy guide that provides a ready-to-adopt AI policy skeleton organisations can tailor to their context, covering purpose, scope, and pragmatic principle-based policy statements across ethics, accountability, risk assessment, quality/security, fairness, transparency, and human oversight

These tools are the most actionable entry point for any Australian business beginning its AI governance journey. For a step-by-step guide to implementing them, see our article How to Build an AI Strategy for an Australian Business.


The AI Safety Institute: Australia's new technical oversight body

On 25 November 2025, the Commonwealth Government announced it would establish a national AI Safety Institute (AISI). The AISI will strengthen testing, evaluation and oversight of advanced AI systems, coordinate with regulators such as the Office of the Australian Information Commissioner, and support risk-based regulatory responses to AI.

The National AI Plan commits just under AUD 30 million to fund the AI Safety Institute, which will become operational in early 2026.

The AISI will monitor, test and share information on emerging AI capabilities, risks and harms. Its insights will support ministers, portfolio agencies and regulators to maintain safety measures, laws and regulatory frameworks that keep pace with rapid technological change.

Critically, the Government has emphasised that the AISI will complement existing legal and regulatory frameworks that already protect Australians' rights and safety, rather than replace them. Australia will also join the International Network of AI Safety Institutes, aligning local practice with comparable efforts in the US, UK, Canada, South Korea and Japan.

For businesses, the AISI signals a clear shift in the regulatory trajectory: regulators will increasingly have in-house capacity to interrogate and test models, rather than relying solely on high-level principles or industry self-assessment. Systems with serious or systemic risk potential — security-relevant capabilities, critical infrastructure, influence operations, large-scale decision-making — can expect heightened scrutiny and more prescriptive expectations.


The Privacy Act 1988: the most immediately actionable obligation

The Privacy Act 1988 (Cth) remains the primary law regulating the handling of personal information in Australia. The Act is principles-based and is currently undergoing significant reform following the government's multiyear review, which commenced before the rise of generative AI.

The most significant AI-relevant change is already law. Amendments to the Privacy Act passed in December 2024 include new transparency requirements that will apply where entities bound by the Privacy Act use computer programs in relation to certain decision-making involving personal information. The Privacy and Other Legislation Amendment Act 2024 (Cth) will introduce greater transparency for individuals affected by automated decision-making, in provisions scheduled to commence on 10 December 2026.

Under the new APP 1.7, if an APP entity uses automated decision-making, it must include certain information in its privacy policy. Organisations using AI to make or materially contribute to decisions that significantly affect individuals must disclose this use and provide meaningful information about how the AI works. This is not a blanket ban on automated decisions — it's a transparency and accountability obligation.

The reforms focus on decisions that have a legal or similarly significant effect on individuals — in practice, decisions about employment, access to credit or financial products, insurance coverage, housing, healthcare, and government services.
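The decision categories listed above can be turned into a simple first-pass triage check for AI deployments. The sketch below is an illustrative aid only, with a made-up function name and category list mirroring the examples in the text; whether APP 1.7 actually applies is a legal question for qualified advisers, not a boolean.

```python
# Illustrative first-pass screen for whether an automated decision is likely
# to have a "legal or similarly significant effect" and so warrant APP 1.7
# disclosure review. A triage aid, not a legal test.

SIGNIFICANT_EFFECT_CATEGORIES = {
    "employment", "credit", "insurance",
    "housing", "healthcare", "government_services",
}

def needs_app_1_7_review(decision_category: str,
                         uses_personal_info: bool,
                         automated_contribution: bool) -> bool:
    """Flag decisions combining personal information, automation, and a
    significant-effect category for privacy-policy disclosure review."""
    return (uses_personal_info
            and automated_contribution
            and decision_category in SIGNIFICANT_EFFECT_CATEGORIES)

# A credit-scoring model that materially contributes to lending decisions:
print(needs_app_1_7_review("credit", True, True))    # True
# An internal document-search tool making no individual-level decisions:
print(needs_app_1_7_review("search", True, True))    # False
```

Running every entry in an AI register through a screen like this is one pragmatic way to build the audit trail the December 2026 deadline will require.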

Australia's privacy regulator, the Office of the Australian Information Commissioner (OAIC), has been proactive in interpreting the Act in AI contexts and is actively regulating AI through interpretation and enforcement rather than waiting for dedicated legislation. The OAIC's enforcement record already includes landmark determinations against Clearview AI, Bunnings Group, and Kmart Australia for unlawful collection of biometric information using facial recognition technology.


APRA prudential standards: binding obligations for financial services

For organisations in financial services — banks, insurers, superannuation funds, and credit unions — the Australian Prudential Regulation Authority (APRA) operates a binding regulatory framework that is directly relevant to AI deployment.

APRA CPS 234 is a mandatory information security standard for Australian financial institutions. It requires strong controls, clear governance, third-party oversight and continuous testing. Any APRA-regulated financial institution and any material service provider must comply. The standard applies to cloud providers, with entities remaining responsible for ensuring equivalent controls in outsourced environments.

CPS 234 is not merely a cybersecurity standard — it functions as an AI governance instrument. Any AI system that processes, stores, or generates decisions based on financial or personal data is an "information asset" under CPS 234 and must be classified, risk-assessed, and protected accordingly.
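The classification step can be illustrated in code. The ratings and rules below are assumptions for illustration: CPS 234 requires each entity to define its own classification scheme, so treat the function name and thresholds as hypothetical.

```python
# Illustrative CPS 234-style classification of an AI system as an information
# asset. Ratings and rules are assumptions; CPS 234 leaves the actual
# classification scheme to the regulated entity.

def classify_information_asset(handles_personal_data: bool,
                               drives_financial_decisions: bool,
                               outsourced_to_third_party: bool) -> dict:
    """Assign indicative criticality and sensitivity ratings to an AI system."""
    criticality = "high" if drives_financial_decisions else "medium"
    sensitivity = "high" if handles_personal_data else "low"
    return {
        "criticality": criticality,
        "sensitivity": sensitivity,
        # CPS 234 keeps the regulated entity responsible even when the asset
        # is managed by a material service provider.
        "third_party_oversight_required": outsourced_to_third_party,
    }

# A credit-decisioning model on personal data, hosted by a cloud provider:
print(classify_information_asset(True, True, True))
```

The point of the sketch is the shape of the obligation: every AI system gets explicit ratings, and outsourcing shifts none of the accountability.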

CPS 230 came into effect on 1 July 2025 and is now binding on APRA-regulated entities. It focuses on operational resilience, covering business continuity, risk management frameworks, and the testing of resilience strategies.

APRA's explicit connection of CPS 234 compliance to the Financial Accountability Regime (FAR) means that information security — and by extension, AI governance — is no longer solely a technical or operational matter. It is a named executive accountability.

For a detailed analysis of how these standards apply to AI in banking, robo-advice, and fraud detection, see our article AI in Australian Financial Services: Fraud Detection, Credit Decisioning and Wealth Management Automation.


Sector-specific regulatory overlays: what each industry must know

Australia's "existing laws" approach means AI compliance obligations differ materially by sector. The table below summarises the primary regulatory instruments applicable to AI across the six target industries:

Industry | Primary AI-Relevant Regulator(s) | Key Instruments
Financial Services | APRA, ASIC | CPS 234, CPS 230, responsible lending obligations, market integrity rules
Healthcare | TGA, OAIC | AI medical device guidance, My Health Record Act, Privacy Act
Legal Services | Law Council, state Law Societies | Professional conduct rules, Privacy Act, ACL
Mining | Safe Work Australia, state regulators | Work health and safety laws, Privacy Act (workforce data)
Real Estate | ASIC, state fair trading | ACL, Privacy Act, anti-discrimination law
Marketing | ACCC, OAIC | Australian Consumer Law, Privacy Act (ADM transparency)

Sector-specific obligations include:

  • ASIC requires AI use in lending, trading and advice to align with responsible lending and market integrity obligations
  • APRA applies additional standards to AI in risk management and critical infrastructure oversight
  • The TGA requires AI medical devices to comply with therapeutic goods regulation
  • The Fair Work Commission oversees algorithmic decision-making in recruitment and HR for compliance with employment and discrimination laws

The Treasury's Final Report on AI and the Australian Consumer Law recommends targeted clarifications to the ACL's application to AI systems — especially regarding definitions of goods/services, manufacturer liability and algorithmic representations — with an emphasis on reinforcing the ACL's existing principles rather than introducing AI-specific laws.


The government's own obligations: a model for the private sector

From 15 December 2025, the updated Policy for the Responsible Use of AI in Government came into effect, strengthening how agencies across the Australian Public Service govern the use of AI and reinforcing safeguards that support safe, transparent and trusted adoption.

Under this policy, agencies must maintain an internal register of all in-scope AI use cases and assign an accountable owner for each one. Prior to deployment, agencies must also complete an AI impact assessment for each in-scope use case.

Foundational AI training is now mandatory for all staff across the Australian Public Service, building a consistent baseline of understanding of responsible AI use.

This government framework is instructive for private sector organisations. The AI register, impact assessment, accountable owner, and mandatory training requirements that apply to government agencies represent the governance baseline that regulators and auditors are increasingly expecting from private organisations as well. If you're not building toward this standard now, you're already behind.
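The four baseline elements named above (register, impact assessments, accountable owners, training) lend themselves to a simple self-assessment checklist. The sketch below is one hypothetical way to express that gap check; the element names and the all-or-nothing scoring are illustrative assumptions, not a prescribed audit method.

```python
# Hypothetical self-assessment against the four governance baseline elements
# drawn from the government AI policy. Names and scoring are illustrative.

BASELINE = ("ai_register", "impact_assessments", "accountable_owners", "staff_training")

def baseline_gaps(status: dict[str, bool]) -> list[str]:
    """Return the baseline elements an organisation has not yet implemented."""
    return [element for element in BASELINE if not status.get(element, False)]

current_state = {
    "ai_register": True,
    "impact_assessments": False,
    "accountable_owners": True,
    "staff_training": False,
}
print(baseline_gaps(current_state))   # ['impact_assessments', 'staff_training']
```

An empty gap list is the minimum credible answer to the question regulators and auditors are starting to ask: "show us your AI governance."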


The trust deficit: why governance matters beyond compliance

No analysis of Australia's AI regulatory framework is complete without acknowledging the public trust context in which it operates.

Australia faces a pronounced trust deficit in AI adoption. According to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks; just 36% of citizens trust AI systems more broadly. Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate.

This trust deficit isn't merely a public relations problem — it is a strategic risk. Australia's regulatory recalibration comes amid persistently low public trust in AI, creating a genuine policy challenge: how to build accountability, safety and transparency without constraining the very innovation needed to realise AI's economic and social potential.

For organisations in healthcare, financial services, and legal services — where AI decisions directly affect individuals' lives, finances, and rights — demonstrating governance alignment with the AI Ethics Principles and the NAIC's AI6 framework is not just a compliance exercise. It is a prerequisite for customer trust and market licence. For a deeper examination of the risks driving this trust deficit, see our article AI Risks and Ethical Challenges Facing Australian Industries: Bias, Accountability and Trust.


Key takeaways

  • No standalone AI Act, but compliance obligations are live. Australia currently has no binding, AI-specific statutes or regulations. The government's approach remains largely voluntary and consultative, emphasising ethical guidance now, with targeted reforms expected later. However, existing laws — especially the Privacy Act, APRA standards, and the ACL — create real, enforceable obligations today. Don't let the absence of an AI Act lull you into inaction.

  • The Privacy Act ADM transparency requirement is a hard deadline. From 10 December 2026, amendments introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth) will commence, imposing new transparency obligations on APP entities — particularly in relation to the use of automated decision-making involving personal information. Organisations must audit and disclose AI-influenced decision processes before this date.

  • The NAIC's AI6 framework is the current governance standard. In October 2025, the National AI Centre published updated Guidance for AI Adoption, which sets out six essential practices (AI6) and is now the primary government guidance for responsible AI governance and adoption. Aligning with AI6 now positions organisations ahead of any future mandatory requirements.

  • The AI Safety Institute changes the enforcement trajectory. The AI Safety Institute, which will become operational in early 2026, is intended to help government keep pace with rapid AI developments, assess risks from advanced AI systems, coordinate insights across regulators, and support international AI safety commitments. Its technical testing capacity will raise the bar for high-risk AI systems.

  • Financial services face the most immediate binding obligations. APRA's CPS 234 and CPS 230 create mandatory, board-level accountability for AI systems that touch information assets — and APRA has demonstrated it will enforce them. Even with heavy AI-specific regulation paused, expectations for transparency, testing, oversight and workforce capability are rising.


Conclusion

Australia's AI regulatory framework in 2025–2026 is best understood not as a gap, but as an architecture under active construction. The foundations are the eight AI Ethics Principles. The structural frame is the NAIC's AI6 Guidance for AI Adoption. The enforcement mechanisms are existing laws — Privacy Act, APRA standards, Australian Consumer Law — interpreted and applied by active, increasingly confident regulators. And the new AI Safety Institute is the quality assurance layer that will increasingly test, monitor, and advise on whether the whole structure is holding.

For businesses across real estate, healthcare, finance, mining, legal services, and marketing, the practical mandate is clear: do not wait for a standalone AI Act to build your governance framework. The obligations are live. The regulators are active. And the trust deficit makes voluntary compliance a competitive differentiator as much as a legal necessity. Build your framework now — the organisations that do will be better positioned when the regulatory dial inevitably turns.

To understand how this regulatory framework intersects with data sovereignty and cross-border AI processing, see our article AI Data Sovereignty and Privacy Compliance for Australian Organisations: What You Need to Know. For a practical implementation roadmap, see How to Build an AI Strategy for an Australian Business: A Step-by-Step Implementation Guide.


References

  • Department of Industry, Science and Resources (Australian Government). "Australia's AI Ethics Principles." industry.gov.au, 2019 (updated November 2025). https://www.industry.gov.au/publications/australias-ai-ethics-principles

  • Department of Industry, Science and Resources / National Artificial Intelligence Centre. "Guidance for AI Adoption." industry.gov.au, October 2025. https://www.industry.gov.au/publications/guidance-for-ai-adoption

  • Digital Transformation Agency (Australian Government). "Policy for the Responsible Use of AI in Government — Version 2.0." digital.gov.au, December 2025. https://www.digital.gov.au/ai/ai-in-government-policy

  • Department of Industry, Science and Resources (Australian Government). "National AI Plan: Keep Australians Safe." industry.gov.au, December 2025. https://www.industry.gov.au/publications/national-ai-plan/keep-australians-safe

  • Department of Industry, Science and Resources (Australian Government). "Australia to Establish New Institute to Strengthen AI Safety." industry.gov.au, November 2025. https://www.industry.gov.au/news/australia-establish-new-institute-strengthen-ai-safety

  • MinterEllison. "Australia Introduces a National AI Plan: Four Things Leaders Need to Know." minterellison.com, December 2025. https://www.minterellison.com/articles/australia-introduces-a-national-ai-plan-four-things-leaders-need-to-know

  • IAPP (International Association of Privacy Professionals). "Global AI Governance Law and Policy: Australia." iapp.org, November 2025. https://iapp.org/resources/article/global-ai-governance-australia

  • University of Melbourne and KPMG. "Trust, Attitudes and Use of Artificial Intelligence: A Global Study." 2025. (Referenced in IAPP Global AI Governance: Australia.)

  • Australian Prudential Regulation Authority. "For Action: Information Security Obligations and Critical Authentication Controls." apra.gov.au, June 2025. https://www.apra.gov.au/for-action-information-security-obligations-and-critical-authentication-controls

  • Department of Finance (Australian Government). "Implementing Australia's AI Ethics Principles in Government." finance.gov.au, 2025. https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government/implementing-australias-ai-ethics-principles-government

  • Bird & Bird. "Australian Government to Establish AI Safety Institute." twobirds.com, 2025. https://www.twobirds.com/en/insights/2025/australia/australian-government-to-establish-ai-safety-institute

  • Gadens. "Australia Launches AI Safety Institute and Releases National AI Plan." gadens.com, December 2025. https://www.gadens.com/legal-insights/australia-launches-ai-safety-institute-and-releases-national-ai-plan/

  • Johnson Winter Slattery (JWS). "Practical Implications of the New Transparency Requirements for Automated Decision Making." jws.com.au, December 2025. https://jws.com.au/what-we-think/practical-implications-of-new-transparency-requirements-for-automated-decision-making/

  • Spruson & Ferguson. "Privacy and AI Regulations: 2024 Review and 2025 Outlook." spruson.com, January 2025. https://www.spruson.com/privacy-and-ai-regulations-2024-review-2025-outlook/

  • Australian Government / Open Government Partnership. "Transparency of Automated Decision Making (AU0024)." opengovpartnership.org, 2025. https://www.opengovpartnership.org/members/australia/commitments/AU0024/

  • SafeAI-Aus. "Current Legal Landscape for AI in Australia." safeaiaus.org, January 2026. https://safeaiaus.org/safety-standards/ai-australian-legislation/

  • Productivity Commission (Australian Government). "Productivity Commission Inquiry into AI." Referenced in Nemko Digital AI Governance Australia, August 2025.


Frequently Asked Questions

Does Australia have a standalone AI Act: No

Will Australia introduce a standalone AI Act: No, per the National AI Plan December 2025

When was the National AI Plan released: December 2025

What is Australia's preferred AI regulatory approach: Standards-led, not legislation-led

Does Australia follow the EU AI Act model: No

Are Australia's AI compliance obligations currently active: Yes

When were Australia's AI Ethics Principles established: 2019

How many AI Ethics Principles does Australia have: Eight

Are the AI Ethics Principles legally binding: No, they are voluntary

What is the first AI Ethics Principle: Human, societal and environmental wellbeing

What is the second AI Ethics Principle: Human-centred values

What is the third AI Ethics Principle: Fairness

What is the fourth AI Ethics Principle: Privacy protection and security

What is the fifth AI Ethics Principle: Reliability and safety

What is the sixth AI Ethics Principle: Transparency and explainability

What is the seventh AI Ethics Principle: Contestability

What is the eighth AI Ethics Principle: Accountability

Do Australia's AI Ethics Principles align with international standards: Yes, they align with OECD AI Principles

What replaced the Voluntary AI Safety Standard: The NAIC Guidance for AI Adoption

When was the NAIC Guidance for AI Adoption published: 21 October 2025

What is the "AI6" framework: Six essential governance practices for AI developers and deployers

Who published the AI6 guidance: The National AI Centre (NAIC)

How many practices does the AI6 framework contain: Six

Is the AI6 framework mandatory: No, it is guidance

Does the NAIC provide an AI register template: Yes

Does the NAIC provide an AI screening tool: Yes

Does the NAIC provide a policy guide: Yes, a 12-page policy guide

When was Australia's AI Safety Institute announced: 25 November 2025

When will the AI Safety Institute become operational: Early 2026

How much funding was committed to the AI Safety Institute: Just under AUD 30 million

What will the AI Safety Institute do: Monitor, test and share information on AI capabilities, risks and harms

Does the AI Safety Institute replace existing laws: No, it complements them

Will the AI Safety Institute join an international network: Yes, the International Network of AI Safety Institutes

Which countries are in that international network: US, UK, Canada, South Korea and Japan

What is the primary law regulating personal information in Australia: The Privacy Act 1988 (Cth)

What is the key AI-related Privacy Act amendment: New automated decision-making transparency requirements

When do the automated decision-making transparency requirements commence: 10 December 2026

What legislation introduced the ADM transparency requirements: Privacy and Other Legislation Amendment Act 2024 (Cth)

What is APP 1.7: A new provision requiring disclosure of automated decision-making in privacy policies

Does APP 1.7 ban automated decisions: No, it requires transparency and disclosure

What types of decisions trigger APP 1.7 obligations: Decisions with legal or similarly significant effect on individuals

Name one sector where APP 1.7 applies: Employment decisions

Name a second sector where APP 1.7 applies: Credit and financial product decisions

Name a third sector where APP 1.7 applies: Healthcare decisions

Who is Australia's privacy regulator: The Office of the Australian Information Commissioner (OAIC)

Has the OAIC taken enforcement action on AI: Yes

Which companies did the OAIC act against for facial recognition: Clearview AI, Bunnings Group, and Kmart Australia

What did those companies do wrong: Unlawfully collected biometric information using facial recognition

What is APRA CPS 234: A mandatory information security standard for financial institutions

Is CPS 234 voluntary: No, it is mandatory

Who must comply with CPS 234: APRA-regulated financial institutions and material service providers

Does CPS 234 apply to cloud providers: Yes

When did CPS 230 come into effect: 1 July 2025

What does CPS 230 focus on: Operational resilience of financial institutions

Is CPS 230 binding: Yes, for APRA-regulated entities

What is the Financial Accountability Regime: A regime linking information security to named executive accountability

Which regulator oversees AI in financial services: APRA and ASIC

Which regulator oversees AI medical devices: The Therapeutic Goods Administration (TGA)

Which regulator oversees AI in recruitment and HR: The Fair Work Commission

What percentage of Australians believe AI benefits outweigh risks: 30%

What percentage of Australians trust AI systems: 36%

What percentage of Australians are concerned about AI negative outcomes: 78%

What percentage believe current AI laws are adequate: 30%

Who conducted the 2025 AI trust study: University of Melbourne and KPMG

What does the government's AI policy require agencies to maintain: An internal register of all in-scope AI use cases

What must agencies complete before AI deployment: An AI impact assessment

Is AI training mandatory for Australian Public Service staff: Yes

When did the updated government AI policy come into effect: 15 December 2025

What economic value does AI represent for Australia over the next decade: AUD 116 billion

Who estimated Australia's AI economic potential at AUD 116 billion: The Productivity Commission

What did the Productivity Commission caution against: Overly stringent AI regulation

Who argued voluntary frameworks lack legal certainty: ACCC Senior Investigator Rosie Evans

Does the ACL apply to AI systems: Yes

What does the Treasury recommend regarding the ACL and AI: Targeted clarifications, not new AI-specific laws

Is there a mandatory AI governance framework for Australian private businesses: No, not currently

What is the governance baseline regulators increasingly expect: AI register, impact assessment, accountable owner, and mandatory training


Label facts summary

Disclaimer: All facts and statements below are general informational summaries drawn from publicly available regulatory and policy sources, not legal advice. Consult qualified legal or compliance professionals for guidance specific to your organisation.

Verified label facts

  • Australia's AI Ethics Principles were established in 2019
  • There are eight AI Ethics Principles
  • The AI Ethics Principles are voluntary, not legally binding
  • The AI Ethics Principles align with the OECD AI Principles
  • The NAIC Guidance for AI Adoption was published on 21 October 2025 by the National AI Centre (NAIC), Department of Industry, Science and Resources
  • The Guidance for AI Adoption replaced the Voluntary AI Safety Standard (VAISS)
  • The AI6 framework contains six essential governance practices
  • The AI6 framework is guidance, not mandatory
  • The NAIC provides an AI register template, AI screening tool, and a 12-page policy guide
  • Australia's AI Safety Institute was announced on 25 November 2025
  • The AI Safety Institute will become operational in early 2026
  • Funding committed to the AI Safety Institute is just under AUD 30 million
  • Australia will join the International Network of AI Safety Institutes, alongside the US, UK, Canada, South Korea, and Japan
  • The National AI Plan was released in December 2025 and confirms no standalone AI Act will be introduced
  • The primary law regulating personal information in Australia is the Privacy Act 1988 (Cth)
  • Automated decision-making transparency requirements were introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth)
  • The ADM transparency requirements are scheduled to commence on 10 December 2026
  • APP 1.7 requires APP entities using automated decision-making to include specified information in their privacy policy
  • APP 1.7 does not ban automated decisions; it imposes transparency and disclosure obligations
  • The OAIC took enforcement action against Clearview AI, Bunnings Group, and Kmart Australia for unlawful collection of biometric information using facial recognition technology
  • APRA CPS 234 is a mandatory information security standard for APRA-regulated financial institutions and material service providers, including cloud providers
  • APRA CPS 230 came into effect on 1 July 2025 and focuses on operational resilience of financial institutions
  • The updated Policy for the Responsible Use of AI in Government came into effect on 15 December 2025
  • Under the government AI policy, agencies must maintain an internal register of all in-scope AI use cases and assign an accountable owner for each
  • Agencies must complete an AI impact assessment prior to deployment of each in-scope use case
  • Foundational AI training is mandatory for all Australian Public Service staff
  • The Productivity Commission estimated Australia's AI economic potential at AUD 116 billion over the next decade (August 2025)
  • A 2025 study by the University of Melbourne and KPMG found 30% of Australians believe AI benefits outweigh risks
  • The same study found 36% of Australians trust AI systems broadly
  • 78% of respondents in the same study expressed concern about negative AI outcomes
  • 30% of respondents in the same study believe current laws and safeguards are adequate

General product claims

  • Australia's AI governance architecture is described as "deliberately incomplete" but not a vacuum
  • The gap between "no AI Act" and "no AI regulation" is characterised as wide, with real compliance exposure for businesses
  • The NAIC's AI6 tools are described as "the most actionable entry point for any Australian business beginning its AI governance journey"
  • The AI Safety Institute is characterised as signalling a "clear shift in the regulatory trajectory"
  • Aligning with AI6 now is described as positioning organisations "ahead of any future mandatory requirements"
  • The trust deficit is characterised as "a strategic risk," not merely a public relations problem
  • Voluntary compliance is described as "a competitive differentiator as much as a legal necessity"
  • The government's AI register, impact assessment, accountable owner, and mandatory training requirements are described as the "governance baseline that regulators and auditors are increasingly expecting from private organisations"
  • Organisations not building toward the government governance standard are described as "already behind"
  • The AI Safety Institute's technical testing capacity is described as set to "raise the bar for high-risk AI systems"
  • Australia's regulatory framework is characterised as "an architecture under active construction" rather than a gap