{
  "id": "technology-digital-transformation/ai-industry-applications-australia/australias-ai-regulatory-framework-ethics-principles-governance-standards-and-what-businesses-must-know",
  "title": "Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and What Businesses Must Know",
  "slug": "technology-digital-transformation/ai-industry-applications-australia/australias-ai-regulatory-framework-ethics-principles-governance-standards-and-what-businesses-must-know",
  "description": "",
  "category": "",
  "content": "## AI Summary\n\n**Product:** Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and Business Compliance Guide\n**Brand:** Australian Government / National AI Centre (NAIC)\n**Category:** Regulatory Compliance & AI Governance\n**Primary Use:** A structured reference mapping every layer of Australia's AI governance — from the eight AI Ethics Principles to Privacy Act obligations — for organisations deploying AI across regulated industries.\n\n### Quick Facts\n- **Best For:** Australian businesses in financial services, healthcare, legal services, mining, real estate, and marketing deploying or planning to deploy AI systems\n- **Key Benefit:** Consolidates distributed AI compliance obligations across multiple frameworks into a single actionable reference, clarifying what is enforceable today versus aspirational\n- **Form Factor:** Regulatory analysis and compliance guide\n- **Application Method:** Reference against existing AI deployments; implement NAIC AI6 framework tools (AI register, screening tool, policy guide) as governance baseline\n\n### Common Questions This Guide Answers\n1. Does Australia have a standalone AI Act? → No — the National AI Plan (December 2025) confirms no standalone AI Act will be introduced; compliance obligations are distributed across existing laws including the Privacy Act, APRA standards, and the Australian Consumer Law\n2. When do the automated decision-making transparency requirements under APP 1.7 take effect? → 10 December 2026, introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth), requiring APP entities to disclose AI-influenced decision-making in their privacy policies\n3. What is the current government-endorsed AI governance standard for Australian businesses? 
→ The NAIC Guidance for AI Adoption (published 21 October 2025), which sets out the AI6 framework — six essential governance practices — and provides an AI register template, screening tool, and 12-page policy guide\n\n---\n\n## Australia's AI Regulatory Framework: Ethics Principles, Governance Standards and What Businesses Must Know\n\nEvery Australian organisation deploying AI — a bank running algorithmic credit scoring, a hospital using diagnostic imaging AI, a law firm automating contract review, a miner operating autonomous haul trucks — is navigating a regulatory environment that's simultaneously evolving and deliberately incomplete. Unlike the European Union, which passed its landmark AI Act in 2024, Australia has made a clear policy call: no standalone AI legislation, at least for now. Instead, the country is building a governance architecture from existing laws, voluntary frameworks, a new oversight institution, and sector-specific regulatory expectations.\n\nUnderstanding this architecture isn't optional. The gap between \"no AI Act\" and \"no AI regulation\" is wide, and businesses that mistake one for the other face real compliance exposure today, with greater exposure coming as the framework tightens. This article maps every layer of Australia's AI governance, from the foundational Ethics Principles to the incoming Privacy Act transparency obligations, and breaks down what each layer means for organisations across real estate, healthcare, finance, mining, legal services, and marketing.\n\n---\n\n## Australia's deliberate choice: standards-led, not legislation-led\n\nThe most important thing to understand about Australia's AI regulatory posture is that it reflects a considered policy decision, not a vacuum.\n\nAustralia's AI regulatory journey has shifted from an early plan to introduce an EU-style, risk-based regime toward a more flexible, standards-led approach. 
What began as a move toward prescriptive guardrails and potential legislation has given way to a focus on productivity, innovation, and the use of existing legal frameworks.\n\nThe National AI Plan, released in December 2025, confirms there will be no standalone AI Act. Nor will the government proceed with the previously proposed mandatory guardrails for AI in high-risk settings. Instead, Australia will continue to rely on existing laws — privacy, consumer protection, copyright, workplace law, sector-specific regulation, and online safety — meaning organisations will continue to navigate a complex regulatory patchwork when assessing and managing AI risk.\n\nThis choice has both supporters and critics. In August 2025, the Productivity Commission cautioned that overly stringent AI regulation could stifle Australia's economic potential, estimated at AUD 116 billion over the next decade, and recommended reserving AI-specific regulation for where current laws genuinely fall short. On the other side, ACCC Senior Investigator Rosie Evans wrote that voluntary documents \"do not provide the legal certainty regulation would create,\" arguing that \"without an enforceable regime specifically for AI, Australia may struggle to achieve the regulatory cohesion and effectiveness currently aspired to by government.\"\n\nThe practical implication for businesses? Compliance obligations are real and active — they're just distributed across multiple frameworks rather than consolidated in a single statute.\n\n---\n\n## The eight AI Ethics Principles: the foundation layer\n\nEstablished in 2019, the Australian AI Ethics Principles comprise eight voluntary guidelines covering fairness, accountability, transparency, reliability, privacy and security, human-centred values, contestability, and human/social/environmental wellbeing. 
They align with the OECD AI Principles.\n\nThe principles were designed to guide businesses and governments to responsibly design, develop and implement AI, and formed part of the Australian Government's commitment to make Australia a global leader in responsible and inclusive AI.\n\nThe eight principles are:\n\n1. **Human, societal and environmental wellbeing** — AI systems should benefit individuals, society and the environment\n2. **Human-centred values** — Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals\n3. **Fairness** — AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups\n4. **Privacy protection and security** — AI systems should respect privacy and data protection, including proper data governance and management for all data used and generated throughout the system's lifecycle\n5. **Reliability and safety** — Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose\n6. **Transparency and explainability** — There should be transparency and responsible disclosure so people can know when they are being significantly impacted by AI, and can find out when an AI system is engaging with them\n7. **Contestability** — When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system\n8. 
**Accountability** — People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled\n\nWhile voluntary, these principles underpin every subsequent layer of the governance framework and are explicitly referenced in sector-specific guidance, the government's AI policy for the public sector, and the NAIC's Guidance for AI Adoption.\n\n---\n\n## The NAIC's Guidance for AI Adoption: the \"AI6\" framework\n\nOn 21 October 2025, the Department of Industry, Science and Resources published the Guidance for AI Adoption, released through the National AI Centre (NAIC). The guidance effectively replaces the earlier Voluntary AI Safety Standard (VAISS), streamlining its 10 guardrails into the \"AI6\" — six essential governance practices for AI developers and deployers — while maintaining alignment with Australia's AI Ethics Principles and international standards.\n\nThe NAIC has also published a suite of practical implementation tools organisations can use right now:\n\n- An AI register template, an AI screening tool, mappings to Australia's AI Ethics Principles and the Voluntary AI Safety Standard, and a glossary — useful scaffolding for immediate adoption\n- A 12-page policy guide that provides a ready-to-adopt AI policy skeleton organisations can tailor to their context, covering purpose, scope, and pragmatic principle-based policy statements across ethics, accountability, risk assessment, quality/security, fairness, transparency, and human oversight\n\nThese tools are the most actionable entry point for any Australian business beginning its AI governance journey. 
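\n\nAs a purely illustrative sketch (the field names and screening logic below are assumptions made for this article, not the NAIC's actual templates), an AI use-case register entry and a basic screening check might be modelled like this:\n\n
```python
from dataclasses import dataclass

# Hypothetical register entry; field names are illustrative, not the NAIC template.
@dataclass
class AIUseCase:
    name: str
    accountable_owner: str             # a named, identifiable owner (Accountability principle)
    purpose: str
    uses_personal_information: bool    # triggers Privacy Act considerations
    makes_significant_decisions: bool  # e.g. credit, employment, healthcare outcomes
    human_oversight: bool

def screen(use_case: AIUseCase) -> str:
    """Simplified screening logic (an assumption for illustration only):
    route higher-risk use cases to a full impact assessment before deployment."""
    if use_case.makes_significant_decisions and not use_case.human_oversight:
        return "blocked: add human oversight before deployment"
    if use_case.uses_personal_information or use_case.makes_significant_decisions:
        return "impact assessment required"
    return "standard governance controls"

# Example register with two entries
register = [
    AIUseCase("contract-review-assistant", "General Counsel",
              "summarise supplier contracts", True, False, True),
    AIUseCase("credit-scoring-model", "Chief Risk Officer",
              "score consumer loan applications", True, True, True),
]

for uc in register:
    print(f"{uc.name}: {screen(uc)}")
```
\n\nWhatever form the register takes, the design point is the same: every use case carries a named accountable owner and a recorded risk screen before deployment.\n\n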
For a step-by-step guide to implementing them, see our article *How to Build an AI Strategy for an Australian Business*.\n\n---\n\n## The AI Safety Institute: Australia's new technical oversight body\n\nOn 25 November 2025, the Commonwealth Government announced it would establish a national AI Safety Institute (AISI). The AISI will strengthen testing, evaluation and oversight of advanced AI systems, coordinate with regulators such as the Office of the Australian Information Commissioner, and support risk-based regulatory responses to AI.\n\nThe National AI Plan commits just under AUD 30 million to fund the AI Safety Institute, which will become operational in early 2026.\n\nThe AISI will monitor, test and share information on emerging AI capabilities, risks and harms. Its insights will support ministers, portfolio agencies and regulators to maintain safety measures, laws and regulatory frameworks that keep pace with rapid technological change.\n\nCritically, the Government has emphasised that the AISI will complement existing legal and regulatory frameworks that already protect Australians' rights and safety, rather than replace them. Australia will also join the International Network of AI Safety Institutes, aligning local practice with comparable efforts in the US, UK, Canada, South Korea and Japan.\n\nFor businesses, the AISI signals a clear shift in the regulatory trajectory: regulators will increasingly have in-house capacity to interrogate and test models, rather than relying solely on high-level principles or industry self-assessment. Systems with serious or systemic risk potential — security-relevant capabilities, critical infrastructure, influence operations, large-scale decision-making — can expect heightened scrutiny and more prescriptive expectations.\n\n---\n\n## The Privacy Act 1988: the most immediately actionable obligation\n\nThe Privacy Act 1988 (Cth) remains the primary law regulating the handling of personal information in Australia. 
The Act is principles-based and is currently undergoing significant reform following the government's multiyear review, which commenced before the rise of generative AI.\n\nThe most significant AI-relevant change is already law. The Privacy and Other Legislation Amendment Act 2024 (Cth), passed in December 2024, introduces new transparency requirements that will apply where APP entities use computer programs in certain decision-making involving personal information. The relevant provisions are scheduled to commence on 10 December 2026.\n\nUnder the new APP 1.7, if an APP entity uses automated decision-making, it must include certain information in its privacy policy. Organisations using AI to make or materially contribute to decisions that significantly affect individuals must disclose this use and provide meaningful information about how the AI works. This is not a blanket ban on automated decisions — it's a transparency and accountability obligation.\n\nThe reforms focus on decisions that have a legal or similarly significant effect on individuals — in practice, decisions about employment, access to credit or financial products, insurance coverage, housing, healthcare, and government services.\n\nAustralia's privacy regulator, the Office of the Australian Information Commissioner (OAIC), has been proactive in interpreting the Act in AI contexts and is actively regulating AI through interpretation and enforcement rather than waiting for dedicated legislation. 
The OAIC's enforcement record already includes landmark determinations against Clearview AI, Bunnings Group, and Kmart Australia for unlawful collection of biometric information using facial recognition technology.\n\n---\n\n## APRA prudential standards: binding obligations for financial services\n\nFor organisations in financial services — banks, insurers, superannuation funds, and credit unions — the Australian Prudential Regulation Authority (APRA) operates a binding regulatory framework that is directly relevant to AI deployment.\n\nAPRA CPS 234 is a mandatory information security standard for Australian financial institutions. It requires strong controls, clear governance, third-party oversight and continuous testing. All APRA-regulated entities must comply, and they remain responsible for ensuring that material service providers, including cloud providers, maintain equivalent controls in outsourced environments.\n\nCPS 234 is not merely a cybersecurity standard — it functions as an AI governance instrument. Any AI system that processes, stores, or generates decisions based on financial or personal data is an \"information asset\" under CPS 234 and must be classified, risk-assessed, and protected accordingly.\n\nCPS 230, which came into effect on 1 July 2025, focuses on the operational resilience of financial institutions, covering business continuity, risk management frameworks, and testing resilience strategies.\n\nAPRA's explicit connection of CPS 234 compliance to the Financial Accountability Regime (FAR) means that information security — and by extension, AI governance — is no longer solely a technical or operational matter. 
It is an accountability assigned to named executives.\n\nFor a detailed analysis of how these standards apply to AI in banking, robo-advice, and fraud detection, see our article *AI in Australian Financial Services: Fraud Detection, Credit Decisioning and Wealth Management Automation*.\n\n---\n\n## Sector-specific regulatory overlays: what each industry must know\n\nAustralia's \"existing laws\" approach means AI compliance obligations differ materially by sector. The table below summarises the primary regulatory instruments applicable to AI across the six target industries:\n\n| Industry | Primary AI-Relevant Regulator(s) | Key Instruments |\n|---|---|---|\n| **Financial Services** | APRA, ASIC | CPS 234, CPS 230, responsible lending obligations, market integrity rules |\n| **Healthcare** | TGA, OAIC | AI medical device guidance, My Health Record Act, Privacy Act |\n| **Legal Services** | Law Council, state Law Societies | Professional conduct rules, Privacy Act, ACL |\n| **Mining** | Safe Work Australia, state regulators | Work health and safety laws, Privacy Act (workforce data) |\n| **Real Estate** | ASIC, state fair trading agencies | ACL, Privacy Act, anti-discrimination law |\n| **Marketing** | ACCC, OAIC | Australian Consumer Law, Privacy Act (ADM transparency) |\n\nSector-specific obligations include: ASIC requiring AI use in lending, trading and advice to align with responsible lending and market integrity obligations; APRA applying additional standards to AI in risk management and critical infrastructure oversight; the TGA requiring AI medical devices to comply with therapeutic goods regulation; and the Fair Work Commission overseeing algorithmic decision-making in recruitment and HR for compliance with employment and discrimination laws.\n\nThe Treasury's Final Report on AI and the Australian Consumer Law recommends targeted clarifications to the ACL's application to AI systems — especially regarding definitions of goods/services, manufacturer liability and algorithmic 
representations — with an emphasis on reinforcing the ACL's existing principles rather than introducing AI-specific laws.\n\n---\n\n## The government's own obligations: a model for the private sector\n\nOn 15 December 2025, the updated Policy for the Responsible Use of AI in Government came into effect, strengthening how agencies across the Australian Public Service govern the use of AI and reinforcing safeguards that support safe, transparent and trusted adoption.\n\nUnder this policy, agencies must maintain an internal register of all in-scope AI use cases and assign an accountable owner for each one. Prior to deployment, agencies must also complete an AI impact assessment for each in-scope use case.\n\nFoundational AI training is now mandatory for all staff across the Australian Public Service, building a consistent baseline of understanding of responsible AI use.\n\nThis government framework is instructive for private sector organisations. The AI register, impact assessment, accountable owner, and mandatory training requirements that apply to government agencies represent the governance baseline that regulators and auditors are increasingly expecting from private organisations as well. If you're not building toward this standard now, you're already behind.\n\n---\n\n## The trust deficit: why governance matters beyond compliance\n\nNo analysis of Australia's AI regulatory framework is complete without acknowledging the public trust context in which it operates.\n\nAustralia faces a pronounced trust deficit in AI adoption. According to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks; just 36% of citizens trust AI systems more broadly. Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate.\n\nThis trust deficit isn't merely a public relations problem — it is a strategic risk. 
Australia's regulatory recalibration comes amid persistently low public trust in AI, creating a genuine policy challenge: how to build accountability, safety and transparency without constraining the very innovation needed to realise AI's economic and social potential.\n\nFor organisations in healthcare, financial services, and legal services — where AI decisions directly affect individuals' lives, finances, and rights — demonstrating governance alignment with the AI Ethics Principles and the NAIC's AI6 framework is not just a compliance exercise. It is a prerequisite for customer trust and market licence. For a deeper examination of the risks driving this trust deficit, see our article *AI Risks and Ethical Challenges Facing Australian Industries: Bias, Accountability and Trust*.\n\n---\n\n## Key takeaways\n\n- **No standalone AI Act, but compliance obligations are live.** Australia currently has no binding, AI-specific statutes or regulations. The government's approach remains largely voluntary and consultative, emphasising ethical guidance now, with targeted reforms expected later. However, existing laws — especially the Privacy Act, APRA standards, and the ACL — create real, enforceable obligations today. Don't let the absence of an AI Act lull you into inaction.\n\n- **The Privacy Act ADM transparency requirement is a hard deadline.** From 10 December 2026, amendments introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth) will commence, imposing new transparency obligations on APP entities — particularly in relation to the use of automated decision-making involving personal information. 
Organisations must audit and disclose AI-influenced decision processes before this date.\n\n- **The NAIC's AI6 framework is the current governance standard.** In October 2025, the National AI Centre published updated Guidance for AI Adoption, which sets out six essential practices (AI6) and is now the primary government guidance for responsible AI governance and adoption. Aligning with AI6 now positions organisations ahead of any future mandatory requirements.\n\n- **The AI Safety Institute changes the enforcement trajectory.** The AI Safety Institute, which will become operational in early 2026, is intended to help government keep pace with rapid AI developments, assess risks from advanced AI systems, coordinate insights across regulators, and support international AI safety commitments. Its technical testing capacity will raise the bar for high-risk AI systems.\n\n- **Financial services face the most immediate binding obligations.** APRA's CPS 234 and CPS 230 create mandatory, board-level accountability for AI systems that touch information assets — and APRA has demonstrated it will enforce. Expectations for governance and organisational readiness are rising, even without new laws. While heavy regulation is paused, organisations will face higher expectations for transparency, testing, oversight and workforce capability.\n\n---\n\n## Conclusion\n\nAustralia's AI regulatory framework in 2025–2026 is best understood not as a gap, but as an architecture under active construction. The foundations are the eight AI Ethics Principles. The structural frame is the NAIC's AI6 Guidance for AI Adoption. The enforcement mechanisms are existing laws — Privacy Act, APRA standards, Australian Consumer Law — interpreted and applied by active, increasingly confident regulators. 
And the new AI Safety Institute is the quality assurance layer that will increasingly test, monitor, and advise on whether the whole structure is holding.\n\nFor businesses across real estate, healthcare, finance, mining, legal services, and marketing, the practical mandate is clear: do not wait for a standalone AI Act to build your governance framework. The obligations are live. The regulators are active. And the trust deficit makes voluntary compliance a competitive differentiator as much as a legal necessity. Build your framework now — the organisations that do will be better positioned when the regulatory dial inevitably turns.\n\nTo understand how this regulatory framework intersects with data sovereignty and cross-border AI processing, see our article *AI Data Sovereignty and Privacy Compliance for Australian Organisations: What You Need to Know*. For a practical implementation roadmap, see *How to Build an AI Strategy for an Australian Business: A Step-by-Step Implementation Guide*.\n\n---\n\n## References\n\n- Department of Industry, Science and Resources (Australian Government). *\"Australia's AI Ethics Principles.\"* industry.gov.au, 2019 (updated November 2025). https://www.industry.gov.au/publications/australias-ai-ethics-principles\n\n- Department of Industry, Science and Resources / National Artificial Intelligence Centre. *\"Guidance for AI Adoption.\"* industry.gov.au, October 2025. https://www.industry.gov.au/publications/guidance-for-ai-adoption\n\n- Digital Transformation Agency (Australian Government). *\"Policy for the Responsible Use of AI in Government — Version 2.0.\"* digital.gov.au, December 2025. https://www.digital.gov.au/ai/ai-in-government-policy\n\n- Department of Industry, Science and Resources (Australian Government). *\"National AI Plan: Keep Australians Safe.\"* industry.gov.au, December 2025. 
https://www.industry.gov.au/publications/national-ai-plan/keep-australians-safe\n\n- Department of Industry, Science and Resources (Australian Government). *\"Australia to Establish New Institute to Strengthen AI Safety.\"* industry.gov.au, November 2025. https://www.industry.gov.au/news/australia-establish-new-institute-strengthen-ai-safety\n\n- MinterEllison. *\"Australia Introduces a National AI Plan: Four Things Leaders Need to Know.\"* minterellison.com, December 2025. https://www.minterellison.com/articles/australia-introduces-a-national-ai-plan-four-things-leaders-need-to-know\n\n- IAPP (International Association of Privacy Professionals). *\"Global AI Governance Law and Policy: Australia.\"* iapp.org, November 2025. https://iapp.org/resources/article/global-ai-governance-australia\n\n- University of Melbourne and KPMG. *\"Trust, Attitudes and Use of Artificial Intelligence: A Global Study.\"* 2025. (Referenced in IAPP Global AI Governance: Australia.)\n\n- Australian Prudential Regulation Authority. *\"For Action: Information Security Obligations and Critical Authentication Controls.\"* apra.gov.au, June 2025. https://www.apra.gov.au/for-action-information-security-obligations-and-critical-authentication-controls\n\n- Department of Finance (Australian Government). *\"Implementing Australia's AI Ethics Principles in Government.\"* finance.gov.au, 2025. https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government/implementing-australias-ai-ethics-principles-government\n\n- Bird & Bird. *\"Australian Government to Establish AI Safety Institute.\"* twobirds.com, 2025. https://www.twobirds.com/en/insights/2025/australia/australian-government-to-establish-ai-safety-institute\n\n- Gadens. *\"Australia Launches AI Safety Institute and Releases National AI Plan.\"* gadens.com, December 2025. 
https://www.gadens.com/legal-insights/australia-launches-ai-safety-institute-and-releases-national-ai-plan/\n\n- Johnson Winter Slattery (JWS). *\"Practical Implications of the New Transparency Requirements for Automated Decision Making.\"* jws.com.au, December 2025. https://jws.com.au/what-we-think/practical-implications-of-new-transparency-requirements-for-automated-decision-making/\n\n- Spruson & Ferguson. *\"Privacy and AI Regulations: 2024 Review and 2025 Outlook.\"* spruson.com, January 2025. https://www.spruson.com/privacy-and-ai-regulations-2024-review-2025-outlook/\n\n- Australian Government / Open Government Partnership. *\"Transparency of Automated Decision Making (AU0024).\"* opengovpartnership.org, 2025. https://www.opengovpartnership.org/members/australia/commitments/AU0024/\n\n- SafeAI-Aus. *\"Current Legal Landscape for AI in Australia.\"* safeaiaus.org, January 2026. https://safeaiaus.org/safety-standards/ai-australian-legislation/\n\n- Productivity Commission (Australian Government). 
*\"Productivity Commission Inquiry into AI.\"* Referenced in Nemko Digital AI Governance Australia, August 2025.\n\n---\n\n## Frequently Asked Questions\n\n**Does Australia have a standalone AI Act:** No\n\n**Will Australia introduce a standalone AI Act:** No, per the National AI Plan December 2025\n\n**When was the National AI Plan released:** December 2025\n\n**What is Australia's preferred AI regulatory approach:** Standards-led, not legislation-led\n\n**Does Australia follow the EU AI Act model:** No\n\n**Are Australia's AI compliance obligations currently active:** Yes\n\n**When were Australia's AI Ethics Principles established:** 2019\n\n**How many AI Ethics Principles does Australia have:** Eight\n\n**Are the AI Ethics Principles legally binding:** No, they are voluntary\n\n**What is the first AI Ethics Principle:** Human, societal and environmental wellbeing\n\n**What is the second AI Ethics Principle:** Human-centred values\n\n**What is the third AI Ethics Principle:** Fairness\n\n**What is the fourth AI Ethics Principle:** Privacy protection and security\n\n**What is the fifth AI Ethics Principle:** Reliability and safety\n\n**What is the sixth AI Ethics Principle:** Transparency and explainability\n\n**What is the seventh AI Ethics Principle:** Contestability\n\n**What is the eighth AI Ethics Principle:** Accountability\n\n**Do Australia's AI Ethics Principles align with international standards:** Yes, they align with OECD AI Principles\n\n**What replaced the Voluntary AI Safety Standard:** The NAIC Guidance for AI Adoption\n\n**When was the NAIC Guidance for AI Adoption published:** 21 October 2025\n\n**What is the \"AI6\" framework:** Six essential governance practices for AI developers and deployers\n\n**Who published the AI6 guidance:** The National AI Centre (NAIC)\n\n**How many practices does the AI6 framework contain:** Six\n\n**Is the AI6 framework mandatory:** No, it is guidance\n\n**Does the NAIC provide an AI register template:** Yes\n\n**Does 
the NAIC provide an AI screening tool:** Yes\n\n**Does the NAIC provide a policy guide:** Yes, a 12-page policy guide\n\n**When was Australia's AI Safety Institute announced:** 25 November 2025\n\n**When will the AI Safety Institute become operational:** Early 2026\n\n**How much funding was committed to the AI Safety Institute:** Just under AUD 30 million\n\n**What will the AI Safety Institute do:** Monitor, test and share information on AI capabilities, risks and harms\n\n**Does the AI Safety Institute replace existing laws:** No, it complements them\n\n**Will the AI Safety Institute join an international network:** Yes, the International Network of AI Safety Institutes\n\n**Which countries are in that international network:** US, UK, Canada, South Korea and Japan\n\n**What is the primary law regulating personal information in Australia:** The Privacy Act 1988 (Cth)\n\n**What is the key AI-related Privacy Act amendment:** New automated decision-making transparency requirements\n\n**When do the automated decision-making transparency requirements commence:** 10 December 2026\n\n**What legislation introduced the ADM transparency requirements:** Privacy and Other Legislation Amendment Act 2024 (Cth)\n\n**What is APP 1.7:** A new provision requiring disclosure of automated decision-making in privacy policies\n\n**Does APP 1.7 ban automated decisions:** No, it requires transparency and disclosure\n\n**What types of decisions trigger APP 1.7 obligations:** Decisions with legal or similarly significant effect on individuals\n\n**Name one sector where APP 1.7 applies:** Employment decisions\n\n**Name a second sector where APP 1.7 applies:** Credit and financial product decisions\n\n**Name a third sector where APP 1.7 applies:** Healthcare decisions\n\n**Who is Australia's privacy regulator:** The Office of the Australian Information Commissioner (OAIC)\n\n**Has the OAIC taken enforcement action on AI:** Yes\n\n
**Which companies did the OAIC act against for facial recognition:** Clearview AI, Bunnings Group, and Kmart Australia\n\n**What did those companies do wrong:** Unlawfully collected biometric information using facial recognition\n\n**What is APRA CPS 234:** A mandatory information security standard for financial institutions\n\n**Is CPS 234 voluntary:** No, it is mandatory\n\n**Who must comply with CPS 234:** APRA-regulated financial institutions and material service providers\n\n**Does CPS 234 apply to cloud providers:** Yes\n\n**When did CPS 230 come into effect:** 1 July 2025\n\n**What does CPS 230 focus on:** Operational resilience of financial institutions\n\n**Is CPS 230 binding:** Yes, for APRA-regulated entities\n\n**What is the Financial Accountability Regime:** A regime linking information security to named executive accountability\n\n**Which regulators oversee AI in financial services:** APRA and ASIC\n\n**Which regulator oversees AI medical devices:** The Therapeutic Goods Administration (TGA)\n\n**Which regulator oversees AI in recruitment and HR:** The Fair Work Commission\n\n**What percentage of Australians believe AI benefits outweigh risks:** 30%\n\n**What percentage of Australians trust AI systems:** 36%\n\n**What percentage of Australians are concerned about AI negative outcomes:** 78%\n\n**What percentage believe current AI laws are adequate:** 30%\n\n**Who conducted the 2025 AI trust study:** The University of Melbourne and KPMG\n\n**What does the government's AI policy require agencies to maintain:** An internal register of all in-scope AI use cases\n\n**What must agencies complete before AI deployment:** An AI impact assessment\n\n**Is AI training mandatory for Australian Public Service staff:** Yes\n\n**When did the updated government AI policy come into effect:** 15 December 2025\n\n**What economic value does AI represent for Australia over the next decade:** AUD 116 billion\n\n**Who estimated Australia's AI economic potential at AUD 116 billion:** The Productivity Commission\n\n
**What did the Productivity Commission caution against:** Overly stringent AI regulation\n\n**Who argued voluntary frameworks lack legal certainty:** ACCC Senior Investigator Rosie Evans\n\n**Does the Australian Consumer Law (ACL) apply to AI systems:** Yes\n\n**What does the Treasury recommend regarding the ACL and AI:** Targeted clarifications, not new AI-specific laws\n\n**Is there a mandatory AI governance framework for Australian private businesses:** No, not currently\n\n**What is the governance baseline regulators increasingly expect:** AI register, impact assessment, accountable owner, and mandatory training\n\n---\n\n## Label facts summary\n\n> **Disclaimer:** All facts and statements below are general informational summaries drawn from publicly available regulatory and policy sources, not legal advice. Consult qualified legal or compliance professionals for guidance specific to your organisation.\n\n### Verified label facts\n\n- Australia's AI Ethics Principles were established in 2019\n- There are eight AI Ethics Principles\n- The AI Ethics Principles are voluntary, not legally binding\n- The AI Ethics Principles align with the OECD AI Principles\n- The NAIC Guidance for AI Adoption was published on 21 October 2025 by the National AI Centre (NAIC), Department of Industry, Science and Resources\n- The Guidance for AI Adoption replaced the Voluntary AI Safety Standard (VAISS)\n- The AI6 framework contains six essential governance practices\n- The AI6 framework is guidance, not mandatory\n- The NAIC provides an AI register template, AI screening tool, and a 12-page policy guide\n- Australia's AI Safety Institute was announced on 25 November 2025\n- The AI Safety Institute will become operational in early 2026\n- Funding committed to the AI Safety Institute is just under AUD 30 million\n- Australia will join the International Network of AI Safety Institutes, alongside the US, UK, Canada, South Korea, and Japan\n- The National AI Plan was released in December 2025 and confirms no standalone AI Act will be introduced\n\n
- The primary law regulating personal information in Australia is the Privacy Act 1988 (Cth)\n- Automated decision-making transparency requirements were introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth)\n- The ADM transparency requirements are scheduled to commence on 10 December 2026\n- APP 1.7 requires APP entities using automated decision-making to include specified information in their privacy policy\n- APP 1.7 does not ban automated decisions; it imposes transparency and disclosure obligations\n- The OAIC took enforcement action against Clearview AI, Bunnings Group, and Kmart Australia for unlawful collection of biometric information using facial recognition technology\n- APRA CPS 234 is a mandatory information security standard for APRA-regulated financial institutions and material service providers, including cloud providers\n- APRA CPS 230 came into effect on 1 July 2025 and focuses on operational resilience of financial institutions\n- The updated Policy for the Responsible Use of AI in Government came into effect on 15 December 2025\n- Under the government AI policy, agencies must maintain an internal register of all in-scope AI use cases and assign an accountable owner for each\n- Agencies must complete an AI impact assessment prior to deployment of each in-scope use case\n- Foundational AI training is mandatory for all Australian Public Service staff\n- The Productivity Commission estimated Australia's AI economic potential at AUD 116 billion over the next decade (August 2025)\n- A 2025 study by the University of Melbourne and KPMG found 30% of Australians believe AI benefits outweigh risks\n- The same study found 36% of Australians trust AI systems broadly\n- 78% of respondents in the same study expressed concern about negative AI outcomes\n- 30% of respondents in the same study believe current laws and safeguards are adequate\n\n### General product claims\n\n
- Australia's AI governance architecture is described as \"deliberately incomplete\" but not a vacuum\n- The gap between \"no AI Act\" and \"no AI regulation\" is characterised as wide, with real compliance exposure for businesses\n- The NAIC's AI6 tools are described as \"the most actionable entry point for any Australian business beginning its AI governance journey\"\n- The AI Safety Institute is characterised as signalling a \"clear shift in the regulatory trajectory\"\n- Aligning with AI6 now is described as positioning organisations \"ahead of any future mandatory requirements\"\n- The trust deficit is characterised as \"a strategic risk\", not merely a public relations problem\n- Voluntary compliance is described as \"a competitive differentiator as much as a legal necessity\"\n- The government's AI register, impact assessment, accountable owner, and mandatory training requirements are described as the \"governance baseline that regulators and auditors are increasingly expecting from private organisations\"\n- Organisations not building toward the government governance standard are described as \"already behind\"\n- The AI Safety Institute's technical testing capacity is described as set to \"raise the bar for high-risk AI systems\"\n- Australia's regulatory framework is characterised as \"an architecture under active construction\" rather than a gap",
  "geography": {},
  "metadata": {},
  "publishedAt": "",
  "workspaceId": "a3c8bfbc-1e6e-424a-a46b-ce6966e05ac0",
  "_links": {
    "canonical": "https://opensummitai.directory.norg.ai/technology-digital-transformation/ai-industry-applications-australia/australias-ai-regulatory-framework-ethics-principles-governance-standards-and-what-businesses-must-know/"
  }
}