


Australia's National AI Plan Explained: What WA Business Owners Must Understand About the Regulatory Landscape

For most Western Australian business owners, the phrase "national AI governance" conjures images of Canberra bureaucracy with little practical relevance to a retail shop in Fremantle or a professional services firm in West Perth. That assumption is increasingly costly. Australia's regulatory direction for AI is crystallising rapidly, and the decisions being made at a national level — about oversight bodies, governance frameworks, and the obligations placed on businesses using AI — will shape what responsible AI adoption looks like for every WA SME within the next two to three years.

This article translates the National AI Plan 2025, the newly established AI Safety Institute, and the National AI Centre's (NAIC) Guidance for AI Adoption into plain language that WA business owners can act on. It is designed to equip you to engage meaningfully with governance and regulatory sessions at Perth AI events — including those at the CDAO Perth summit and WA AI Hub forums — without needing a law degree to follow the conversation.


What Is Australia's National AI Plan?

On 2 December 2025, the Australian Government unveiled the National AI Plan 2025, its most comprehensive statement to date on how it intends to shape and manage the rapid expansion of AI technologies in Australia.

This is not just another strategy document — it is concrete confirmation that AI is a core economic, regulatory, and political priority for Australia.

The plan sets out the steps the government will take to support Australia to build an AI-enabled economy that is more competitive, productive, and resilient. It is structured around three goals:

  1. Capture the opportunity by building smart infrastructure, backing domestic AI capability, and attracting global investment.

  2. Spread the benefits through widespread AI adoption, support and training for Australian workers, and improved public services.

  3. Keep Australians safe with legislative and regulatory frameworks that mitigate AI harms, while promoting widespread responsible practices and international engagement that upholds Australia's values.

Critically, for organisations operating in Australia, this Plan sets the direction of travel for investment, regulation, workforce policy, and government procurement over the rest of this decade. While it does not itself create new legal obligations, it tells you where the law and regulators are heading, and how public funds will be deployed.

This distinction matters enormously. The Plan is not a compliance document today — but it is a reliable signal of where compliance obligations are heading.


Australia's Regulatory Approach: Why There Is No "AI Act" (Yet)

One of the most important things WA business owners need to understand is that Australia has consciously chosen a different path from the European Union's prescriptive AI Act model.

Rather than establishing mandatory guardrails for AI in high-risk settings, Australia will instead "continue to build on Australia's robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks."

In December 2025, the National AI Plan confirmed that, for now, Australia will rely on existing laws and sector regulators, supported by voluntary guidance and a new AI Safety Institute, rather than introducing a standalone AI Act or immediate mandatory guardrails.

This means the laws already governing your business — the Privacy Act 1988, Australian Consumer Law, the Fair Work Act, and sector-specific regimes administered by regulators such as ASIC, APRA, and the TGA, including those governing medical devices, critical infrastructure, and financial services — are the primary compliance framework for AI use.

For WA SMEs, this is practically significant: you are not starting from scratch with an entirely new compliance regime. You are extending your existing obligations into how AI is used in your operations.

What This Means in Practice: A Sector-by-Sector Snapshot

  • Financial Services — ASIC: AI in lending, trading, and advice must align with responsible lending and market integrity obligations
  • Healthcare — TGA: AI medical devices must comply with therapeutic goods regulation
  • Employment / HR — Fair Work Commission: algorithmic decision-making in recruitment must comply with employment and discrimination laws
  • Prudential / Banking — APRA: AI in risk management may attract additional standards
  • All sectors — OAIC (Privacy): AI use involving personal data must comply with the Australian Privacy Principles

Source: SafeAI-Aus, Current Legal Landscape for AI in Australia, January 2026


The AI Safety Institute: What It Is and Why It Matters for WA Businesses

On 25 November 2025, the Australian Government announced it will establish an Australian Artificial Intelligence Safety Institute (AISI) to respond to AI-related risks and harms, with operations expected to commence in early 2026.

The AISI will strengthen testing, evaluation, and oversight of advanced AI systems, coordinate with regulators such as the Office of the Australian Information Commissioner, and support risk-based regulatory responses to AI.

The Plan commits just under $30 million to fund the AI Safety Institute.

Australia will also join the International Network of AI Safety Institutes, aligning local practice with comparable efforts in the US, UK, Canada, South Korea, and Japan.

For WA businesses, the AISI's practical significance lies in what it signals about the direction of regulatory expectations. Regulators will increasingly have in-house capacity to interrogate and test models, rather than relying solely on high-level principles or industry self-assessment.

Participation in the international network means Australian expectations for testing, transparency, and incident response are likely to track those emerging in other leading AI jurisdictions.

In plain terms: the era of "no one is checking" is ending. The AISI represents a shift from voluntary self-declaration to technical scrutiny. WA businesses that build governance habits now will be far better positioned when that scrutiny intensifies.

"The vast majority of Australian businesses want to use AI safely and responsibly. Uncertainty about how to achieve this has discouraged many companies from investing in this transformative technology. By giving practical guidance to the private sector, the Institute can give companies confidence to make wise investments in adopting AI," according to Professor Nicholas Davis, Co-Director of the UTS Human Technology Institute.


The AI6 Framework: The Practical Governance Baseline for Australian Businesses

The most immediately actionable development for WA business owners is not the National AI Plan itself, but the governance framework it has consolidated as the national standard for responsible AI use.

On 21 October 2025, the NAIC released updated Guidance for AI Adoption, which effectively replaces the earlier Voluntary AI Safety Standard (VAISS). The new guidance articulates the "AI6" — six essential governance practices for AI developers and deployers.

The guidance is designed to meet businesses where they are on their AI adoption journey: Foundations provides practical steps for organisations that are starting with AI, including small businesses — focusing on aligning AI with business goals, establishing governance, and managing risk across six practices. Implementation Practices supports organisations that are scaling AI or managing more complex systems.

The guidance comes in two formats: Foundations (10 pages) for organisations getting started, and Implementation Practices (53 pages) offering detailed guidance broadly aligned with international AI management standards (ISO/IEC 42001:2023).

The Six AI6 Practices at a Glance

The six practices, as articulated by the NAIC's Guidance for AI Adoption, cover:

  1. Accountability — Every AI use case should have clear owners responsible for its outcomes
  2. Transparency — Customers and stakeholders must know when they are interacting with AI
  3. Human oversight — Critical decisions must be reviewable by humans
  4. Risk management — Systematic identification and mitigation of AI-specific risks
  5. Inclusive and fair outcomes — AI systems must not discriminate or amplify bias
  6. Privacy and data governance — Responsible handling of data used in AI systems

While the framework remains voluntary, it is poised to become a de facto benchmark for demonstrating accountability and maintaining public trust. Organisations that proactively align with these practices will be better positioned to navigate stakeholder expectations and regulatory scrutiny.

Companies should expect regulators to ask not only whether AI is used, but how it is governed. The AI6 is the answer to that question — the documented, auditable evidence that your business has thought carefully about responsible deployment.
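The six practices lend themselves to a simple self-assessment. As a minimal illustrative sketch — the practice names follow the NAIC's AI6, but the gap-check helper, the status values, and the example firm are hypothetical, not part of any official NAIC tool:

```python
# Illustrative AI6 gap check: record which of the six practices your
# business has documented evidence for, then list what remains.
# The practice names follow the NAIC's AI6; everything else here is
# a hypothetical sketch, not an official tool.

AI6_PRACTICES = [
    "Accountability",
    "Transparency",
    "Human oversight",
    "Risk management",
    "Inclusive and fair outcomes",
    "Privacy and data governance",
]

def ai6_gaps(evidence: dict[str, bool]) -> list[str]:
    """Return the AI6 practices with no documented evidence yet."""
    return [p for p in AI6_PRACTICES if not evidence.get(p, False)]

# Example: a small firm that has assigned owners and has a
# data-handling policy, but nothing else written down yet.
status = {"Accountability": True, "Privacy and data governance": True}
print(ai6_gaps(status))
# → ['Transparency', 'Human oversight', 'Risk management',
#    'Inclusive and fair outcomes']
```

A plain evidence list like this is often the quickest way for a small business to see which of the six practices it has not yet documented.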


The Trust Gap: A Challenge WA Businesses Cannot Ignore

Australia's regulatory framework is being shaped, in part, by a significant public trust deficit in AI. According to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks, and just 36% report trusting AI systems. Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate.

For WA business owners, this trust gap has direct commercial implications. Customers increasingly want to know how businesses are using AI with their data. Employees want assurance that AI is not being used to surveil or disadvantage them. The Government acknowledged that AI tools used in workplaces could boost productivity, but also pose risks through surveillance, bias, workplace discrimination, and reduced autonomy.

Building visible, documented AI governance — even as a small business — is not just a regulatory exercise. It is a competitive and reputational asset.


What WA SMEs Are Actually Doing: The Adoption and Governance Gap

The NAIC's AI Adoption Tracker, conducted monthly in partnership with Fifth Quadrant, provides the most granular picture of where Australian SMEs actually stand. As of the latest reported quarter, 40% of SMEs were adopting AI, a 5% increase on the previous quarter (July–September 2024).

However, adoption and governance are not the same thing. The dashboard data reveals a clear gap between the responsible AI practices that SMEs intend to implement and those they have actually deployed. The gap suggests that while SMEs are committed to responsible AI in principle, many face practical barriers in translating intentions into operational practices — for example, because of limited capacity and competing priorities.

Regional SMEs are 11% less likely to implement AI than their metropolitan counterparts, and over a quarter are unaware of AI's potential business applications, compared with 19% of metro SMEs. This regional disparity likely stems from multiple factors: more limited access to AI expertise and technical talent in regional areas, fewer local AI solution providers and consultants to support implementation, and potentially lower exposure to AI success stories from peer businesses.

This is the gap Perth AI events — including WA AI Hub meetups and the SMEC AI Roadshow — are positioned to close. Understanding the regulatory direction gives WA business owners the context to ask better questions and make more informed decisions at these events (see our guide on Choosing the Right AI Event in Perth: A Comparison Guide for Different Business Roles).


What Regulators Will Expect: Translating Policy Into Business Obligations

Based on the National AI Plan, the AI Safety Institute's mandate, and the AI6 framework, here is what WA business owners should expect regulators and procurement partners to ask as AI governance matures:

1. Do You Know What AI You Are Using?

Maintaining an AI system register — a simple log of what AI tools your business uses, for what purpose, and who is accountable — is the foundational governance step. NAIC provides a free template. This is the starting point the AI6 Foundations guide addresses.
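The register described here can start as nothing more than a structured log. A minimal sketch, assuming a plain CSV file — the column names and example entries are illustrative placeholders, not the fields of the NAIC's official template:

```python
# A minimal AI system register written to CSV. The column names and
# entries are illustrative placeholders, not the NAIC template's
# official fields.
import csv
from dataclasses import asdict, dataclass, fields

@dataclass
class AIRegisterEntry:
    tool: str            # e.g. "support chatbot", "invoice OCR"
    purpose: str         # what the business uses it for
    owner: str           # the person accountable for its outcomes
    personal_data: bool  # does it process customer or employee data?
    human_review: str    # how a person reviews or overrides decisions

entries = [
    AIRegisterEntry("Support chatbot", "Answer customer FAQs",
                    "Operations manager", True, "Escalation to staff"),
    AIRegisterEntry("Invoice OCR", "Extract supplier invoice data",
                    "Finance lead", False, "Monthly spot checks"),
]

# Write the log with one column per dataclass field.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=[fl.name for fl in fields(AIRegisterEntry)])
    writer.writeheader()
    writer.writerows(asdict(e) for e in entries)
```

Even a two-row log like this answers the first question a regulator or procurement partner is likely to ask: what AI are you using, and who is accountable for it?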

2. Are You Transparent With Customers?

Customers must know when they are interacting with AI. If your business uses AI-generated content in customer communications, AI-powered chatbots, or automated decision-making in pricing or service delivery, disclosure is increasingly expected — and will likely become mandatory in high-risk contexts.

3. Can You Demonstrate Human Oversight?

Critical decisions must be reviewable by humans. For WA businesses in professional services, financial advice, healthcare, or HR, this means having documented processes that keep humans meaningfully in the loop — not just nominally so.

4. Have You Assessed Privacy Risk?

AI systems that process customer or employee data must be assessed against the Australian Privacy Principles. This is not new law — it is existing law applied to new technology. Business surveys in 2024 identified the most recognised risks from generative AI as output inaccuracy, intellectual property infringement, cybersecurity vulnerabilities, privacy concerns, and regulatory compliance issues.

5. Are You Aligned With the AI6?

Organisations implementing AI6 practices now will be well-prepared for whatever mandatory requirements might come. The AI6 is currently voluntary, but it is the government's primary reference point for responsible AI adoption — and alignment with it signals credibility to partners, clients, and regulators alike.

For a deeper operational guide to implementing these governance steps, see our companion article Responsible AI and Governance for Perth SMEs: What 'Ethical AI' Actually Means in Practice.


SME Support: The National AI Centre and the AI Adopt Program

The National AI Plan does not just set obligations — it also funds pathways. The Government promises to support the adoption and integration of AI by small and medium enterprises (SMEs) to "ensure that they remain competitive, efficient and well-positioned to seize emerging market opportunities in an increasingly digital landscape." The Government will fund the safe and practical adoption of AI by SMEs, including through the provision of tailored support via the "AI Adopt Program."

The National AI Centre (NAIC) is the Australian Government body that consolidates over $460 million in AI funding and publishes the official governance guidance for businesses adopting AI. The NAIC provides free AI policy templates, AI register templates, and a screening tool to help businesses assess their AI governance readiness.

For WA businesses seeking to access these funding pathways, see our guide on AI Grants and Funding for WA Businesses: How to Access Federal and State Support.


Key Takeaways

  • The National AI Plan (December 2025) is Australia's most comprehensive AI roadmap — it does not create immediate new compliance obligations but signals clearly where regulation is heading, with the AI Safety Institute becoming operational in 2026.
  • Australia has chosen a "technology-neutral" approach — existing laws (Privacy Act, Australian Consumer Law, sector regulators) apply to AI use, rather than a new standalone AI Act. WA businesses are already subject to the relevant obligations.
  • The AI6 framework is the practical governance baseline — released by NAIC in October 2025, the six essential practices replace the Voluntary AI Safety Standard and are the government's primary reference for responsible AI adoption. Alignment with AI6 is currently voluntary but is becoming a de facto industry benchmark.
  • The AI Safety Institute will raise the bar on scrutiny — from early 2026, regulators will have in-house technical capacity to assess AI systems, shifting from self-declaration to verified governance. Businesses that document their AI use now are significantly better positioned.
  • The governance-adoption gap is WA's most pressing AI challenge — while 40% of Australian SMEs are adopting AI, most have not yet operationalised responsible AI practices. Closing this gap is both a regulatory imperative and a competitive advantage.

Conclusion

Australia's National AI Plan is not abstract policy — it is the governance architecture that will determine what responsible AI use looks like for every WA business over the next five years. The regulatory direction is clear: existing laws apply, the AI Safety Institute will provide technical oversight, and the AI6 framework sets the practical standard businesses should be working toward now.

For WA business owners attending Perth AI events — whether a WA AI Hub governance session, a CDAO Perth regulatory panel, or a workshop through the SMEC AI Roadshow — understanding this landscape means you can engage with the content at a strategic level rather than a surface one. You will know what questions to ask vendors about their governance practices, what obligations apply to your sector, and what "responsible AI" actually means in an Australian legal and regulatory context.

The most important action you can take today is straightforward: download the NAIC's free AI6 Foundations guide, complete the AI screening tool, and start building the documentation habits that regulators, partners, and customers will increasingly expect. The WA AI Hub's Responsible AI Governance Sprint is also an excellent structured starting point for Perth-based businesses (see our guide on Responsible AI and Governance for Perth SMEs: What 'Ethical AI' Actually Means in Practice).

The window to build these habits proactively — before they become mandatory — is open. WA businesses that move now will be setting the standard others follow.


References

  • Australian Government, Department of Industry, Science and Resources. "National AI Plan." industry.gov.au, December 2, 2025. https://www.industry.gov.au/publications/national-ai-plan

  • Australian Government, Department of Industry, Science and Resources. "Australia to Establish New Institute to Strengthen AI Safety." industry.gov.au, November 25, 2025. https://www.industry.gov.au/news/australia-establish-new-institute-strengthen-ai-safety

  • National AI Centre (NAIC). "Guidance for AI Adoption." industry.gov.au, October 21, 2025. https://www.industry.gov.au/publications/guidance-for-ai-adoption

  • Australian Government, Department of Industry, Science and Resources. "AI Adoption in Australian Businesses 2024 Q4." AI Adoption Tracker, 2025. https://www.industry.gov.au/news/ai-adoption-australian-businesses-2024-q4

  • Australian Government, Department of Industry, Science and Resources. "Australia's Artificial Intelligence Ecosystem: Growth and Opportunities." industry.gov.au, June 2025. https://www.industry.gov.au/sites/default/files/2025-06/australias-artificial-intelligence-ecosystem-growth-and-opportunities-june-2025.pdf

  • MinterEllison. "Australia Introduces a National AI Plan: Four Things Leaders Need to Know." minterellison.com, December 2025. https://www.minterellison.com/articles/australia-introduces-a-national-ai-plan-four-things-leaders-need-to-know

  • Bird & Bird. "A New Era for AI Governance in Australia: What the National AI Plan Means for Industry." twobirds.com, December 9, 2025. https://www.twobirds.com/en/insights/2025/australia/a-new-era-for-ai-governance-in-australia-what-the-national-ai-plan-means-for-industry

  • White & Case. "Australia's National AI Plan: Big Ambitions, But Light on Details." whitecase.com, December 8, 2025. https://www.whitecase.com/insight-alert/australias-national-ai-plan-big-ambitions-light-details

  • Hogan Lovells. "Australia's New Guidance for AI Adoption: A Strategic Step Toward Responsible Innovation." hoganlovells.com, 2025. https://www.hoganlovells.com/en/publications/australias-new-guidance-for-ai-adoption-a-strategic-step-toward-responsible-innovation

  • IAPP. "Global AI Governance Law and Policy: Australia." iapp.org, 2025. https://iapp.org/resources/article/global-ai-governance-australia

  • University of Melbourne and KPMG. Trust in Artificial Intelligence: Global Insights 2025. (as cited by IAPP Global AI Governance resource, 2025.)

  • Gadens. "Australia Launches AI Safety Institute and Releases National AI Plan." gadens.com, December 17, 2025. https://www.gadens.com/legal-insights/australia-launches-ai-safety-institute-and-releases-national-ai-plan/

  • Fifth Quadrant / National AI Centre. "Australian SMEs: AI Adoption Trends." fifthquadrant.com.au, 2025. https://www.fifthquadrant.com.au/australian-smes-ai-adoption-trends

  • Actuaries Institute. "Understanding Australia's AI6: A Framework for AI Governance." actuaries.asn.au, February 2026. https://www.actuaries.asn.au/research-analysis/understanding-australia-s-ai6-a-framework-for-ai-governance
