
AI Data Privacy and Sovereignty: Why Australian Regulations Change the Build vs Buy Calculus



Why Compliance Isn't an Afterthought — It's the Starting Point

Most Australian businesses approach the build vs buy AI decision as a technology and budget problem. They compare feature sets, estimate development timelines, and model total cost of ownership. What they frequently underestimate — or discover too late — is that Australian law has already made several of those decisions for them.

The regulatory environment governing how Australian businesses collect, store, process, and transfer personal data is not static background noise. It is an active, rapidly evolving force that directly constrains which AI deployment models are legally permissible, which vendor relationships are defensible, and which architectures create unacceptable liability exposure. For businesses in regulated sectors — financial services, healthcare, insurance, superannuation — the compliance calculus is even more decisive.

This article maps the specific Australian regulatory instruments that affect AI deployment decisions, explains what they require in practice, and shows how those requirements shift the build vs buy equation in ways that generic vendor comparisons cannot capture.


The Privacy Act 1988 and the Australian Privacy Principles: The Foundational Layer

The Privacy Act 1988 (Cth) remains the primary law regulating the handling of personal information in Australia. Its thirteen Australian Privacy Principles (APPs) govern how organisations collect, use, disclose, and store personal information — and every AI system that touches personal data must operate within this framework.

The requirements for handling personal and sensitive data within AI systems are spread across several instruments: the Privacy Act itself, the Australian Privacy Principles, the Privacy and Other Legislation Amendment Act 2024, and the OAIC's guidance on handling personal information.

For AI use cases, the most relevant APPs are APP 1 (open and transparent management of personal information), APP 5 (collection notices), APP 8 (cross-border disclosures), APP 10 (data quality) and APP 11 (security of personal information). These principles govern not just what you must tell individuals, but how accurately you must handle their data, how you secure it, and what you must do before sending it offshore or into third-party AI tools.

What the 2024 Reforms Changed — and Why It Matters for AI Procurement

The Privacy and Other Legislation Amendment Bill 2024 passed both houses on 29 November 2024 and received royal assent on 10 December 2024. This legislation introduced the most significant changes to Australian privacy law in decades, with direct implications for AI deployment choices.

The first tranche of reforms, passed in 2024, introduced new transparency obligations around automated decision-making that take effect in December 2026. The Privacy Act 1988 and the Australian Privacy Principles already apply to any personal information input into an AI system, and to AI-generated output where it contains personal information. On top of that baseline, the Privacy and Other Legislation Amendment Act 2024 adds a privacy policy disclosure obligation where a regulated entity deploys automated decision-making that could significantly affect the rights or interests of an individual.

Where automated decision-making could significantly affect an individual's rights or interests, organisations will, from 10 December 2026, be required to disclose in their privacy policy that personal information is used in the computer program making that decision, and the kinds of personal information involved. This ties algorithmic decision-making directly to explicit transparency duties.

The practical implication: if your business uses an off-the-shelf AI tool to make or substantially inform decisions about credit, insurance, employment, or service eligibility, you must be able to disclose how that system works. If the vendor's model is a black box — which describes the majority of commercial foundation model APIs — you may not be able to satisfy this requirement without building your own interpretable layer on top, or building the decision system entirely in-house.
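Where a vendor model is opaque, the "interpretable layer" often begins as disciplined record-keeping around each decision. The sketch below is illustrative only: the `AutomatedDecisionRecord` structure and its field names are assumptions for the example, not a statutory schema or any regulator's template.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical minimal record supporting ADM transparency disclosures."""
    decision_id: str
    decision_type: str                 # e.g. "credit_eligibility"
    personal_info_kinds: list[str]     # the kinds of personal information used
    model_identifier: str              # which system produced the decision
    significant_effect: bool           # could it significantly affect rights/interests?
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def disclosure_summary(self) -> str:
        """Plain-language summary suitable for an internal disclosure register."""
        kinds = ", ".join(sorted(self.personal_info_kinds))
        return (f"Decision '{self.decision_type}' is made by an automated "
                f"system ({self.model_identifier}) using: {kinds}.")

record = AutomatedDecisionRecord(
    decision_id="d-001",
    decision_type="credit_eligibility",
    personal_info_kinds=["income", "repayment_history", "residential_address"],
    model_identifier="internal-scoring-v2",
    significant_effect=True,
)
print(record.disclosure_summary())
```

Keeping records like this per decision type is what makes the eventual privacy policy disclosure a matter of summarising, rather than reverse-engineering, how the system behaves.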

The Shift from Declarative to Evidentiary Compliance

The reforms assume that organisations can explain how personal information is handled within live systems, including how data moves across services, how it is disclosed to third parties, and how it is used in automated decision making. This represents a shift from declarative compliance to evidentiary compliance.

Organisations must be able to demonstrate, with specificity, how their systems behave in practice. This includes being able to reconstruct data flows, identify points of risk, and respond to regulatory or legal scrutiny with factual clarity.
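What an evidentiary trail might look like in practice: a minimal, append-only audit log that can reconstruct the chain of systems a given record passed through. Everything here is a sketch; the event fields and system names are assumptions for illustration.

```python
import json
from datetime import datetime, timezone

class DataFlowAudit:
    """Illustrative append-only audit trail of data-flow events."""

    def __init__(self):
        self._events = []

    def log(self, record_id, source, destination, purpose, contains_personal_info):
        # Each event records one hop of a data record between systems.
        self._events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "record_id": record_id,
            "source": source,
            "destination": destination,
            "purpose": purpose,
            "personal_info": contains_personal_info,
        })

    def reconstruct(self, record_id):
        """Return the ordered chain of hops for one record."""
        return [e for e in self._events if e["record_id"] == record_id]

audit = DataFlowAudit()
audit.log("cust-42", "crm", "embedding-service", "semantic search", True)
audit.log("cust-42", "embedding-service", "vector-store", "retrieval", True)
chain = audit.reconstruct("cust-42")
print(json.dumps(chain, indent=2))
```

The point of the sketch is the capability, not the implementation: being able to answer "where did this person's data go, and why?" from logged system behaviour rather than from a policy document.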

This is a critical distinction for the build vs buy decision. When you buy an off-the-shelf AI tool, you inherit its architecture — including the parts you cannot audit, modify, or explain. When you build, you own the evidentiary trail. The question becomes: does the vendor you are evaluating give you sufficient transparency to meet this standard?


APP 8 and Cross-Border Data Transfers: The Offshore AI Trap

The single most consequential regulatory constraint on off-the-shelf AI adoption in Australia is Australian Privacy Principle 8, which governs cross-border disclosure of personal information.

Under Australia's Privacy Act 1988, personal data cannot be disclosed to a recipient outside Australia unless "reasonable steps" are taken to ensure the recipient adheres to the Australian Privacy Principles.

Under APP 8, organisations remain legally responsible for how personal information is handled overseas, even when that data is processed by third-party SaaS platforms, cloud providers, analytics services, or AI vendors. Liability now follows the data, not the contract.

This is the "offshore AI trap" that catches many Australian businesses by surprise. Sending customer data through a US-hosted AI API, even one with strong contractual protections, does not transfer your legal liability: the enterprise must still ensure that personal information is handled in compliance with Australian privacy obligations across all downstream systems.

Why Contracts Alone Are No Longer Sufficient

In modern architectures, personal data flows continuously through APIs, integrations, and automated workflows, often across multiple jurisdictions. These transfers are dynamic and difficult to track using traditional compliance methods based on policies and contracts. Effective compliance therefore requires runtime visibility into how data actually moves, enabling enterprises to monitor cross-border flows, detect unauthorised disclosures, and demonstrate control based on real system behaviour rather than declared intent.
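One way to approximate runtime visibility is to classify outbound disclosures by endpoint jurisdiction at the point of egress. A hedged sketch follows: the endpoint-to-jurisdiction map and endpoint names are assumptions invented for the example, and a real deployment would derive them from network and contract metadata rather than a hard-coded dictionary.

```python
# Hypothetical mapping of known endpoints to hosting jurisdictions.
ENDPOINT_JURISDICTION = {
    "api.example-llm.com": "US",        # hypothetical offshore AI API
    "sydney.internal.example": "AU",    # hypothetical onshore service
}

ONSHORE = {"AU"}

def check_disclosure(endpoint: str, contains_personal_info: bool) -> dict:
    """Classify an outbound call as onshore, offshore, or unknown."""
    jurisdiction = ENDPOINT_JURISDICTION.get(endpoint, "UNKNOWN")
    offshore = jurisdiction not in ONSHORE
    return {
        "endpoint": endpoint,
        "jurisdiction": jurisdiction,
        # APP 8 exposure only arises where personal information leaves Australia
        "app8_review_required": contains_personal_info and offshore,
    }

print(check_disclosure("api.example-llm.com", contains_personal_info=True))
print(check_disclosure("sydney.internal.example", contains_personal_info=True))
```

Unknown endpoints deliberately fail closed here (treated as offshore), which matches the spirit of demonstrating control based on real system behaviour rather than declared intent.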

The shift introduced by the 2024 reforms requires enterprises to reassess how cross-border privacy risk is managed. Treating overseas disclosure as a legal or procurement issue is no longer sufficient. Compliance now depends on collaboration between legal, security, and engineering teams.

The 2024 reforms also introduced a pathway for simplification: the amendments provide for ministerial powers to "whitelist" countries that provide substantially similar privacy protections, potentially simplifying international data flows to approved jurisdictions. However, until such determinations are made, businesses engaging in cross-border data transfers should review and strengthen their contractual safeguards and conduct appropriate risk assessments.

Until that whitelist is established — and as of 2026, it has not been — every Australian business feeding personal data into a US, UK, or EU-hosted AI platform carries residual APP 8 liability. This reality alone is pushing many regulated-sector businesses toward either local deployment of off-the-shelf tools or building AI systems on Australian-hosted infrastructure.

The New Statutory Tort: When Individuals Can Sue Directly

One of the most significant changes introduced by the 2024 reforms is the creation of a statutory tort for serious invasions of privacy. This marks a fundamental shift in how privacy rights are enforced in Australia.

In practical terms, many future claims are likely to arise from failures in complex digital environments. These include data leaks through application interfaces, unauthorised sharing with third-party services, misuse of data by automated systems, or unintended exposure of sensitive information through analytics and monitoring tools.

For businesses relying on multi-tenant SaaS AI platforms, this creates a new litigation vector. If a vendor's AI system mishandles Australian personal data, the Australian business that deployed it — not the vendor — faces the primary legal exposure.


Sector-Specific Regulations That Compound the Compliance Burden

Beyond the Privacy Act, several sector-specific regulatory regimes impose additional constraints on AI deployment that the build vs buy decision must account for.

Financial Services: APRA CPS 234 and the Third-Party AI Problem

CPS 234 is a mandatory information security standard issued by the Australian Prudential Regulation Authority (APRA) that took effect on 1 July 2019. It requires organisations in the financial and insurance sectors to strengthen their information security frameworks to protect themselves and their customers from the growing threat of cyber attacks.

CPS 234 applies to all APRA-regulated entities: authorised deposit-taking institutions (including foreign ADIs and authorised non-operating holding companies), general insurers, life insurers, private health insurers, and registrable superannuation entity licensees. Critically, where an APRA-regulated entity's information assets are managed by a related party or third party, the entity remains responsible for ensuring those assets are protected to the standard CPS 234 requires.

This last point is the critical one for AI procurement. As organisations continue to move critical business operations and sensitive data to SaaS platforms, examining those providers' security posture against CPS 234 becomes essential. The data entrusted to SaaS providers forms part of your organisation's business operating capability, so the information security capability of those providers must be evaluated, measured, and continuously assessed.

CPS 234 Information Security requires regulated institutions to: clearly define information-security related roles and responsibilities; maintain an information security capability commensurate with the size and extent of threats to their information assets; implement controls to protect information assets and undertake regular testing and assurance of the effectiveness of controls; and promptly notify APRA of material information security incidents.

For an APRA-regulated entity considering an off-the-shelf AI platform, this means the vendor's security posture is not just a procurement consideration — it is a regulatory one. CPS 234 aims to reduce cyber risk and improve cybersecurity by requiring that APRA-regulated entities maintain an information security capability commensurate with their information security vulnerabilities and threats, and employ vendor risk management practices to reduce the likelihood and impact of incidents involving related or third-parties.

The practical outcome: banks, insurers, and superannuation funds that adopt AI tools from offshore vendors must conduct and document comprehensive security assessments of those vendors — and must be prepared to demonstrate that control to APRA. This due diligence burden is non-trivial, and for many institutions, it tips the calculus toward building internally or deploying on Australian-hosted, auditable infrastructure.
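That due diligence tends to reduce to evidence: which controls a vendor has demonstrated, and which gaps remain. An illustrative sketch of a vendor assessment record follows; the control names are assumptions for the example, not APRA's taxonomy or any bank's actual checklist.

```python
# Hypothetical set of controls an assessment might expect a vendor to evidence.
CONTROLS = [
    "encryption_at_rest",
    "incident_notification",
    "access_management",
    "testing_and_assurance",
]

def assess_vendor(name: str, attested: dict) -> dict:
    """Record which expected controls the vendor has evidenced, and the gaps."""
    gaps = [c for c in CONTROLS if not attested.get(c, False)]
    return {
        "vendor": name,
        "controls_evidenced": [c for c in CONTROLS if c not in gaps],
        "gaps": gaps,
        # Gaps must be remediated or formally risk-accepted before use.
        "fit_for_use": not gaps,
    }

result = assess_vendor("hypothetical-ai-saas", {
    "encryption_at_rest": True,
    "incident_notification": True,
    "access_management": True,
    "testing_and_assurance": False,
})
print(result)
```

The value is the documented artefact: a regulator asking "how did you assess this vendor?" is answered with a record, not a recollection.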

Healthcare: The TGA, My Health Records, and the AI Medical Device Question

Healthcare AI sits at the intersection of multiple regulatory frameworks, and the classification question is genuinely complex.

The TGA regulates Artificial Intelligence (AI) as a medical device when it is intended to be used for: diagnosis, prevention, monitoring, prediction, prognosis, or treatment of a disease, injury, or disability; alleviation of, or compensation for, an injury or disability; or investigation of the anatomy or of a physiological process.

Australia's medical device regulatory framework is technology-agnostic, which means the TGA regulates products based on their intended purpose, not the technology they use. The intended purpose — what the product is used for — is defined by the manufacturer and will determine whether the product meets the definition of a medical device under the Act regardless of the platform used — whether a watch, phone, tablet, cloud service, laptop, or hardware device.

This has created a significant compliance trap for healthcare businesses adopting off-the-shelf AI tools. Under the Therapeutic Goods Act 1989, some advanced AI tools — including digital scribes that suggest diagnoses or treatments — may be regulated as medical devices. Developers and suppliers must ensure compliance, including registration in the Australian Register of Therapeutic Goods (ARTG) where required.

In addition to general software requirements, the manufacturer of software that uses AI or machine learning is required to possess evidence that is sufficiently transparent to enable evaluation of safety and performance of the product.

The My Health Record dimension adds another layer. The Office of the Australian Information Commissioner is responsible for federal laws and the administration of the privacy provisions contained in the My Health Record Act and the Healthcare Identifiers Act 2010 (Cth). Any AI system that accesses, processes, or generates content for inclusion in My Health Records must comply with the My Health Records Act 2012, which restricts use of health records to authorised purposes and prohibits disclosure outside the healthcare context — constraints that many general-purpose commercial AI tools are not designed to honour.


The Government's AI Ethics Framework and the Explainability Expectation

Australia's privacy regulator, the Office of the Australian Information Commissioner (OAIC), has been proactive in interpreting the Privacy Act in AI contexts, and is actively regulating AI through interpretation and enforcement rather than waiting for dedicated legislation.

On 21 October 2024, the OAIC published two new guidelines on privacy and artificial intelligence. The first explains organisations' obligations when using commercially available AI products that handle personal information, such as chatbots, content-generation tools, productivity assistants, and transcription tools. The second targets regulated entities using personal information to train or fine-tune generative AI models.

The AI Guidance concludes that a governance-first approach is the best way to manage privacy risks. In practice, this means embedding privacy-by-design into the development of any AI product that collects and uses personal information, and running an ongoing process to monitor the AI's use of personal information throughout the product lifecycle.

In October 2025, the National AI Centre released updated Guidance for AI Adoption, which condenses the ten guardrails of the Voluntary AI Safety Standard (VAISS) into six essential practices (AI6), expands coverage to both AI deployers and developers, and provides more detailed, actionable implementation guidance and supporting tools. The Guidance for AI Adoption is now the primary source of voluntary governance guidance for Australian organisations.

While this guidance remains voluntary for most private sector entities, the regulatory trajectory is clear. In December 2025, the National AI Plan confirmed that, for now, Australia will rely on existing laws and sector regulators, supported by voluntary guidance and a new AI Safety Institute, rather than introducing a standalone AI Act or immediate mandatory guardrails. However, a second tranche of Privacy Act reforms is already in development. While the 2024 Act is the first tranche, government messaging indicates that a second tranche is intended and politically supported. In a July 2025 interview, Michelle Rowland explicitly characterised the next phase as the "second tranche of privacy reforms," linked it to concerns about exploitation and protection of personal information, and rejected the framing that privacy protection and innovation are mutually exclusive.

The direction of travel is toward greater explainability requirements, stricter accountability, and potentially mandatory guardrails for high-risk AI. Businesses that build AI systems today with explainability and auditability baked in are positioning themselves for the regulatory environment of 2027 and beyond — not just 2025.


The Practical Build vs Buy Implications: A Compliance-Led Decision Matrix

The regulatory landscape described above translates into a set of concrete decision rules that should precede any technology evaluation.

When Compliance Favours Building (or Local Hosting)

  • You operate in a regulated sector (financial services, healthcare, insurance, superannuation) and your AI system will process personal data subject to APRA CPS 234, the My Health Records Act, or TGA medical device regulation
  • Your AI system makes or substantially informs consequential decisions about individuals (credit, insurance, employment, healthcare) — triggering the automated decision-making transparency obligations under the 2024 Privacy Act reforms
  • Your data cannot be transferred offshore because it includes health records, sensitive financial data, or information subject to sector-specific data residency requirements
  • You require full auditability of model behaviour, training data, and decision logic — a standard that most black-box commercial AI tools cannot meet
  • Your use case involves training on proprietary Australian data — which, under OAIC Guidance 2, requires careful governance controls that most off-the-shelf tools do not provide by default

When Compliance Can Be Satisfied Through a Buy Path

  • Your use case is genuinely horizontal (document summarisation, scheduling, internal productivity) and does not involve consequential personal data decisions
  • The vendor offers Australian data residency — major cloud AI platforms including Microsoft Azure, Google Cloud, and AWS offer Australian region hosting, though businesses must verify that model inference, training pipelines, and support functions also remain onshore
  • The vendor can provide sufficient transparency documentation to satisfy your OAIC obligations and sector-specific audit requirements
  • Your sector regulator has issued specific guidance approving the vendor's compliance posture (as APRA has done for certain cloud providers via CPS 234 mappings)

(For a structured framework to apply these criteria to your specific situation, see our guide on Build vs Buy AI: A Decision Framework Tailored for Australian SMEs.)
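The criteria above can be roughed out as a simple triage function. This is a discussion aid under stated assumptions, not legal advice; the inputs simply mirror the bullet points in this section, and the thresholds (any single build signal being decisive) reflect the article's argument rather than any regulator's test.

```python
def compliance_triage(
    regulated_sector: bool,             # APRA / TGA / My Health Records exposure
    consequential_decisions: bool,      # ADM affecting rights or interests
    data_must_stay_onshore: bool,       # residency / sovereignty constraints
    needs_full_auditability: bool,      # model, training data, decision logic
    vendor_has_au_residency: bool,
    vendor_transparency_sufficient: bool,
) -> str:
    """Return a coarse recommendation based on the compliance-led criteria."""
    build_signals = [
        regulated_sector,
        consequential_decisions,
        data_must_stay_onshore,
        needs_full_auditability,
    ]
    if any(build_signals):
        # For high-risk use cases, any one of these can be decisive on its own.
        return "build/local-host"
    if vendor_has_au_residency and vendor_transparency_sufficient:
        return "buy (with documented vendor due diligence)"
    return "buy only after residency and transparency gaps are closed"

print(compliance_triage(False, False, False, False, True, True))
```

A real assessment would weigh these factors with legal advice; the sketch only makes the decision structure explicit enough to argue about.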


What Businesses Get Wrong: Three Common Compliance Misconceptions

Misconception 1: "We have a contract with the vendor, so we're covered." Under APP 8, privacy policies and vendor contracts describe intent; they do not establish control over runtime behaviour. Effective APP 8 compliance requires visibility into how personal data actually flows through systems in production.

Misconception 2: "The vendor is responsible for their own compliance." This misreads the accountability structure entirely. Under section 16C of the Privacy Act, if an overseas recipient breaches the APPs in relation to Australian personal data, the Australian entity that disclosed the data is treated as having committed the breach. The vendor's non-compliance becomes your liability.

Misconception 3: "We only need to worry about this if we're a large enterprise."

A notable feature of Australian privacy law has historically been the small business exemption, which freed companies with annual turnover below AU$3 million from compliance requirements — covering about 95% of Australian businesses. However, this exemption has significant carve-outs (health service providers are always covered regardless of turnover), and its removal is among the proposals under active consideration for the second tranche of reforms. SMEs in healthcare, financial services, and any sector handling sensitive personal data should not rely on this exemption as a long-term compliance strategy.


Key Takeaways

  • APP 8 creates direct liability for offshore AI processing. Under the 2024 reforms, Australian businesses remain accountable for how personal data is handled by overseas AI vendors — liability follows the data, not the contract. This is the single biggest compliance risk in the 'buy' path for offshore-hosted tools.

  • The 2024 Privacy Act reforms shift compliance from declarative to evidentiary. Organisations must demonstrate how AI systems behave in practice — reconstructing data flows and explaining automated decisions — not merely assert that they have compliant policies. Off-the-shelf black-box AI tools frequently cannot support this standard.

  • Sector-specific regulations add mandatory layers. APRA CPS 234 requires APRA-regulated entities to assess and continuously monitor the security posture of every AI vendor they use. The TGA regulates AI tools as medical devices when they have a therapeutic purpose. The My Health Records Act restricts AI use of health data to authorised purposes. These are not optional frameworks.

  • Automated decision-making transparency obligations take effect December 2026. Any AI system making consequential decisions about individuals must be disclosed in privacy policies with specificity from that date — a requirement that may be impossible to satisfy with opaque commercial models.

  • The regulatory trajectory favours building or local hosting for high-risk use cases. A second tranche of Privacy Act reforms is in development, mandatory AI guardrails for high-risk settings remain under active consideration, and the OAIC is enforcing proactively. Businesses that build explainable, locally-hosted AI systems now are building toward the regulatory future, not just the present.


Conclusion

Compliance is not a constraint that sits alongside the build vs buy AI decision — it is a primary input that shapes the decision space before any other factor is considered. Australian businesses that approach AI procurement as a pure technology or budget problem, without first mapping their regulatory obligations, risk building or buying systems that are legally indefensible.

The Privacy Act 1988, as reformed in 2024, the Australian Privacy Principles, APRA CPS 234, TGA medical device regulations, and the My Health Records Act collectively define what is permissible, what is auditable, and what creates liability exposure. For regulated-sector businesses, these frameworks often make the build path — or at minimum, a hybrid approach using locally hosted, auditable infrastructure — the only defensible choice.

For businesses outside regulated sectors, the compliance calculus is less prescriptive but still consequential. The direction of Australian regulation is clearly toward greater accountability, explainability, and data sovereignty. Building AI governance frameworks now that can accommodate this trajectory — whether through custom development, carefully scoped vendor selection, or hybrid architectures — is a strategic investment, not a compliance overhead.

For a sector-by-sector breakdown of how these compliance requirements translate into specific build vs buy recommendations, see our Australian Industry Sector Guide: Build vs Buy AI Recommendations for Finance, Healthcare, Retail, and Beyond. For guidance on evaluating vendor contracts and data portability before signing, see AI Vendor Lock-In in Australia: How to Evaluate, Negotiate, and Mitigate Dependency Risk.


References

  • Office of the Australian Information Commissioner (OAIC). "Guidance on Privacy and the Use of Commercially Available AI Products." OAIC, October 2024. https://www.oaic.gov.au

  • Office of the Australian Information Commissioner (OAIC). "Chapter 8: APP 8 — Cross-border Disclosure of Personal Information." Australian Privacy Principles Guidelines, updated October 2025. https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines/chapter-8-app-8-cross-border-disclosure-of-personal-information

  • Australian Government. "Privacy and Other Legislation Amendment Act 2024." Federal Register of Legislation, December 2024. https://www.legislation.gov.au

  • Australian Prudential Regulation Authority (APRA). "Prudential Standard CPS 234: Information Security." APRA, effective July 2019. https://www.apra.gov.au

  • Therapeutic Goods Administration (TGA). "Artificial Intelligence (AI) and Medical Device Software Regulation." TGA, updated February 2026. https://www.tga.gov.au/products/medical-devices/software-and-artificial-intelligence-ai

  • Department of Health, Disability and Ageing. "Safe and Responsible Artificial Intelligence in Health Care — Legislation and Regulation Review: Final Report." Australian Government, January 2026. https://www.tga.gov.au/sites/default/files/2026-01/safe-and-responsible-artificial-intelligence-in-health-care-legislation-and-regulation-review-final-report.pdf

  • International Association of Privacy Professionals (IAPP). "Global AI Governance Law and Policy: Australia." IAPP Resource Centre, 2025. https://iapp.org/resources/article/global-ai-governance-australia

  • National AI Centre. "Guidance for AI Adoption (AI6)." Australian Government, October 2025. https://www.industry.gov.au/ai

  • White & Case LLP. "AI Watch: Global Regulatory Tracker — Australia." White & Case, updated 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia

  • DLA Piper. "Data Protection Laws of the World: Australia." DLA Piper Data Protection, updated March 2026. https://www.dlapiperdataprotection.com/index.html?c=AU

  • Information Technology and Innovation Foundation (ITIF). "Australia's Cross-Border Data Transfer Regulation." ITIF, February 2025. https://itif.org/publications/2025/02/27/australias-cross-border-data-transfer-regulation/
