

AI and Australian Privacy Law: What Every Business Owner Needs to Know

Most guides on getting started with AI in your business skip straight to the tools. Which chatbot should you use? How do you automate your invoicing? Those are important questions — but they come after a more fundamental one that Australian business owners often miss entirely: are you allowed to feed that data into an AI tool in the first place?

This is not a hypothetical concern. The moment you paste a client's name and email into ChatGPT to draft a follow-up email, upload a spreadsheet of customer records into an AI data analysis tool, or deploy a third-party AI chatbot on your website, you are potentially handling personal information under Australian law. Get it wrong, and you face regulatory scrutiny, reputational damage, and — for larger businesses — significant financial penalties.

This guide translates the legal framework into plain English so you can adopt AI confidently, responsibly, and in compliance with your obligations.


What Law Actually Governs AI and Privacy in Australia?

There is no standalone AI privacy law in Australia — at least not yet. Instead, the Privacy Act 1988 and the Australian Privacy Principles (APPs) apply to all uses of AI involving personal information, including where information is used to train, test, or use an AI system.

The Privacy Act 1988 is the main piece of Australian legislation that protects the handling of personal information about individuals, including how it is collected, used, stored, and disclosed in both the federal public sector and the private sector.

This matters because the Privacy Act is principles-based and technology neutral. It imposes obligations on APP entities in respect of personal information regardless of how that information is processed — manually, with traditional technologies, or through the training, testing, and use of AI.

In plain terms: the law doesn't care whether you're filing a paper form or using a large language model. If personal information is involved, the APPs apply.

The 13 Australian Privacy Principles at a Glance

The APPs establish rigorous requirements under the Privacy Act, particularly emphasising accuracy, transparency, and heightened scrutiny of data collection and secondary use. The most relevant principles for AI-using businesses are:

APP    | What It Requires                                                                               | Why It Matters for AI
APP 1  | Open and transparent management of personal information, including a published privacy policy | Your privacy policy must now disclose AI use
APP 3  | Collect only information that is reasonably necessary, by lawful and fair means                | Limits what you can feed into AI tools
APP 5  | Notify individuals about how their information is collected and used                           | Customers must know if AI is processing their data
APP 6  | Use or disclose information only for the primary purpose it was collected                      | You can't repurpose client data for AI training without consent
APP 8  | Obligations when disclosing personal information to overseas recipients                        | Most AI tools are hosted overseas
APP 10 | Take reasonable steps to ensure personal information is accurate                               | Applies to AI-generated outputs that contain personal information
APP 11 | Protect personal information from misuse, interference, and loss                               | Covers data security when using cloud AI tools

Does the Privacy Act Apply to Your Business?

This is where many small business owners assume they're off the hook — and where the landscape is changing rapidly.

Currently, most businesses with an annual turnover under $3 million — approximately 92% of businesses in Australia — are exempt from the Privacy Act.

However, there are important exceptions. The exemption does not apply if your business:

  • Provides a health service — the Privacy Act provides added protections for health information, and all businesses that provide a health service are covered by the Act.

  • Trades in personal information for a benefit, service, or advantage

  • Is a credit reporting body or handles tax file number information

  • Has contracted with a larger business to handle their personal information — a larger business that is covered by the Act will often require, as a contractual condition, that a small business handling personal information on its behalf does so in accordance with the Privacy Act.

The Exemption Is On Borrowed Time

Even if you are currently exempt, you should be preparing now. The Privacy Act's small business exemption is under serious reconsideration. The February 2023 Privacy Act Review Report proposed abolishing this exemption entirely, which would bring approximately 2.3 million additional businesses within the scope of privacy regulation.

The second tranche of reforms is expected to contain further significant changes, including removal of the small business exemption and the introduction of a 'fair and reasonable' test for the handling of personal information.

The practical implication: even if you're exempt today, building privacy-compliant habits now protects you from a disruptive compliance scramble when the law changes — and it signals trustworthiness to the clients, suppliers, and enterprise customers who are already bound by the Act.


The OAIC's Specific AI Guidance: What Deployers Must Know

In October 2024, Australia's privacy regulator took a landmark step. The Office of the Australian Information Commissioner (OAIC) published two new guidelines on privacy and artificial intelligence. The one most relevant to business users is Guidance on privacy and the use of commercially available AI products, which explains organisations' obligations when using personal information in commercially available AI products such as chatbots, content-generation tools, productivity assistants, and note-taking and transcription tools. (The companion guideline, Guidance on privacy and developing and training generative AI models, is aimed at organisations building such models.)

This marked a shift in the OAIC's regulatory approach from enforcement-focused oversight to proactive guidance.

If you are using any AI tool in your business — even internally — you are a deployer under this framework. A 'deployer' is any individual or organisation that supplies or uses an AI system to provide a product or service, whether that use is internal to the organisation or external, affecting customers or other individuals.

Key Obligations for AI Deployers

APP entities developing or using AI systems should take the following steps to ensure privacy law compliance:

  • Review and update external privacy policies and collection notices so they give clear and transparent information about how and when AI will use and generate personal information.

  • Conduct due diligence to ensure the AI system or product is suitable for the intended use and does not pose material security risks to the business. Consider how the AI system has been trained, the quality of the data sets used to train it, and the steps taken to mitigate any bias or discrimination.

  • When adopting a commercially available product, confirm the product has been tested for your intended uses, work out how human oversight can be embedded into your processes, assess the potential privacy and security risks, and establish who will have access to personal information input into or generated by the product.


Real Scenarios: Where Australian Businesses Get It Wrong

Scenario 1: Uploading Client Records into ChatGPT

A bookkeeper pastes a client's full name, ABN, bank account details, and transaction history into ChatGPT to draft a financial summary. This is a common shortcut — and a serious compliance risk.

APP entities are advised not to enter personal information — particularly sensitive information — into publicly available generative AI tools such as chatbots, due to the significant and complex privacy risks involved.

The problem is twofold. First, you may be disclosing that information to an overseas operator (most AI tools are US-based), triggering APP 8 obligations around cross-border disclosure. Second, you may not have collected that information for the purpose of feeding it into a third-party AI system, which creates an APP 6 breach.

Under Australian Privacy Principle (APP) 6, an individual's personal information may only be used or disclosed for AI for the primary purpose for which it was collected, unless the individual consents to the secondary use, or would reasonably expect the entity to use or disclose their information for that secondary purpose.

The fix: Use enterprise-grade AI tools with data processing agreements that confirm your data is not used for model training (see our guide on ChatGPT vs. Google Gemini vs. Microsoft Copilot: Which AI Assistant Is Right for Your Australian Business? for a comparison of privacy settings across the major platforms). De-identify data wherever possible before it enters any AI tool.
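To make the de-identification step concrete, the sketch below masks some common Australian identifiers (email addresses, phone numbers, ABNs) with regular expressions before text leaves your systems. The patterns are illustrative assumptions only — real de-identification needs far broader coverage (names, addresses, account numbers) and is better handled by a dedicated PII-detection tool.

```python
import re

# Illustrative patterns only -- not an exhaustive or legally sufficient
# de-identification method.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
    "ABN": re.compile(r"\b\d{2} ?\d{3} ?\d{3} ?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the
    text is sent to any third-party AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

clean = redact("Invoice for Jane, jane@example.com, ABN 51 824 753 556, ph 03 9123 4567")
print(clean)
# → Invoice for Jane, [EMAIL], ABN [ABN], ph [PHONE]
```

Note that "Jane" survives untouched: pattern matching alone cannot catch names, which is exactly why regex redaction is a first line of defence, not a substitute for proper de-identification.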

Scenario 2: Deploying a Third-Party AI Chatbot on Your Website

A retail business installs a third-party AI chatbot to handle customer enquiries. The chatbot collects names, email addresses, and purchase queries. The business owner assumes the chatbot provider handles all the privacy obligations.

This assumption is wrong. If your organisation is covered by the Privacy Act, you will need to understand your obligations under the APPs when using AI. This includes being aware of the different ways that your organisation may be collecting, using, and disclosing personal information when interacting with an AI product.

You remain responsible for the personal information the chatbot collects. You must ensure your privacy policy discloses the chatbot's data collection, that customers are notified at the point of collection, and that the third-party provider has adequate security and data handling standards.

Some of the significant privacy risks identified by the OAIC include the risk of individuals losing control over their personal information, where personal information may be collected without their knowledge and consent, and the spread of errors or false information via AI outputs which appear credible.

Scenario 3: Using AI to Make Decisions About Staff or Customers

An HR software platform uses AI to screen job applications and rank candidates. A financial services firm uses AI to pre-approve or decline loan applications.

Automated Decision-Making (ADM) transparency is the headline change in the proposed reforms — under the proposals, organisations using AI to make or materially contribute to decisions that significantly affect individuals must disclose this use and provide meaningful information about how the AI works. This is not a blanket ban on automated decisions; it's a transparency and accountability obligation.

As part of the Privacy and Other Legislation Amendment Act 2024, which received Royal Assent on 10 December 2024, privacy policies will need to be expressly transparent about the use of personal information for substantially automated decision-making that has a legal or otherwise similarly significant effect.


What the Regulator Is Actually Doing: Enforcement in Action

The OAIC is not just issuing guidance — it is actively investigating and penalising businesses that misuse technology to collect personal information without consent.

On 29 October 2024, after an almost two-year investigation, the Australian Privacy Commissioner determined that retail giant Bunnings had, through its use of facial recognition technology at 62 of its retail stores around the country between November 2018 and November 2021, interfered with the privacy of hundreds of thousands of customers.

The OAIC has issued several landmark determinations relevant to AI-powered facial recognition technology, including Clearview AI in 2021, where the OAIC found that scraping online images to build a facial recognition database breached Australian privacy law; 7-Eleven Stores in 2021; Bunnings Group in 2024; and Kmart Australia in 2025. All of these cases involved the unlawful collection of customers' biometric information.

The enforcement posture is hardening. The Bunnings decision highlights a broader development: harms-focused enforcement from the OAIC. The OAIC stated it would be moving to become a 'harm-focused regulator' in its Statement of Intent dated 30 October 2024.

Higher-tier penalties under the Privacy Act are not triggered by isolated mistakes. They arise when regulators determine that an organisation failed to take reasonable steps to manage known or foreseeable privacy risks.

For businesses, this means the standard is not perfection — it is demonstrable, proportionate effort to understand and manage privacy risks before they materialise.


Your Privacy Compliance Checklist for AI Tools

Use this checklist before deploying any AI tool that handles customer or staff data:

Before You Select a Tool

  • [ ] Identify what personal information the tool will process — names, contact details, financial data, health information, images?
  • [ ] Determine whether you are covered by the Privacy Act — check your turnover, industry, and any contractual obligations
  • [ ] Review the AI vendor's data processing agreement — confirm whether your data is used to train their models
  • [ ] Check where data is stored — is it processed overseas? If so, APP 8 applies and you must take reasonable steps to ensure equivalent protection

Before You Go Live

  • [ ] Update your privacy policy to disclose that AI tools are used, what data they process, and for what purpose
  • [ ] Add collection notices at the point where the AI tool first collects customer data
  • [ ] Conduct a Privacy Impact Assessment (PIA) for high-risk uses — the OAIC provides a free PIA guide
  • [ ] Establish a data breach response plan — entities covered by the Privacy Act have obligations under the Notifiable Data Breaches scheme. If they experience a data breach of personal information that is likely to result in serious harm to affected individuals, they must notify those individuals and the OAIC.

Ongoing

  • [ ] Review AI outputs for accuracy — any inferred, incorrect, or artificially generated information produced by AI models — such as hallucinations — may still constitute personal information and be subject to Australian privacy laws to the extent an individual can be identified or is reasonably identifiable.

  • [ ] Audit your AI tool use at least annually as your business and the tools evolve

  • [ ] Train staff on what data they are and are not permitted to enter into AI tools
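Staff training can be backed up by a simple technical control: a pre-submission check that flags prompts which appear to contain personal information before they reach an AI tool. The sketch below is a minimal, hypothetical screen using the same kind of regex patterns as above; a production control would use a dedicated data-loss-prevention (DLP) tool with much broader coverage.

```python
import re

# Hypothetical deny-list of patterns; illustrative only, not exhaustive.
BLOCKED = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "tax file number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of likely personal information found in a prompt.
    An empty list means the prompt passed this (limited) screen."""
    return [label for label, pattern in BLOCKED.items() if pattern.search(prompt)]

findings = check_prompt("Summarise account for client, TFN 123 456 789")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
```

Where the redaction sketch earlier masks identifiers and lets the request proceed, a guard like this refuses the request outright — a useful default for free-tier public tools where no data processing agreement is in place.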


What's Coming: Privacy Reforms That Will Affect AI Use

The regulatory landscape is moving quickly. The first tranche of reforms, passed in 2024, introduced new transparency obligations around automated decision-making that will take effect in December 2026.

Privacy reforms remain on the agenda, including stronger consent rules, potential rights to explanation for high-impact automated decisions, direct rights of action, and higher penalties. These reforms will significantly shape compliant AI data practices.

Australia's privacy regulator, the OAIC, has been proactive in interpreting the Act in AI contexts and is actively regulating AI through interpretation and enforcement rather than waiting for dedicated legislation.

For a broader view of how Australia's AI governance framework is evolving — including the October 2025 Guidance for AI Adoption and its six key practices — see our companion guide: Responsible AI for Australian SMEs: Understanding the Government's Guidance for AI Adoption.


Key Takeaways

  • The Privacy Act 1988 already applies to AI. There is no separate AI privacy law — the existing APPs govern every use of personal information in AI systems, regardless of the technology involved.
  • Most small businesses are currently exempt from the Privacy Act if their annual turnover is under $3 million, but this exemption is under active review and is expected to be removed, potentially bringing 2.3 million additional businesses into scope.
  • You are a "deployer" the moment you use any AI tool that handles personal information — even internally — and the OAIC's October 2024 guidance sets out specific obligations for deployers around transparency, due diligence, and data minimisation.
  • Do not paste personal information into public AI tools. The OAIC explicitly advises against entering personal or sensitive information into publicly available generative AI tools such as free-tier chatbots.
  • Update your privacy policy now. The Privacy and Other Legislation Amendment Act 2024 requires transparency about automated decision-making, and the OAIC expects all AI-using businesses to update their privacy policies and collection notices to reflect how AI processes personal information.

Conclusion

Privacy compliance is not a bureaucratic afterthought to AI adoption — it is the foundation on which responsible AI use is built. The OAIC has made clear that the Privacy Act applies to every business that handles personal information through AI tools, and its enforcement posture is becoming more active, not less. The Bunnings determination, the Clearview AI finding, and the October 2024 AI guidance all point in the same direction: Australian regulators are watching how businesses use technology to collect and process personal data, and they are prepared to act.

The good news is that compliance does not require a legal team or an IT department. It requires clear thinking about what data you are using, why you are using it, and whether the people it belongs to would reasonably expect it to be processed in that way.

For practical next steps, explore our related guides: Step-by-Step: How to Implement Your First AI Tool in an Australian Small Business walks you through a privacy-conscious implementation process, while AI Cybersecurity Risks for Australian Small Businesses covers the operational security dimension that sits alongside your legal obligations. Together, they give you the complete picture of what responsible AI adoption looks like in practice.


References

  • Office of the Australian Information Commissioner (OAIC). "Guidance on privacy and the use of commercially available AI products." OAIC, 21 October 2024. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products

  • Office of the Australian Information Commissioner (OAIC). "Guidance on privacy and developing and training generative AI models." OAIC, 21 October 2024. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-developing-and-training-generative-ai-models

  • Attorney-General's Department. "Privacy." Australian Government, 2025. https://www.ag.gov.au/rights-and-protections/privacy

  • Office of the Australian Information Commissioner (OAIC). "Part 4: Exemptions — Privacy Act Review Issues Paper Submission." OAIC, 2021. https://www.oaic.gov.au/engage-with-us/submissions/privacy-act-review-issues-paper-submission/part-4-exemptions

  • Future of Privacy Forum. "OAIC's Dual AI Guidelines Set New Standards for Privacy Protection in Australia." FPF, December 2024. https://fpf.org/blog/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/

  • International Association of Privacy Professionals (IAPP). "Global AI Governance Law and Policy: Australia." IAPP, 2025. https://iapp.org/resources/article/global-ai-governance-australia

  • Gilbert + Tobin. "OAIC AI Guidance — regulating AI to maintain privacy." Gilbert + Tobin Insights, 2025. https://www.gtlaw.com.au/insights/oaic-ai-guidance-regulating-ai-to-maintain-privacy

  • A&O Shearman. "Australian Information Commissioner publishes new guidance on privacy considerations when using AI." A&O Shearman on Data, January 2026. https://www.aoshearman.com/en/insights/ao-shearman-on-data/australian-information-commissioner-publishes-new-guidance-on-privacy-considerations-when-using-ai

  • Bird & Bird. "Australia's Privacy Regulator releases new guidance on artificial intelligence (AI)." Bird & Bird Insights, February 2025. https://www.twobirds.com/en/insights/2025/australia/australias-privacy-regulator-releases-new-guidance-on-artificial-intelligence

  • SafeAI-Aus. "Current Legal Landscape for AI in Australia." SafeAI-Aus, January 2026. https://safeaiaus.org/safety-standards/ai-australian-legislation/

  • Spruson & Ferguson. "Privacy and AI Regulations: 2024 review & 2025 outlook." Spruson & Ferguson, January 2025. https://www.spruson.com/privacy-and-ai-regulations-2024-review-2025-outlook/

  • ValiDATA. "AI and Australia's Privacy Act Reforms: What's Changing and Why It Matters." ValiDATA, April 2026. https://www.validata.ai/post/ai-and-australia-s-privacy-act-reforms-what-s-changing-and-why-it-matters
