
AI for Australian Business Compliance: Privacy Law, the Australian Privacy Act, and Data Safety Product Guide



Does the Privacy Act Apply to Your Small Business? Understanding the Threshold

For most Australian small business owners, the first question about privacy law is also the most important one: does this even apply to me?

The Privacy Act 1988 includes a small business exemption that frees companies with annual turnover below AU$3 million from compliance requirements — a threshold that covers approximately 95% of Australian businesses. On the surface, that sounds like most small businesses are off the hook. But the reality in 2025 is considerably more nuanced — and the risk of assuming you're exempt has never been higher.

Here's why the exemption is not a safe harbour for AI-adopting small businesses:

1. The exemption is under active reform. The February 2023 Privacy Act Review Report proposed abolishing the small business exemption entirely, which would bring approximately 2.3 million additional businesses within the scope of privacy regulation.

The second tranche of reforms is expected to include further significant changes, among them removal of the small business exemption and the introduction of a 'fair and reasonable' test for the handling of personal information.

2. Certain small businesses are already covered. You must comply with the Privacy Act if you're an Australian Government agency, a business with annual turnover of A$3 million or more, a private health service provider regardless of turnover, or a small business that provides credit reporting services. This means every allied health practice, physiotherapist, psychologist, and dental clinic — regardless of size — is fully bound by the Act and the Australian Privacy Principles (APPs) when using AI tools.

3. Contractual obligations extend coverage further. Even a business that qualifies for the turnover exemption can become contractually bound to privacy standards that meet or exceed the Act — including GDPR-equivalent protections — when it contracts with large organisations.

4. The new statutory tort creates direct liability. With the statutory tort now in effect since June 2025, individuals can sue directly for serious invasions of privacy. Courts can award damages for financial loss, emotional distress, and harm to reputation — creating significant liability risk beyond regulatory penalties.

The practical upshot: even if your business is technically exempt today, using AI tools that process customer or employee data without appropriate safeguards creates legal, reputational, and commercial exposure that is growing — not shrinking.


What the Australian Privacy Principles Mean When You Use AI

The Privacy Act contains 13 Australian Privacy Principles (APPs). The APPs apply to government agencies and private sector organisations with an annual turnover of $3 million or more. The APPs are principles-based — protecting privacy while not burdening agencies and organisations with inflexible prescriptive rules. They deal with all stages of the processing of personal information, setting out standards for the collection, use, disclosure, quality, and security of personal information.

When AI enters the picture, these principles apply in ways that many business owners don't anticipate. The Privacy Act 1988 and the Australian Privacy Principles apply to all uses of AI involving personal information, including where information is used to train, test, or use an AI system. If your organisation is covered by the Privacy Act, you will need to understand your obligations under the APPs when using AI.

The five APPs most directly triggered by AI tool use are:

APP 3 — Collection of Personal Information

You may only collect personal information that is reasonably necessary for your business functions. Organisations cannot collect personal information just in case it could be useful for analytics or other AI applications in the future. Before collecting personal information, organisations should consider if de-identified or anonymous information could be used to achieve the same results.

APP 6 — Use and Disclosure of Personal Information

In accordance with Australian Privacy Principle 6, personal information may only be used or disclosed in an AI system for the primary purpose for which it was collected — or otherwise with consent. This means you cannot, for example, feed your customer contact database into an AI marketing tool if customers provided their details only for order fulfilment.

APP 8 — Cross-Border Disclosure

This is where AI tools create the most significant hidden risk. Under APP 8, organisations remain legally responsible for how personal information is handled overseas, even when that data is processed by third-party SaaS platforms, cloud providers, analytics services, or AI vendors. Liability now follows the data, not the contract. Every time you paste customer data into ChatGPT, Claude, or Gemini — tools hosted on servers outside Australia — APP 8 is engaged.

APP 10 — Data Quality and Accuracy

If the AI product has not been trained on Australian data sets, you should consider whether your organisation's use of the product is likely to comply with your accuracy obligations under APP 10. This has direct relevance when using AI for customer-facing communications, medical or legal summaries, or financial advice — outputs must be accurate, and you remain responsible for them.

APP 11 — Security of Personal Information

Australian Privacy Principle 11 requires organisations to take reasonable steps to protect personal information held from misuse, interference and loss, as well as unauthorised access, modification or disclosure, and to destroy or de-identify the information when it is no longer required.


The Notifiable Data Breaches (NDB) scheme, which has been in operation since 2018, requires covered entities to notify both the OAIC and affected individuals when a data breach is likely to result in serious harm. The scale of the problem is growing rapidly.

Businesses and government agencies reported more than 1,100 data breaches to the regulator and the public in 2024 — the highest annual total since mandatory data breach notification requirements started in 2018. The OAIC was notified of 595 data breaches in the second half of 2024 alone, ending the year with a total of 1,113 notifications — a 25% increase from 893 notifications in 2023.

The average cost of a data breach in 2024 was A$4.26 million, according to IBM, reinforcing the importance of proactive risk mitigation.

AI is increasingly implicated in breach pathways. In early 2025, a contractor working for an Australian organisation uploaded personal information — including names, contact details, and health records — of people involved with a government program into an AI system. This led to a serious data spill and is considered a notifiable data breach. This is precisely the type of incident that can occur in any small business when staff use public AI tools without clear guidelines.

Under the NDB scheme, the notification clock starts immediately. The Privacy Act requires organisations to take reasonable steps to conduct a data breach assessment within 30 days of becoming aware there are grounds to suspect they may have experienced an eligible data breach. Once the organisation forms a reasonable belief that there has been an eligible data breach, they must notify affected individuals and the OAIC as soon as practicable.
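As a rough illustration of that assessment window, the 30-day deadline can be computed mechanically. This is a sketch only — the function and constant names are this example's own, and none of it is legal advice:

```python
from datetime import date, timedelta

# Illustrative only: the NDB scheme requires the breach assessment to be
# completed within 30 days of becoming aware of grounds to suspect an
# eligible data breach. Notification itself must then follow "as soon
# as practicable" once a reasonable belief is formed.
ASSESSMENT_WINDOW_DAYS = 30

def assessment_deadline(awareness_date: date) -> date:
    """Latest date by which the data breach assessment should be complete."""
    return awareness_date + timedelta(days=ASSESSMENT_WINDOW_DAYS)

print(assessment_deadline(date(2025, 6, 1)))  # 2025-07-01
```

Logging the awareness date the moment a suspected incident is reported makes this deadline unambiguous later, when the OAIC asks how quickly you acted.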


What Data Should Never Be Uploaded to Public AI Platforms

The OAIC has been unambiguous on this point: it does not recommend inputting personal information into public generative AI tools.

The reason is structural. Due to limited resources and technical expertise, small businesses may lack strong data security and governance frameworks, making them vulnerable to: accidental data leaks through cloud-based AI tools, unauthorised access to sensitive customer information, and potential misuse of customer data by third-party AI providers.

Public AI tools, especially on their free tiers, will use your conversations to improve and train their models. Gemini, ChatGPT, and others offer opt-out settings, but because training is typically enabled by default, most users never change them. Even more concerning: internal documents, client details, financial information, and whatever else your team uploads could be read by someone working for the AI company, as human review for safety and quality purposes is standard industry practice.

A Practical Data Classification Framework for Australian SMEs

Use this framework to classify what staff can and cannot input into AI tools:

| Data Category | Examples | Public AI Tools (e.g., free ChatGPT) | Enterprise AI (e.g., Microsoft Copilot, ChatGPT Team) |
|---|---|---|---|
| Prohibited | Customer names + contact details, health records, TFNs, employee salaries, legal case files | ❌ Never | ⚠️ Only with DPA in place |
| Restricted | Internal pricing, supplier contracts, unreleased product plans | ❌ Never | ⚠️ Review vendor terms first |
| Permitted with care | De-identified customer feedback, anonymised case studies | ✅ Acceptable | ✅ Acceptable |
| Freely permitted | Blog drafts, generic marketing copy, publicly available info | ✅ Acceptable | ✅ Acceptable |
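A classification framework only works if staff can check it quickly. One option is to encode the rules in tooling rather than memory — the sketch below expresses the framework as a simple lookup (category and tier names are this example's own, not a standard):

```python
# Illustrative sketch: the data classification framework as a lookup table,
# so an internal script or onboarding quiz can answer "can I paste this?"
# consistently. Keys and outcome strings are this example's own wording.
POLICY = {
    "prohibited": {"public": "never", "enterprise": "only with DPA in place"},
    "restricted": {"public": "never", "enterprise": "review vendor terms first"},
    "permitted_with_care": {"public": "acceptable", "enterprise": "acceptable"},
    "freely_permitted": {"public": "acceptable", "enterprise": "acceptable"},
}

def may_use(category: str, tool_tier: str) -> str:
    """Return the policy outcome for a data category and AI tool tier."""
    return POLICY[category][tool_tier]

print(may_use("prohibited", "public"))      # never
print(may_use("restricted", "enterprise"))  # review vendor terms first
```

The same table could live in a shared spreadsheet; the point is that the rule is written down once and applied uniformly.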

How to Assess an AI Vendor's Data Policy

When looking to adopt a commercially available product, organisations should conduct due diligence to ensure the product is suitable to its intended uses. This should include considering whether the product has been tested for such uses, how human oversight can be embedded into processes, the potential privacy and security risks, as well as who will have access to personal information input or generated by the entity when using the product.

When reviewing an AI vendor's data policy, Australian small business owners should ask the following questions:

  1. Where is data stored? Is it processed in Australia or overseas? If overseas, which country, and does that country have comparable privacy protections? This is directly relevant to your APP 8 obligations.
  2. Is your data used to train the AI model? If yes, can you opt out? Is opt-out the default or does it require manual configuration?
  3. Who has access to your inputs? Does the vendor employ human reviewers? Under what circumstances?
  4. What is the data retention period? How long are prompts and outputs stored, and can you request deletion?
  5. Is there a Data Processing Agreement (DPA) available? Using ChatGPT via an API requires a Data Processing Agreement outlining responsibilities for data protection. For any tool handling personal information, a DPA is essential.
  6. Does the vendor have an Australian privacy policy? Some global AI vendors publish terms that do not explicitly address Australian law.
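To keep vendor reviews consistent, the six questions above can be tracked as a simple checklist record. The sketch below is hypothetical — field names are this example's own, not drawn from the OAIC or any standard:

```python
from dataclasses import dataclass, fields

# Hypothetical due-diligence record mirroring the six vendor questions.
# All fields should be True before a tool handling personal information
# is approved for use.
@dataclass
class VendorAssessment:
    data_stored_onshore_or_adequate: bool    # Q1: where data is stored (APP 8)
    training_opt_out_confirmed: bool         # Q2: inputs not used for training
    human_review_terms_acceptable: bool      # Q3: who can access your inputs
    retention_and_deletion_documented: bool  # Q4: retention period and deletion
    dpa_in_place: bool                       # Q5: Data Processing Agreement
    australian_privacy_terms: bool           # Q6: addresses Australian law

def outstanding_items(a: VendorAssessment) -> list[str]:
    """Unresolved questions; all should be cleared before rollout."""
    return [f.name for f in fields(a) if not getattr(a, f.name)]

draft = VendorAssessment(True, True, False, True, False, True)
print(outstanding_items(draft))  # ['human_review_terms_acceptable', 'dpa_in_place']
```

Keeping the completed record alongside the vendor contract also gives you evidence of due diligence if the OAIC ever asks.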

A governance-first approach is the most reliable way to manage AI privacy risks. In practice, this means embedding privacy-by-design into the selection and deployment of any AI product that collects or uses personal information, and implementing an ongoing process to monitor the AI's use of personal information throughout the product lifecycle.

For businesses using AI in accounting (see our guide on AI for Accounting and Cash Flow: How Australian SMEs Are Using Xero, MYOB, and AI-Powered Finance Tools), this vendor assessment is especially critical — your accounting data contains precisely the kind of sensitive financial information that must never enter uncontrolled AI environments.


The New Automated Decision-Making Disclosure Obligation

A significant change introduced by the Privacy and Other Legislation Amendment Act 2024 directly affects businesses using AI to make decisions about customers or staff. The Act created an additional privacy policy disclosure obligation that applies where: (i) a regulated entity deploys automated decision-making, and the decision could significantly affect the rights or interests of an individual; and (ii) personal information about the individual is used in the operation of the computer program to make the decision, or to do something substantially and directly related to making it.

In plain terms: if you use AI to screen job applications, assess credit, triage customer complaints, or make pricing decisions that affect individual customers, you now have an obligation to disclose this in your privacy policy. This is not optional for covered entities — and it is a harbinger of what all businesses will face when the small business exemption is eventually removed.


How to Create an Internal AI Use Policy for Your Small Business

It is recommended that APP entities establish internal policies and procedures for the use of AI products to facilitate transparency and ensure good privacy governance. The Australian Signals Directorate's Australian Cyber Security Centre (ACSC) goes further, advising small businesses to:

  • establish an internal AI use policy or process, and clearly define what data can't be uploaded into AI platforms and systems;
  • train and remind staff on responsible use of AI, especially surrounding sensitive and proprietary information; and
  • remove, anonymise, or change personal details when using an AI application so it cannot be used to identify or link to an individual.

A practical internal AI use policy for an Australian small business should cover the following elements:

1. Approved Tools List

Specify which AI tools are approved for use, at what subscription tier, and for which functions. Distinguish between consumer-grade tools (higher risk) and enterprise-grade tools with DPAs in place.

2. Data Classification Rules

Define where staff can use AI (for example, drafting internal documents, summarising non-confidential material) and where they must not (for example, sensitive HR matters, final legal advice, highly confidential bids).

3. Prompt Hygiene Standards

Require staff to de-identify or anonymise information before inputting it into AI tools. A simple rule: before pasting anything into an AI tool, ask whether the content contains names, contact details, financial figures, health information, or any data that could identify a real person. Many privacy breaches happen because someone pastes too much, too fast. A simple human oversight check can prevent this.
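That de-identification step can be partly automated. The sketch below strips a few obvious Australian identifier patterns before a prompt leaves the business — the regexes are illustrative, will miss many identifiers, and are a backstop to the human check above, not a substitute for it:

```python
import re

# Illustrative redaction sketch: replace obvious identifiers with
# placeholders before text is pasted into an AI tool. Patterns are naive
# and this example's own; real de-identification needs more than regex.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"), "[TFN]"),        # 9-digit TFN shape
    (re.compile(r"\b(?:\+?61|0)\d(?:[ -]?\d){8}\b"), "[PHONE]"),  # AU phone shape
]

def scrub(prompt: str) -> str:
    """Replace matches of each pattern with its placeholder, in order."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Chase Jane (jane@example.com, TFN 123 456 789) for the invoice"))
```

Even a crude filter like this catches the most common mistake: pasting a whole email thread or client record without reading it first.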

4. Output Review Requirements

Require human review of AI-generated content before it is used externally. Before deploying AI products, particularly customer-facing products such as chatbots, you should carefully test them to understand the risks of inaccurate or biased answers.

5. Incident Reporting Procedure

Define what constitutes a suspected AI-related privacy incident, who to notify, and what immediate steps to take. This includes what counts as suspected misuse (such as uploading client data to a public tool), who to contact if something feels off, and what to do immediately: stop, document, escalate.

6. Privacy Policy Update

Businesses should update their privacy policies and notifications with clear and transparent information about their use of AI, including ensuring that any public-facing AI tools (such as chatbots) are clearly identified as such to external users such as customers.

For businesses deploying customer-facing AI chatbots (see our guide on AI for Customer Service in Australian Small Business: Chatbots, Virtual Assistants, and After-Hours Support), this disclosure obligation is non-negotiable.


The Penalty Landscape: Why This Matters Now

Some business owners may be tempted to treat privacy compliance as a large-business problem. The penalty regime introduced by the Privacy and Other Legislation Amendment Act 2024 should dispel that assumption. High-tier violations carry maximum penalties of A$50 million, three times the value of any benefit obtained, or 30% of adjusted turnover during the breach period — whichever is greater. Even mid-tier violations carry penalties of up to A$3.3 million for companies.

Any impression that Australian privacy law is less demanding than overseas regimes must be tempered by the reality of Australia's enforcement posture and penalty regime. With maximum penalties of that magnitude, organisations face financial exposure that rivals or exceeds many of the world's most stringent privacy frameworks.

Enforcement is also becoming more active. In 2024, the Commissioner ordered Bunnings to cease using facial recognition technology in its stores, finding the retailer collected sensitive biometric information without adequate notification or consent. The OAIC has signalled that it is shifting toward a more enforcement-focused posture — not just education.


Key Takeaways

  • The small business exemption (under AU$3 million turnover) is not a permanent safe harbour. Reform proposals to remove it are active, and health businesses, those with large-company contracts, and those using AI in automated decisions may already be covered.
  • The OAIC explicitly advises against inputting personal information into public generative AI tools. APP 8 means liability follows data across borders — including to the servers of ChatGPT, Gemini, and Claude.
  • 2024 was a record year for Australian data breaches, with 1,113 notifications — a 25% increase on 2023. AI-related data spills are an emerging and documented cause.
  • Every business using AI that processes personal information should have a written internal AI use policy that classifies data, specifies approved tools, mandates prompt hygiene, and defines an incident reporting process.
  • The new automated decision-making disclosure obligation introduced by the Privacy and Other Legislation Amendment Act 2024 requires covered entities to disclose AI-driven decisions that could significantly affect individuals' rights or interests in their privacy policy.

Conclusion

Data and legal risk is consistently cited as the primary barrier to confident AI adoption among Australian small business owners — and for good reason. The regulatory environment is evolving rapidly, the OAIC has published clear expectations, and the penalty regime is now genuinely consequential. But compliance is not the enemy of AI adoption; it is the foundation of sustainable AI adoption.

The businesses that will get the most from AI in the years ahead are those that build appropriate governance structures now — before the second tranche of Privacy Act reforms arrives, before the small business exemption is removed, and before a staff member inadvertently pastes a client's health record into a free chatbot.

Understanding your obligations under the Privacy Act 1988, the Australian Privacy Principles, and the Notifiable Data Breaches scheme is not a legal exercise separate from your AI strategy — it is your AI strategy. A well-structured internal AI use policy, a disciplined vendor assessment process, and a clear data classification framework will allow your business to use AI confidently, compliantly, and competitively.

For a broader view of the responsible governance dimensions of AI adoption — including how to manage staff concerns and detect AI output bias — see our guide on Responsible AI for Australian Small Business: Ethics, Bias, Staff Impact, and Building an AI Policy. And if you're just beginning to evaluate which tools to adopt within these guardrails, our Best AI Tools for Australian Small Business in 2025: Compared by Use Case and Budget provides a compliance-aware comparison of the leading platforms available to Australian SMEs.


References

  • Office of the Australian Information Commissioner (OAIC). "Guidance on Privacy and the Use of Commercially Available AI Products." OAIC, October 2024 (updated January 2025). https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products

  • Office of the Australian Information Commissioner (OAIC). "Guidance on Privacy and Developing and Training Generative AI Models." OAIC, October 2024. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-developing-and-training-generative-ai-models

  • Office of the Australian Information Commissioner (OAIC). "Notifiable Data Breaches Report: July to December 2024." OAIC, May 2025. https://www.oaic.gov.au/privacy/notifiable-data-breaches/notifiable-data-breaches-publications/notifiable-data-breaches-report-july-to-december-2024

  • Office of the Australian Information Commissioner (OAIC). "OAIC Stats Show Record Year for Data Breaches." OAIC, 2025. https://www.oaic.gov.au/news/media-centre/oaic-stats-show-record-year-for-data-breaches

  • Attorney-General's Department, Australian Government. "Privacy." AG.gov.au, 2024–2025. https://www.ag.gov.au/rights-and-protections/privacy

  • Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC). "Artificial Intelligence for Small Business." Cyber.gov.au, January 2026. https://www.cyber.gov.au/business-government/secure-design/artificial-intelligence/artificial-intelligence-for-small-business

  • Norton Rose Fulbright. "Data and AI in the Digital Economy: An Australian Perspective." Norton Rose Fulbright, 2024. https://www.nortonrosefulbright.com/en/knowledge/publications/2f720e4f/data-and-ai-in-the-digital-economy-an-australian-perspective

  • Bird & Bird. "Australia's Privacy Regulator Releases New Guidance on Artificial Intelligence (AI)." Two Birds, 2025. https://www.twobirds.com/en/insights/2025/australia/australias-privacy-regulator-releases-new-guidance-on-artificial-intelligence

  • White & Case LLP. "AI Watch: Global Regulatory Tracker — Australia." White & Case, November 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia

  • IBM Security. "Cost of a Data Breach Report 2024." IBM, 2024. https://www.ibm.com/reports/data-breach

  • Holding Redlich. "The Privacy Law Reforms Finally Passed in 2024 Set the Priorities for 2025." Holding Redlich, 2025. https://www.holdingredlich.com/the-privacy-law-reforms-finally-passed-in-2024-set-the-priorities-for-2025
