{
  "id": "business-technology-digital-transformation/ai-for-australian-small-business/ai-for-australian-business-compliance-privacy-law-the-australian-privacy-act-and-data-safety",
  "title": "AI for Australian Business Compliance: Privacy Law, the Australian Privacy Act, and Data Safety",
  "slug": "business-technology-digital-transformation/ai-for-australian-small-business/ai-for-australian-business-compliance-privacy-law-the-australian-privacy-act-and-data-safety",
  "description": "",
  "category": "",
  "content": "I now have comprehensive, authoritative data to write this article. Let me compile the verified final article.\n\n---\n\n## Does the Privacy Act Apply to Your Small Business? Understanding the Threshold\n\nFor most Australian small business owners, the first question about privacy law is also the most important one: *does this even apply to me?*\n\n\nThe Privacy Act 1988 includes a small business exemption that frees companies with annual turnover below AU$3 million from compliance requirements — a threshold that covers approximately 95% of Australian businesses.\n On the surface, that sounds like most small businesses are off the hook. But the reality in 2025 is considerably more nuanced — and the risk of assuming you're exempt has never been higher.\n\nHere's why the exemption is not a safe harbour for AI-adopting small businesses:\n\n**1. The exemption is under active reform.** \nThe February 2023 Privacy Act Review Report proposed abolishing the small business exemption entirely, which would bring approximately 2.3 million additional businesses within the scope of privacy regulation.\n \nIt is predicted that the second tranche of reforms may contain crucial changes, including removing the small business exemption from the Privacy Act and introducing a 'fair and reasonable' test to be applied in handling personal information.\n\n\n**2. Certain small businesses are already covered.** \nYou must comply with the Privacy Act if you're an Australian Government agency, a business with annual turnover of A$3 million or more, a private health service provider regardless of turnover, or a small business that provides credit reporting services.\n This means every allied health practice, physiotherapist, psychologist, and dental clinic — regardless of size — is fully bound by the Act and the Australian Privacy Principles (APPs) when using AI tools.\n\n**3. 
Contractual obligations extend coverage further.** \nWhile the Privacy Act has long provided an exemption for businesses with annual revenue of less than $3 million, small businesses that contract with large organisations frequently find themselves contractually bound to privacy standards that exceed the Act — including GDPR-equivalent protections.\n\n**4. The new statutory tort creates direct liability.** \nWith the statutory tort in effect since June 2025, individuals can sue directly for serious invasions of privacy. Courts can award damages for financial loss, emotional distress, and harm to reputation — creating significant liability risk beyond regulatory penalties.\n\n\nThe practical upshot: even if your business is technically exempt today, using AI tools that process customer or employee data without appropriate safeguards creates legal, reputational, and commercial exposure that is growing — not shrinking.\n\n---\n\n## What the Australian Privacy Principles Mean When You Use AI\n\n\nThe Privacy Act contains 13 Australian Privacy Principles (APPs). The APPs apply to government agencies and private sector organisations with an annual turnover of $3 million or more. The APPs are principles-based — protecting privacy while not burdening agencies and organisations with inflexible prescriptive rules. They deal with all stages of the processing of personal information, setting out standards for the collection, use, disclosure, quality, and security of personal information.\n\n\nWhen AI enters the picture, these principles apply in ways that many business owners don't anticipate. \nThe Privacy Act 1988 and the Australian Privacy Principles apply to all uses of AI involving personal information, including where information is used to train, test, or use an AI system. 
If your organisation is covered by the Privacy Act, you will need to understand your obligations under the APPs when using AI.\n\n\nThe five APPs most directly triggered by AI tool use are:\n\n### APP 3 — Collection of Personal Information\nYou may only collect personal information that is reasonably necessary for your business functions. \nOrganisations cannot collect personal information just in case it could be useful for analytics or other AI applications in the future. Before collecting personal information, organisations should consider if de-identified or anonymous information could be used to achieve the same results.\n\n\n### APP 6 — Use and Disclosure of Personal Information\n\nUnder Australian Privacy Principle 6, personal information fed into an AI system may only be used or disclosed for the primary purpose for which it was collected, or otherwise with consent. This means you cannot, for example, feed your customer contact database into an AI marketing tool if customers provided their details only for order fulfilment.\n\n### APP 8 — Cross-Border Disclosure\nThis is where AI tools create the most significant hidden risk. \nUnder APP 8, organisations remain legally responsible for how personal information is handled overseas, even when that data is processed by third-party SaaS platforms, cloud providers, analytics services, or AI vendors. 
Liability now follows the data, not the contract.\n Every time you paste customer data into ChatGPT, Claude, or Gemini — tools hosted on servers outside Australia — APP 8 is engaged.\n\n### APP 10 — Data Quality and Accuracy\n\nIf the AI product has not been trained on Australian data sets, you should consider whether your organisation's use of the product is likely to comply with your accuracy obligations under APP 10.\n This has direct relevance when using AI for customer-facing communications, medical or legal summaries, or financial advice — outputs must be accurate, and you remain responsible for them.\n\n### APP 11 — Security of Personal Information\n\nAustralian Privacy Principle 11 requires organisations to take reasonable steps to protect personal information held from misuse, interference and loss, as well as unauthorised access, modification or disclosure, and to destroy or de-identify the information when it is no longer required.\n\n\n---\n\n## The Notifiable Data Breaches Scheme: What AI-Related Breaches Look Like\n\nThe Notifiable Data Breaches (NDB) scheme, which has been in operation since 2018, requires covered entities to notify both the OAIC and affected individuals when a data breach is likely to result in serious harm. The scale of the problem is growing rapidly.\n\n\nBusinesses and government agencies reported more than 1,100 data breaches to the regulator and the public in 2024 — the highest annual total since mandatory data breach notification requirements started in 2018. The OAIC was notified of 595 data breaches in the second half of 2024 alone, ending the year with a total of 1,113 notifications — a 25% increase from 893 notifications in 2023.\n\n\n\nThe average cost of a data breach in 2024 was A$4.26 million, according to IBM, reinforcing the importance of proactive risk mitigation.\n\n\nAI is increasingly implicated in breach pathways. 
\nIn early 2025, a contractor working for an Australian organisation uploaded personal information — including names, contact details, and health records — of people involved with a government program into an AI system. This led to a serious data spill and is considered a notifiable data breach.\n This is precisely the type of incident that can occur in any small business when staff use public AI tools without clear guidelines.\n\nUnder the NDB scheme, the notification clock starts immediately. \nThe Privacy Act requires organisations to take reasonable steps to conduct a data breach assessment within 30 days of becoming aware there are grounds to suspect they may have experienced an eligible data breach. Once the organisation forms a reasonable belief that there has been an eligible data breach, they must notify affected individuals and the OAIC as soon as practicable.\n\n\n---\n\n## What Data Should Never Be Uploaded to Public AI Platforms\n\nThe OAIC has been unambiguous on this point. \nThe OAIC does not recommend inputting personal information into public generative AI tools.\n\n\nThe reason is structural. \nDue to limited resources and technical expertise, small businesses may lack strong data security and governance frameworks, making them vulnerable to: accidental data leaks through cloud-based AI tools, unauthorised access to sensitive customer information, and potential misuse of customer data by third-party AI providers.\n\n\n\nPublic AI tools, especially on their free tier, will use your conversations to improve and train their model. 
Opt-out settings exist in Gemini, ChatGPT, and other tools, but training on user inputs is typically switched on by default, and most users never change it.\n Even more concerning: \ninternal documents, client details, financial information, and whatever else your team uploads could all be read by someone working for the AI company\n — as human review for safety and quality purposes is standard industry practice.\n\n### A Practical Data Classification Framework for Australian SMEs\n\nUse this framework to classify what staff can and cannot input into AI tools:\n\n| Data Category | Examples | Public AI Tools (e.g., free ChatGPT) | Enterprise AI (e.g., Microsoft Copilot, ChatGPT Team) |\n|---|---|---|---|\n| **Prohibited** | Customer names + contact details, health records, TFNs, employee salaries, legal case files | ❌ Never | ⚠️ Only with a Data Processing Agreement (DPA) in place |\n| **Restricted** | Internal pricing, supplier contracts, unreleased product plans | ❌ Never | ⚠️ Review vendor terms first |\n| **Permitted with care** | De-identified customer feedback, anonymised case studies | ✅ Acceptable | ✅ Acceptable |\n| **Freely permitted** | Blog drafts, generic marketing copy, publicly available info | ✅ Acceptable | ✅ Acceptable |\n\n---\n\n## How to Assess an AI Vendor's Data Policy\n\n\nWhen looking to adopt a commercially available product, organisations should conduct due diligence to ensure the product is suitable to its intended uses. This should include considering whether the product has been tested for such uses, how human oversight can be embedded into processes, the potential privacy and security risks, as well as who will have access to personal information input or generated by the entity when using the product.\n\n\nWhen reviewing an AI vendor's data policy, Australian small business owners should ask the following questions:\n\n1. **Where is data stored?** Is it processed in Australia or overseas? If overseas, which country, and does that country have comparable privacy protections? 
This is directly relevant to your APP 8 obligations.\n2. **Is your data used to train the AI model?** If yes, can you opt out? Is opt-out the default or does it require manual configuration?\n3. **Who has access to your inputs?** Does the vendor employ human reviewers? Under what circumstances?\n4. **What is the data retention period?** How long are prompts and outputs stored, and can you request deletion?\n5. **Is there a Data Processing Agreement (DPA) available?** \nUsing ChatGPT via an API requires a Data Processing Agreement outlining responsibilities for data protection.\n For any tool handling personal information, a DPA is essential.\n6. **Does the vendor have an Australian privacy policy?** Some global AI vendors publish terms that do not explicitly address Australian law.\n\n\nA governance-first approach is the most reliable way to manage AI privacy risk. In practice, this means embedding privacy-by-design into the development of any AI product that collects and uses personal information, and running an ongoing process to monitor the AI's use of personal information throughout the product lifecycle.\n\n\nFor businesses using AI in accounting (see our guide on *AI for Accounting and Cash Flow: How Australian SMEs Are Using Xero, MYOB, and AI-Powered Finance Tools*), this vendor assessment is especially critical — your accounting data contains precisely the kind of sensitive financial information that must never enter uncontrolled AI environments.\n\n---\n\n## The New Automated Decision-Making Disclosure Obligation\n\nA significant change introduced by the *Privacy and Other Legislation Amendment Act 2024* directly affects businesses using AI to make decisions about customers or staff. 
\nThe Privacy and Other Legislation Amendment Act 2024 introduced an additional privacy policy disclosure obligation where: (i) automated decision-making is deployed by a regulated entity and that decision could significantly affect the rights or interests of an individual; and (ii) personal information about the individual is used in the operation of the computer program to make the decision or do the thing that is substantially and directly related to making the decision.\n\n\nIn plain terms: if you use AI to screen job applications, assess credit, triage customer complaints, or make pricing decisions that affect individual customers, you now have an obligation to disclose this in your privacy policy. This is not optional for covered entities — and it is a harbinger of what all businesses will face when the small business exemption is eventually removed.\n\n---\n\n## How to Create an Internal AI Use Policy for Your Small Business\n\n\nIt is recommended that APP entities establish internal policies and procedures for the use of AI products to facilitate transparency and ensure good privacy governance.\n The Australian Signals Directorate's Australian Cyber Security Centre (ACSC) goes further, advising small businesses to \nestablish an internal AI use policy or process, and clearly define what data can't be uploaded into AI platforms and systems, train and remind staff on responsible use of AI especially surrounding sensitive and proprietary information, and remove, anonymise, or change personal details when using an AI application so it cannot be used to identify or link to an individual.\n\n\nA practical internal AI use policy for an Australian small business should cover the following elements:\n\n### 1. Approved Tools List\nSpecify which AI tools are approved for use, at what subscription tier, and for which functions. Distinguish between consumer-grade tools (higher risk) and enterprise-grade tools with DPAs in place.\n\n### 2. 
Data Classification Rules\n\nDefine where staff can use AI (for example, drafting internal documents or summarising non-confidential material), where they must not use AI (for example, sensitive HR matters, final legal advice, or highly confidential bids), which AI tools the business approves, and who can use them.\n\n\n### 3. Prompt Hygiene Standards\nRequire staff to de-identify or anonymise information before inputting it into AI tools. A simple rule: before pasting anything into an AI tool, ask whether the content contains names, contact details, financial figures, health information, or any data that could identify a real person. \nMany privacy breaches happen because someone pastes too much, too fast. A simple human oversight check can prevent this.\n\n\n### 4. Output Review Requirements\nRequire human review of AI-generated content before it is used externally. \nBefore deploying AI products, particularly customer-facing products such as chatbots, you should carefully test them to understand the risks of inaccurate or biased answers.\n\n\n### 5. Incident Reporting Procedure\nDefine what constitutes a suspected AI-related privacy incident, who to notify, and what immediate steps to take. \nThis includes what counts as suspected misuse (such as uploading client data to a public tool), who to contact if something feels off, and what to do immediately: stop, document, escalate.\n\n\n### 6. 
Privacy Policy Update\n\nBusinesses should update their privacy policies and notifications with clear and transparent information about their use of AI, including ensuring that any public-facing AI tools (such as chatbots) are clearly identified to external users such as customers.\n\n\nFor businesses deploying customer-facing AI chatbots (see our guide on *AI for Customer Service in Australian Small Business: Chatbots, Virtual Assistants, and After-Hours Support*), this disclosure obligation is non-negotiable.\n\n---\n\n## The Penalty Landscape: Why This Matters Now\n\nSome business owners may be tempted to treat privacy compliance as a large-business problem. The penalty regime introduced by the *Privacy and Other Legislation Amendment Act 2024* should dispel that assumption. \nHigh-tier violations carry maximum penalties of A$50 million, three times the value of any benefit obtained, or 30% of adjusted turnover during the breach period — whichever is greater.\n To see why the turnover limb matters, consider an illustrative company with A$200 million in adjusted turnover during the breach period: the 30% limb alone reaches A$60 million, overtaking the A$50 million figure. Even mid-tier violations carry penalties of up to A$3.3 million for companies.\n\n\nAny impression that Australian privacy law is less demanding than overseas regimes must be tempered by the reality of this penalty regime and Australia's enforcement posture: organisations face financial exposure that rivals or exceeds many of the world's most stringent privacy frameworks.\n\n\nEnforcement is also becoming more active. 
\nIn 2024, the Commissioner ordered Bunnings to cease using facial recognition technology in its stores, finding the retailer collected sensitive biometric information without adequate notification or consent.\n The OAIC has signalled that it is shifting toward a more enforcement-focused posture — not just education.\n\n---\n\n## Key Takeaways\n\n- **The small business exemption (under AU$3 million turnover) is not a permanent safe harbour.** Reform proposals to remove it are active, and health businesses, those with large-company contracts, and those using AI in automated decisions may already be covered.\n- **The OAIC explicitly advises against inputting personal information into public generative AI tools.** APP 8 means liability follows data across borders — including to the servers of ChatGPT, Gemini, and Claude.\n- **2024 was a record year for Australian data breaches**, with 1,113 notifications — a 25% increase on 2023. AI-related data spills are an emerging and documented cause.\n- **Every business using AI that processes personal information should have a written internal AI use policy** that classifies data, specifies approved tools, mandates prompt hygiene, and defines an incident reporting process.\n- **The new automated decision-making disclosure obligation** introduced by the *Privacy and Other Legislation Amendment Act 2024* requires covered entities to disclose AI-driven decisions that could significantly affect individuals' rights or interests in their privacy policy.\n\n---\n\n## Conclusion\n\nData and legal risk is consistently cited as the primary barrier to confident AI adoption among Australian small business owners — and for good reason. The regulatory environment is evolving rapidly, the OAIC has published clear expectations, and the penalty regime is now genuinely consequential. 
But compliance is not the enemy of AI adoption; it is the foundation of sustainable AI adoption.\n\nThe businesses that will get the most from AI in the years ahead are those that build appropriate governance structures now — before the second tranche of Privacy Act reforms arrives, before the small business exemption is removed, and before a staff member inadvertently pastes a client's health record into a free chatbot.\n\nUnderstanding your obligations under the Privacy Act 1988, the Australian Privacy Principles, and the Notifiable Data Breaches scheme is not a legal exercise separate from your AI strategy — it *is* your AI strategy. A well-structured internal AI use policy, a disciplined vendor assessment process, and a clear data classification framework will allow your business to use AI confidently, compliantly, and competitively.\n\nFor a broader view of the responsible governance dimensions of AI adoption — including how to manage staff concerns and detect AI output bias — see our guide on *Responsible AI for Australian Small Business: Ethics, Bias, Staff Impact, and Building an AI Policy*. And if you're just beginning to evaluate which tools to adopt within these guardrails, our *Best AI Tools for Australian Small Business in 2025: Compared by Use Case and Budget* provides a compliance-aware comparison of the leading platforms available to Australian SMEs.\n\n---\n\n## References\n\n- Office of the Australian Information Commissioner (OAIC). *\"Guidance on Privacy and the Use of Commercially Available AI Products.\"* OAIC, October 2024 (updated January 2025). https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products\n\n- Office of the Australian Information Commissioner (OAIC). *\"Guidance on Privacy and Developing and Training Generative AI Models.\"* OAIC, October 2024. 
https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-developing-and-training-generative-ai-models\n\n- Office of the Australian Information Commissioner (OAIC). *\"Notifiable Data Breaches Report: July to December 2024.\"* OAIC, May 2025. https://www.oaic.gov.au/privacy/notifiable-data-breaches/notifiable-data-breaches-publications/notifiable-data-breaches-report-july-to-december-2024\n\n- Office of the Australian Information Commissioner (OAIC). *\"OAIC Stats Show Record Year for Data Breaches.\"* OAIC, 2025. https://www.oaic.gov.au/news/media-centre/oaic-stats-show-record-year-for-data-breaches\n\n- Attorney-General's Department, Australian Government. *\"Privacy.\"* AG.gov.au, 2024–2025. https://www.ag.gov.au/rights-and-protections/privacy\n\n- Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC). *\"Artificial Intelligence for Small Business.\"* Cyber.gov.au, January 2026. https://www.cyber.gov.au/business-government/secure-design/artificial-intelligence/artificial-intelligence-for-small-business\n\n- Norton Rose Fulbright. *\"Data and AI in the Digital Economy: An Australian Perspective.\"* Norton Rose Fulbright, 2024. https://www.nortonrosefulbright.com/en/knowledge/publications/2f720e4f/data-and-ai-in-the-digital-economy-an-australian-perspective\n\n- Bird & Bird. *\"Australia's Privacy Regulator Releases New Guidance on Artificial Intelligence (AI).\"* Two Birds, 2025. https://www.twobirds.com/en/insights/2025/australia/australias-privacy-regulator-releases-new-guidance-on-artificial-intelligence\n\n- White & Case LLP. *\"AI Watch: Global Regulatory Tracker — Australia.\"* White & Case, November 2025. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-australia\n\n- IBM Security. *\"Cost of a Data Breach Report 2024.\"* IBM, 2024. https://www.ibm.com/reports/data-breach\n\n- Holding Redlich. 
*\"The Privacy Law Reforms Finally Passed in 2024 Set the Priorities for 2025.\"* Holding Redlich, 2025. https://www.holdingredlich.com/the-privacy-law-reforms-finally-passed-in-2024-set-the-priorities-for-2025",
  "geography": {},
  "metadata": {},
  "publishedAt": "",
  "workspaceId": "a3c8bfbc-1e6e-424a-a46b-ce6966e05ac0",
  "_links": {
    "canonical": "https://opensummitai.directory.norg.ai/business-technology-digital-transformation/ai-for-australian-small-business/ai-for-australian-business-compliance-privacy-law-the-australian-privacy-act-and-data-safety/"
  }
}