{
  "id": "future-of-work/ai-employment-in-australia/australian-government-policy-on-ai-and-jobs-what-regulation-funding-and-national-strategy-mean-for-workers",
  "title": "Australian Government Policy on AI and Jobs: What Regulation, Funding, and National Strategy Mean for Workers",
  "slug": "future-of-work/ai-employment-in-australia/australian-government-policy-on-ai-and-jobs-what-regulation-funding-and-national-strategy-mean-for-workers",
  "description": "",
  "category": "",
"content": "## Australian Government Policy on AI and Jobs: What Regulation, Funding, and National Strategy Mean for Workers\n\nMost media coverage of AI and Australian jobs focuses on what technology *can* do — which roles it might automate, which skills it will make redundant. Far less attention goes to what the Australian government is actually *doing* about it: the strategies, funding programs, regulatory debates, and institutional mechanisms that will shape whether the AI transition is managed in workers' interests or left entirely to market forces.\n\nThis distinction matters enormously. Whether you face displacement or opportunity in an AI-enabled economy is not just a function of your occupation — it is also a function of what rules govern how employers can deploy AI, what funding is available to help you retrain, and who has a seat at the table when those decisions are made. Understanding the policy landscape is not a niche concern for policy wonks. It is practical intelligence for every Australian worker.\n\n---\n\n## The Foundational Evidence Base: What Jobs and Skills Australia Found\n\nBefore examining policy responses, it is worth understanding the evidence base the government is working from. \nJobs and Skills Australia has released a landmark report on how Generative AI is starting to transform work, skills, and the Australian labour market — the Generative AI Capacity Study, which provides the first whole-of-labour-market view of Gen AI's potential, impact to date, and what's needed to support Australia's digital and AI transition.\n\n\nThe study's headline finding is calibrated, not alarmist. 
\nA landmark whole-of-labour-market study from Jobs and Skills Australia found that generative artificial intelligence is likely to augment the way we work rather than replace jobs through automation.\n More specifically, \nanalysis by Jobs and Skills Australia in 2025 found that in the near term AI is more likely to augment rather than replace most work, with only 4% of Australia's workforce in occupations with high automation exposure. JSA's analysis, based on the capabilities of GPT-4, indicated that large-scale job displacement is not occurring in Australia and that the most significant employment effects are not expected for at least a decade.\n\n\nHowever, the government is not treating this as grounds for complacency. \nThe government recognises that there is uncertainty around the direct effects of AI on the labour market, and there are community concerns. As AI reshapes how Australians work and their working conditions, continuing a tripartite dialogue with business, unions and experts to agree on a shared approach to the opportunities and challenges of AI is vital. Consultation and codesign between employers and employees can assist in capturing the benefits of AI in safe, fair and cooperative workplaces.\n\n\nThe JSA study also flags equity concerns that go beyond aggregate statistics. \nThe study notes that AI has the potential to displace people in some jobs, particularly administrative and clerical roles, and that the skills system will play an important role in equipping people to transition into new roles. 
The impact of AI is going to differ across geographical location, industry and occupation, and will also change over time as emergent technologies further change the way we work.\n (For a detailed breakdown of which roles face the greatest risk, see our guide on *Which Australian Jobs Are Most at Risk from AI?*)\n\n---\n\n## The National AI Plan (December 2025): Australia's Central Policy Document\n\n\nOn 2 December 2025, the Australian Government unveiled the National AI Plan 2025, its most comprehensive statement to date on how it intends to support Australia to shape and manage the rapid expansion of AI technologies. This is not just another strategy document — it is concrete confirmation that AI is a core economic, regulatory and political priority for Australia.\n\n\n\nThe Plan pursues three overarching goals: capturing the economic opportunity of AI through infrastructure, research and investment; spreading the benefits of AI adoption across industries, regions and the workforce; and keeping Australians safe by managing AI risks through existing legal frameworks rather than a standalone AI Act.\n\n\n### What the Plan Says About Workers\n\nFor workers, the most significant commitments in the National AI Plan concern consultation, skills, and the distribution of productivity gains. \nThe Plan states that workers and unions must have a strong voice in how AI is adopted across workplaces, and that the government will work with unions and industry representatives to ensure workplaces introduce AI technologies transparently, safely, and in ways that allow workers to share in the benefits.\n\n\n\nThe Plan is explicit that AI adoption must be consultative, transparent, and fair — meaning workers and unions should be involved early in decisions about AI use. 
Organisations are expected to consider and mitigate the impacts of AI on jobs and the workforce.\n\n\nCrucially, \nthe Plan emphasises a strong role for worker consultation and union engagement, recommending consultation wherever AI affects rostering, monitoring, performance, recruitment or work allocation.\n\n\n### What the Plan Does Not Do\n\n\nThe Plan does not create new legal obligations but signals regulatory priorities, including stronger oversight of automated decision-making, continued reliance on privacy, consumer, online safety and sectoral laws, international alignment, and the establishment of institutional capability such as the AI Safety Institute.\n\n\nThis is a significant limitation. \nThe Plan commits to involving unions in AI adoption decisions but provides few binding protections. The AI Safety Institute will begin operations in early 2026 with $29.9 million in funding to monitor AI risks and collaborate with international partners. But its advisory role without enforcement powers raises questions about effectiveness.\n\n\n---\n\n## The Senate Select Committee Report: The Strongest Worker-Centred Regulatory Push\n\n\nThe Senate Select Committee on Adopting Artificial Intelligence delivered its final report on 26 November 2024. 
The Committee's recommendations represent a material shift from the current voluntary, principles-based approach to AI, towards a mandatory regulatory framework with specific obligations for high-risk AI applications.\n\n\n\nThe committee recommended that high-risk AI applications — particularly those affecting employment, financial decisions, healthcare, and government services — be subject to mandatory compliance requirements.\n This is the closest Australia has come to legislating enforceable worker protections in the AI context.\n\nOn workplace surveillance and worker rights specifically, \nthe Committee expressed strong concern about the impacts of AI on workers' rights and working conditions, particularly around workforce planning, management and surveillance. The Committee said there was \"considerable risk these invasive and dehumanising uses of AI in the workplace undermine workplace consultation and workers' rights more generally.\"\n\n\n\nThe government tabled its formal response to this report in the House of Representatives on 1 April 2026\n — a response that largely accepted the spirit of worker-focused recommendations while stopping short of committing to binding legislation on the timeline the Committee envisaged.\n\n### The Regulatory Gap: From Recommendations to Law\n\n\nThe inquiry called for a dual response: an acceleration of AI adoption, particularly in government and the public sector; and mandatory guardrails for high-risk AI applications, moving beyond the current voluntary ethics framework. This is not a subtle shift — it is a recommendation to make parts of Australia's AI Ethics Framework legally enforceable.\n\n\nHowever, the government's trajectory has been away from this position. 
\nOn 2 December 2025, the Albanese Government released its National AI Plan, which retreated from the Government's first-term commitment to introduce \"mandatory guardrails\" and instead directs regulators, working with the newly formed AI Safety Institute, to report any gaps in existing legislation.\n\n\n\nRather than establishing the mandatory guardrails for AI in high-risk settings that the government was exploring last year, Australia will instead \"continue to build on Australia's robust existing legal and regulatory frameworks, ensuring that established laws remain the foundation for addressing and mitigating AI-related risks.\"\n\n\n---\n\n## The Regulatory Debate: ACTU vs. Business Council of Australia\n\nThe central fault line in Australian AI policy is the dispute between organised labour and business over whether worker protections should be binding or voluntary.\n\n### The Union Position\n\n\nAt the federal level, unions, led by the Australian Council of Trade Unions (ACTU), are advocating for mandatory \"AI Implementation Agreements\" that would require employers to consult with staff before introducing new AI technologies.\n\n\n\nAdditional union proposals include a right for workers to refuse to use AI in certain circumstances, mandated training, reforms to surveillance laws, and expanded bargaining rights related to AI adoption.\n\n\nThe ACTU's frustration with the pace of government action is pointed. 
\n\"Workers are tired of being told by large tech companies that AI will bring improvements in the far distant future, when our rights and our jobs are under threat right now,\" ACTU Assistant Secretary Joseph Mitchell said.\n\n\nIn a significant development, \nMicrosoft Australia and the ACTU announced an agreement to \"develop a framework to elevate the voices and expertise of working people in the introduction of AI and other emerging technologies into Australian workplaces.\" The agreement, a first in Australia, is grounded in three core objectives: information sharing with union leaders and workers, worker voice in technology development, and collaboration on public policy and skills.\n\n\n### The Business Position\n\n\nThe ACTU called for the government to force employers to consult with staff before introducing new AI tools, while business groups warned that additional regulation could stifle adoption and reduce productivity gains.\n\n\n\nBran Black, chief executive of the Business Council of Australia, said the National AI Plan charts a clear direction for using AI to boost productivity and competitiveness\n — signalling business satisfaction with the government's decision to rely on existing frameworks rather than introduce new binding obligations.\n\n### The Government's Balancing Act\n\nThe government has explicitly sided with neither camp entirely. 
\nAustralian Treasurer Jim Chalmers has rejected union calls for immediate regulation of AI in Australian workplaces, saying that \"regulation will matter but we are overwhelmingly focused on capabilities and opportunities, not just guardrails.\"\n\n\n\nWhile the Australian Government appears to be moving away from a dedicated AI Act, recent supportive comments from key members of the Australian Government indicate that employers should be prepared for more targeted legislative changes which give workers and unions greater voice in the adoption of AI in the workplace.\n\n\n---\n\n## What Existing Law Currently Covers — and What It Doesn't\n\nWorkers asking \"what are my legal rights when my employer deploys AI?\" face a frustrating answer: current protections are patchwork, not purpose-built.\n\n\nAustralia does not have dedicated or overarching AI legislation. Instead, its regulatory approach relies on a combination of voluntary frameworks and existing non-AI-specific laws.\n\n\nThe most significant recent legislative development is the Privacy Act amendment. \nIn December 2024, the Australian Parliament passed the Privacy and Other Legislation Amendment Bill 2024, which among other amendments, adds a requirement for privacy policies to contain information about substantially automated\n decision-making processes. 
This creates a disclosure obligation — but not a right of appeal or a requirement for employer consultation before deployment.\n\nOn workplace safety, \nthe National AI Plan emphasises worker consultation and union engagement, and Safe Work Australia's best practice review signals forthcoming WHS guidance on AI safety risks such as psychosocial hazards and workplace monitoring.\n This guidance, when released, may create de facto compliance expectations for employers even without formal legislative change.\n\n(For a detailed analysis of your specific legal rights as a worker when AI affects your role, see our guide on *AI and the Australian Workplace: Your Legal Rights, Union Protections, and What Employers Must Disclose.*)\n\n---\n\n## Government Funding and Skills Programs: What Workers Can Access Now\n\nThe most concrete and immediately accessible policy commitments for workers are skills funding programs. These are not hypothetical — they are live, funded, and available.\n\n### The One Million Free AI Courses Initiative\n\n\nThe Australian Government has announced a national initiative to provide one million Australians with free AI skills training, saying the program will help workers and small businesses prepare for the increasing use of artificial intelligence.\n\n\n\nThrough the National AI Centre, and in partnership with TAFE NSW's Institute of Applied Technology – Digital, the government is offering one million fully subsidised scholarships for an online microskill course based on the government's Guidance for AI Adoption, launched in October 2025.\n\n\n\nTogether with the launch of the National AI Plan and the establishment of the Australian AI Safety Institute, the government is making sure that anyone who wants to use AI can do so safely, responsibly and effectively — no matter where they live or what industry they're in. 
Acting skills and training minister Amanda Rishworth said digital and AI capability must now be seen as core foundational skills.\n\n\n### The VET and TAFE Pathway\n\n\nA major theme of the National AI Plan is the need for lifelong learning and broad AI capability uplift across the workforce. Initiatives include VET and TAFE programs, microcredentials, and the Next Generation Graduates Program.\n\n\n\nThe Institute of Applied Technology offers several AI microcredential courses, such as the Responsible AI microcredential. These courses have attracted more than 150,000 enrolments to date.\n\n\n### The Next Generation Graduates Program\n\n\nThe Next Generation Graduates Program is building a pipeline of highly skilled professionals in AI and emerging technologies through industry-linked postgraduate scholarships\n — targeting those seeking to transition into AI-native roles rather than simply build foundational literacy.\n\n### The AI Adoption Tracker\n\n\nUpdated data from the National AI Centre's AI Adoption Tracker shows that small and medium Australian businesses continue to embrace artificial intelligence in their operations, along with responsible AI practices.\n \nThe AI Adoption Tracker and National AI Ecosystem Report form part of the evidence base that will guide ongoing refinement of the National AI Plan and help identify gaps and priorities.\n For workers, this matters because it tracks where AI deployment is accelerating — and therefore where skills demand and potential displacement pressures are highest.\n\n### Addressing Digital Exclusion\n\n\nThe Plan acknowledges persistent digital exclusion and uneven AI adoption across regions and communities. 
To address this, the Government is consolidating SME and not-for-profit support within the National AI Centre, extending First Nations support initiatives, and accelerating AI uptake across the public service through GovAI, the introduction of Chief AI Officers in every agency, and strengthened legal frameworks for automated decision-making.\n\n\n(For a step-by-step guide to accessing these programs, see our guide on *How to Future-Proof Your Career Against AI in Australia: A Step-by-Step Upskilling Plan.*)\n\n---\n\n## The AI Safety Institute: What It Means for Worker Protection\n\n\nOn 25 November 2025, the Commonwealth Government announced it would establish a national AI Safety Institute. The AISI will strengthen testing, evaluation and oversight of advanced AI systems, coordinate with regulators such as the Office of the Australian Information Commissioner and support risk-based regulatory responses to AI. Australia will also join the International Network of AI Safety Institutes, aligning local practice with comparable efforts in the US, UK, Canada, South Korea and Japan.\n\n\nFor workers, the AISI's significance is indirect but real. Its mandate to coordinate across regulators means that AI harms identified in employment contexts — discriminatory hiring algorithms, intrusive workplace monitoring systems, AI-driven wage suppression — can be escalated to a dedicated institutional body rather than falling through the cracks between existing regulators.\n\nThe caveat, noted above, is scale: the institute begins operations in early 2026 with $29.9 million in funding. 
Its advisory role, without enforcement powers, also leaves open questions about how much protection it can actually deliver for workers.\n\n\n---\n\n## What Policy Means for Employers: Current and Emerging Obligations\n\nWhile no comprehensive AI employment law yet exists, the policy direction creates practical obligations employers should already be preparing for:\n\n| Area | Current Status | Direction of Travel |\n|------|---------------|---------------------|\n| **Worker consultation before AI deployment** | Voluntary / recommended | Likely to become a formal obligation in enterprise agreements |\n| **AI surveillance in the workplace** | Regulated under existing WHS and privacy law | Senate Committee called for stronger, AI-specific limits |\n| **AI in hiring decisions** | No mandatory disclosure | Privacy Act amendments require disclosure of automated decision-making |\n| **Worker retraining obligations** | No statutory requirement | National AI Plan signals expectation of employer investment |\n| **High-risk AI (e.g. affecting employment)** | Voluntary guardrails only | Senate Committee recommended mandatory compliance |\n\n\nOrganisations that embrace responsible AI implementation — with robust worker protections and clear governance — are likely to see enhanced productivity and reduced operational risks in the long term. Reviewing AI implementations through a WHS lens is now recommended.\n\n\n---\n\n## The Public Trust Gap: Why Policy Credibility Matters\n\nAny policy framework operates against a backdrop of public trust — and on AI, Australia has a significant deficit. \nAccording to a 2025 study by the University of Melbourne and KPMG, only 30% of Australians believe the benefits of AI outweigh its risks; just 36% of citizens trust AI systems more broadly. 
Approximately 78% of respondents expressed concern about negative outcomes from AI, and only 30% believe current laws and safeguards are adequate.\n\n\nThis trust gap creates a policy dilemma: the government needs broad AI adoption to realise productivity gains, but workers and citizens are sceptical that their interests are protected. The gap between the Senate Committee's binding-regulation recommendations and the National AI Plan's voluntary-framework approach is precisely where that scepticism is concentrated.\n\n---\n\n## Key Takeaways\n\n- **Australia has a National AI Plan (December 2025) but no standalone AI law.** The government is relying on existing legal frameworks — privacy, consumer, WHS, and discrimination law — rather than introducing a dedicated AI Act. This means worker protections remain patchwork and context-dependent.\n\n- **The Senate Select Committee recommended mandatory guardrails for high-risk AI use, including employment decisions.** The government's April 2026 response accepted the spirit of these recommendations but has not yet committed to a legislative timeline.\n\n- **The ACTU is pushing for binding \"AI Implementation Agreements\" requiring mandatory employer consultation before AI deployment.** Business groups oppose this. 
The government has signalled it favours consultation but has rejected immediate binding regulation.\n\n- **The most accessible worker-facing policy is skills funding.** One million free AI microskill courses through TAFE NSW and the National AI Centre, plus the Next Generation Graduates Program and VET microcredentials, are live and available now.\n\n- **The AI Adoption Tracker and Jobs and Skills Australia's Generative AI Capacity Study form the evidence base for ongoing policy.** Workers and advocates can use these tools to monitor where AI deployment is accelerating and where policy gaps are most acute.\n\n---\n\n## Conclusion\n\nAustralian government policy on AI and jobs is best understood as a system under construction — with a clear direction of travel but significant gaps between aspiration and enforceable obligation. The National AI Plan sets the philosophical framework: consultative, worker-inclusive, skills-focused, and built on existing law rather than a new regulatory regime. The Senate Select Committee pushed harder, calling for mandatory compliance in high-risk AI applications including employment. The ACTU is pushing harder still, seeking binding consultation rights and surveillance limits. The Business Council of Australia is resisting, arguing that regulation will constrain the productivity gains AI can deliver.\n\nFor workers, the practical upshot is this: the government's policy posture provides real funding for skills (the one million free courses are genuine and accessible), a commitment to worker consultation that is becoming an industry norm even without legal force, and an institutional infrastructure — the AI Safety Institute, Jobs and Skills Australia, the National AI Centre — that gives worker advocates a place to surface harms. What it does not yet provide is enforceable legal protection against AI-driven displacement, surveillance, or discriminatory automated decision-making.\n\nThe policy gap is real. 
But it is also a moving target — and the direction of movement, driven by union advocacy, Senate scrutiny, and mounting public concern, is toward greater worker protection, not less.\n\nFor the broader context on how Australia's policy posture compares internationally, see our guide on *AI and Jobs in Australia vs. the World: How Australia Compares to the US, UK, and OECD Nations.* For what this means for your specific occupation and career decisions, see *Should You Retrain, Pivot, or Stay? How to Decide Your Best Career Move in an AI-Disrupted Australian Job Market.*\n\n---\n\n## References\n\n- Jobs and Skills Australia. *\"Our Gen AI Transition: Implications for Work and Skills.\"* Australian Government, August 2025. https://www.jobsandskills.gov.au/publications/generative-ai-capacity-study-report\n\n- Australian Government, Department of Industry, Science and Resources. *\"National AI Plan 2025.\"* December 2, 2025. https://www.industry.gov.au/publications/national-ai-plan\n\n- Australian Government, Department of Industry, Science and Resources. *\"Australian Government Response: Senate Select Committee on Adopting Artificial Intelligence (AI) Report.\"* April 1, 2026. https://www.industry.gov.au/publications/australian-government-response-senate-select-committee-adopting-artificial-intelligence-ai-report\n\n- Senate Select Committee on Adopting Artificial Intelligence (AI). *\"Final Report.\"* Parliament of Australia, November 26, 2024. https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Adopting_Artificial_Intelligence_AI/AdoptingAI\n\n- MinterEllison. *\"Increased AI Regulation: Final Senate Report Key Takeaways.\"* MinterEllison Insights, December 2024. https://www.minterellison.com/articles/increased-ai-regulation-final-senate-report-key-takeaways\n\n- Bird & Bird. *\"A New Era for AI Governance in Australia: What the National AI Plan Means for Industry.\"* December 9, 2025. 
https://www.twobirds.com/en/insights/2025/australia/a-new-era-for-ai-governance-in-australia-what-the-national-ai-plan-means-for-industry\n\n- IAPP (International Association of Privacy Professionals). *\"Australia Unveils AI Policy Roadmap.\"* December 2, 2025. https://iapp.org/news/a/australia-unveils-ai-policy-roadmap\n\n- Ius Laboris. *\"AI Regulation in Australian Workplaces: What Employers Need to Know.\"* February 12, 2026. https://iuslaboris.com/insights/ai-regulation-in-australian-workplaces-what-employers-need-to-know/\n\n- Australian Government, Ministers' Media Centre. *\"Future-Ready Workforce: One Million Aussies to Get Free AI Skills Training.\"* November 2025. https://ministers.dewr.gov.au/ayres/future-ready-workforce-one-million-aussies-get-free-ai-skills-training\n\n- University of Melbourne and KPMG. *\"Trust, Attitudes and Use of AI: Australian Findings.\"* 2025. (Cited in IAPP Global AI Governance: Australia, https://iapp.org/resources/article/global-ai-governance-australia)\n\n- National AI Centre, Department of Industry, Science and Resources. *\"AI Adoption in Australian Businesses: 2025 Q1.\"* March 4, 2026. https://www.industry.gov.au/news/ai-adoption-australian-businesses-2025-q1",
  "geography": {},
  "metadata": {},
  "publishedAt": "",
  "workspaceId": "a3c8bfbc-1e6e-424a-a46b-ce6966e05ac0",
  "_links": {
    "canonical": "https://opensummitai.directory.norg.ai/future-of-work/ai-employment-in-australia/australian-government-policy-on-ai-and-jobs-what-regulation-funding-and-national-strategy-mean-for-workers/"
  }
}