AI Ethics, Governance, and Responsible AI in Melbourne: Leading the National Conversation



Why Melbourne's Ethical AI Leadership Matters More Than Ever

Australia faces a paradox at the heart of its AI ambitions. The nation is rapidly deploying artificial intelligence across healthcare, finance, government services, and agriculture — yet it ranks last on public trust. Only 30% of Australians believe the benefits of AI outweigh the risks — the lowest proportion of any country surveyed. That finding, drawn from the world's most comprehensive study of AI attitudes, was produced not in Washington, Brussels, or Beijing, but in Melbourne.

That is not a coincidence. Melbourne is where Australia's most rigorous, institutionally grounded, and policy-influential AI ethics work is being done — and where the tension between AI's promise and its public legitimacy crisis is being most seriously confronted. This article examines the institutions, frameworks, and regulatory conversations that make Melbourne the de facto national capital of responsible AI, and explains why that leadership role is not peripheral to Melbourne's innovation story but foundational to it.

(For the broader context of Melbourne's AI ecosystem, see our guide on Melbourne's AI Ecosystem Explained: Key Players, Sectors, and Scale.)


The Trust Deficit: Why Ethical AI Governance Is a Strategic Imperative

Before examining Melbourne's institutional response, it is worth understanding the precise dimensions of Australia's AI trust problem — because those dimensions shape the policy agenda Melbourne researchers are driving.

The Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025, led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, in collaboration with KPMG, surveyed over 48,000 people across 47 countries between November 2024 and January 2025.

This study is the fourth in a research program examining public trust in AI — the first focused on Australians' trust in AI in 2020, the second expanded to five countries in 2021, and the third surveyed 17 countries in 2022.

The 2025 findings for Australia are stark:

  • Only 36% of Australians say they are willing to trust a range of AI applications.

  • Most Australians are worried about AI (62%) and fewer are optimistic (45%) or excited (36%) about it. Only half (49%) accept the use of AI in society, one of the lowest acceptance levels across all countries surveyed.

  • Australians have among the lowest levels of AI training and education globally, with just 24% having undertaken AI-related training or education compared to 39% globally. Over 60% report low knowledge of AI, and under half believe they have the skills to use AI tools effectively.

  • Compared with the previous study, conducted in 2022 before the release of ChatGPT, people have become less trusting and more worried about AI even as adoption has increased.

Critically, this trust deficit is not immovable. 83% of Australians say they would be more willing to trust AI systems when assurances are in place, such as adherence to international AI standards, responsible AI governance practices, and monitoring system accuracy. That finding frames the entire responsible AI agenda: governance and standards are not obstacles to AI adoption — they are the precondition for it.

The surge in AI adoption, coupled with low AI literacy and weak governance, is creating a complex risk environment, with many organisations deploying AI without proper consideration of what is needed to ensure transparency, accountability, and ethical oversight.


Melbourne's Institutional Architecture for Responsible AI

The Centre for Artificial Intelligence and Digital Ethics (CAIDE), University of Melbourne

Anticipating AI's growth, the Centre for Artificial Intelligence and Digital Ethics (CAIDE) was established in 2020 to ensure that regulation, research, and community literacy kept pace with the technology.

Housed in Melbourne Connect, CAIDE is a cross-disciplinary centre spanning research, teaching, policy, and regulation. It was founded as a collaboration between the Faculty of Engineering and Information Technology and Melbourne Law School, with the faculties of Arts, Education, and Medicine, Dentistry and Health Sciences (MDHS) as members, supported by the University of Melbourne.

CAIDE facilitates cross-disciplinary research and teaching on the ethical, technical, regulatory, and legal issues relating to AI and digital technologies by bringing together a network of expertise in AI, Law, Cybersecurity, Information Systems, Ethics, Sociology, and Philosophy.

CAIDE also provides expert advice and capacity building, facilitates undergraduate and graduate courses and professional development programs, hosts events and seminars, and publishes digital resources.

The Centre's impact has extended well beyond academic publishing. A big part of CAIDE's work has been demystifying AI and making it understandable and relevant — including for lawyers, courts, and access-to-justice settings. As CAIDE Director Professor Jeannie Paterson notes, "We've contributed to law reform processes, published widely, delivered training and spoken at conferences across various sectors."

CAIDE's public engagement has been notably high-profile. In 2024, the ASIC Chairman spoke at CAIDE's flagship Ninian Stephen Oration on AI and financial services regulation. In 2025, the Chief Justice of the Supreme Court of Victoria presented on the risks and opportunities that AI technologies pose for justice and the judicial process.

CAIDE's agenda now stretches from mitigating risks to enabling focused, ethical deployment. As Professor Paterson frames the Centre's next chapter: "We are now exploring the more bespoke uses of AI — whether that be responding to the 'dark side' of misinformation or scams, understanding the role of chatbots and their interactions with humans, or the possibilities new technologies are offering specific industries and how we incorporate them with good regulation for societal benefit."

(CAIDE's physical home at Melbourne Connect also connects it to Melbourne's broader innovation precinct strategy — see our guide on Melbourne's AI Innovation Precincts and Hubs.)

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S)

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is a cross-disciplinary, national research centre that aims to create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making. Funded by the Australian Research Council from 2020 to 2026, ADM+S is hosted at RMIT in Melbourne, with nodes located at eight other Australian universities and partners around the world.

Total combined funding for the centre is A$71.1 million, with the ARC providing funding of A$31.8 million over seven years from 2020 to 2026. Centre partners include eight Australian universities and 22 organisations from Australia, Europe, Asia, and America.

Within that national structure, the University of Melbourne hosts a node led by Professor Christine Parker. This dual Melbourne presence — RMIT as host institution and the University of Melbourne as a key node — means the city effectively anchors the national centre's intellectual leadership.

ADM+S's priority domains for public engagement are news and media, mobilities, social services, and health — precisely the sectors where automated decision-making carries the highest stakes for individual rights and social equity. The Centre examines the social and technical aspects of automated decision-making, seeing automated systems as the outcomes of interactions between people, machines, data, and institutions.

The Centre's policy impact has been tangible. The NSW Ombudsman tabled in Parliament a special report — A Map of Automated Decision-Making in the NSW Public Sector — with research undertaken by ADM+S. The report represents the first attempt in NSW to comprehensively identify and publish the ways in which the public sector is using, or planning to use, ADM systems in the performance of its functions.

Melbourne Business School and the Chair in Trust Research Partnership

Professor Nicole Gillespie is an internationally recognised scholar whose research focuses on trust, management, and emerging technologies. She has been leading a program of research examining trust and public attitudes towards AI, and achieving trustworthy AI, since 2020. She holds the Chair in Trust and is Professor of Management at Melbourne Business School and the Faculty of Business and Economics at the University of Melbourne.

This research program, conducted in partnership with KPMG, has become the most authoritative longitudinal dataset on AI public trust in the world. It provides insights into the public's trust, acceptance, and understanding of AI systems, their experience of the benefits and risks from AI use, and their expectations of the governance and regulation of AI technology. The research also explores how employees and students use and experience the impacts of AI in work and education settings. The findings have important implications for public policy and industry practice and help inform a human-centred approach to stewarding AI into work and society.


CSIRO's Data61 and Australia's AI Ethics Framework: Melbourne's Fingerprints on National Policy

The foundational document underpinning Australia's entire responsible AI regulatory architecture — the AI Ethics Framework — was produced by CSIRO's Data61. In 2019, CSIRO Data61 worked with the Australian Government to conduct the AI Ethics Framework research. This work led to the release of eight AI ethics principles to ensure Australia's adoption of AI is safe, secure, and reliable.

The Ethics Framework research was funded by the Australian Government in the May 2018 Budget and completed by CSIRO's Data61. The work was guided by a steering committee of experts from industry, government, and community organisations.

Consultative workshops were held in Sydney, Brisbane, Melbourne, and Perth during 2018 — with Melbourne's research institutions providing significant intellectual input into the framework's design.

First published in 2019 and developed by CSIRO's Data61 and the Department of Industry, Science and Resources, Australia's AI Ethics Framework defines the ethics principles that inform the national assurance framework for AI in government.

This effort is shaped by cultural values distinctive to Australia and New Zealand, such as the "fair go", Indigenous community perspectives, and wider Indo-Pacific regional thinking — a distinctively Australian ethical lens that Melbourne-based researchers have been central to articulating.


Australia's Regulatory Approach: Standards-Led, Not Prescriptive

A defining feature of Australia's responsible AI agenda — and one where Melbourne researchers have been influential — is the deliberate choice of a standards-led, risk-proportionate approach rather than the prescriptive legislative model adopted by the European Union.

This was evident in the "Safe and Responsible AI in Australia" discussion paper published in June 2023 and its interim response in January 2024, which proposed a risk-proportionate framework featuring mandatory safeguards for high-risk AI and voluntary guidance for lower-risk systems.

In September 2024, Australia released a Voluntary AI Safety Standard and consulted on new AI laws in the form of a proposal paper on introducing mandatory guardrails for AI in high-risk settings.

The Voluntary AI Safety Standard (VAISS) gives practical guidance to all Australian organisations on how to develop, deploy, and innovate with AI safely and responsibly.

Australia's 10 Voluntary AI Safety Guardrails: What They Require

The Voluntary AI Safety Standard includes 10 guardrails with specific requirements around accountability and governance measures, risk management, security and data governance, testing, human oversight, user transparency, contestability, supply chain transparency, and record keeping.

The guardrails in structured form:

Guardrail 1: Establish and publish an accountability process and governance strategy
Guardrail 2: Implement a risk management process to identify and mitigate risks
Guardrail 3: Protect AI systems and implement data governance measures
Guardrail 4: Test AI models and monitor systems once deployed
Guardrail 5: Enable meaningful human oversight and control
Guardrail 6: Inform end-users about AI-enabled decisions and AI-generated content
Guardrail 7: Establish processes for people to challenge AI use or outcomes
Guardrail 8: Be transparent across the AI supply chain about data, models, and systems
Guardrail 9: Keep and maintain records to allow third-party compliance assessment
Guardrail 10: Engage stakeholders with a focus on safety, diversity, inclusion, and fairness

Source: Australian Government Department of Industry, Science and Resources, Voluntary AI Safety Standard, 2024.
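
For organisations beginning to work with the standard, the guardrails can be read as a self-assessment checklist. The sketch below, in Python, is a minimal illustration of that idea under our own assumptions: the Guardrail structure, Status values, and summarise helper are hypothetical conveniences, not part of the standard or any official tooling.

```python
"""A minimal, illustrative self-assessment checklist for the 10 voluntary
guardrails. All structures here are hypothetical conveniences, not part of
the Voluntary AI Safety Standard itself."""

from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    IMPLEMENTED = "implemented"


@dataclass
class Guardrail:
    number: int
    requirement: str
    status: Status = Status.NOT_STARTED
    evidence: list[str] = field(default_factory=list)  # e.g. policies, audit records


GUARDRAILS = [
    Guardrail(1, "Establish and publish an accountability process and governance strategy"),
    Guardrail(2, "Implement a risk management process to identify and mitigate risks"),
    Guardrail(3, "Protect AI systems and implement data governance measures"),
    Guardrail(4, "Test AI models and monitor systems once deployed"),
    Guardrail(5, "Enable meaningful human oversight and control"),
    Guardrail(6, "Inform end-users about AI-enabled decisions and AI-generated content"),
    Guardrail(7, "Establish processes for people to challenge AI use or outcomes"),
    Guardrail(8, "Be transparent across the AI supply chain about data, models, and systems"),
    Guardrail(9, "Keep and maintain records to allow third-party compliance assessment"),
    Guardrail(10, "Engage stakeholders with a focus on safety, diversity, inclusion, and fairness"),
]


def summarise(guardrails: list[Guardrail]) -> None:
    """Print how many guardrails are implemented and flag the remaining gaps."""
    done = sum(g.status is Status.IMPLEMENTED for g in guardrails)
    print(f"Implemented: {done}/{len(guardrails)}")
    for g in guardrails:
        if g.status is not Status.IMPLEMENTED:
            print(f"  Gap, guardrail {g.number}: {g.requirement}")


if __name__ == "__main__":
    # Mark guardrail 1 as done, with a pointer to the published policy.
    GUARDRAILS[0].status = Status.IMPLEMENTED
    GUARDRAILS[0].evidence.append("Published AI governance policy, v1.2")
    summarise(GUARDRAILS)
```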

The guardrails align with international standards including ISO/IEC 42001:2023 and the US National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0.

While the standard is voluntary, it sets expectations for what may be included in future legislation and contains guardrails closely aligned to the proposed mandatory guardrails for high-risk use cases — meaning that implementing the voluntary standard early will help organisations adapt to coming mandatory requirements.

The first tranche of Privacy Act reforms, passed in 2024, introduced new transparency obligations around automated decision-making that will take effect in December 2026 — a concrete legislative milestone that organisations following the voluntary guardrails will be better positioned to meet.
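
To make those transparency and record-keeping obligations concrete, the sketch below shows one way an automated decision might be logged so it can later be explained, challenged, and audited. It is a hypothetical illustration only: the DecisionRecord schema and its field names are our own assumptions, aligned loosely with guardrails 6, 7, and 9, and are not prescribed by the standard or the Privacy Act reforms.

```python
"""Illustrative audit record for an automated decision. The schema is a
hypothetical assumption, not a prescribed format."""

import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str      # reference a person can quote when challenging the outcome
    timestamp: str        # when the decision was made (UTC, ISO 8601)
    model_version: str    # which model or system produced it
    inputs_summary: dict  # non-sensitive summary of the inputs relied on
    outcome: str          # the decision communicated to the person
    human_reviewed: bool  # whether a human checked it (meaningful oversight)
    review_contact: str   # where to lodge a challenge (contestability)


def record_decision(model_version: str, inputs_summary: dict, outcome: str,
                    human_reviewed: bool, review_contact: str) -> DecisionRecord:
    """Create a record and emit it as JSON; a real system would write to an
    append-only store so third parties can assess compliance."""
    rec = DecisionRecord(
        decision_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs_summary=inputs_summary,
        outcome=outcome,
        human_reviewed=human_reviewed,
        review_contact=review_contact,
    )
    print(json.dumps(asdict(rec), indent=2))
    return rec


if __name__ == "__main__":
    record_decision(
        model_version="eligibility-model-2026.01",
        inputs_summary={"declared_income_band": "B", "documents_supplied": 3},
        outcome="eligible",
        human_reviewed=True,
        review_contact="reviews@example.gov.au",
    )
```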


Melbourne vs. the Global Regulatory Landscape

Australia's standards-led approach, shaped substantially by Melbourne research, makes for an instructive contrast with global peers.

The EU AI Act — now in phased application — takes a prescriptive, classification-based approach that assigns regulatory obligations according to risk categories defined in legislation. Australia is considering how mandatory guardrails for high-risk AI should be legislated — whether as new economy-wide legislation like the EU AI Act, as "framework" legislation that could be implemented in other laws, or by directly amending existing laws.

The Australian approach, shaped significantly by Melbourne-based research, reflects a different philosophy: that principles-based governance, grounded in empirical evidence about how AI systems actually fail and how public trust is actually built, is more adaptable and less likely to stifle innovation than prescriptive category lists.

Through the Safe and Responsible AI agenda, the Australian Government is acting to ensure that the development and deployment of AI systems in Australia in legitimate but high-risk settings is safe and can be relied on, while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded.

This balance — protective where stakes are high, permissive where risks are low — reflects a policy philosophy that Melbourne researchers have consistently advocated in their submissions to government consultation processes.

(For the national policy context, see our guide on Australia's National AI Plan and What It Means for Melbourne.)


The Robodebt Shadow: Why Automated Decision-Making Governance Is Non-Negotiable

No discussion of Australia's responsible AI agenda is complete without acknowledging the Robodebt Royal Commission — the defining domestic case study in the catastrophic failure of automated government decision-making. Automated decision-making and the use of AI within government has been a focus in Australia after the Royal Commission into the Robodebt Scheme recommended wide-ranging reforms.

The Australian Government released a national framework for the assurance of AI in government in June 2024, and has specifically committed to the Australian Government being an "exemplar" for the safe and responsible adoption of AI. This commitment is set out in the Government's policy for the responsible use of AI in Government.

The Robodebt case — in which an automated debt-recovery system issued hundreds of thousands of unlawful debt notices to welfare recipients — has become a foundational reference point for ADM+S researchers and CAIDE policy analysts alike. It demonstrates, with painful clarity, that the governance gap identified in Melbourne's research is not theoretical. It has real costs, measured in human harm.


Key Takeaways

  • Only 30% of Australians believe the benefits of AI outweigh the risks — the lowest of any country surveyed, making the governance work being done in Melbourne not a niche academic exercise but a direct response to a national legitimacy crisis.

  • Melbourne is home to the two most significant institutional contributors to Australia's responsible AI agenda: CAIDE at the University of Melbourne and the ADM+S Centre hosted at RMIT, with a University of Melbourne node — backed by A$71.1 million in combined funding.

  • The most comprehensive global study into public trust in AI is led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School, making Melbourne the source of the world's most authoritative longitudinal dataset on AI trust.

  • Australia's standards-led regulatory approach — built on 10 voluntary guardrails that apply to all organisations throughout the AI supply chain, including transparency and accountability requirements — reflects a policy philosophy that Melbourne-based researchers have been central to shaping.

  • 83% of Australians say they would be more willing to trust AI systems when assurances are in place — demonstrating that responsible governance is not a constraint on AI adoption but its most effective enabler.


Conclusion: Governance as Competitive Advantage

Melbourne's role in Australia's ethical AI conversation is not merely academic. It is strategic. In a global environment where AI systems are being deployed faster than public trust can be built — and where regulatory frameworks in the EU, UK, and US are reshaping the conditions under which AI products can be sold — having deep, institutionally grounded responsible AI expertise is a competitive differentiator for the entire Melbourne ecosystem.

Enterprises building AI products in Melbourne can draw on CAIDE's legal and regulatory expertise, ADM+S's policy research, and Melbourne Business School's trust frameworks to build governance architectures that are both locally compliant and internationally credible. Investors can point to this institutional infrastructure as evidence that Melbourne-built AI is being developed within a coherent ethical framework — not despite regulation, but because of it.

The trust deficit documented by Professor Gillespie's research is real, and it is not going away on its own. The surge in AI adoption, coupled with low AI literacy and weak governance, is creating a complex risk environment — one that Melbourne's researchers, policymakers, and institutions are uniquely positioned to address.

That positioning is not incidental to Melbourne's claim as Australia's AI capital. It is one of its most durable foundations.


For related reading, explore our guides on Melbourne's World-Class AI Research Universities, Victorian Government AI Policy: Funding Programs, Mission Statements, and Strategic Initiatives, and Melbourne's AI Future: Emerging Technologies, 2030 Projections, and Gaps Still to Close.


References

  • Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. "Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025." The University of Melbourne and KPMG, 2025. DOI: 10.26188/28822919. https://mbs.edu/faculty-and-research/trust-and-ai

  • KPMG Australia. "Trust in AI: Global Insights 2025." KPMG, 2025. https://kpmg.com/au/en/insights/artificial-intelligence-ai/trust-in-ai-global-insights-2025.html

  • University of Melbourne. "Centre for Artificial Intelligence and Digital Ethics (CAIDE)." University of Melbourne, 2025. https://www.unimelb.edu.au/caide

  • Paterson, J. "From Education to Precision Use, AI Centre Eyes Next Five Years." Melbourne Law School, 2025. https://law.unimelb.edu.au/news/MLS/from-education-to-precision-use,-ai-centre-eyes-next-five-years

  • ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). "About ADM+S." RMIT University / Australian Research Council, 2020–2026. https://www.admscentre.org.au/

  • Wikipedia. "ARC Centre of Excellence for Automated Decision-Making and Society." Wikipedia, updated 2026. https://en.wikipedia.org/wiki/ARC_Centre_of_Excellence_for_Automated_Decision-Making_and_Society

  • CSIRO Data61. "AI Ethics Framework." CSIRO, 2019. https://www.csiro.au/en/research/technology-space/ai/ai-ethics-framework

  • Zhu, L., Xu, X., Lu, Q., Governatori, G., & Whittle, J. "Operationalizing Responsible AI at Scale: CSIRO Data61's Pattern-Oriented Responsible AI Engineering Approach." Communications of the ACM, 2023. https://cacm.acm.org/research/operationalizing-responsible-ai-at-scale-csiro-data61s-pattern-oriented-responsible-ai-engineering-approach/

  • Department of Industry, Science and Resources (DISR). "Voluntary AI Safety Standard." Australian Government, 2024. https://www.industry.gov.au/publications/voluntary-ai-safety-standard

  • Department of Industry, Science and Resources (DISR). "Introducing Mandatory Guardrails for AI in High-Risk Settings — Proposals Paper." Australian Government, September 2024.

  • International Association of Privacy Professionals (IAPP). "Global AI Governance Law and Policy: Australia." IAPP, 2024. https://iapp.org/resources/article/global-ai-governance-australia

  • Department of Finance. "National Framework for the Assurance of AI in Government." Australian Government, 2024. https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government/introduction
