Strategy, Governance and Risk

Governments around the world, including Australia, have embraced responsible AI policies grounded in ethics, transparency, and public trust. But while caution is warranted, inaction also carries risk. As expectations grow and AI capabilities accelerate, slow adoption can erode government relevance, capability, and credibility. The goal for government shouldn’t be to eliminate AI risks entirely, but to manage them responsibly while keeping pace with the community’s needs. This includes reducing over-reliance on external technology vendors and building internal capability to steer and govern AI effectively.
The principles might be appropriate, but are they enough?
Over the past two years in particular, the Australian Government has made real progress in setting strong foundations for responsible AI. Agencies have published transparency statements, appointed accountable officers, and committed to ethical principles such as human oversight, explainability, and fairness. The Digital Transformation Agency’s AI policy and classification framework has provided helpful structure. Most federal agencies currently use AI in low-risk domains: automating internal workflows, summarising documents, analysing policy data, or powering staff-facing virtual assistants.
This measured and principled approach is commendable and necessary. However, principles without momentum behind them may soon become a liability. The public sector faces an emerging challenge: how to operationalise AI principles at scale and speed without compromising trust. The truth is, while governments are moving carefully, the world around them isn’t slowing down.
The emerging risk: what happens if we move too slowly?
Playing devil’s advocate, it’s worth confronting an uncomfortable truth: being too risk-averse with AI can create new forms of risk. Here’s how:
First, there’s the issue of service experience. Citizens increasingly expect the same level of responsiveness and personalisation from government that they get from the private sector. When accessing health, welfare, or tax services, people want accurate, timely, digitally enabled experiences. If government systems feel slow, disconnected, or hard to navigate, trust erodes, not just in the service, but in the institution itself.
Second, there’s the missed opportunity for productivity. AI has the potential to automate low-value tasks, freeing up public servants to focus on strategic, high-impact work. Without it, inefficiencies compound. Staff remain bogged down in administrative burden, innovation slows, and the public sector struggles to keep pace with demand.
Third, regulatory and policy expertise is at risk. As AI becomes embedded in every sector, from finance to defence, governments need operational literacy to regulate, audit, and respond effectively. Agencies that haven’t strategically implemented AI internally may find themselves ill-equipped to govern its use externally.
Fourth, there’s the talent challenge. Public servants want to do meaningful, future-focused work. If the public sector is perceived as behind the curve, it risks losing skilled technologists, analysts, and innovators to industry or overseas markets, or failing to attract them in the first place.
Finally, and perhaps most urgently, there is growing concern, particularly following recent developments overseas, about third-party entities wielding disproportionate influence over public sector AI systems. As governments outsource capability to commercial providers, they may inadvertently cede control (or visibility) over core decision-making processes. This includes dependency on proprietary models, limited transparency into algorithmic behaviour, and constrained ability to explain or audit outcomes.
This underscores the need for robust in-house capability. Agencies must be able to assess, adapt, and govern AI tools, not just procure them. Otherwise, governments risk becoming passive users of someone else’s technology, rather than active stewards of their own digital future.
How other governments are moving responsibly but assertively
While current Australian government AI use largely remains assistive, supporting rather than replacing human decision-making, there is a noticeable lack of adoption of agentic AI systems capable of autonomous planning and action. Yet opportunities abound in well-governed deployments, and leading governments around the world are demonstrating that responsible yet assertive AI adoption, including agentic AI, is achievable and beneficial.
For example, New Zealand has implemented agentic AI with its SmartStart platform, where AI proactively registers births, schedules healthcare appointments, and coordinates associated social services.
In Singapore, agentic AI systems coordinate real-time traffic management, dynamically responding to changing conditions and improving congestion outcomes.
Estonia uses an agentic AI that proactively helps citizens navigate and complete complex administrative processes across multiple government services.
These international experiences provide valuable insights into safely and effectively deploying AI that doesn’t just assist human tasks but autonomously performs complex public-service functions, offering practical lessons for Australia’s own strategic AI adoption.
These examples show that strategic, experimental, and iterative AI adoption is possible. The key is to pair innovation with accountability: starting with modest, measurable pilots, applying proportionate controls, and building internal literacy alongside technical tools.
Bridging the trust gap: practical moves for government
Governments don’t have to go all-in on AI overnight, but they must start moving faster and smarter.
Certain prerequisites are essential to support these shifts, and the recommendations drawn from the Australian National Audit Office (ANAO) review of the ATO’s AI practices[1] reinforce that effective adoption requires more than tools; it demands culture, capability, and control.
Most importantly, agencies must ensure that any AI system they use, whether developed in-house or by a vendor, remains under their control and meets their standards for explainability and accountability.
[1] Australian National Audit Office (ANAO), Governance of Artificial Intelligence at the Australian Taxation Office.