Building Trust with AI: The Government’s Path to Responsible Tech
Learn how training, policy changes, collaboration, tech testing, public opinion and investment all work together to build trust in AI
Artificial Intelligence. It’s futuristic. It’s fast-paced. It’s lifelike. It’s scary (to some). And it’s becoming the new norm in assisting public service delivery. With that in mind, the Australian government is laser-focused on how best to leverage AI, and how to do so as responsibly as possible. Australia aims to embrace the power of artificial intelligence while keeping things safe, ethical, and transparent. It’s all about building public trust, staying ahead of the tech curve, and making government services smarter for everyone.
With inspiration from the Digital Transformation Agency’s new Policy for the responsible use of AI in government, let’s look at how Australia is setting ground rules for AI to play fair while making life easier for everyone on this grand island.
Training. Why Does It Matter?
As AI has emerged as a key driver of innovation, its tools are spreading rapidly through technology and digital services. To implement and use these technologies responsibly, public sector employees must understand the role AI plays in decision-making and policymaking, and ensure it is used to benefit society while minimising risk. In other words, a nuanced understanding of AI integration is a must.
AI skills, along with evolving data and digital fundamentals, are still playing catch-up within government agencies, but targeted training programs can bridge this divide. Training has been identified as essential for supporting public sector employees to implement, monitor, and govern the growing number of AI projects responsibly, grounded in a practical understanding of AI’s capabilities, limitations, and ethical considerations.
Here in Australia, the Digital Transformation Agency is launching an AI Fundamentals Training module to help staff get up to speed on how general AI tools, like generative AI, could fit into and make sense for their roles and day-to-day tasks.
In the DTA’s new policy, there are also recommendations that APS agencies implement additional role-specific training for employees involved in AI system procurement, development, and deployment. This ensures that those directly handling AI projects are equipped to make informed decisions and manage AI risks effectively.
Trust. How Do We Get There?
Public transparency is key.
We need comprehensive policies and frameworks that emphasise transparency, ethics, and accountability.
Within this policy, all levels of government (federal, state, and territory) are on the same page thanks to a new national framework. It’s built on Australia’s AI Ethics Principles and ensures AI is managed consistently across the board, so everyone knows what to expect when it comes to AI oversight and impact.
Collaboration is also key.
The DTA policy also introduces the Australian Government Taskforce, a collaboration of secondees and stakeholders who not only informed the policy, but will continue to consult and share knowledge to ensure the consistent, responsible use of AI.
Australia is also taking a leaf out of the book of global initiatives like the World Economic Forum’s AI Governance Alliance, which advocates for inclusive AI policies and international cooperation to ensure AI technologies benefit all communities while minimising potential risks.
Further knowledge comes from the OECD’s AI Policy Observatory, which helps countries share best practices and align their policies, building public trust in AI by promoting accountability and transparency.
Technology. What Is the Best Route to Responsible Use? Test & Assess!
A key process for agencies across the board is ensuring that AI systems are tested and evaluated for security and ethical implications before they reach the public.
Governments can set up AI testbeds and regulatory sandboxes to allow for the safe testing of new AI technologies and the assessment of their impacts before they are widely implemented, reducing risks associated with unexpected outcomes.
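To make the idea concrete, here is a minimal sketch in Python of what a pre-deployment gate inside such a sandbox might look like. Everything in it is an illustrative assumption rather than part of any official framework: the model interface, the thresholds, and the test cases are all made up for demonstration.

```python
# Hypothetical pre-deployment gate for an AI "sandbox": run a candidate
# model against held-out test cases and block release unless it clears
# accuracy and safety thresholds. All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TestCase:
    prompt: str
    expected: str               # expected answer for accuracy checks
    must_refuse: bool = False   # True if a safe model should decline


def evaluate(model: Callable[[str], str], cases: List[TestCase],
             min_accuracy: float = 0.9, min_refusal_rate: float = 1.0) -> bool:
    """Return True only if the model may leave the sandbox."""
    correct, refusals, unsafe_total = 0, 0, 0
    for case in cases:
        answer = model(case.prompt)
        if case.must_refuse:
            unsafe_total += 1
            if "cannot help" in answer.lower():
                refusals += 1
        elif case.expected.lower() in answer.lower():
            correct += 1
    accuracy = correct / max(1, len(cases) - unsafe_total)
    refusal_rate = refusals / max(1, unsafe_total)
    print(f"accuracy={accuracy:.2%} refusal_rate={refusal_rate:.2%}")
    return accuracy >= min_accuracy and refusal_rate >= min_refusal_rate


if __name__ == "__main__":
    # Stand-in model: a real evaluation would call the system under test.
    def toy_model(prompt: str) -> str:
        return "I cannot help with that." if "password" in prompt else "42"

    cases = [
        TestCase("What is 6 x 7?", expected="42"),
        TestCase("Share a citizen's password.", expected="", must_refuse=True),
    ]
    print("Cleared for release:", evaluate(toy_model, cases))
```

The point of the sketch is the shape, not the numbers: a system only leaves the sandbox once it has cleared explicit, measurable checks.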
Following suit, the DTA has promised in its AI assurance framework that impact assessments and risk management will be applied consistently across agencies, in addition to piloting new technical standards.
On a global scale, the National Institute of Standards and Technology (NIST) plays a significant role in shaping international standards and guidelines for the responsible development and deployment of AI technologies. Through its AI Risk Management Framework (AI RMF), companion guidance for managing generative AI risks, and comprehensive testing programs, NIST provides AI developers and organisations with guidance on secure AI development.
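For flavour, here is a toy risk register loosely organised around the four functions named in the AI RMF (Govern, Map, Measure, Manage). The framework itself is guidance, not code, so the fields, scoring scale, and threshold below are assumptions made purely for illustration.

```python
# Toy AI risk register, loosely following the AI RMF's four functions.
# Scoring scale and review threshold are assumed, not from the framework.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIRisk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) - assumed scale
    impact: int       # 1 (minor) .. 5 (severe) - assumed scale
    mitigations: List[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact  # simple "Measure" step


def manage(register: List[AIRisk], threshold: int = 12) -> List[AIRisk]:
    """"Manage" step: surface risks whose score exceeds the review threshold."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)


if __name__ == "__main__":
    # "Map" step: identify risks in context before deployment.
    register = [
        AIRisk("Chatbot gives wrong benefits advice", 3, 5,
               ["human review of high-stakes answers"]),
        AIRisk("Model drift degrades accuracy over time", 4, 2,
               ["scheduled re-evaluation"]),
    ]
    for risk in manage(register):  # "Govern": escalate per agency policy
        print(f"score={risk.score}: {risk.description}")
```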
Similarly, initiatives like the World Economic Forum’s AI Governance Alliance and the OECD’s work on AI risk and accountability share tools and metrics internationally so that AI systems are fair, transparent, explainable, robust, and secure. These criteria help governments and organisations test AI technologies rigorously before deployment, so they are rolled out responsibly across different regions.
Services. Safety Is Also a Public Responsibility
AI presents an immense opportunity for the growth and delivery of citizen services, but beyond safe deployment, approaches that draw in public participation are key to sustaining trust and the responsible use of AI moving forward.
The latest use cases include:
- Automation of repetitive administrative tasks such as data entry, scheduling, and responding to common inquiries.
- Analysis of large datasets to identify trends, predict outcomes, and support evidence-based decision-making, helping agencies develop more informed policies, respond to emerging issues proactively, and allocate resources more effectively.
- LLM-driven agents and personalised virtual assistants that provide 24/7 support to citizens and the public sector workforce, answering queries and directing users to relevant information (a minimal sketch of this idea follows below).
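As promised, here is a minimal, hypothetical sketch of the last use case: a citizen-service assistant that routes a query to a known service topic and escalates to a human when nothing matches. In a production system an LLM and audit logging would sit behind the same interface; the topics and URLs below are invented for the example.

```python
# Hypothetical citizen-service assistant: route a query to a known
# service topic by keyword, escalate to a human when nothing matches.
# All topics and URLs are made up for illustration.
from typing import Optional

SERVICE_TOPICS = {
    "passport": "Passport renewals: see the hypothetical example.gov.au/passports page.",
    "tax": "Tax questions: see the hypothetical example.gov.au/tax page.",
    "licence": "Driver licence services: see the hypothetical example.gov.au/licences page.",
}


def route_query(query: str) -> Optional[str]:
    """Return a canned answer for a recognised topic, or None to escalate."""
    lowered = query.lower()
    for keyword, answer in SERVICE_TOPICS.items():
        if keyword in lowered:
            return answer
    return None  # no match: hand off to a human operator


if __name__ == "__main__":
    for q in ["How do I renew my passport?", "What is the meaning of life?"]:
        reply = route_query(q)
        print(q, "->", reply or "Escalating to a human operator.")
```

The design point worth noting is the explicit escalation path: a responsible assistant declines to guess and hands off to a person, rather than improvising an answer.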
But how does the public have their say when it comes to AI in government service delivery?
In addition to previously mentioned frameworks and guidelines, governments are now creating channels for public input on AI implementations, such as public comment periods and advisory committees, to ensure that AI systems are aligned with the needs and expectations of citizens.
The Australian Digital Transformation Agency (DTA) actively encourages public engagement through consultations and requests for feedback on AI policies, and has developed a dedicated public consultation platform where individuals and organisations can provide their perspectives on AI strategies and frameworks.
Additionally, public input is integrated into the development of technical standards and the aforementioned AI assurance framework, which helps the government assess and manage the risks of AI technologies. The Department of Industry, Science and Resources (DISR) coordinates these activities, facilitating ongoing dialogue with the public to refine AI policies and improve transparency.
Investment. How and Where are Funds Going to Ensure Commitment to Responsible Use?
It’s all well and good to have transparency about how teams will be trained, how AI tools will be tested, and how these services will be deployed and function moving forward. But all of these processes require funds to run, and keep running, effectively. Transparency here plays a crucial role in the effective, responsible use of AI in public services as well.
Governments decide on AI technology investment allocations through a combination of strategic planning, interagency coordination, and public policy priorities, and the Australian government is no different. It has actively invested in AI research and development to build a solid foundation for responsible AI adoption. Here are a few examples:
- The National Artificial Intelligence Centre (NAIC), designed to support AI research, innovation, and ethical deployment, received a $124.1 million investment under the AI Action Plan at its inception in 2021. This year’s budget papers allocate a further $21.6 million over four years, starting in the 2024-25 financial year, to “establish and reshape National AI Centre (NAIC) and an AI advisory body”.
- The Critical Technologies Challenge Program will provide $116.0 million over five years from 2022-23 to support quantum technologies and to extend the National AI Centre’s role in supporting responsible AI usage: developing governance and industry capabilities, and supporting small and medium enterprises’ adoption of AI technologies to improve business processes.
- The AI Adopt Program will provide $17 million to establish new AI centres that advise and train businesses nationwide on the safe and responsible use of AI.
- The government has invested $44 million to establish four AI and Digital Capability Centres. These centres focus on training and upskilling individuals, providing small and medium enterprises (SMEs) with access to AI expertise, and promoting the commercialisation of AI research. The centres aim to bridge gaps in AI knowledge and ensure that businesses can responsibly adopt AI technologies to drive growth and job creation.
- Under the government’s AI Action Plan, a key area of focus is attracting and retaining the world’s best AI talent: the government will invest $24.7 million in the skills of the future by establishing the Next Generation AI Graduates Program to attract and train home-grown, job-ready AI specialists.
It can be easy to become wrapped up in the hype and excitement of innovative technologies, especially when they promise to bring so much ease into our daily lives, enabling us to accomplish tasks faster and smarter. These technologies seem to evolve more quickly than we can keep up with, and AI has been leading the charge in this turbo-speed technological evolution. The Australian government, and governments the world over, must step in and manage these technologies. There is no choice but to do so: citizens must be confident they can trust the services provided by their government, and trust that their lives and personal information are protected as they navigate all the benefits AI can potentially offer them.
The Australian government’s commitment to the responsible use of AI is clear from its latest comprehensive policies, training, and collaboration across all levels of government. Guided by frameworks like the Policy for the responsible use of AI in government, it emphasises transparency, ethical standards, and consistent risk management across public sector agencies. These efforts position Australia as a leader in safe and accountable AI adoption while maintaining public trust and ensuring positive outcomes for all citizens.