Sir Patrick Vallance, the UK’s outgoing chief scientific advisor, has said that generative AI could be as transformative as the industrial revolution. He followed that with a plea to government to act immediately and urgently to get ahead of the profound social and economic changes that the new technology could bring about.
In short, it’s clear that government must adapt to the astonishingly rapid growth and far-reaching ramifications of AI. But how can government successfully adopt and implement generative AI while avoiding the risks, and what can government learn from the private sector’s widespread adoption of the technology?
We spoke with Lee Hickin, AI Policy and Technology Lead, Asia, at Microsoft ANZ, and Gavin Artz, former Director of Critical Technologies at the South Australia Department for Trade and Investment, to get both the public and private sector perspectives on:
- How the private sector is using generative AI and what government organisations can learn from their example
- The best advice private sector organisations can give to government leaders to successfully use generative AI
- The main risks and challenges the public sector should keep in mind when utilising generative AI
How is Generative AI used in the private sector, and what can government organisations learn from these applications?
Gavin Artz: I didn’t expect the broad acceptance and understanding of the technology and its benefits that I am seeing. I have encountered a real desire to implement, tempered by very sensible explorations of security, intellectual property, workforce impacts and the business (and data) models of the providers of the technology.
The technology is powerful, and it has emerged and been put in our hands suddenly. Because of this, it is understandable that the law, policy and approaches to operationalising the technology are only just emerging, which has slowed any deep implementations.
There is more work needed on implementation and integration with an organisation’s data, so I am seeing use limited to low-risk activities and individual employee exploration within a framework of safe but permissive policy.
Lee Hickin: This is hard to answer with any degree of accuracy as we are so early in the journey and many organisations are still experimenting and learning. But what is clearly evident is that there is not one use case, one sector or one problem that is being solved. The broad areas of functionality being explored are knowledge exploration and customer service improvements – both areas that government can benefit from.
What advice can the private sector offer to government leaders aiming to effectively harness generative AI?
Gavin Artz: Generative AI, like any other AI, is not a stand-alone technology. It relies on data, and on the associated data and communication infrastructure. This isn’t cheap: as the technology stands, you will be relying on third-party providers’ data and data infrastructure, and those providers are rapidly building out what is needed. Investigate the data models they apply to queries on your data, and how the large language models are developed and aligned, to ensure you are comfortable with using the systems being developed.
Generative AI can provide efficiencies by taking on work that you would otherwise give to junior staff or outsource. That will be great right now, while you have skilled senior staff who can review the work, but where will the next generation of senior staff come from? Think about workforce and succession.
In the long run there will be efforts to move generative AI to the edge, but AI will still be eating significant amounts of energy. Do you have reliable energy to maintain these as they become critical systems? Do you know if the energy used is renewable? This will be important as ESG requirements strengthen.
Lee Hickin: I think – at least for me – there are three key lessons being learned in these early days of AI-led business.
- Experiment, learn and be open to new concepts. This is all a new area and we should feel comfortable with new ideas and approaches
- Don’t confuse the ‘ChatGPT’ experience with the power of Generative AI more broadly. There is a huge difference in terms of both safety and functionality between the two.
- Perfection is the enemy of progress; keep iterating and deploying, and learn from users’ experience.
What are some ethical and regulatory challenges the public sector should consider when adopting Generative AI?
Gavin Artz: These are not truth engines. Like any AI there are inherent biases from the data they have been trained on and the methods used to align them.
The liability stops with you. Think of generative AI as a room full of junior staff working for you. You still need to check the work and make sure you are happy with it and that it makes sense or feels right. When it goes to a client or into the public domain it is your work, it is work you have approved, you are liable.
Lee Hickin: A very big topic that one short paragraph can’t do justice to. But top of mind should be maintaining human oversight in your processes; focusing your attention on the risk-based outcomes of your AI use, not the technology itself; and finally, remembering that while AI regulation is very likely coming globally, laws already exist to protect citizens from bad or illegal practices. Keep that as your north star when considering future regulation.
Hear more from Lee Hickin, Gavin Artz and other industry leaders at the Generative AI Summit for Government and gain access to cutting-edge insights and strategies that will put your organisation ahead of the curve. Learn more.
To access the detailed conference program, download the brochure here.