Recently, the Department for Science, Innovation and Technology published its long-awaited white paper setting out the UK government’s proposed approach to regulating Artificial Intelligence (AI).
The paper, which focuses on fostering innovation, emphasises the role AI will play in delivering on societal priorities such as better public services, high-quality jobs and opportunities to learn the skills that will power our future. It also explores AI’s potential to help the UK become a science and technology superpower by 2030.
At the same time, however, the paper recognises that this transformative technology carries numerous risks and challenges. These range from physical and psychological harm to the undermining of national security, and they need to be addressed. They should be considered alongside the ethical challenges that AI development and deployment pose, in order to foster public trust.
The vital role of public trust in AI
The new regulatory framework sets out three key objectives:
- Drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty.
- Increase public trust in AI by addressing risks and protecting our fundamental values.
- Strengthen the UK’s position as a global leader in AI.
The Government recognises the importance of trust, acknowledging it as a key driver for AI adoption. It is therefore crucial that the new regulatory framework effectively addresses the risks of AI and that people can fully trust the AI systems they are exposed to. Without trust, as the paper points out, people will be reluctant to use AI; as a result, demand for new products will fall and innovation will be hindered.
Citizens expect to trust government use of data and AI
In our inaugural ‘Digital Ethics Outlook’, we explored the issue of trust extensively. We surveyed 1,000 UK citizens to provide a baseline of trust in public digital services, and a foundational understanding of the perceptions around government use of data and technology.
The results show that nearly half of respondents (44%) do not trust government organisations to collect, share and use their data ethically, and less than half (46%) believe organisations use data fairly and effectively to assess people for services.
While a majority (58%) want government to use digital technology for public services, there is clear concern around algorithms and automated decisions. Only 30% of respondents believe that decisions made by algorithms result in good-quality public services or that AI leads to public good.
The importance of digital ethics
Embedding ethics throughout the digital service lifecycle is the critical basis for a service being perceived as trustworthy. For this reason, digital ethics presents a significant opportunity for the Government to earn the trust of its citizens and drive more widespread adoption of AI across the public sector and beyond, helping the UK reap the benefits of innovative technologies.
To learn more about public trust in digital services and the current state of digital ethics adoption across government, download Digital Ethics Outlook - a duo of research-based reports covering both the Citizen View and the Government View of digital ethics in government.