Garbage In, Garbage Out: The Challenge of AI, Public Expectations, and the Quality of Data
Artificial intelligence (AI) is reshaping policing, offering significant potential to enhance public safety, streamline investigations, and predict crime. Facial recognition, predictive policing algorithms and AI-driven data analytics are enabling police forces to pursue their mission more effectively. This was evident over the summer, when riots were handled through immediate containment and the rapid identification of offenders, leading to arrests, often after the event, without exacerbating tensions. However, as with all powerful tools, the use of AI in policing presents unique ethical challenges.
Senior officers must grapple with two key areas of concern: public expectations and the quality of data on which these technologies rely.
AI and Policing: A Double-Edged Sword
On one hand, AI can assist forces in performing tasks with remarkable speed and accuracy. Predictive policing algorithms analyse historical data to forecast where offences are likely to occur, or the likelihood that individuals will re-offend,
allowing for better resource allocation. Facial recognition technology can identify suspects in real time,
helping with cases such as this summer's riots. Even before then, the previous government announced that £17.6m was to be invested in enhancing facial recognition capabilities,
including up to £4m to support the procurement of purpose-built live facial recognition mobile units. In theory, these technologies can create a safer society by improving efficiency and freeing up officers to focus on tasks that require human
judgement.
But herein lies the rub: AI is only as good as the data it’s trained on, and that data can often be flawed. Biased, incomplete, or outdated datasets are common in many police systems. The result is that the outputs of AI-driven systems can reinforce
existing biases, leading to discriminatory practices that draw the attention of academics, the media and campaign groups such as Big Brother Watch.
Public Expectations vs. Policing Reality
The public's expectations of law enforcement in the age of AI are complex and often contradictory. On the one hand, there
is a growing demand for accountability and transparency in police work. Many communities expect AI systems to be free from human bias and prejudice. The hope is that technology will create a more equitable and objective policing system, one that is
less prone to the influence of personal bias.
Yet as much as AI promises impartiality, the public's mistrust of these technologies is equally palpable. Facial recognition, for example, has been widely criticised for its higher rate of false positives among ethnic minorities. Predictive policing tools have faced scrutiny for concentrating resources in historically over-policed neighbourhoods, creating a feedback loop in which these communities are disproportionately targeted. These failures erode public trust in both the police
and the technologies being adopted.
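To make the false-positive claim measurable rather than anecdotal, a force could compute false-positive rates per demographic group from its own match logs. The sketch below is a minimal illustration of that disparity check; the record fields (`group`, `flagged`, `is_true_match`) and the sample data are hypothetical assumptions, not any real system's schema or results.

```python
from collections import defaultdict

def false_positive_rates(match_log):
    """Return {group: false-positive rate} from facial-recognition decisions."""
    fp = defaultdict(int)         # true non-matches wrongly flagged, per group
    negatives = defaultdict(int)  # all true non-matches seen, per group
    for record in match_log:
        if not record["is_true_match"]:
            negatives[record["group"]] += 1
            if record["flagged"]:
                fp[record["group"]] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n}

# Illustrative, invented records: a disparity like this is what critics point to.
log = [
    {"group": "A", "flagged": True,  "is_true_match": False},
    {"group": "A", "flagged": False, "is_true_match": False},
    {"group": "B", "flagged": False, "is_true_match": False},
    {"group": "B", "flagged": False, "is_true_match": False},
]
print(false_positive_rates(log))  # {'A': 0.5, 'B': 0.0}
```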
Beyond that, it is crucial that AI, and technology in general, does not detract from the local presence of officers. Public trust in policing rests in large part on people's direct experience of the police, and if machines replace officers on the front line, the impact on trust will be negative.
For senior officers, navigating these public expectations is no small task. It requires balancing the benefits of these tools with a transparent recognition of their limitations. The public wants police forces that are smart, fair, and just. AI can potentially
deliver this, but only if it is applied ethically and with rigorous safeguards. As the National Contact Management Strategic Plan 2023-2028 states, 'Digital opportunities need to be harnessed in a way that is lawful, ethical and scalable and continues to improve our communities' experiences
of contacting the police while simultaneously improving the efficiency and effectiveness of the policing contact operation.' The challenge is therefore ensuring the public’s trust while harnessing the best of what AI has to offer.
The Challenge of Poor-Quality Data
At the heart of these ethical concerns is the quality of data being used in AI systems. Data in policing is messy: crime reports, arrest records, and other forms of police data are often riddled with inaccuracies, human error, and bias. If AI systems
are trained on poor-quality data, they will produce poor-quality outcomes, potentially worsening existing social inequalities.
Take a hypothetical example of predictive policing. If a system is fed data that reflects the over-policing of certain minority communities, the AI will reinforce those patterns, sending more patrols to those areas and further entrenching the cycle. Similarly,
facial recognition algorithms trained on datasets that under-represent minority faces can lead to higher false-positive rates for individuals from these groups. The problem is that AI will continue to make these mistakes, but with the added weight
of technological authority and a presence at scale across forces. After all, if the AI says someone is guilty, there is often a presumption that the machine is infallible. And while prejudice within a single force is damaging enough, how much worse for that prejudice to be rolled out across much of the UK?
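The feedback loop is easy to demonstrate with a toy model. The sketch below assumes two hypothetical areas with identical underlying offence rates, patrols allocated in proportion to recorded crime, and offences entering the record in proportion to patrol presence; every name and number is invented for illustration.

```python
underlying_rate = {"area_1": 10.0, "area_2": 10.0}  # identical true offending
recorded = {"area_1": 12.0, "area_2": 8.0}          # biased historical record
TOTAL_PATROLS = 10.0

for period in range(5):
    total_recorded = sum(recorded.values())
    # Patrols follow the record, not the (equal) underlying reality.
    patrols = {a: TOTAL_PATROLS * recorded[a] / total_recorded for a in recorded}
    # Detections scale with patrol presence, so the biased record reproduces
    # itself: area_1 keeps "generating" more recorded crime than area_2.
    for a in recorded:
        recorded[a] += underlying_rate[a] * (patrols[a] / TOTAL_PATROLS)
    print(period, {a: round(p, 2) for a, p in patrols.items()})

# The patrol split stays at 6:4 in every period: the initial bias in the data
# never washes out, even though both areas offend at exactly the same rate.
```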
The key issue here is not that AI is inherently flawed but that its effectiveness depends to a large degree on the quality of data it receives. Garbage in, garbage out, as they say. The ethical responsibility, then, falls to police leadership to ensure
that data used for AI is both accurate and free from bias. But this is easier said than done.
Data cleaning, auditing, and regular updating require significant resources. The challenge becomes not only technical but managerial. How do senior officers ensure that the data their departments rely on is fit for purpose? And what safeguards can be
put in place to prevent AI systems from producing biased outcomes? Answering these questions requires thoughtful, proactive policies that treat AI as a tool rather than a solution in itself.
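One concrete form such a safeguard could take is an automated fitness-for-purpose audit, run before a dataset is allowed to feed any AI system. The sketch below checks completeness, freshness and group representation; the field names (`recorded_at`, `group`) and the thresholds are assumptions chosen for illustration, not an established standard.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def audit_dataset(records, required_fields, max_age_days=365):
    """Return human-readable audit failures; an empty list means the data passed."""
    if not records:
        return ["dataset is empty"]
    failures = []
    now = datetime.now(timezone.utc)
    # Completeness: flag required fields that are missing too often.
    for field in required_fields:
        missing = sum(1 for r in records if not r.get(field))
        if missing / len(records) > 0.05:
            failures.append(f"{field}: {missing} of {len(records)} records missing")
    # Freshness: flag records too old to reflect the community as it is now.
    stale = sum(1 for r in records
                if r.get("recorded_at") is None
                or now - r["recorded_at"] > timedelta(days=max_age_days))
    if stale:
        failures.append(f"{stale} records older than {max_age_days} days")
    # Representation: flag groups heavily over- or under-represented in the data.
    counts = Counter(r.get("group", "unknown") for r in records)
    expected = len(records) / len(counts)
    for group, n in counts.items():
        if n > 2 * expected or n < 0.5 * expected:
            failures.append(f"group '{group}': {n} records vs ~{expected:.0f} expected")
    return failures
```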
A Call to Action: Ethical AI Requires Human Input and Oversight
To address these challenges, leaders must recognise that the adoption of AI in policing is not a question of “if”, but “how”. While the potential of these technologies is undeniable, senior officers need to take
steps to ensure that AI is implemented ethically and responsibly. Here’s how:
- Invest in Data Quality: AI will only be as good as the data it relies on. Police departments must allocate resources to improving the quality of their datasets. This includes eliminating bias, filling in data gaps, and ensuring that data reflects
an accurate picture of the communities being served. Toolkits such as that offered by Interpol are particularly
helpful here.
- Maintain Human Oversight: AI can enhance policing, but it cannot replace human judgement. Senior officers need to ensure that decisions made by AI are always reviewed by humans, particularly where civil liberties may be affected; a minimal sketch of such a review gate follows this list. It is encouraging to see a growing number of data ethics committees being established in constabularies such as Avon and Somerset, and Thames Valley, to name just two.
- Engage the Public: Transparency is critical to maintaining public trust. Police forces should work with community leaders to explain
how AI is being used and to listen to concerns. This dialogue will help to build a policing system that is both effective and fair.
- Employ Common Ethical Standards: National and local standards for the ethical use of AI, such as the Covenant for using AI in Policing,
should be implemented, and officers trained on these standards.
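As promised under the human-oversight point above, one way to enforce "human in the loop" in software is to make AI output advisory by construction: nothing the model recommends becomes actionable until a named officer signs it off with a recorded rationale. The class and field names below are hypothetical, not a real system's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIRecommendation:
    """An AI output that remains advisory until a human reviews it."""
    subject_id: str
    action: str                      # e.g. "include_in_briefing"
    model_confidence: float          # the model's score, shown for reviewer context
    reviewed_by: Optional[str] = None
    rationale: Optional[str] = None
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def approve(self, officer: str, rationale: str) -> None:
        # Record who decided and why, creating an audit trail for the decision.
        self.reviewed_by = officer
        self.rationale = rationale

    @property
    def actionable(self) -> bool:
        # The AI alone never authorises action, however confident the model is.
        return self.reviewed_by is not None and bool(self.rationale)

rec = AIRecommendation("subject-042", "include_in_briefing", 0.91)
assert not rec.actionable             # blocked until a human signs off
rec.approve("PC 4711 Smith", "Corroborated by two independent witness statements")
assert rec.actionable
```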
The future of policing is closely linked to technology, but the ethical use of AI depends on strong leadership. Senior officers have a responsibility to ensure that AI serves
the public good, not just the convenience of the force. We must take steps now to ensure that the AI systems we are using are fair, transparent, and accountable.
Policing is about protecting the vulnerable and ensuring fairness under the law.
In the age of AI, these principles remain as vital as ever. Let's make sure that technology reflects them. At Sopra Steria, recently rated best in class for AI by PAC, we apply internal governance to every purchase, use and sale of AI. For more information on AI ethics, or on how a digital ethics strategy can help your force, contact Kevin Macnish, Head of Ethics and Sustainability Consulting at Sopra Steria: kevin.macnish@soprasteria.com.