Unlocking the power of computer vision

by Dr Kevin Macnish - Head of Ethics and Sustainability Consulting, Sopra Steria Next UK

Imagine a world where your commute is perfectly timed to dodge crowded trains or traffic jams, or where we could predict crime before it even happens. That’s the kind of promise AI and data analytics are offering when they crunch massive amounts of data to forecast human behaviour. We can get an idea of which periods are going to be busy at a train station, or which roads will be congested given the roadworks going on around the country.

Predicting problems

A dream in security has long been to predict when a person is about to commit a crime. One step toward this is predictive policing tools like PredPol, designed to forecast where crimes might happen. While these tools can be powerful, they’ve faced heavy criticism for reinforcing racial biases baked into the data they rely on. This is not the same as Minority Report, in which people were incarcerated for “pre-crime”. However, there have been calls to use AI and computer vision to identify people who are up to no good.

The limitations of AI in understanding human intentions

But here’s the catch: AI can’t read minds—neither can we. We want to spot someone planning to do something wrong, but all we can actually see are their actions. So, we end up looking for suspicious behaviour. And that’s not even touching on the cultural baggage that comes with deciding what counts as ‘suspicious’. To some people, a group of teenagers will always seem suspicious by their mere existence. We are left facing the problem that actions do not always indicate intentions.

Imagine seeing a man in his forties standing outside a nail bar. Is he stalking someone inside the nail bar, planning a robbery, or waiting for his daughter to finish getting her nails done? There are myriad ways in which we might make a judgment in this case. Some could be legitimate (does it look as if he is trying to hide his face, or trying to hide from someone inside the nail bar?) and some clearly not (are his eyes ‘too close together’, is he the ‘wrong colour’, or other stereotypes?).

Now imagine that we delegate the task of identifying a person loitering outside a nail bar to an AI using computer vision. We would need to spell out precisely which behaviours we regard as suspicious. But there’s only so much an automated system can spot. The difference between someone hiding his face and someone blowing his nose may be lost on an AI that hasn’t been trained to recognise a sneeze. Instead, the system must rely on blunter measures, such as whether a person stays within a circle of radius 2m over the space of 5 minutes, and use this as the definition of ‘loitering’.
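To make this concrete, a blunt rule of that kind might look something like the sketch below. The data format, thresholds and function names are illustrative assumptions for the sake of the example, not the logic of any real product:

```python
from dataclasses import dataclass
from math import hypot


@dataclass
class Observation:
    timestamp: float  # seconds since the start of the footage
    x: float          # estimated position in metres
    y: float


def flags_as_loitering(track: list[Observation],
                       radius_m: float = 2.0,
                       window_s: float = 300.0) -> bool:
    """Return True if, over some full 5-minute window, every observed
    position stays within 2 m of where the person was at its start.
    Assumes the observations are in time order."""
    for i, start in enumerate(track):
        # Stop once the remaining footage is shorter than a full window.
        if track[-1].timestamp - start.timestamp < window_s:
            break
        window = [o for o in track[i:]
                  if o.timestamp - start.timestamp <= window_s]
        if all(hypot(o.x - start.x, o.y - start.y) <= radius_m for o in window):
            return True
    return False
```

Notice that nothing in this rule says anything about intention: a father waiting for his daughter and a would-be robber casing the shop can produce exactly the same track.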

This wouldn’t be so bad if the system simply reported that the man had remained within a circle of set radius for a set number of minutes. But that won’t meet the desire to identify loitering, and it would mean nothing to the security guard required to act on the system’s recommendations. To ensure the system meets requirements and is meaningful to the user, a translation is required from ‘loitering’ to ‘remaining within a circle of set radius for a set number of minutes’ and then back to ‘loitering’. The problem is that the two terms are not synonymous. We have seen already that a person may remain within a circle of set radius for a set number of minutes without any bad intentions.

Balancing AI and human judgement

The problem is exacerbated by two further considerations. The first is automation bias—our tendency to trust what an automated system tells us. This was spotted back in the 1970s, when pilots would believe an instrument warning that an engine was on fire even when they could see with their own eyes that it wasn’t. It remains a problem today, most notably with people who follow their satnav into a river, or who trust a self-driving car to handle oncoming traffic, only to find out (too late) that it can’t.

The second consideration is that as automated systems become faster and more complex, it is harder to maintain a ‘human in the loop’ to decide whether to act. In cases where an AI is used to identify fraud, for example, the amount of information processed can be so vast that there is no way a human can effectively understand the input and make a meaningful decision prior to action. Because of this, we are likely to see increasing instances of a ‘human on the loop’. This is where a person can review decisions at will, but that person’s consent is not required for the decision to be enacted.
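The distinction is easier to see in code. In the sketch below (purely illustrative; the function names and the alert format are assumptions made for the example), the only difference is whether the operator’s approval sits before or after the action:

```python
from typing import Callable

Alert = dict  # e.g. {"camera": "07", "rule": "loitering", "track_id": 42}


def human_in_the_loop(alert: Alert,
                      ask_operator: Callable[[Alert], bool],
                      act: Callable[[Alert], None]) -> None:
    # Nothing is enacted unless the operator explicitly approves it first.
    if ask_operator(alert):
        act(alert)


def human_on_the_loop(alert: Alert,
                      act: Callable[[Alert], None],
                      review_queue: list[Alert]) -> None:
    # The action is enacted immediately; the operator can review it later,
    # but their consent is not required.
    act(alert)
    review_queue.append(alert)
```

As alert volumes grow, the first pattern becomes a bottleneck and the second becomes the default, which is exactly where human judgement starts to drop out of the picture.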

These two considerations effectively remove human judgement from the equation. Either the person doesn’t use their critical faculties, or the person has been removed for the sake of expediency. In both cases we lose the human ability to recognise context and nuance.

This is not to say that AI is unreliable or should not be used for security. Rather, we need to approach scenarios with caution and a developed awareness of what computers still can’t do and may never be able to do.

Ensuring responsible AI governance

Going forward, we have to avoid the trap of automation bias and remember: AI isn’t perfect. It’s up to us to ask tough questions, demand transparency, and make sure these systems are governed responsibly—before we hand over too much control. We need to recognise the limitations of AI, and the harms that can arise from placing too much trust in its abilities.

We can do this through education and robust governance throughout the development and deployment of AI. Governance needs to be sufficiently diverse to recognise the range of problems that may arise, and sufficiently expert to identify what the system is really doing, rather than how its decision-making is portrayed in everyday language. This needs to be built into development and deployment processes so that it is not an optional extra, but a requirement for all AI. It also needs to involve sufficiently senior members of an organisation to ensure that the outputs are seen as requirements and not recommendations.

Education needs to go beyond the annual check box exercise where everyone takes an online course with 10 multiple choice questions at the end. It must engage seriously with the values, ambitions and culture of the organisation. If it only leads to an increase in knowledge, rather than root and branch behaviour change, it’s failed.

Sopra Steria has achieved all of these with its internal AI governance board and associated education. The board reviews all instances of AI in the company, and its review is a requirement for all development and use of AI. It involves, among others, the company’s Chief Technical Officer, Chief Information and Security Officer, Head of Commercial, Head of Procurement, Data Protection Officer and Head of Ethics Consulting.

Kevin Macnish has been working in and teaching AI governance since 2010, and has advised the European Commission, governments and industry in this area. If you would like to know more, or would like help setting up your own AI governance and education, please reach out to Kevin.
