Artificial Intelligence has been getting a lot of attention recently. First there was the explosion of interest in ChatGPT, which is already having a huge impact on education and business – in many cases positive, but in some cases not so much. Then there was the news last week that Geoffrey Hinton, one of the ‘Godfathers of AI’, has quit Google over his concerns about the development of AI.
Meanwhile, the wide-ranging debate around AI in policing only seems to intensify.
Will AI revolutionise the way that policing approaches data and analytics in the UK – or would it be wiser for the service to keep AI at arm’s length until its impact and challenges are better understood?
Principle One’s Phil Tomlinson considers both sides of the argument.
The concept of AI isn’t new. It’s not even new-ish. Talos, the mechanical bronze giant that protected Crete from Jason and the Argonauts, dates to the 8th century BC. Talos was eventually destroyed when Jason pulled a plug out of his ankle – an IT fix that still works today.
Slightly more recently, in 1950 Alan Turing suggested that within 50 years we would be unable to distinguish a human from a machine, and that a machine would even be able to beat a human at chess. In fact, a computer first beat a human at a (simplified) game of chess as early as 1956, and no human has beaten a top chess computer in a tournament for over 15 years. AI is everywhere – so much so that we’ve stopped noticing it.
Satnavs, aircraft piloting systems, chatbots, translation tools, streaming services, social media, online dating, online gaming and online shopping all use AI routinely to make services more intuitive in a world increasingly cluttered with options and choices. We want to get to the answers fast and we want AI to take us there.
So why, if everyone is using and trusting AI every day, is there still so much concern about its use in policing? Certainly, using an algorithm that may lead to a person being arrested has very different consequences to clicking a boxset recommendation on Netflix. But if it gets resource-constrained police officers to the right answer more quickly – surely that’s a good thing, right? Especially if it means identifying a suspect, safeguarding the public and placing police resources in the best place to target and reduce crime.
Well, let’s look at the perception of AI in popular culture. The Terminator, Robocop, HAL, Agent Smith and WOPR (one for the 80s kids) are just a few examples of where AI has been seen as untrustworthy or out of control. Our own programming doesn’t yet let us trust AI by default.
So, what does this mean for policing? Is AI a force for good that will enable forces to be faster, more effective and more trusted – or will it lead to increased bias, greater workloads and a perception that policing is using new technology without the necessary protection, privacy or transparency? Quite simply, there is a lot more at stake when we use AI in policing than in many of the day-to-day applications that we take for granted.
Perhaps the term Artificial Intelligence is a little unhelpful to policing – and gives the impression that decisions are being made by some kind of autonomous detector-bot, using poorly recorded information on an old police database. This is not the case, and it often surprises people when they hear that the police have been using AI for decades.
Artificial Intelligence is best described as having three categories – Assistive, Augmented and Autonomous.
Assistive Intelligence allows us to undertake laborious tasks far more quickly, easily and accurately. It doesn’t make decisions for us – it simply gets us to the right information so we can assess it and make prompt decisions. It’s the Control-F of AI in many ways, and something police officers and staff use every day within even the most basic intelligence and crime management systems.
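As a rough illustration of that ‘Control-F’ idea, here’s a minimal sketch in Python – the records, field names and search term are invented for illustration – of assistive search: the software surfaces the matching records, and a human still reads and judges every one.

```python
# Minimal sketch of assistive intelligence: keyword search that surfaces
# records for a human to assess. Records and fields are hypothetical.

records = [
    {"id": 101, "summary": "Blue van seen near the warehouse on Friday night"},
    {"id": 102, "summary": "Report of a stolen bicycle on the high street"},
    {"id": 103, "summary": "Witness describes a blue van leaving at speed"},
]

def assistive_search(records, term):
    """Return every record mentioning the term – no decision is made."""
    return [r for r in records if term.lower() in r["summary"].lower()]

# The officer still reads and judges each hit; the tool just found them faster.
for hit in assistive_search(records, "blue van"):
    print(hit["id"], "-", hit["summary"])
```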
We agree this is good for policing and saves time, effort and money.
Augmented Intelligence allows us to complete tasks that simply wouldn’t be possible if done manually because of the vast amounts of information to be analysed. Not only does it get us to the right information, but it allows us to generate additional insight and understanding via mapping, timelines and network charts. Often referred to as Augmented Analytics, it can also work proactively for us to exploit data and produce reports for us to assess. This is something police analysts will be very familiar with if they examine large volumes of ANPR, Communications Data or CCTV images using analytical software. So, this is good AI for policing too.
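To make that concrete, here’s a hedged Python sketch – the events and phone numbers are entirely made up – of the kind of co-occurrence counting that sits behind a network chart: the code derives weighted links between numbers from raw event data, and the analyst assesses what those links mean.

```python
# Minimal sketch of augmented analytics: deriving network-chart edges from
# raw communications-style data. The data and fields are hypothetical.
from collections import Counter
from itertools import combinations

# Each entry: the set of phone numbers involved in one event.
events = [
    {"07700900001", "07700900002"},
    {"07700900001", "07700900003"},
    {"07700900002", "07700900001"},
]

# Count how often each pair of numbers co-occurs – these become the
# weighted edges of a network chart for an analyst to assess.
edges = Counter()
for event in events:
    for a, b in combinations(sorted(event), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} <-> {b}: seen together {weight} time(s)")
```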
Autonomous Intelligence is the one that sets the hares running because it is not only finding the data, but also assessing it and making decisions without a ‘human in the loop’ – it’s the autopilot of AI. Interestingly, we’ve trusted autopilot in aircraft for decades and think nothing of drifting off to sleep in the safe hands of a piece of software, confident that a pilot will take over if necessary. So, why do the public struggle to trust AI in policing, even though the police are there to take over when necessary too?
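A minimal sketch of that ‘autopilot with a pilot on standby’ pattern – assuming a hypothetical triage function and an invented confidence threshold – might look like this:

```python
# Sketch of autonomous intelligence with a human takeover point.
# Function name, cases and thresholds are hypothetical.

def autonomous_triage(case, confidence, human_review_below=0.95):
    """Act autonomously above the confidence bar, but hand control back
    to a person when less sure – like a pilot taking over from autopilot."""
    if confidence >= human_review_below:
        return f"Case {case}: auto-prioritised (confidence {confidence:.2f})"
    return f"Case {case}: referred to an officer (confidence {confidence:.2f})"

print(autonomous_triage("A17", 0.98))
print(autonomous_triage("B04", 0.71))
```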
Is it this concept of ‘black box’ analytics that’s causing the trust and confidence issues for policing? We are used to working with deterministic evidence based on facts that lets us prove a case in court. This is language that is familiar to the public and that they associate with the criminal justice system; it is how we arrive at ‘nothing but the truth’.
Once we introduce AI, we are dealing with probabilistic evidence – it’s about likelihoods, percentages, probabilities and possibilities. It might be the truth, based on the algorithms used. This concept is already familiar to us in forensic science and in particular DNA analysis – but there it’s backed up by tangible evidence and experience. However, if we think back to the initial use of DNA in policing – it got a very hard time too. Its accuracy and reliability were rightly challenged and questioned.
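The deterministic/probabilistic distinction is easy to show in code. In this illustrative Python sketch – the scores and threshold are invented, not real forensic values – one function returns a provable fact, while the other returns a likelihood for a human (or a court) to weigh:

```python
# Sketch contrasting deterministic and probabilistic evidence.
# Scores and threshold here are illustrative, not real forensic values.

def deterministic_match(record_a, record_b):
    """A fact: either the records are identical or they are not."""
    return record_a == record_b

def probabilistic_match(similarity_score, threshold=0.90):
    """A likelihood: the model reports how confident it is, and a human
    (or a court) decides what that confidence is worth."""
    return {"score": similarity_score, "flagged": similarity_score >= threshold}

print(deterministic_match("ABC123", "ABC123"))   # True – a provable fact
print(probabilistic_match(0.93))                 # {'score': 0.93, 'flagged': True}
```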
So what can we learn from that experience?
Well firstly, you need to be able to understand your use of AI – or at least have someone who can explain it: an expert who can stand up in court. A black box won’t be standing in the witness box explaining its decisions. And if the police don’t understand how the algorithms work, then they won’t be able to explain them either. AI often lacks clarity in its communication – and the companies that develop AI are understandably reluctant to share what is valuable IP. Or maybe they’re not sure either?!
Another challenge we face in policing is the quality, range and potential bias of the datasets at our disposal. Just like humans, and indeed DNA, if the evidence is flawed or contaminated, so is the decision-making. This leads to questions about accuracy and reliability. If an analyst has to keep checking the results, they may as well do it themselves and save time. We then risk having an expensive black box sat in the corner gathering dust.
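To illustrate the point, here’s a minimal Python sketch of the kind of sanity check you might run on a dataset before trusting a model trained on it – the areas, labels and counts are entirely hypothetical:

```python
# Minimal sketch of a dataset bias check before training or trusting a model.
# Groups, labels and counts are entirely hypothetical.
from collections import Counter

dataset = [
    {"area": "north", "flagged": True},
    {"area": "north", "flagged": True},
    {"area": "north", "flagged": False},
    {"area": "south", "flagged": False},
    {"area": "south", "flagged": False},
]

# Compare how often each area is flagged – a skew here will be learned
# and amplified by any model trained on this data.
totals = Counter(r["area"] for r in dataset)
flags = Counter(r["area"] for r in dataset if r["flagged"])

for area in totals:
    rate = flags[area] / totals[area]
    print(f"{area}: {flags[area]}/{totals[area]} flagged ({rate:.0%})")
```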
So maybe this is where we should draw the line on AI in policing, right?
Well yes – and no. The use of Autonomous Intelligence should not be seen as a no-go area for policing – quite the opposite. We have access to some of the most innovative thinking around AI, both in academia and in industry, and a strong commitment across both communities to use it for good. We’re not talking AI for the sake of it – we’re talking AI to achieve better outcomes for victims, communities and policing through careful and targeted application.
However, only through open, transparent and carefully controlled testing will trust and confidence be maintained – and only then can we tackle the perception that the police are blindly harvesting the public’s data and allowing the machines to take over the decisions.
Can it replace the skill, experience, knowledge and instinct that sits in the heads of front-line officers, detectives and analysts? Nope – well, not just yet. But it can write you a terrific 10-point AI Policing Strategy in 10 seconds – I checked on ChatGPT. So, writing a strategy is the easy part – implementing the change necessary to deliver the strategy and realise all the operational benefits of AI is a bit trickier.