Police Story: AI as Crime Preventer, Crime-Solver

AI already has its eyes and ears on you

The use of AI by law enforcement agents (LEAs) is perhaps more pervasive than most people realize. Analysis published by the (U.S.) Congressional Research Service, drawing on a 2023 Executive Order from the Office of the U.S. President, identified a number of computer-driven tools LEAs already use:

▪ Automated license plate readers identify red-light violations and issue citations automatically to the vehicle’s registered owner (see the sketch at the end of this section).

▪ Security cameras with AI-embedded hardware can match crime suspects using real-time facial recognition.

▪ Advanced speech recognition software can help identify specific individuals’ voices when police or prosecutors are evaluating evidence for use in interrogations or witness testimony.

▪ Chatbots incorporated into police communications can enhance service to the public by responding to routine inquiries and pushing notifications of emergency information.

Yet, despite the slow creep – or perhaps because of the slow creep – of technology into law enforcement, some suggest we’re not asking the right questions about its use.
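To make the first item in the list above concrete, here is a minimal sketch of the kind of automated registry lookup a license plate reader system might perform. Everything in it – the plates, the registry, the citation logic – is invented for illustration; real systems involve OCR hardware, motor-vehicle databases, and legal review.

    # Hypothetical sketch of an automated license-plate citation flow.
    # All data and names here are fabricated for illustration.
    REGISTRY = {
        "7ABC123": "Registered Owner A",  # plate -> registered owner
        "4XYZ789": "Registered Owner B",
    }

    def cite_red_light_event(plate: str, camera_id: str) -> str:
        """Match a plate read against the registry and draft a citation."""
        owner = REGISTRY.get(plate)
        if owner is None:
            # No match: a human, not the system, decides what happens next.
            return f"Plate {plate}: no registry match; flag for human review."
        return f"Citation drafted for {owner} (plate {plate}, camera {camera_id})."

    print(cite_red_light_event("7ABC123", camera_id="CAM-42"))
    print(cite_red_light_event("0UNKNOWN", camera_id="CAM-42"))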

The stakes are high for all the stakeholders

Researchers from Marquette University, Carnegie Mellon University, the University of Colorado Boulder, the University of Chicago, and the University of Toronto gathered 60 participants from a mid-sized midwestern U.S. city, had them interact with tools and scenarios familiar to people on both sides of what has sometimes been called “the thin blue line” (referring to the separation between officers and civilians), and watched how they engaged in the various settings.

The participants were made up of community members, technical experts, and law enforcement agents. What the research aimed to discern was how these three groups’ lived experiences, technical knowledge, and domain expertise shaped their interactions with algorithmic decision-support systems (ADS), and how that engagement in turn shaped their human-AI decision making.

The domain experts readily accepted the first crime map shown to them, a phenomenon known as anchoring bias. Community members and technical experts were more prone to proactively engage with the tool, modify the parameters that had been applied, and generate substantively different maps. All the feedback, the researchers determined, was useful to their recommendations on how best to implement AI in law enforcement: community members questioned the tool’s core motivation, technical experts noted the elasticity of data science practice itself, and LEAs suggested ways to redesign the tool to complement their real-world experience in the field.
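As a rough illustration of how modifying parameters can produce substantively different maps, consider a simple hotspot count: changing the time window or the included incident categories changes which neighborhoods light up. The incident data and parameter choices below are entirely fabricated.

    # Fabricated incident data: (neighborhood, category, days_ago).
    REPORTS = [
        ("Downtown", "theft", 3),
        ("Downtown", "vandalism", 40),
        ("Riverside", "theft", 10),
        ("Riverside", "theft", 70),
        ("Hilltop", "assault", 5),
    ]

    def hotspot_counts(reports, max_days_ago, categories):
        """Count incidents per neighborhood under the analyst's chosen parameters."""
        counts = {}
        for place, category, days_ago in reports:
            if days_ago <= max_days_ago and category in categories:
                counts[place] = counts.get(place, 0) + 1
        return counts

    # The same data yields different "maps" under different, equally defensible choices:
    print(hotspot_counts(REPORTS, max_days_ago=30, categories={"theft"}))
    print(hotspot_counts(REPORTS, max_days_ago=90, categories={"theft", "vandalism", "assault"}))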

The community members wanted LEAs to reconsider the incentive structures behind the tools, which, from their perspective, all too often result in racial profiling and the targeting of low-income communities.

The technical group pointed out the elasticity and subjective nature of the underlying data science and suggested a more collaborative, interdisciplinary approach to the decision-making such tools influence. The LEAs, for their part, frequently used the tool’s output to validate their experiences operating in the field.

The take-away from the study is this: law enforcement AI systems need to consider worker-centric experience and supplement that information with the experiences of the communities they serve. A system that incorporates both human experience and AI-derived decisions is better than one that relies on either alone.
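Read in software terms, that conclusion suggests a human-in-the-loop gate: an AI-derived score is never acted on without human sign-off, and community input can force additional scrutiny. The policy, thresholds, and field names in this sketch are all hypothetical, not anything prescribed by the study.

    def review_alert(ai_score: float, officer_agrees: bool, community_flagged: bool) -> str:
        """Combine an AI-derived score with human judgment before acting.

        Hypothetical policy: the AI alone can only suggest; action requires
        officer confirmation, and community concerns trigger oversight.
        """
        if ai_score < 0.5:
            return "dismiss"                       # AI sees nothing actionable
        if community_flagged:
            return "escalate to oversight review"  # community input forces a second look
        if officer_agrees:
            return "act on alert"                  # AI suggestion plus human sign-off
        return "hold for further investigation"    # disagreement means more scrutiny

    print(review_alert(ai_score=0.8, officer_agrees=True, community_flagged=False))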
