Use of technology in attempts to reduce crime, sometimes before it even happens
Predictive Policing (8)
- 5 min
- Time Magazine
- 2017
The Police Are Using Computer Algorithms to Tell If You’re a Threat
Chicago police deploy an algorithm that calculates a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and striking against it pre-emptively. However, these numbers are inherently linked to human bias in both input and outcome, and could lead to unfair targeting of citizens even as the system supposedly introduces objectivity to policing. (A toy sketch of such a scoring function follows the discussion questions below.)
Is the police risk-score system biased, and does it mitigate or amplify human bias? Is it plausible to use digital technology to eliminate bias from American policing, or is this impossible? What might that look like? Does reliance on numerical data give police and tech companies more power or less?
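To make the mechanism concrete, here is a purely illustrative scoring function in Python. It is not the Chicago system’s actual model; the two inputs, the weights, and the 0–500 scale are all invented for the example. The point is that a “score” is only as neutral as the records fed into it.

```python
# Illustrative only: a toy "risk score" in the spirit of the one described
# above. The weights and scale are invented; this is NOT the real model.

def risk_score(prior_arrests: int, age: int) -> float:
    """Toy score: more recorded arrests and younger age raise the number."""
    score = 50.0 * prior_arrests + 2.0 * max(0, 40 - age)
    return min(score, 500.0)  # cap at an arbitrary ceiling

# Two people with identical behaviour can diverge purely because one lives
# in a more heavily policed area and so has more *recorded* arrests.
print(risk_score(prior_arrests=1, age=30))  # 70.0
print(risk_score(prior_arrests=4, age=30))  # 220.0
```

Nothing in the arithmetic is “biased,” which is exactly the problem: the bias arrives through the inputs, and the clean numeric output lends it a veneer of objectivity.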
-
- 7 min
- Vice
- 2019
Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed
An academic perspective on an algorithm created by PredPol to “predict crime.” Unless every single crime is reported, and unless police pursue all types of crimes committed by all people equally, it is impossible for a reinforcement learning system to predict crime itself. Rather, police find crimes in the same places they have been told to look for them, feeding the algorithm skewed data and allowing unjust targeting of communities of color to continue on the strength of trust in the algorithm. (A toy simulation of this feedback loop follows the discussion questions below.)
Can an algorithm that claims to predict crime ever be fair? Is it ever justified for volatile actors such as the police to act on directions from a machine whose logic is not always transparent?
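The feedback loop the researchers describe can be made concrete in a few lines of simulation. Everything below is invented for illustration (two districts, one shared true crime rate, a single stray initial report); it is not PredPol’s code or data, but it shows how a self-reinforcing loop can manufacture a “hotspot” out of noise.

```python
import random

# A minimal sketch of the feedback loop described above: two districts with
# IDENTICAL true crime rates, but patrols go wherever past *recorded* crime
# is highest, and crimes are only recorded where patrols are looking.

random.seed(0)
TRUE_CRIME_RATE = 0.3          # the same in both districts
recorded = {"A": 1, "B": 0}    # district A starts with one extra report

for day in range(365):
    # "Predictive" step: patrol the district with more recorded crime.
    patrolled = max(recorded, key=recorded.get)
    for district in recorded:
        crime_happened = random.random() < TRUE_CRIME_RATE
        # Crime occurs everywhere, but is only recorded where the patrol is.
        if crime_happened and district == patrolled:
            recorded[district] += 1

print(recorded)  # roughly {'A': 110, 'B': 0} -- the initial nudge snowballs
```

District B’s recorded crime stays at zero not because no crime occurs there, but because no one is sent to observe it; by its own measure, the algorithm’s prediction looks perfectly accurate.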
-
- 7 min
- TED
- 2017
Justice in the Age of Big Data
Predictive policing software such as PredPol may claim to be objective through mathematical, “colorblind” analyses of geographical crime areas, yet this supposed objectivity is not free of human bias and is in fact used as a justification for the further targeting of oppressed groups, such as poor communities or racial and ethnic minorities. Further, the balance between fairness and efficacy in the justice system must be considered, since algorithms tend more toward the latter than the former.
Should we leave policing to algorithms? Can any “perfect” algorithm for policing be created? How can police departments and software companies be held accountable for masquerading bias as the objectivity of an algorithm?
-
- 5 min
- MIT Technology Review
- 2019
Police across the US are training crime-predicting AIs on falsified data
In the case of the New Orleans Police Department, along with departments in other cities, the data used to train predictive crime algorithms was inconsistent and “dirty” to begin with, so the resulting predictions disproportionately target disadvantaged communities. (A toy example of how falsified records propagate into forecasts follows the discussion questions below.)
If the data we train algorithms on is inherently biased, can we ever truly get a “fair” algorithm? Can AI programs ever solve or remove human bias? What might happen if machines make important criminal justice decisions, such as setting sentence lengths?
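A minimal sketch of the “dirty data” problem, with invented numbers: a naive forecaster fit to padded records simply reproduces the padding, and nothing inside the model can tell an honest count from a falsified one.

```python
# Illustrative only: the incident counts are invented. A model fit to
# falsified records reproduces the falsification; it cannot detect it.

# Suppose true weekly incidents are equal, but District 1's records were
# padded (e.g., reclassified or fabricated reports), as the article alleges.
recorded_weekly_incidents = {
    "district_0": [5, 4, 6, 5],        # honest records
    "district_1": [5, 4, 6, 5 + 20],   # one falsified spike
}

# A naive "predictive" model: forecast next week as the historical mean.
forecast = {d: sum(weeks) / len(weeks)
            for d, weeks in recorded_weekly_incidents.items()}

print(forecast)
# {'district_0': 5.0, 'district_1': 10.0} -- patrols get steered toward
# district_1 on the strength of records that were never true.
```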
-
- 8 min
- Kinolab
- 2019
Social Media Data and Cooperation with Law Enforcement
Chris is a ride-share driver who has taken his passenger Jaden hostage; his condition for Jaden’s release is a conversation with Billy Bauer, the CEO of the social media company Smithereen. While the London police attempt to defuse the situation through negotiation, the management team at Smithereen uses several data mining techniques, including analysis of Chris’s various social media pages and the audio streaming from his device, to hand the police a complete and valuable profile of Chris.
Are things said on social media fair game for law enforcement to use against a person? Does this include data that a user might not even know a company has gathered on them? How might “abstractions” of a user formed by a social media company be misused to make a bad judgement about that person? Should social media information and the profiles companies build of their users be used in attempts to stop crimes before any wrongdoing is committed? What are the dangers of big data companies having a close relationship with law enforcement?
-
- 13 min
- Kinolab
- 2002
Preventative Policing and Surveillance Information
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” forecast future murders by streaming audiovisual data that reveals the surrounding details of each crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens.
How will predicted crime be prosecuted? Should predicted crime be prosecuted? How could technologies such as the ones shown here be affected for the worse by human bias? How would these devices make racist policing practices even worse? Would certain communities be targeted? Is there ever any justification for constant civil surveillance?