AI (143)
Find narratives by ethical themes or by technologies.
-
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
Examines racial bias in facial recognition software used for government civil surveillance in Detroit. The racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to survey citizens? Who loses agency, and who gains agency?
-
- 7 min
- TED
- 2017
Justice in the Age of Big Data
Predictive policing software such as PredPol may claim to be objective through mathematical, “colorblind” analyses of geographical crime areas, yet this supposed objectivity is not free of human bias and is in fact used as a justification for the further targeting of oppressed groups, such as poor communities or racial and ethnic minorities. Further, the balance between fairness and efficacy in the justice system must be considered, since algorithms tend more toward the latter than the former.
Should we leave policing to algorithms? Can any “perfect” algorithm for policing be created? How can police departments and software companies be held accountable for masquerading bias as the objectivity of an algorithm?
-
- 27 min
- Cornell Tech
- 2019
Teaching Ethics in Data Science
Solon Barocas discusses his relatively new course on ethics in data science, part of a larger trend of developing ethical sensibility in the field. He shares ideas for spreading lessons across courses, promoting dialogue, and making sure students genuinely analyze problems while learning to stand up for the right thing, and he offers a case study in technological ethical sensibility through questions raised by predictive policing algorithms.
Why is it important to implement ethical sensibility in data science? What could happen if we do not?
-
- 7 min
- Vice
- 2019
Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed
An academic perspective on an algorithm created by PredPol to “predict crime.” Unless every crime is reported, and unless police pursue all types of crime committed by all people equally, no reinforcement learning system can predict crime itself. Rather, police find crimes in the same places they have been told to look for them, feeding the algorithm ineffective data and allowing unjust police targeting of communities of color to continue on the strength of trust in the algorithm.
Can an algorithm which claims to predict crime ever be fair? Is it ever justified for volatile actors such as police to act based on directions from a machine, where the logic is not always transparent?
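The feedback loop described above can be sketched in a toy simulation. The numbers and allocation rule below are hypothetical, not PredPol's actual model: two neighborhoods have an identical true crime rate, patrols go wherever recorded crime is highest, and crime is only recorded where patrols are present.

```python
# Toy simulation (hypothetical; not PredPol's actual algorithm) of the
# predictive-policing feedback loop: two neighborhoods with the SAME true
# crime rate, patrols sent to whichever has more *recorded* crime.
TRUE_RATE = 0.3        # identical underlying crime rate in both areas
PATROLS = 100          # patrols dispatched per round

# A small historical recording imbalance is all it takes.
recorded = [11.0, 9.0]

for week in range(30):
    hotspot = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only *recorded* where patrols are present, so the
    # designated "hotspot" accumulates data while the other area goes dark.
    recorded[hotspot] += PATROLS * TRUE_RATE

share = recorded[0] / sum(recorded)
print(f"share of recorded crime in neighborhood 0: {share:.2%}")
```

Despite identical underlying crime, nearly all recorded crime ends up in the neighborhood that started with slightly more data, which is exactly the "police find crimes where they've been told to look" dynamic the article describes.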
-
- 2 min
- Wired
- 2019
Artificial Intelligence Is Coming For Our Faces
Machine learning algorithms can now generate convincing synthetic human faces.
What are some consequences to AI being able to render fake yet believable human faces?
-
- 7 min
- MIT Technology Review
- 2019
Hackers Are the Real Obstacle for Self-Driving Vehicles
Autonomous vehicles could be vulnerable to adversarial machine-learning attacks, possibly perpetrated by truck and ride-share drivers put out of work by automation. The fact that vehicle algorithms can already be tricked fairly easily raises further concerns.
Had you considered this obstacle to self-driving vehicles? How would this risk affect the business of self-driving vehicles? What are the consequences of companies not fully understanding the machine learning algorithms they use? Should we use self-driving vehicles while this threat stands?
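The adversarial-attack idea behind this article can be sketched with a toy example. The classifier below is a made-up linear model, not a real perception system: a score above zero means "stop sign detected", and a small, bounded nudge to each input feature, aimed against the model's weights, is enough to flip the decision.

```python
# Minimal sketch of an adversarial example against a hypothetical toy
# linear "stop sign" classifier (score > 0 means "stop sign detected").
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

weights = [3.0, -1.5, 2.5, 0.8]   # hypothetical learned weights
bias = -4.0

def score(features):
    return sum(w * f for w, f in zip(weights, features)) + bias

x = [0.8, 0.1, 0.9, 0.5]          # clean input: classified as a stop sign
print(score(x))                    # positive score

# FGSM-style step: nudge each feature by at most epsilon in the direction
# that lowers the score (for a linear model the gradient is just `weights`).
epsilon = 0.15
x_adv = [f - epsilon * sign(w) for f, w in zip(x, weights)]
print(score(x_adv))                # negative score: no longer a "stop sign"
```

Real attacks on vision systems perturb pixels rather than four abstract features, but the principle is the same: the perturbation is small and targeted, yet the model's output changes completely.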