Find narratives by ethical themes or by technologies.
-
- 6 min
- Kinolab
- 2019
Resisting Realities and Robotic Murder
Eleanor Shellstrop runs a fake afterlife, in which she conducts an experiment to prove that humans with low ethical sensibility can improve themselves. One of the subjects, Simone, is in deep denial upon arriving in this afterlife, and does as she pleases after convincing herself that nothing is real. Elsewhere, Jason, another organizer of the experiment, kills a robot that has been taunting him since the experiment began.
What are the pros and cons of solipsism as a philosophy? Does it pose a danger of making us act immorally? How does the risk of solipsism apply to a technology such as virtual reality, a space where we know nothing is real except our own feelings and perceptions? Should virtual reality have ethical rules to prevent solipsism from taking hold there, and could that attitude leak into our daily lives as well?
Is it ethical for humans to kill AI beings in fits of negative emotions, such as jealousy? Should this be able to happen on a whim? Should humans have total control of whether AI beings live or die?
-
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
Racially biased facial recognition software is used for government civil surveillance in Detroit. The technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to surveil citizens? Who loses agency, and who gains it?
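For a sense of scale, here is a back-of-envelope sketch, with hypothetical numbers rather than figures from the article, of why city-wide face surveillance produces mostly false alerts even when the matcher looks accurate on paper, and why a higher error rate for one group concentrates those false alerts on that group.

```python
# Back-of-envelope arithmetic: false alerts from city-wide face surveillance.
# All numbers are hypothetical, chosen only to illustrate the base-rate effect.

scans_per_day = 100_000        # faces scanned across the camera network
persons_of_interest = 20       # scans that genuinely match a watchlist entry
true_match_rate = 0.95         # chance a genuine match is flagged

def daily_alerts(false_match_rate):
    """Expected true and false alerts per day at a given false-match rate."""
    innocent_scans = scans_per_day - persons_of_interest
    false_alerts = innocent_scans * false_match_rate
    true_alerts = persons_of_interest * true_match_rate
    return true_alerts, false_alerts

# A group the system misidentifies 5x more often receives 5x the false alerts.
for label, fmr in [("lower-error group", 0.001), ("higher-error group", 0.005)]:
    true_alerts, false_alerts = daily_alerts(fmr)
    precision = true_alerts / (true_alerts + false_alerts)
    print(f"{label}: ~{false_alerts:.0f} false alerts/day; "
          f"only {precision:.1%} of alerts are real matches")
```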
-
- 15 min
- Hidden Switch
- 2018
Monster Match
A hands-on learning experience about the algorithms used in dating apps, played from the perspective of a monster avatar that you create.
How do algorithms in dating apps work? What gaps seemed most prominent to you? What upset you most about the way this algorithm defined you and the choices it offered to you?
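As the questions above hint, the matching logic Monster Match dramatizes is collaborative filtering: your queue is shaped less by your stated preferences than by how other users swiped. Below is a minimal, hypothetical sketch of that general technique, not the code of any real app.

```python
import numpy as np

# Hypothetical swipe matrix: rows are users, columns are profiles.
# 1 = swiped right, -1 = swiped left, 0 = not yet shown.
swipes = np.array([
    [ 1, -1,  0,  1],
    [ 1, -1, -1,  0],
    [ 0, -1,  1,  1],
])

def profile_scores(swipes, user):
    """Score each unseen profile for `user` by weighting other users' swipes
    by how similarly they swiped on profiles this user has already seen."""
    scores = {}
    for p in range(swipes.shape[1]):
        if swipes[user, p] != 0:          # already shown; skip
            continue
        total = 0.0
        for other in range(swipes.shape[0]):
            if other == user:
                continue
            seen_by_both = (swipes[user] != 0) & (swipes[other] != 0)
            if not seen_by_both.any():
                continue
            # Similarity = share of commonly seen profiles they agreed on.
            sim = (swipes[user, seen_by_both] == swipes[other, seen_by_both]).mean()
            total += sim * swipes[other, p]
        scores[p] = total
    return scores

# Profiles that early users reject are scored down for everyone else too,
# so a profile can become nearly invisible before most users ever see it.
print(profile_scores(swipes, user=2))
```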
-
- 3 min
- CNET
- 2019
Thanks to Equifax breach, 4 US agencies don’t properly verify your data, GAO finds
US government agencies rely on outdated verification methods, increasing the risk of identity theft.
If the government does not ensure our cybersecurity, then who does? Can any digital method of identity verification be completely safe, especially given how much of our personal data lives in the digital world?
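The weakness at issue is characteristic of knowledge-based verification, in which an agency confirms who you are by quizzing you on facts drawn from your credit file. The sketch below uses invented data and field names, not any agency's actual system, to show why that model breaks after a breach: once the reference records are stolen, knowing the answers no longer proves you are the person they describe.

```python
# Hypothetical illustration of knowledge-based verification (KBV).
# The verifier asks questions whose answers come from a credit file,
# the same kind of data exposed in a large breach.

credit_file = {
    "prior_street": "Oak Ave",
    "auto_loan_lender": "Example Bank",
    "monthly_mortgage_range": "$1,000-$1,500",
}

def kbv_check(answers, reference=credit_file):
    """Pass the check if every answer matches the record on file."""
    return all(answers.get(field) == value for field, value in reference.items())

# The legitimate person passes...
print(kbv_check({"prior_street": "Oak Ave",
                 "auto_loan_lender": "Example Bank",
                 "monthly_mortgage_range": "$1,000-$1,500"}))  # True

# ...but so does anyone holding a stolen copy of the credit file,
# because knowledge of the data is the only factor being tested.
print(kbv_check(dict(credit_file)))                            # True
```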
-
- 7 min
- The New York Times
- 2019
She Was Arrested at 14. Then Her Photo Went to a Biometrics Database
The NYPD's biometric facial recognition system makes extensive use of children's arrest photos, even though the software is far less accurate on juvenile faces.
How can machine learning algorithms cause inequality to compound? Would it be better practice to try to make facial recognition equitable across all populations, or to abandon its use in law enforcement altogether, as some cities like Oakland have done?
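One concrete way to see how an accuracy gap compounds is to audit the matcher's false match rate separately for each group it is run against. The sketch below uses made-up outcomes, not NYPD data, to show the bookkeeping: the group with the higher false match rate has more innocent people flagged every time the database is queried.

```python
from collections import defaultdict

# Hypothetical match outcomes: (group, predicted_match, actually_same_person).
# Made-up records for illustration only.
outcomes = [
    ("adult", True, True), ("adult", False, False), ("adult", True, False),
    ("adult", False, False), ("adult", True, True), ("adult", False, False),
    ("juvenile", True, False), ("juvenile", True, False), ("juvenile", True, True),
    ("juvenile", False, False), ("juvenile", True, False), ("juvenile", False, False),
]

def false_match_rate_by_group(outcomes):
    """False match rate = wrong matches / all comparisons of different people."""
    wrong = defaultdict(int)
    different = defaultdict(int)
    for group, predicted, same in outcomes:
        if not same:
            different[group] += 1
            if predicted:
                wrong[group] += 1
    return {g: wrong[g] / different[g] for g in different}

# A higher false match rate for juveniles means more innocent children are
# flagged as suspects whenever the system is queried against their photos.
print(false_match_rate_by_group(outcomes))
```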
-
- 5 min
- MIT Technology Review
- 2019
Police across the US are training crime-predicting AIs on falsified data
In New Orleans, as in several other cities, the data used to train predictive crime algorithms was inconsistent and “dirty” to begin with, so the resulting predictions disproportionately target disadvantaged communities.
If the data we train algorithms on is inherently biased, can we ever truly get a “fair” algorithm? Can AI programs ever solve or remove human bias? What might happen if machines make important criminal justice decisions, such as determining sentence lengths?
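Part of the “dirty data” problem is a feedback loop: predictions decide where officers patrol, patrols generate new records in those same places, and the records become the next round of training data. The toy simulation below, built on invented numbers rather than real crime data, shows how an initial recording bias can persist indefinitely even when two neighborhoods have identical underlying crime.

```python
import random

# Toy feedback-loop simulation. Two neighborhoods have the SAME true crime
# rate, but one starts with more recorded incidents because it was
# historically over-patrolled. All numbers are invented.
random.seed(0)
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical by construction
recorded = {"A": 50, "B": 100}             # biased historical records

for year in range(5):
    # The "model" allocates 100 patrols in proportion to recorded incidents.
    total = sum(recorded.values())
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    # More patrols in a neighborhood means more of its (equal) crime is recorded.
    for n in recorded:
        recorded[n] += sum(random.random() < true_crime_rate[n]
                           for _ in range(patrols[n]))
    print(f"year {year}: patrols {patrols}")

# The initial bias never washes out: B keeps receiving roughly twice the
# patrols of A, because the model learns where police looked in the past,
# not where crime actually happened.
```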