All Narratives (77)
Find narratives by ethical theme or by technology.
-
- 5 min
- New York Times
- 2020
A Case for Facial Recognition
As Detroit city official James Tate argues, deciding whether law enforcement should be trusted with facial recognition is difficult. On one hand, the combination of the bias latent in the technology itself and the human bias of those who use it can lead to over-policing of certain communities. On the other hand, with the right guardrails, it can be an effective tool for securing justice in cases of violent crime. This article details the ongoing debate over how much use of facial recognition technology is appropriate in Detroit.
Who should decide on the guardrails surrounding the use of facial recognition technology? How can citizens gain more control over when their faces are recorded or captured? Can there ever be enough guardrails to guarantee that facial recognition technology is used without any chance of bias?
-
- 5 min
- CNET
- 2019
Demonstrators scan public faces in DC to show lack of facial recognition laws
Fight for the Future, a digital activist group, used Amazon’s Rekognition facial recognition software to scan faces on the street in Washington, D.C. Its aim was to show that stronger guardrails are needed before this kind of technology is deployed for ends that violate human rights, such as identifying peaceful protesters.
Does this kind of stunt seem effective at drawing the public’s attention to the ways facial recognition can be misused? How? Who decides what counts as a “positive” use of facial recognition technology, and how can these use cases be negotiated with citizens who want their privacy protected?
-
- 11 min
- Kinolab
- 2016
Hacked Drones and Targeting Citizens
Detectives Karin Parke and Blue Coulson work together to stop a series of mysterious murders driven by the #DeathTo trend on social media. In this “Game of Consequences,” the person mentioned most under the hashtag each day becomes the target of ADIs (autonomous drone insects), government drones shaped like bees that track down and kill their targets. The trend was spurred by bots on social media that drew many people into participating, and a lone hacker turns out to be responsible both for the bots and for the abuse of the drones. After the detectives fail to protect one victim of the #DeathTo trend, they attempt to shut down the malware, but instead discover a large data mine and unleash a far greater danger.
Can the “unreal” nature of digital platforms ever truly remove harmful intent from inflammatory words or statements? How should “free speech” be regulated on platforms where not everything can be taken literally? How can the information available about a person through their social media use be abused to make them a target? Should the government use cutting-edge digital technology if there is even the slightest chance that it can be abused? Should there be consequences for showing a lack of empathy toward others on digital platforms?
-
- 7 min
- Mad Scientist Laboratory
- 2018
Man Machine Rules
The combination of the profit motive for tech companies and the vague language of non-binding ethical agreements for coders means that stronger regulation of the ethical deployment and use of technology is needed. The author argues that there must be clear demarcations between what is considered real and human versus fake and virtual, and that digital technologies should be regulated in a manner similar to other technologies, such as guns, cars, or nuclear weapons.
How do we ensure the ethical use of AI by digital tech giants? Should there be an equivalent of the Hippocratic oath for the development of digital technology? How would you imagine something like this being put in place? Are the man-machine rules laid out at the end of the article realistic?
-
- 7 min
- MIT Technology Review
- 2019
Hackers Are the Real Obstacle for Self-Driving Vehicles
Autonomous vehicles could be vulnerable to attacks based on adversarial machine learning, possibly perpetrated by out-of-work truck and ride-share drivers. The fact that vehicle algorithms can already be tricked fairly easily raises further concerns.
Had you considered this major obstacle to self-driving? How would this risk affect the business of self-driving vehicles? What are the consequences of companies not fully understanding the machine learning algorithms they use? Should we use self-driving vehicles while this threat stands?
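The “tricking” the article describes can be made concrete. Below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial machine learning attacks: it nudges each pixel of an image in the direction that most increases a classifier’s loss, often flipping the prediction while the image looks unchanged to a human. The pretrained classifier, the random stand-in “frame,” and the epsilon value are all illustrative assumptions, not details from the article.

```python
# Sketch of the fast gradient sign method (FGSM), assuming PyTorch
# and torchvision are installed. The model and input are stand-ins.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Move every pixel by +/- epsilon along the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: a random tensor standing in for a camera frame.
frame = torch.rand(1, 3, 224, 224)
adversarial = fgsm_attack(frame, true_label=0)
print(model(frame).argmax().item(), model(adversarial).argmax().item())
```

An attacker targeting a real vehicle would face the harder problem of making such perturbations survive the physical world (printing, lighting, viewing angle), which is why physical adversarial examples, such as subtly altered stop signs, are treated as the especially worrying case.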
-
- 7 min
- The New York Times
- 2019
She Was Arrested at 14. Then Her Photo Went to a Biometrics Database
Biometric facial recognition software, specifically the software the NYPD uses with arrest photos, makes extensive use of children’s arrest photos despite the technology’s far lower accuracy rate on juvenile faces.
How can machine learning algorithms cause inequality to compound? Would it be better practice to try to make facial recognition equitable across all populations, or to abandon its use in law enforcement altogether, as some cities like Oakland have done?
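The accuracy gap at the center of this story is, at bottom, a matter of disaggregated evaluation: the same matcher can look accurate overall while failing much more often on one group. The sketch below, with entirely invented match results, shows the kind of per-group accuracy computation that surfaces such a gap; the group labels and IDs are hypothetical.

```python
# Toy illustration of measuring match accuracy separately per group.
# All records below are invented for demonstration purposes.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

results = [
    ("adult", "id_1", "id_1"), ("adult", "id_2", "id_2"),
    ("adult", "id_3", "id_3"), ("adult", "id_4", "id_9"),
    ("juvenile", "id_5", "id_5"), ("juvenile", "id_6", "id_8"),
    ("juvenile", "id_7", "id_2"), ("juvenile", "id_8", "id_8"),
]
print(accuracy_by_group(results))  # {'adult': 0.75, 'juvenile': 0.5}
```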