Civil Surveillance (39)
Video and data surveillance by public and private entities.
- 10 min
- New York Times
- 2019
As Cameras Track Detroit’s Residents, a Debate Ensues Over Racial Bias
This article examines racial bias in the facial recognition software used for government civil surveillance in Detroit. The racially biased technology diminishes the agency of minority groups and amplifies latent human bias.
What are the consequences of employing biased technologies to survey citizens? Who loses agency, and who gains agency?
-
- 3 min
- CNBC
- 2013
How Facial Recognition Technology Could Help Catch Criminals
Facial recognition software, which applies computer vision and biometric technology to an image of a person in order to identify them, has potential applications in law enforcement, such as helping to catch suspects. However, identification is probabilistic, especially as the captured photos or videos become blurrier and require an additional layer of software analysis to be “de-pixelized.” Identification also depends on which databases the FBI can access.
How should law enforcement balance training these facial recognition programs on large amounts of quality data against the privacy breach of accessing more databases of citizens’ faces? Where can human bias enter the human-computer systems described in the article? Should there be any margin of error, or any element of probability, in technologies deployed in volatile areas like law enforcement?
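The probabilistic identification the article describes can be made concrete with a minimal sketch. This is an invented illustration, not any real system: the embedding vectors, the 0.8 threshold, and the noise standing in for image blur are all assumptions made for the example.

```python
import math
import random

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match(probe, gallery, threshold=0.8):
    """Return the best database match only if it clears the threshold.

    The threshold encodes the trade-off the article describes:
    lowering it catches more true suspects but also produces more
    false identifications.
    """
    best_id, best_score = None, -1.0
    for person_id, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy gallery: identification can only succeed for people who are
# already in a database the agency has access to.
gallery = {"person_a": [0.9, 0.1, 0.2], "person_b": [0.1, 0.8, 0.5]}

# A blurrier image yields a noisier embedding, lowering the match score.
random.seed(0)
blurry_probe = [x + random.uniform(-0.3, 0.3) for x in gallery["person_a"]]
print(match(blurry_probe, gallery))
```

The point of the sketch is that the system never answers “yes” or “no,” only “similar enough under some threshold” — which is where the margin of error in the discussion questions comes from.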
-
- 7 min
- Wired
- 2020
Congress Is Eyeing Face Recognition, and Companies Want a Say
As different levels of the U.S. government have introduced and passed bills regulating or banning the use of facial recognition technologies, tech monopolies such as Amazon and IBM have become important lobbying agents in these conversations. The major players largely disagree about how exactly face recognition algorithms should be limited or used, especially given the technology’s negative impact on privacy when used for surveillance.
Can and should the private sector be regulated in its use of facial recognition technologies? How do tech monopolies come to hold so much sway with government officials, and how can this be addressed? Do the benefits of facial recognition listed at the end of the article, such as convenience at the airport, make enough of a case against a complete ban of the technology, or do the bad applications ultimately outweigh the good ones? What would the ideal bill look like in terms of limiting or banning facial recognition?
-
- 5 min
- Gizmodo
- 2020
You Need to Opt Out of Amazon Sidewalk
This article describes Amazon’s new Sidewalk feature and explains why users should not buy into the service. The feature uses the internet of things formed by Amazon devices such as the Echo and the Ring camera to create a secondary network connecting nearby homes that contain these devices, sustained by each home “donating” a small amount of broadband. The author argues that this is a dangerous design because the secondary network may be susceptible to hackers, putting a large number of users at risk.
Why are “secondary networks” like the one described here a bad idea in terms of both surveillance and data privacy? Is it possible for the world to be too networked? How can tech developers make sure the general public maintains a healthy skepticism toward new devices? Or is it ultimately Amazon’s job to think through the ethical implications of this secondary network before introducing it for profit?
-
- 12 min
- Wired
- 2018
How Cops Are Using Algorithms to Predict Crimes
This video offers a basic introduction to the use of machine learning in predictive policing and how it disproportionately affects low-income communities and communities of color.
Should algorithms ever be used in a context where human bias is already rampant, such as in police departments? Why does accomplishing a task with digital technologies make the process seem more “efficient” or “objective”? What are the problems with police using algorithms whose inner workings they do not fully understand? Is the use of predictive policing algorithms ever justifiable?
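The disproportionate impact the video describes is often explained as a feedback loop, which a toy simulation can illustrate. Everything here is an invented assumption for the sketch: the two areas, the arrest counts, the patrol-allocation rule, and the detection rate do not come from any real system.

```python
def allocate_patrols(arrest_counts, total_patrols=10):
    """Send patrols in proportion to historical arrest counts."""
    total = sum(arrest_counts.values())
    return {area: round(total_patrols * n / total)
            for area, n in arrest_counts.items()}

def simulate(arrest_counts, rounds=5, detection_rate=0.5):
    """Each patrol records arrests; those records feed the next allocation."""
    counts = dict(arrest_counts)
    for _ in range(rounds):
        patrols = allocate_patrols(counts)
        for area, n_patrols in patrols.items():
            # Underlying crime is equal everywhere in this toy model;
            # only how closely each area is watched differs.
            counts[area] += int(n_patrols * detection_rate)
    return counts

# Two areas with identical real crime but a historically biased record:
# the model keeps sending patrols where arrests were recorded, so the
# recorded disparity grows even though behavior does not differ.
history = {"area_a": 12, "area_b": 4}
print(simulate(history))
```

Running the sketch shows the initial disparity in the arrest record widening over time — the algorithm “confirms” the bias it was trained on, which is the core objection to predictive policing raised in the video.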
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use, and the potential for limitation, of facial recognition technology. The author argues that the technology’s capacity for training and identification using social media platforms, combined with its use by law enforcement, is dangerous for minority groups and protestors alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?