Privacy (134)
The Toxic Potential of YouTube’s Feedback Loop
- 6 min
- Wired
- 2019
Harmful content spreads through YouTube’s AI-driven recommendation engine, which helps create filter bubbles and echo chambers and leaves users with limited agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
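The feedback loop the article describes is easy to reproduce in miniature. The sketch below is a hypothetical toy model, not YouTube’s actual system: a recommender ranks items purely by past engagement, and simulated clicks feed back into the ranking, so a handful of items quickly captures nearly all exposure even though every item is equally appealing.

```python
# Toy simulation of an engagement-driven recommender feedback loop.
# Hypothetical illustration only -- not YouTube's actual algorithm.
import random
from collections import Counter

random.seed(0)
NUM_ITEMS = 50
engagement = Counter({item: 1 for item in range(NUM_ITEMS)})  # prior clicks

def recommend(k=5):
    # Rank purely by past engagement: the items clicked most get shown most.
    return [item for item, _ in engagement.most_common(k)]

for step in range(10_000):
    slate = recommend()
    # The user clicks something from the slate; the most-recommended items
    # get the most chances to be clicked, closing the feedback loop.
    clicked = random.choice(slate)
    engagement[clicked] += 1

top5 = engagement.most_common(5)
share = sum(c for _, c in top5) / sum(engagement.values())
print(f"Top 5 of {NUM_ITEMS} items now capture {share:.0%} of all engagement")
```

No item here is intrinsically better than any other; the rich-get-richer dynamic alone collapses diversity, which is the mechanism behind filter bubbles.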
You Need to Opt Out of Amazon Sidewalk
- 5 min
- Gizmodo
- 2020
This article describes the new Amazon Sidewalk feature and explains why users should not buy into the service. Sidewalk uses the internet of things created by Amazon devices such as the Echo and the Ring camera to build a secondary network connecting nearby homes that contain these devices, sustained by each home “donating” a small amount of broadband. The author argues that this is a dangerous concept because the smaller network may be susceptible to hackers, putting a large number of users at risk.
Why are “secondary networks” like the one described here a bad idea in terms of both surveillance and data privacy? Is it possible for the world to be too networked? How can tech developers make sure the general public has a healthy skepticism toward new devices? Or is it ultimately Amazon’s job to think about the ethical implications of this secondary network before introducing it for profit?
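To put “a small amount of broadband” in perspective: Amazon has reportedly capped Sidewalk at about 80 kbps of bandwidth and 500 MB of data per account per month. The arithmetic below is a back-of-the-envelope sketch using those reported figures; everything else is a simplifying assumption.

```python
# Back-of-the-envelope model of a Sidewalk-style "donated broadband" cap.
# The 500 MB/month and 80 kbps figures are the caps Amazon has reportedly
# described for Sidewalk; the continuous-relaying scenario is a
# simplifying assumption.

MONTHLY_CAP_MB = 500
MAX_RATE_KBPS = 80

def hours_to_exhaust_cap(rate_kbps=MAX_RATE_KBPS, cap_mb=MONTHLY_CAP_MB):
    """How long continuous relaying at `rate_kbps` takes to hit the cap."""
    cap_kbits = cap_mb * 8 * 1024          # MB -> kilobits (1 MB = 8192 kb)
    return cap_kbits / rate_kbps / 3600    # seconds -> hours

print(f"{hours_to_exhaust_cap():.1f} hours of continuous relaying "
      f"exhausts the monthly cap")
```

Roughly fourteen hours of saturated relaying would hit the cap, so the per-home cost is indeed small; the privacy concern is less about bandwidth than about what travels over the shared network.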
Microsoft patented a chatbot that would let you talk to dead people. It was too disturbing for production
- 3 min
- CNN
- 2021
The wealth of social data about any given person contained in digital artifacts, such as social media posts and text messages, can be used to train an algorithm newly patented by Microsoft to create a chatbot that imitates that specific person. The technology has not been released, however, due to the harrowing ethical implications of impersonation and dissonance. For the Black Mirror episode referenced in the article, see the narratives “Martha and Ash Parts I and II.”
How do humans control their identity when it can be replicated through machine learning? What sorts of quirks and mannerisms are unique to humans and cannot be replicated by an algorithm?
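The details of Microsoft’s patented method are not public beyond the filing, but the underlying point, that ordinary digital artifacts suffice as training data for imitation, can be sketched with something as crude as a Markov chain over one person’s messages. The messages below are invented placeholders.

```python
# Minimal sketch of training an "imitation" text generator on one person's
# messages. A simple Markov chain stands in for the (unpublished) patented
# approach -- the point is only that everyday digital artifacts are enough
# to produce recognizably person-flavored text.
import random
from collections import defaultdict

messages = [
    "running late, be there soon",
    "be there soon, promise",
    "running late again, sorry",
]  # hypothetical stand-in for someone's real texts and posts

# Build bigram transitions: word -> possible next words.
transitions = defaultdict(list)
for msg in messages:
    words = msg.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def imitate(seed="running", max_words=8):
    out = [seed]
    while len(out) < max_words and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(imitate())  # e.g. "running late, be there soon, promise"
```

A modern large language model fine-tuned on the same data would capture far more of a person’s voice, which is precisely what makes the patent unsettling.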
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
- 10 min
- The Washington Post
- 2021
The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending several years studying the humanities and realizing that those perspectives were missing from the fields of computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
Why 2020 was a pivotal, contradictory year for facial recognition
- 7 min
- MIT Tech Review
- 2020
This article examines several case studies from 2020 to discuss the widespread use, and the potential limitation, of facial recognition technology. The author argues that the technology’s capacity to be trained on and identify faces from social media platforms, combined with its use by law enforcement, is dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
This Site Published Every Face From Parler’s Capitol Riot Videos
- 7 min
- Wired
- 2021
An anonymous college student created a website titled “Faces of the Riot,” a virtual wall containing over 6,000 images of the faces of insurrectionists present at the riot at the U.S. Capitol on January 6th, 2021. The site crawled through videos posted to the right-wing social media site Parler, and the creator’s ultimate goal is for viewers to identify to the proper authorities any criminals they recognize. While the creator put privacy safeguards in place, such as using “facial detection” rather than “facial recognition,” and their intentions are supposedly positive, some argue that the privacy implications and the widespread adoption of this technique could be negative.
Who deserves to be protected from having shameful data about themselves posted publicly to the internet? Should there even be any limits on this? What would happen if a similar website appeared in a less seemingly noble context, such as identifying members of a minority group in a certain area? How could sites like this expand the agency of bad or discriminatory actors?
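For readers curious about the technical distinction the creator leans on: facial detection only locates faces in an image, while facial recognition matches them to identities. The sketch below shows a detection-only pipeline of the kind the article describes, using OpenCV’s stock face detector on a hypothetical video file; the site’s actual code is not public, and here matching faces to names is left entirely to human viewers, which is the claimed safeguard.

```python
# Sketch of a face-*detection* pipeline: scan video frames and crop any
# faces found, with no attempt to identify who they are. File names are
# hypothetical; detection uses OpenCV's bundled Haar cascade.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("riot_clip.mp4")  # hypothetical input file
saved, frame_idx = 0, 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 30:       # sample ~1 frame/second, assuming ~30 fps video
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.imwrite(f"face_{saved}.png", frame[y:y + h, x:x + w])
        saved += 1
video.release()
print(f"cropped {saved} face images")
```

The unsettling part is how little separates the two steps: feeding the same crops to an off-the-shelf recognition model would turn this detection wall into an identification engine.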