AI (124)
Find narratives by ethical themes or by technologies.
This Horrifying App Undresses a Photo of Any Woman With a Single Click
- 7 min
- Vice
- 2019
A programmer creates an application that uses neural networks to remove clothing from images of women. Deepfake technology is being used against women systematically, despite the continued narrative that its use in the political realm is the most pressing issue.
How does technology enhance violations of sexual privacy? Who should regulate this technology, and how?
-
YouTube, The Great Radicalizer
- 7 min
- New York Times
- 2018
YouTube's algorithm suggests increasingly radical recommendations to its users, maximizing the amount of time they spend on the platform. This tendency toward inflammatory recommendations often leads to political misinformation.
What are the dangers of being offered increasingly radical videos on YouTube?
-
China embraces its surveillance state. The US pretends it doesn’t have one
- 10 min
- Quartz
- 2019
A comparison of surveillance systems in China and the US that target, and aid in the persecution of, ethnic minorities. Data on targeted people is tracked extensively and compiled into intuitive databases that can be abused by government organizations.
In what ways are the surveillance systems of the US and China similar? Should big tech companies be allowed to contract with the government on the scale that a company like Palantir did?
-
5 types of recommender systems and their impact on customer experience
- 15 min
- The App Solutions
An overview of recommender systems: information filtering algorithms designed to suggest content or products to a particular user.
How do information filtering algorithms work and learn? Are some types of recommender systems more generally ethical than others?
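To make the idea of an information filtering algorithm concrete, here is a minimal sketch of one common recommender-system type, item-based collaborative filtering. The rating matrix, the `cosine` and `recommend` functions, and all values are invented for illustration and are not drawn from the article.

```python
# Item-based collaborative filtering over a toy user-item rating matrix.
import math

# Rows = users, columns = items; 0 means "not rated". Toy data, purely illustrative.
ratings = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
]

def cosine(a, b):
    """Cosine similarity between two item rating columns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_idx, ratings, top_n=1):
    """Score each unrated item by similarity-weighted ratings of items the user has rated."""
    n_items = len(ratings[0])
    cols = [[row[j] for row in ratings] for j in range(n_items)]  # item columns
    user = ratings[user_idx]
    scores = {}
    for j in range(n_items):
        if user[j] != 0:
            continue  # only recommend items the user has not rated yet
        num = sum(cosine(cols[j], cols[k]) * user[k]
                  for k in range(n_items) if user[k] != 0)
        den = sum(cosine(cols[j], cols[k])
                  for k in range(n_items) if user[k] != 0)
        scores[j] = num / den if den else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(1, ratings))  # indices of items user 1 might like
```

Real systems replace the toy matrix with millions of users and items and add techniques such as matrix factorization, but the core "filter by similarity" logic is the same, which is also where ethical questions about what gets amplified enter.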
-
Why face recognition isn’t scary — yet
- 5 min
- CNN
- 2010
Algorithms and machines can struggle with facial recognition and need ideal source images to perform it consistently. However, its potential use in monitoring and identifying citizens is concerning.
How have worries about facial recognition changed since 2010? Can we teach machines to identify human faces? How can facial recognition pose a danger when used for governmental purposes?
-
AI ‘Emotion Recognition’ Can’t Be Trusted
- 7 min
- The Verge
- 2019
This article examines reliance on “emotion recognition” algorithms, which use facial analysis to infer feelings. The credibility of the results is in question, given machines’ inability to recognize abstract nuances.
Can digital artifacts detect human emotions correctly? Should our emotions be read by machines? Are emotions too complex for machines to understand? How is human agency impacted by discrete AI categories for emotions?