Recommender Systems (6)
The Toxic Potential of YouTube's Feedback Loop
- 6 min
- Wired
- 2019
Harmful content spreads through YouTube's AI recommendation engine, which helps create filter bubbles and echo chambers and leaves users little agency over the content they are exposed to.
How much agency do we have over the content we are shown in our digital artifacts? Who decides this? How skeptical should we be of recommender systems?
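The feedback loop itself is easy to state: the system serves what a user has engaged with, the user engages with what is served, and the feed narrows. Below is a minimal, hypothetical simulation of that dynamic in Python (a toy model, not YouTube's actual system; the topics and probabilities are made up):

```python
import random
from collections import Counter

TOPICS = ["news", "music", "gaming", "conspiracy", "cooking"]

def recommend(click_history):
    """Serve the user's most-clicked topic so far, with 10% random exploration."""
    if not click_history or random.random() < 0.1:
        return random.choice(TOPICS)
    return Counter(click_history).most_common(1)[0][0]

def simulate(steps=200):
    """Toy user: clicks whatever is served 80% of the time."""
    clicks = []
    for _ in range(steps):
        topic = recommend(clicks)
        if random.random() < 0.8:
            clicks.append(topic)
    return Counter(clicks)

random.seed(0)
print(simulate())
```

Running it, a single topic ends up dominating the click history even though the simulated user started with no preference at all; that collapse is the filter bubble in miniature.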
5 types of recommender systems and their impact on customer experience
- 15 min
- The App Solutions
An overview of recommender systems: information filtering algorithms designed to suggest content or products to a particular user.
How do information filtering algorithms work and learn? Are some types of recommender systems inherently more ethical than others?
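For readers who want to see what one of these types looks like in code, here is a minimal sketch of user-based collaborative filtering, one of the families such overviews cover. The ratings matrix and all numbers are invented for illustration:

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items; 0 = not yet rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, ratings):
    """Score the user's unrated items by the similarity-weighted ratings of other users."""
    sims = np.array([cosine_sim(ratings[user], ratings[v])
                     for v in range(len(ratings))])
    sims[user] = 0.0                      # ignore self-similarity
    scores = sims @ ratings               # weighted sum of everyone's ratings
    scores[ratings[user] > 0] = -np.inf   # only suggest items the user hasn't rated
    return int(np.argmax(scores))         # column index of the recommended item

print(recommend(0, ratings))
```

Even at this scale, the ethically relevant design choice is visible: the user is only ever steered toward what people similar to them already liked.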
YouTube, the Great Radicalizer
- 7 min
- New York Times
- 2018
YouTube's algorithm serves users increasingly radical recommendations in order to maximize the time they spend on the platform; this tendency toward inflammatory content often promotes political misinformation.
What are the dangers of being offered increasingly radical videos on YouTube?
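The mechanism the article describes comes down to the objective function: when a ranker optimizes predicted watch time rather than accuracy or user intent, whatever holds attention longest rises to the top. A hypothetical three-candidate example (illustrative titles and numbers, not YouTube's real model):

```python
# Hypothetical candidate videos with predicted watch time in minutes.
candidates = [
    {"title": "balanced explainer",  "predicted_watch_min": 3.1},
    {"title": "sensational take",    "predicted_watch_min": 7.4},
    {"title": "extreme conspiracy",  "predicted_watch_min": 9.8},
]

# An engagement-maximizing objective: rank purely by expected watch time.
feed = sorted(candidates, key=lambda v: v["predicted_watch_min"], reverse=True)
for video in feed:
    print(video["title"], video["predicted_watch_min"])
```

Nothing in this objective asks whether a video is true or healthy to watch; the most inflammatory item wins simply because it is predicted to keep the user watching.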
Choose Your Own Fake News
- 15 min
- n/a
- 2018
A choose-your-own-adventure game in which you experience data fraud by playing through the roles of a cast of characters.
How can you be less vulnerable to fake news and fake advertising online?
New bill would ban autoplay videos and endless scrolling
- 2 min
- The Verge
- 2019
This very short piece presents the Social Media Addiction Reduction Technology (SMART) Act in the context of social networks and concerns about digital addiction.
How do digital addiction mechanisms work, and what are their risks? How should digital content that can foster addiction be regulated?
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
- 10 min
- The Washington Post
- 2021
Philip Agre, a computer scientist by training, spent several years studying the humanities and realized that those perspectives were missing from computer science and artificial intelligence. In the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens' everyday lives, he wrote several papers warning about the impacts of unfair AI and data barons. Although he was a well-informed whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?