Limitations of Digital Technologies (21)
Describes limitations and shortfalls of current digital technologies, particularly when compared to human capabilities.
- 15 min
- Hidden Switch
- 2018
Monster Match
A hands-on learning experience that explores the matching algorithms used in dating apps through the perspective of a monster avatar you create.
How do algorithms in dating apps work? What gaps seemed most prominent to you? What upset you most about the way this algorithm defined you and the choices it offered to you?
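The first question above asks how dating-app algorithms work. As a rough answer, the sketch below is a minimal, hypothetical collaborative-filtering recommender written in Python; it is an assumption for illustration, not Monster Match's actual code. It demonstrates the idea the game plays with: you are shown whatever people with swipe histories similar to yours already liked, which is also how the range of choices offered to you gets narrowed.

```python
# Hypothetical collaborative-filtering sketch (illustration only, not the
# game's real algorithm): score candidate profiles by how much they are
# liked by users whose swipe history resembles yours.

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two users' sets of liked profiles."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(me: str, likes: dict[str, set[str]]) -> list[str]:
    """Rank unseen profiles by likes from users with tastes similar to mine."""
    my_likes = likes[me]
    scores: dict[str, float] = {}
    for other, their_likes in likes.items():
        if other == me:
            continue
        weight = similarity(my_likes, their_likes)
        for profile in their_likes - my_likes:
            scores[profile] = scores.get(profile, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)

likes = {
    "you":   {"vampire", "werewolf"},
    "user2": {"vampire", "werewolf", "zombie"},  # tastes overlap with yours
    "user3": {"ghost"},                          # no overlap with yours
}
print(recommend("you", likes))  # ['zombie', 'ghost'] -- zombie ranked first
```

Because every recommendation is weighted by similarity to people who already resemble you, profiles outside that lookalike group rarely surface, which is one of the gaps the game is designed to make visible.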
- 10 min
- The New Yorker
- 2019
The Hidden Costs of Automated Thinking
A clear breakdown of the risks of automating decisions without understanding why they work. Outlines the principal concerns raised by the “hidden layers” of artificial neural networks and explains how the lack of human understanding of some AI decision-making leaves these systems susceptible to manipulation (a brief illustrative sketch follows the discussion questions below).
Should we still use technology that we do not fully understand? Might machines play a role in the demise of expertise? How can companies and institutions be held accountable for “lifting the curtain” on their algorithms?
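To make the “hidden layer” concern concrete, here is a minimal sketch in Python with NumPy; it is an assumption for illustration rather than anything taken from the article. Even in a tiny feed-forward network, the decision comes out of arrays of learned numbers that have no individually human-readable meaning, which is what makes such systems hard to audit and easier to manipulate undetected.

```python
# Minimal sketch of a "hidden layer" (illustration only): a tiny feed-forward
# network whose weights drive its decision but are not individually meaningful
# to a human reader. The random weights here stand in for trained ones.
import numpy as np

rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(4, 8))  # input (4 features) -> hidden layer (8 units)
W_output = rng.normal(size=8)       # hidden layer -> single decision score

def decide(x: np.ndarray) -> float:
    """Forward pass: a score is produced, yet no single weight 'explains' it."""
    hidden = np.tanh(x @ W_hidden)  # 8 opaque intermediate activations
    return float(hidden @ W_output)

x = np.array([0.2, -1.0, 0.5, 0.3])
print("decision score:", decide(x))
print("weights a human auditor would have to interpret:")
print(W_hidden)
```

Scaled up to millions of weights, this is the layer of reasoning that, as the article argues, neither users nor a system's own designers can fully read.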
- Wired
- 2021
Why a YouTube Chat About Chess Got Flagged for Hate Speech
YouTube's moderation algorithm struggled to distinguish chess-related terms from hate speech and abuse, revealing shortcomings in artificial intelligence's ability to moderate online hate speech. The incident reflects the need to develop digital technologies capable of processing natural language with a sufficient degree of social sensitivity (a brief illustrative sketch follows the discussion questions below).
Where do you draw the line between freedom of speech and online community conduct rules and regulations? What problems do you think AI will encounter when moderating hate speech such as slurs?
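As a rough illustration of the failure mode behind the incident, the sketch below uses a deliberately naive keyword filter written in Python; this is an assumption for teaching purposes, not YouTube's actual moderation system. It shows how ordinary chess vocabulary such as “black,” “white,” and “attack” can trip a filter that ignores context.

```python
# Naive keyword-based moderation sketch (illustration only, not YouTube's
# system): flag a comment when several watch-list terms co-occur, with no
# sense of the conversation's actual topic.

FLAGGED_TERMS = {"black", "white", "attack", "threat"}

def looks_hateful(comment: str, threshold: int = 3) -> bool:
    """Flag a comment if it contains at least `threshold` watch-list terms."""
    words = set(comment.lower().replace(",", " ").replace(".", " ").split())
    return len(words & FLAGGED_TERMS) >= threshold

chess_comment = "White attacks the black king, a real threat on the queenside."
print(looks_hateful(chess_comment))  # True -- a false positive on chess talk
```

A context-aware model would need to recognize that “black” and “white” here name chess pieces, which is exactly the kind of social and topical sensitivity the description above calls for.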