Limitations of Digital Technologies (21)
Describes limitations and shortfalls of current digital technologies, particularly when compared to human capabilities.
- 7 min
- The Verge
- 2019
AI ‘Emotion Recognition’ Can’t Be Trusted
Examines reliance on “emotion recognition” algorithms, which use facial analysis to infer feelings. The credibility of their results is in question, given machines’ inability to recognize abstract emotional nuance. (A toy sketch of the discrete-category approach follows the discussion questions below.)
Can digital artifacts potentially detect human emotions correctly? Should our emotions be read by machines? Are emotions too complex for machines to understand? How is human agency impacted by discrete AI categories for emotions?
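The discrete categories the questions above refer to can be made concrete. Below is a minimal, hypothetical sketch of how such classifiers typically work; the label set, the softmax head, and the logits are all invented stand-ins, not any vendor’s actual system.

```python
import numpy as np

# Hypothetical classifier head for "emotion recognition". In a real system
# the logits would come from a CNN over a face image; here they are invented.
# The key limitation: output is forced into a small fixed label set, so
# mixed, ambiguous, or culturally specific expressions are lost.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def softmax(logits: np.ndarray) -> np.ndarray:
    shifted = np.exp(logits - logits.max())  # subtract max for numerical stability
    return shifted / shifted.sum()

logits = np.array([0.2, 0.1, 0.3, 2.1, 0.4, 1.9, 0.5])  # invented scores
probs = softmax(logits)

# A face scoring high on both "happiness" and "surprise" still collapses
# to a single label, discarding exactly the nuance the article questions.
for emotion, p in zip(EMOTIONS, probs):
    print(f"{emotion:>9}: {p:.2f}")
print("predicted:", EMOTIONS[int(np.argmax(probs))])
```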
- 7 min
- ZDNet
- 2020
Rebooting AI: Deep learning, meet knowledge graphs
Dr. Gary Marcus argues that deep learning as it currently exists is not maximizing AI’s potential to collect and process knowledge. These machine “brains,” he contends, should have more innate knowledge than they do, much as animal brains come pre-equipped to process their environment. Ideally, that baseline knowledge would draw on knowledge graphs, the semantic web of structured information available on the internet, which can be hard for an AI to process without translation into machine vocabularies such as RDF. (A minimal RDF example follows the discussion questions below.)
Does giving a machine learning capabilities similar to those of humans and animals bring artificial intelligence closer to the singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
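To make “machine vocabularies such as RDF” concrete, here is a minimal knowledge-graph sketch using the Python rdflib library. The namespace and facts are invented for illustration; the point is only that such triples encode explicit, queryable knowledge of the kind Marcus wants machines to start from.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")  # invented namespace for the example

# A toy knowledge graph: explicit subject-predicate-object triples,
# in contrast to the implicit statistical knowledge inside a deep network.
g = Graph()
g.add((EX.Dog, RDF.type, EX.Animal))  # "a dog is an animal"
g.add((EX.Dog, EX.hasPart, EX.Tail))  # "a dog has a tail"
g.add((EX.Fido, RDF.type, EX.Dog))    # "Fido is a dog"

# Structured facts can be queried directly and reliably.
for subject, _, obj in g.triples((None, RDF.type, None)):
    print(f"{subject} is a {obj}")
```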
- 10 min
- MIT Technology Review
- 2020
We read the paper that forced Timnit Gebru out of Google. Here’s what it says.
This article explains Timnit Gebru’s ethical warnings against training natural language processing systems on ever-larger language models built from internet text. Beyond its environmental cost, this approach still fails to capture semantic nuance, particularly the language of emerging social movements and of countries with lower internet access, which such corpora underrepresent. Dr. Gebru’s refusal to retract the paper ultimately led to her dismissal from Google. (A toy tally after the questions below illustrates the representation problem.)
How should the models used to train NLP algorithms be more closely scrutinized? What sorts of voices are needed at the design table to ensure that the impact of such algorithms is consistent across all populations? Can this ever be achieved?
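The corpus-representation concern can be shown with a toy audit. The documents below are entirely invented; the sketch only illustrates the kind of accounting the paper asks model builders to do before treating a web crawl as “everyone’s” language.

```python
from collections import Counter

# Invented stand-in for a web-crawled training corpus: (language, source) pairs.
# Real crawls skew the same direction, toward languages and communities
# with heavy internet presence, leaving others statistically invisible.
corpus = [
    ("English", "forum"), ("English", "news"), ("English", "wiki"),
    ("English", "blog"), ("English", "forum"),
    ("Swahili", "blog"), ("Amharic", "news"),
]

counts = Counter(language for language, _ in corpus)
total = sum(counts.values())
for language, n in counts.most_common():
    print(f"{language:>8}: {n}/{total} documents ({n / total:.0%})")
```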
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm that purports to reconstruct what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. PULSE works by generating a synthetic face whose downscaled version matches the pixel pattern, rather than actually sharpening the original image. It is these synthetic faces that skew clearly toward white features, demonstrating how institutional racism works its way into technological design. Diversifying data sets alone will not fully help until broader solutions for combating bias are enacted. (A schematic sketch of the algorithm follows the questions below.)
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
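To clarify why PULSE invents faces rather than recovering them, here is a schematic sketch of the idea under stated simplifications: grayscale arrays stand in for images, a random stub stands in for a pretrained GAN such as StyleGAN, and random search stands in for PULSE’s gradient-based latent optimization. Only the objective matters: match the downscaled pixels, not the actual person.

```python
import numpy as np

def downscale(image: np.ndarray, factor: int = 8) -> np.ndarray:
    """Average-pool the image down, mimicking pixelation."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def generator(z: np.ndarray) -> np.ndarray:
    """Stub for a pretrained face GAN; returns a fake 64x64 'face'."""
    rng = np.random.default_rng(int(np.abs(z).sum() * 1e6) % 2**32)
    return rng.random((64, 64))

def pulse_style_search(low_res: np.ndarray, steps: int = 500) -> np.ndarray:
    """Find a synthetic face whose downscaled version matches `low_res`.

    The loss only compares pixels *after* downscaling, so any face that
    pixelates the same way is acceptable; the output therefore follows
    the generator's learned prior, which is where demographic bias enters.
    """
    best_face, best_loss = None, np.inf
    for _ in range(steps):
        z = np.random.randn(128)
        face = generator(z)
        loss = float(np.mean((downscale(face) - low_res) ** 2))
        if loss < best_loss:
            best_face, best_loss = face, loss
    return best_face  # a plausible invention, not a reconstruction

# Usage: pixelate a random "photo", then "enhance" it.
original = np.random.default_rng(0).random((64, 64))
recovered = pulse_style_search(downscale(original))
print("low-res match error:", np.mean((downscale(recovered) - downscale(original)) ** 2))
```

Because the loss never sees the original high-resolution face, pixel matching cannot recover identity; the generator’s learned prior fills in every detail, and any skew in that prior surfaces in the output.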
- 10 min
- The New Yorker
- 2020
The Second Act of Social Media Activism
This article situates the BLM uprisings of 2020 within a larger trend of using social media and other digital platforms to promote activist causes, and weighs the benefits of in-person, on-the-ground activism against activism conducted through social media.
How should activism in its in-person and online forms be mediated? How does someone become an authority, for information or otherwise, on the internet? What are the benefits and detriments of the decentralization of organization afforded by social media activism?
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and of data barons, after spending years studying the humanities and realizing that those perspectives were missing from computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was a well-informed whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?