Human Control of Technology (67)
Find narratives by ethical themes or by technologies.
- 6 min
- Vox
- 2020
How Virtual Reality Tricks Your Brain
Even virtual realities with unrealistic yet believable graphics can fool the brain's sense of perception into believing that the digital environment operates under the same rules as the real world. Connecting the technology directly to one's senses is more immersive than looking at a screen: human brains have long been able to process flat images, but viewing a virtual world through a headset's two screens, one per eye, muddles perception in a way an ordinary display does not.
Should virtual reality ever reach a point where it is indistinguishable from true reality in terms of graphic design or other sensory information? How could such technology be weaponized or abused? How accessible should the most immersive virtual reality technologies be to the general public?
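The mechanism the article gestures at, one slightly offset image per eye that the brain fuses into depth, can be sketched in a few lines. The scene, eye-separation value, and helper names below are illustrative assumptions, not taken from the article:

```python
import numpy as np

def eye_view(eye_offset_m: float) -> np.ndarray:
    """4x4 view matrix for a camera shifted sideways by eye_offset_m metres
    (negative for the left eye, positive for the right)."""
    view = np.eye(4)
    view[0, 3] = -eye_offset_m  # the world moves opposite to the eye
    return view

# A typical interpupillary distance is about 63 mm; each eye gets its own
# screen, rendered from its own slightly offset viewpoint.
IPD_M = 0.063
left_view, right_view = eye_view(-IPD_M / 2), eye_view(+IPD_M / 2)

# A point 2 m straight ahead lands at different horizontal positions in
# the two images -- the binocular disparity the brain fuses into depth.
point = np.array([0.0, 0.0, -2.0, 1.0])
print((left_view @ point)[0], (right_view @ point)[0])  # 0.0315 -0.0315
```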
-
- 7 min
- VentureBeat
- 2021
Salesforce researchers release framework to test NLP model robustness
New research and code released in early 2021 demonstrate that the training data for natural language processing algorithms is not as robust as it could be. The project, Robustness Gym, allows researchers and computer scientists to approach training data with more scrutiny, organizing the data and testing the results of preliminary runs through an algorithm to see what can be improved and how.
What does “robustness” in a natural language processing algorithm mean to you? Should machines always be taught to automatically associate certain words or terms? What are the consequences of large corporations not using the most robust training data for their NLP algorithms?
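To make the evaluation pattern the article describes concrete, the sketch below groups inputs into perturbation "slices" and measures how often a model's prediction flips. The toy word-count classifier and all function names are invented for illustration; this is not Robustness Gym's actual API:

```python
# Toy stand-in "sentiment model": counts positive vs. negative words.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def toy_model(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

# Perturbations that should NOT change the label if the model is robust.
def add_typo(text: str) -> str:
    return text.replace("good", "goood")

def add_distractor(text: str) -> str:
    return text + " The film was released on a Tuesday."

def robustness_report(model, examples, perturbations):
    """For each perturbation 'slice', report the fraction of examples
    whose prediction flips relative to the original input."""
    return {name: sum(model(t) != model(fn(t)) for t in examples) / len(examples)
            for name, fn in perturbations.items()}

examples = ["A good but terrible ending.",
            "An awful script and terrible pacing."]
print(robustness_report(toy_model, examples,
                        {"typo": add_typo, "distractor": add_distractor}))
# -> {'typo': 0.5, 'distractor': 0.0}: the typo slice exposes brittleness.
```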
-
- 7 min
- Chronicle
- 2021
Artificial Intelligence Is a House Divided
The history of AI swings like a pendulum between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the human brain.
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?
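The divide the article describes can be caricatured in code: a symbolic system states its rule outright, while a neural approach learns opaque weights from examples. The even/odd task and single-neuron learner below are an invented, minimal illustration of the contrast, not drawn from the article:

```python
import random
random.seed(0)

# Symbolic approach: the rule is explicit and human-readable.
def symbolic_is_even(n: int) -> bool:
    return n % 2 == 0  # transparent "reasoning": divisibility by 2

# Neural approach: a single neuron trained from labelled examples.
# Feature: the last binary digit of n; label: 1.0 if n is even.
w, b, lr = random.uniform(-1, 1), 0.0, 0.1
for _ in range(500):  # crude perceptron training loop
    n = random.randrange(100)
    x, target = float(n & 1), 1.0 if n % 2 == 0 else 0.0
    pred = 1.0 if w * x + b > 0 else 0.0
    err = target - pred
    w, b = w + lr * err * x, b + lr * err

def neural_is_even(n: int) -> bool:
    # The learned weights are opaque numbers, not a stated rule.
    return w * float(n & 1) + b > 0

print(all(symbolic_is_even(n) == neural_is_even(n) for n in range(20)))  # True
```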
-
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons, after spending several years studying the humanities and realizing that those perspectives were missing from the fields of computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens' everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
- 10 min
- The New Yorker
- 2020
The Second Act of Social Media Activism
This article situates the BLM uprisings of 2020 within a larger trend of using social media and other digital platforms to promote activist causes, and weighs the benefits of in-person, on-the-ground activism against those of activism conducted through social media.
How should activism in its in-person and online forms be mediated? How does someone become an authority, for information or otherwise, on the internet? What are the benefits and detriments of the decentralization of organization afforded by social media activism?
-
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm that can supposedly determine what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. Rather than actually clearing up the image, the algorithm creates a synthetic face that matches the pixel pattern, and these synthetic faces show a clear bias toward white people, demonstrating how thoroughly institutional racism makes its way into technological design. Diversifying data sets will therefore not fully help until broader solutions for combating bias are enacted.
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
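The search-not-sharpen mechanism described above can be sketched concretely: PULSE optimizes a latent code so that a face generator's output, once downsampled, matches the pixelated input (StyleGAN in the original work). The toy linear "generator", image sizes, and step count below are illustrative stand-ins, not the actual PULSE code:

```python
import torch

torch.manual_seed(0)

# Toy stand-in "generator": a fixed random linear map from a 16-dim
# latent code to an 8x8 grayscale "face". In PULSE this is StyleGAN.
G = torch.nn.Linear(16, 64, bias=False)
for p in G.parameters():
    p.requires_grad_(False)

def downsample(img_flat: torch.Tensor) -> torch.Tensor:
    """Average-pool the 8x8 image down to 2x2 -- the 'pixelated' version."""
    img = img_flat.view(1, 1, 8, 8)
    return torch.nn.functional.avg_pool2d(img, kernel_size=4).flatten()

# The low-res target we want to "de-pixelate" (made from a hidden latent).
with torch.no_grad():
    target_lr = downsample(G(torch.randn(16)))

# PULSE-style search: optimize a latent code z so that the generator's
# output, once downsampled, matches the low-res target.
z = torch.randn(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for step in range(300):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(downsample(G(z)), target_lr)
    loss.backward()
    opt.step()

# The "restored" face is whatever G can produce that is consistent with
# the pixels -- a synthetic face, not a recovery of the true one.
print(f"final match error: {loss.item():.6f}")
```

Because the low-res image constrains only a few pixels, many latent codes fit it equally well; which face comes back depends on what the generator finds typical, which is exactly where bias in the generator's training distribution enters the output.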