Machine Learning (83)
Find narratives by ethical themes or by technologies.
- 12 min
- Wired
- 2018
How Cops Are Using Algorithms to Predict Crimes
This video offers a basic introduction to the use of machine learning in predictive policing and to how these systems disproportionately affect low-income communities and communities of color. (A toy simulation of the feedback loop behind this effect follows this entry.)
Should algorithms ever be used in a context where human bias is already rampant, such as in police departments? Why does using digital technologies to accomplish tasks make a process seem more “efficient” or “objective”? What are the problems with police using algorithms whose inner workings they do not fully understand? Is the use of predictive policing algorithms ever justifiable?
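To make the concern concrete, here is a toy Python simulation of the arrest-data feedback loop often cited in critiques of predictive policing. It is our illustration, not the video's: the districts, rates, and starting counts are invented, and patrols are simply allocated in proportion to past recorded arrests.

```python
# Toy simulation (our illustration, not from the video) of a
# predictive-policing feedback loop. Both districts have the SAME
# underlying crime rate, but district B starts with more recorded
# arrests because it was historically policed more heavily.
import random

random.seed(0)

true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying rates
recorded_arrests = {"A": 10, "B": 30}      # biased historical record
PATROLS_PER_DAY = 10

for day in range(100):
    total = sum(recorded_arrests.values())
    for district in ("A", "B"):
        # "Predictive" allocation: patrols go where past arrests were made.
        patrols = round(PATROLS_PER_DAY * recorded_arrests[district] / total)
        for _ in range(patrols):
            # Arrests can only happen where officers are sent, so the data
            # measures police presence at least as much as crime.
            if random.random() < true_crime_rate[district]:
                recorded_arrests[district] += 1

print(recorded_arrests)  # B's recorded "crime" lead keeps growing
```

Because arrests can only be recorded where patrols are sent, the district that starts with more recorded arrests attracts more patrols and therefore accumulates arrests faster, even though the two districts' underlying crime rates are identical.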
-
- 6 min
- TED
- 2020
How AI can help shatter barriers to equality
Jamila Gordon, an AI activist and the CEO and founder of Lumachain, tells her story as a refugee from Ethiopia to illuminate the strokes of luck that eventually brought her to her position in the global tech industry. Her story makes a strong case for introducing AI into the workplace: computer vision can improve safety, and machine learning can help workers who do not speak the dominant language of their workplace or culture train and acclimate more effectively.
Would constant computer-vision surveillance of a workplace be ultimately positive, negative, or both? How could it be ensured that machine learning algorithms are used only as a positive force in a workplace? What responsibility do large companies have to help those in less privileged countries access digital fluency?
-
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around training data for its machine learning algorithms. Using massive data sets to train natural language processing (NLP) algorithms is ultimately a harmful practice: for all the environmental harm and bias against certain languages it causes, machines still cannot fully comprehend human language. (A toy illustration of this data-skew problem follows this entry.)
Should machines be trusted to handle and process the incredibly nuanced meanings of human language? How do particular understandings of what languages and words mean become harmful when a small minority of people decide how NLP algorithms are trained? How do tech monopolies prevent more diverse voices from entering this conversation?
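As a concrete, if greatly simplified, illustration of the data-skew problem raised here: the sketch below (ours, not the article's; the corpus, sentences, and `oov_rate` helper are invented) builds a vocabulary from an English-only corpus and shows that a sentence in an under-represented language becomes entirely "unknown" to it.

```python
# Toy illustration (ours, not the article's) of training-data skew:
# a vocabulary built from an English-only corpus cannot represent
# sentences in languages absent from the training data.
english_corpus = (
    "the cat sat on the mat and the dog slept near the open door"
).split()
vocab = set(english_corpus)

def oov_rate(sentence: str) -> float:
    """Fraction of words that fall outside the vocabulary ("unknown")."""
    words = sentence.lower().split()
    return sum(w not in vocab for w in words) / len(words)

print(oov_rate("the cat slept near the door"))       # English -> 0.0
print(oov_rate("die katze schlief neben der tuer"))  # German  -> 1.0
```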
-
- 4 min
- VentureBeat
- 2020
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
A study of the recommendation engine behind TaskRabbit, an app that uses an algorithm to match the best workers to a given task, demonstrates that even algorithms designed to ensure fairness and parity in representation can fail to deliver what they promise depending on the context. (A minimal sketch of this context-dependence follows this entry.)
Can machine learning ever be deployed in a way that fully eliminates human bias? Is bias encoded into every trained machine learning program? What would the ideal circumstance look like for using digital technologies and machine learning to reach equitable representation in hiring?
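The context-dependence finding can be illustrated with a minimal sketch; this is not TaskRabbit's actual engine, and `parity_top_k`, the group labels, and the scores are all invented. A ranker that alternates between two groups reaches its 50/50 target when the candidate pool is balanced but silently misses it when the pool is skewed:

```python
# Minimal sketch (not TaskRabbit's actual engine; names and scores are
# invented). The ranker alternates between two groups, "x" and "y",
# aiming for a 50/50 split in the top k.
def parity_top_k(pool, k):
    by_group = {"x": [], "y": []}
    for cand in sorted(pool, key=lambda c: c[2], reverse=True):
        by_group[cand[1]].append(cand)   # cand = (name, group, score)
    ranked = []
    for i in range(k):
        want, other = ("x", "y") if i % 2 == 0 else ("y", "x")
        bucket = by_group[want] or by_group[other]  # silent fallback
        if bucket:
            ranked.append(bucket.pop(0))
    return ranked

balanced = [("a", "x", .9), ("b", "y", .8), ("c", "x", .7), ("d", "y", .6)]
skewed   = [("a", "x", .9), ("b", "x", .8), ("c", "x", .7), ("d", "y", .6)]

print(parity_top_k(balanced, 4))  # 2 "x" and 2 "y": parity achieved
print(parity_top_k(skewed, 4))    # 3 "x" and 1 "y": same code, promise broken
```

Note that the fairness logic never changes between the two calls; only the candidate pool does, which is the sense in which such guarantees are context-dependent.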
-
- 10 min
- The Washington Post
- 2021
He predicted the dark side of the Internet 30 years ago. Why did no one listen?
The academic Philip Agre, a computer scientist by training, wrote several papers warning about the impacts of unfair AI and data barons after spending several years studying the humanities and realizing that those perspectives were missing from computer science and artificial intelligence. The papers were published in the 1990s, long before the data-industrial complex and the normalization of algorithms in citizens’ everyday lives. Although he was an educated whistleblower, his predictions were ultimately ignored, and the field of artificial intelligence remained closed off from outside criticism.
Why are humanities perspectives needed in computer science and artificial intelligence fields? What would it take for data barons and/or technology users to listen to the predictions and ethical concerns of whistleblowers?
-
- 4 min
- OneZero
- 2020
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
A group of “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to do important racial-justice and equity work in the field of computer vision, struggling against racism in the industry and blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
How can the charge led by these women for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting against technologies that are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?