Fairness and Non-discrimination (56)
Find narratives by ethical themes or by technologies.
-
- 7 min
- The New Republic
- 2020
Who Gets a Say in Our Dystopian Tech Future?
The narrative of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around the training data for its machine learning algorithms. Using large data sets to train natural language processing algorithms is ultimately a harmful practice: despite the environmental costs and the biases against certain languages that it introduces, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
-
- 5 min
- Venture Beat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s inspection of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities on AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who bring ethical perspectives to the development and deployment of AI.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned in such a way that they cannot be fired when developers do not want to listen to them?
-
- 12 min
- Kinolab
- 2016
Hidden Figures Part II: Goals of Equity and Women of Color in the Workplace
“Hidden Figures” chronicles the journeys of Katherine Johnson (Taraji P. Henson), Dorothy Vaughan (Octavia Spencer), and Mary Jackson (Janelle Monáe), three Black women who worked on the space missions at the Langley Research Center in Hampton, Virginia in 1961. All three women persist against segregation and abject racism as they climb the ladder and make important contributions to the space mission. While Katherine becomes the first Black woman on Al Harrison’s Space Task Group, Mary Jackson pursues her dream of becoming an engineer at NASA by petitioning to take courses at an all-white school, and Dorothy Vaughan attempts to learn the programming language Fortran in order to ensure that she and her fellow human computers are not replaced by the new IBM 7090 computer.
How is the history of the oppression of Black people in America responsible for a lack of diversity in workplaces, including those involving science and technology in the present? What do technology companies in the current day need to consider in order to ensure that their workforce is diverse and equitable? What does the specific case of Dorothy being initially denied access to the Fortran book reveal about the past and present accessibility of minority groups to fluency in digital technologies? What needs to happen inside of and outside of the technology industry to ensure better opportunities for women of color in technology-focused workplaces? What role does implicit bias play in all of these considerations?
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use, and the potential limitation, of facial recognition technology. The author argues that the technology’s capacity to be trained on and identify faces from social media platforms, combined with its use by law enforcement, is dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 7 min
- Venture Beat
- 2021
Center for Applied Data Ethics suggests treating AI like a bureaucracy
As machine learning algorithms become more deeply embedded in all levels of society, including governments, it is critical for developers and users alike to consider how these algorithms may shift or concentrate power, specifically as it relates to biased data. Historical and anthropological lenses are helpful in dissecting AI in terms of how they model the world, and what perspectives might be missing from their construction and operation.
Whose job is it to ameliorate the “privilege hazard”, and how should this be done? How should large data sets be analyzed to avoid bias and ensure fairness? How can large data aggregators such as Google be held accountable to new standards of scrutinizing data and introducing humanities perspectives in applications?
-
- 7 min
- Farnam Street Blog
- 2021
A Primer on Algorithms and Bias
Discusses the main lessons from two recent books explaining how algorithmic bias occurs and how it may be ameliorated. Essentially, algorithms are little more than mathematical operations, but their lack of transparency and the flawed, unrepresentative data sets on which they are trained make their pervasive use dangerous.
How can data sets fed to algorithms be properly verified? What would the most beneficial collaboration between humans and algorithms look like?