Fairness and Non-discrimination (56)
Find narratives by ethical themes or by technologies.
Sexism and Racism in Silicon Valley
- 41 min
- The New York Times
- 2021
In this podcast episode, Ellen Pao, an early whistleblower on gender bias and racial discrimination in the tech industry, recounts her experience suing the venture capital firm Kleiner Perkins for gender discrimination. The episode then turns to how Silicon Valley, and the tech industry more broadly, is dominated by white men who make little effort to understand or advance racial and gender equity, focusing instead on public relations. In particular, Pao argues that social media companies and their CEOs can be especially performative when addressing racial or gender inequality, responding to individual cases rather than fostering a new, fairer culture.
How did Silicon Valley and the technology industry come to be dominated by white men? How can this be addressed, and how can the culture change? How can social networks in particular be re-imagined to open up doors to more diverse leadership and workplace cultures?
Artificial Intelligence and Disability
- 51 min
- TechCrunch
- 2020
In this podcast, several disability experts discuss the evolving relationship between disabled people, society, and technology. The central distinction is between the medical and social models of disability: the medical lens tends to spur technologies focused on remedying an individual’s disability, whereas the social lens could spur technologies that make the world itself more accessible. Artificial intelligence and machine learning are labelled as inherently “normative,” since they are trained on data drawn from a biased society, and are therefore less likely to work in favor of a social group as varied as disabled people. There is a clear need for institutional change in the technology industry to address these problems.
What are some problems with injecting even the most unbiased of technologies into a system biased against certain groups, including disabled people? How can developers aim to create technology which can actually put accessibility before profit? How can it be ensured that AI algorithms take into account more than just normative considerations? How can developers be forced to consider the myriad impacts that one technology may have on large heterogeneous communities such as the disabled community?
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
- 5 min
- Business Insider
- 2020
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to fully engage with her research on machine learning and algorithmic bias. She was terminated abruptly after sending an email asking Google to meet certain research-based conditions. Gebru is a leading expert in the field of AI and bias.
How can tech monopolies dismiss recommendations to make their technologies more ethical? How do bias ethicists such as Gebru get onto a more unshakeable platform? Who is going to hold tech monopolies more accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to start fresh?
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
- 4 min
- OneZero
- 2020
A group of “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to do racial justice and equity work in the field of computer vision, pushing back against racism in the industry and blowing the whistle on biased machine learning and computer vision systems still deployed by companies such as Amazon.
How can the charge led by these women for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting against technologies that are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
- 4 min
- VentureBeat
- 2020
A study of the recommendation engine behind TaskRabbit, an app that uses an algorithm to recommend the best workers for a specific task, demonstrates that even algorithms designed to account for fairness and parity in representation can fail to deliver what they promise depending on context, as the short sketch after the discussion questions below illustrates.
Can machine learning ever be deployed in a way that fully eliminates human bias? Is bias encoded into every trained machine learning program? What does the ideal circumstance look like when using digital technologies and machine learning to reach equitable representation in hiring?
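To make the finding above concrete, here is a minimal, hypothetical Python sketch (not the TaskRabbit engine or the study’s own code) of one way such a failure can arise: a ranking can satisfy equal representation of two groups in its top results, yet still give one group more exposure once position bias, the tendency of users to look mostly at the first few results, is taken into account. The groups, scores, and tie-breaking rule below are invented purely for illustration.

```python
import math

def rank_with_parity(workers, k=10):
    """Interleave groups so the top-k contains an equal number from each group."""
    group_a = sorted((w for w in workers if w["group"] == "A"),
                     key=lambda w: w["score"], reverse=True)
    group_b = sorted((w for w in workers if w["group"] == "B"),
                     key=lambda w: w["score"], reverse=True)
    ranked = []
    for pair in zip(group_a, group_b):  # group A is always placed first in each pair:
        ranked.extend(pair)             # a seemingly harmless tie-breaking choice
    return ranked[:k]

def exposure_by_group(ranked):
    """Sum position-discounted attention (1 / log2(rank + 1)) for each group."""
    totals = {"A": 0.0, "B": 0.0}
    for rank, worker in enumerate(ranked, start=1):
        totals[worker["group"]] += 1.0 / math.log2(rank + 1)
    return totals

# Synthetic workers: the two groups are identical in size and score.
workers = [{"group": g, "score": 1.0} for g in "ABABABABAB"]
ranked = rank_with_parity(workers)

counts = {g: sum(w["group"] == g for w in ranked) for g in "AB"}
print(counts)                     # {'A': 5, 'B': 5} -> representation looks fair
print(exposure_by_group(ranked))  # group A accumulates noticeably more exposure
```

The point of the sketch is only that a fairness constraint defined over one quantity (how many workers from each group appear in the results) says nothing about another quantity that matters in practice (how much attention those workers actually receive), and which quantity matters can change from one context to the next.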
Twitch updates its hateful content and harassment policy after company called out for its own abuses
- 5 min
- TechCrunch
- 2020
At the end of 2020, Twitch, a social network built around streaming video content and live commenting, expanded and clarified its definitions of hateful content in order to moderate comments or posts that harass other users or otherwise harm them. As a workplace, however, Twitch has much to prove before this updated policy can be seen as more than a PR move.
How can content moderation algorithms be used for a greater good, in terms of recognizing hate speech and symbols? What nuances might be missed by this approach? What does the human part of content moderation look like, and what responsibilities does such a position come with? How might content moderation on digital platforms reduce harassment in real life, and vice versa?