Themes (326)
Find narratives by ethical themes or by technologies.
Hello, World! It is ‘I’, the Internet
- 7 min
- Wired
- 2020
In discussing the history of the singular Internet that many global users experience every day, this article reveals some dangers of digital technologies becoming transparent through repeated use and reliance. Namely, it becomes more difficult to imagine a world where there could be alternatives to the current digital way of doing things.
Is it too late to imagine alternatives to the Internet? How could people be convinced to get on board with a radical redo of the Internet as we know it? Do alternatives need to be imagined before building a digital product or service, especially one that ends up being as revolutionary as the Internet? Are the most popular and powerful digital technologies and services “tools”, or have they reached the status of cultural norms and conduits?
Twitch updates its hateful content and harassment policy after company called out for its own abuses
- 5 min
- TechCrunch
- 2020
At the end of 2020, Twitch, a social platform built around live-streamed video and chat, expanded and clarified its definitions of hateful content in order to moderate posts and comments that harass other users or otherwise harm them. However, as a workplace, Twitch itself has much to prove before this updated policy reads as more than a PR move.
How can content moderation algorithms be used for a greater good, in terms of recognizing hate speech and symbols? What nuances might be missed by this approach? What does the human part of content moderation look like? What responsibilities does such a position come with? How might content moderation on digital platforms curb harassment behaviors in real life, and vice versa?
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
- 4 min
- VentureBeat
- 2020
A study of the recommendation engine behind TaskRabbit, an app that uses an algorithm to match the best workers to a specific task, demonstrates that even algorithms designed for fairness and parity of representation can fail to deliver what they promise, depending on the context in which they are deployed.
Can machine learning ever be deployed in a way that fully eliminates human bias? Is bias encoded into every trained machine learning model? What would the ideal circumstance look like when using digital technologies and machine learning to achieve equitable representation in hiring?
Dr. Timnit Gebru, Joy Buolamwini, Deborah Raji — an Enduring Sisterhood of Face Queens
- 4 min
- OneZero
- 2020
The “Face Queens” (Dr. Timnit Gebru, Joy Buolamwini, and Deborah Raji) have joined forces to do important racial justice and equity work in the field of computer vision, pushing back against racism in the industry and blowing the whistle on biased machine learning and computer vision technologies still deployed by companies like Amazon.
How can the charge led by these women for more equitable computer vision technologies be made even more visible? Should people need advanced degrees to have a voice in fighting against technologies which are biased against them? How can corporations be made to listen to voices such as those of the Face Queens?
One of Google’s leading AI researchers says she’s been fired in retaliation for an email to other employees
- 5 min
- Business Insider
- 2020
This article tells the story of Timnit Gebru, a Google employee who was fired after Google refused to take her research on machine learning and algorithmic bias into full account. She was terminated hastily after sending an email asking Google to meet certain research-based conditions. Gebru is a leading expert in the field of AI and bias.
How are tech monopolies able to dismiss recommendations to make their technologies more ethical? How can bias ethicists such as Gebru gain a more unshakeable platform? Who is going to hold tech monopolies more accountable? Should these monopolies even be trying to fix their current algorithms, or might it be better to start fresh?
Dozens of tech companies sign ‘Tech for Good Call’ following French initiative
- 3 min
- TechCrunch
- 2020
This short article details the ‘Tech for Good Call’, a pledge following a French government initiative that asks tech companies to be more responsible in the areas of taxes and privacy. As of 2020, dozens of companies have signed on.
What does accountability for tech monopolies look like? Who should offer robust challenges to these companies, and who actually has the power to do so?