All Narratives (355)
Find narratives by ethical themes or by technologies.
- 7 min
- The New York Times
- 2021
Facebook and all of its apps go down simultaneously.
On October 4th, 2021, Facebook’s servers experienced an outage that left its apps, including the widely used Facebook, Instagram, and WhatsApp, out of commission for several hours. The problem was reportedly caused by a faulty configuration change to Facebook’s servers, which led to a Domain Name System error: the numerical IP addresses behind Facebook’s services could no longer be looked up, making them unreachable. The effects of the outage spread across the globe as businesses were affected by the loss of access to these social networks, and certain other internet services linked to Facebook also became inaccessible.
What are the dangers of relying on fallible networks to perform essential functions such as commerce? How can network infrastructure be better protected? How much data and information should Facebook be trusted with?
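The outage mechanism described above can be illustrated with a toy sketch (hypothetical hostnames and addresses, not Facebook's actual infrastructure): when a service's DNS records disappear after a bad configuration change, clients can no longer translate its hostname into an IP address, so the service becomes unreachable even though its servers are still running.

```python
# Toy illustration of a DNS-style lookup failure. All names and
# addresses here are hypothetical placeholders.

class DNSError(Exception):
    """Raised when a hostname cannot be resolved to an IP address."""

# A miniature "DNS zone": hostname -> IP address.
dns_records = {
    "example-app.com": "203.0.113.10",
    "example-chat.com": "203.0.113.11",
}

# The servers themselves, keyed by IP address, remain up and serving.
running_servers = {
    "203.0.113.10": "app backend OK",
    "203.0.113.11": "chat backend OK",
}

def resolve(hostname):
    """Translate a hostname to an IP address, as a resolver would."""
    try:
        return dns_records[hostname]
    except KeyError:
        raise DNSError(f"cannot resolve {hostname}")

def fetch(hostname):
    """Resolve the name, then contact the server at that IP."""
    ip = resolve(hostname)
    return running_servers[ip]

# Normal operation: resolution succeeds, the request reaches the server.
assert fetch("example-app.com") == "app backend OK"

# A faulty configuration change wipes the DNS records...
dns_records.clear()

# ...and now every lookup fails, even though the servers are still up.
try:
    fetch("example-app.com")
except DNSError as err:
    print(err)  # cannot resolve example-app.com
```

The sketch shows why the outage took down everything at once: the backends never crashed, but without working name resolution no client could find them.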
-
- 20 min
- MIT Press
- 2018
Robotic Proxies and Telepresence: “Different Seas” by Alastair Reynolds
Lilith, a contract laborer, ends up in a dangerous situation when the self-driving ship she is riding malfunctions. Kyleen, a human who has undergone a human-machine networking procedure called “meshing,” is able to control a proxy robot via a brain-computer interface to help Lilith reach her destination safely.
How can robotic proxies help people in danger? Who should be allowed or certified to operate them? How might they be implicated in inequitable class structures, as the story suggests? Should humans be networked with machines, and would this really be to humanity’s ultimate benefit?
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use of facial recognition technology and the potential for limiting it. The author argues that the combination of training and identification via social media platforms with use by law enforcement is dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 6 min
- TED
- 2020
How AI can help shatter barriers to equality
Jamila Gordon, an AI activist and the CEO and founder of Lumachain, tells her story as a refugee from Ethiopia to illuminate the strokes of luck that eventually brought her to her position in the global tech industry. She makes a strong case for introducing AI into the workplace: computer vision can improve safety, and machine learning can help workers who do not speak the dominant language of their workplace or culture train and acclimate more effectively.
Would constant computer-vision surveillance of a workplace be ultimately positive, negative, or both? How could it be ensured that machine learning algorithms are used only as positive forces in a workplace? What responsibility do large companies have to help people in less privileged countries gain digital fluency?
-
- 5 min
- Venture Beat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s investigation of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities on AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who advocate for ethical perspectives on AI development and deployment.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned so that they cannot be fired when developers do not want to listen to them?
-
- 7 min
- Wired
- 2021
This Site Published Every Face From Parler’s Capitol Riot Videos
An anonymous college student created a website titled “Faces of the Riot,” a virtual wall containing over 6,000 face images of insurrectionists present at the riot at the Capitol on January 6th, 2021. The site, built by running face-detection software over videos posted to the right-wing social media site Parler, is intended to let viewers report anyone they recognize to the proper authorities. While the creator put privacy safeguards in place, such as using “facial detection” rather than “facial recognition,” and their intentions appear positive, some argue that the implications for privacy, and the prospect of this technique spreading, could be negative.
Who deserves protection from having shameful data about themselves posted publicly on the internet? Should there be any limits on this at all? What would happen if a similar website appeared in a less noble-seeming context, such as identifying members of a minority group in a certain area? How could sites like this expand the agency of bad or discriminatory actors?
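The article’s distinction between “facial detection” and “facial recognition” can be sketched in toy form (all data, vectors, and thresholds below are hypothetical placeholders, not a real vision pipeline): detection only locates faces, while recognition additionally matches each face against a database of known identities.

```python
# Toy contrast between face *detection* and face *recognition*.
# Faces are represented as tiny made-up 2-D "embedding" vectors.

# Hypothetical database of known identities and their embeddings.
known_identities = {
    "person_a": (0.9, 0.1),
    "person_b": (0.2, 0.8),
}

def detect_faces(image):
    """Detection: find face regions; says nothing about *who* they are."""
    # A real detector returns bounding boxes from pixels; in this toy,
    # the "image" is a list where faces are embeddings and None is background.
    return [face for face in image if face is not None]

def similarity(a, b):
    """Cosine similarity on toy 2-D vectors."""
    dot = a[0] * b[0] + a[1] * b[1]
    norm = (a[0]**2 + a[1]**2) ** 0.5 * (b[0]**2 + b[1]**2) ** 0.5
    return dot / norm

def recognize(face, threshold=0.95):
    """Recognition: match a face embedding to a known identity, if any."""
    best = max(known_identities, key=lambda n: similarity(face, known_identities[n]))
    return best if similarity(face, known_identities[best]) >= threshold else None

image = [(0.88, 0.12), (0.5, 0.5), None]

faces = detect_faces(image)              # detection finds 2 faces, no names
identities = [recognize(f) for f in faces]
print(len(faces), identities)            # 2 ['person_a', None]
```

The gap between the two operations is the site’s claimed safeguard: a detection-only pipeline stops at `detect_faces`, leaving identification to human viewers rather than to a `recognize`-style database match.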