All Narratives (328)
Find narratives by ethical themes or by technologies.
-
- 7 min
- Chronicle
- 2021
Artificial Intelligence Is a House Divided
The history of AI has swung like a pendulum between two approaches to artificial intelligence: symbolic AI, which tries to replicate human reasoning, and neural networks/deep learning, which try to replicate the human brain.
Which approach to AI (symbolic or neural networks) do you believe leads to greater transparency? Which approach to AI do you believe might be more effective in accomplishing a certain goal? Does one approach make you feel more comfortable than the other? How could these two approaches be synthesized, if at all?
-
- 10 min
- The Washington Post
- 2019
Are ‘bots’ manipulating the 2020 conversation? Here’s what’s changed since 2016.
After prolonged discussion of how “bots,” or automated accounts on social networks, interfered with the American electoral process in 2016, many worried that something similar could happen in 2020. This article details shifts in the strategies used to manipulate political conversations online with bots, including techniques such as inorganic coordinated activity and hashtag hijacking. Some bot manipulation of political discourse is to be expected, but when deployed effectively these algorithmic tools still have the power to shape conversations to the will of their deployers.
In what ways are social media networks architected so that they can be manipulated to serve an individual’s agenda, and how could this be addressed? Should any kind of bot account be allowed on Twitter, or do bots have too much potential for harm to be trusted? What affordances of social networks allow bad actors to redirect the traffic of these networks? Is the problem of “trends” or “cascades” inherent to social media?
-
- 7 min
- MIT Tech Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use of facial recognition technology and the potential for limiting it. The author argues that the technology’s ability to be trained on, and identify people from, social media images, in conjunction with its use by law enforcement, makes it dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can we ensure that smaller companies like Clearview AI are carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 5 min
- Venture Beat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s investigation of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities in AI ethics clearly under fire, the Alphabet Workers Union aims to ensure that workers who bring ethical perspectives to AI development and deployment are protected.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned in such a way that they cannot be fired when developers do not want to listen to them?
-
- 7 min
- Wired
- 2021
This Site Published Every Face From Parler’s Capitol Riot Videos
An anonymous college student created a website titled “Faces of the Riot,” a virtual wall containing over 6,000 face images of insurrectionists present at the riot at the U.S. Capitol on January 6th, 2021. The site used face-detection software to crawl through videos posted to the right-wing social media site Parler, with the goal of having viewers report anyone they recognize to the proper authorities. While the creator put privacy safeguards in place, notably using “facial detection” rather than “facial recognition” (a distinction sketched after this entry), and their intentions are ostensibly positive, some argue that the privacy implications of the site and the wider adoption of this technique could be negative.
Who deserves to be protected from having shameful data about themselves posted publicly to the internet? Should there be any limits on this at all? What would happen if a similar website appeared in a seemingly less noble context, such as identifying members of a minority group in a certain area? How could sites like this expand the agency of bad or discriminatory actors?
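The privacy safeguard mentioned in the entry above turns on the difference between face detection (locating faces in an image) and face recognition (matching detected faces to known identities). The sketch below illustrates that general distinction using the open-source face_recognition library; it is a minimal illustration under those assumptions, not the site’s actual pipeline, and the image file names are hypothetical placeholders.

```python
# Minimal sketch contrasting face *detection* with face *recognition*,
# using the open-source face_recognition library. File names are
# hypothetical placeholders, not data from the site described above.
import face_recognition

frame = face_recognition.load_image_file("video_frame.jpg")

# Detection: locate faces in the frame. This yields bounding boxes only
# and makes no claim about whose faces they are.
boxes = face_recognition.face_locations(frame)
print(f"Detected {len(boxes)} face(s)")

# Recognition: compute an embedding for each detected face and compare it
# against a known identity -- the step a detection-only site stops short of.
known = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

for encoding in face_recognition.face_encodings(frame, known_face_locations=boxes):
    is_match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Matches known identity" if is_match else "No match")
```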
-
- 3 min
- Politico
- 2021
Library of Congress bomb suspect livestreamed on Facebook for hours before being blocked
Live-streaming technologies are challenging to moderate and may negatively affect society’s perception of violent events. They also raise the question of how such content can be deleted once it has been broadcast and potentially copied many times by different recipients.
What are the ethical trade-offs of live-streaming through social media? Is it possible to remediate a socially undesirable broadcast? What actors are responsible for moderating live streaming on social media?