AI (124)
Find narratives by ethical themes or by technologies.
-
-
- 5 min
- Time
- 2021
4 Big Takeaways From the Facebook Whistleblower Congressional Hearing
In 2021, former Facebook employee and whistleblower Frances Haugen testified that Facebook knew its products harmed teenagers by damaging body image and fueling social comparison, yet, in order to protect its profit model, made no significant attempt to ameliorate these harms. This article draws four key lessons from the hearing about how Facebook's model causes harm.
How does social quantification result in negative self-conception? How are the environments of social media platforms more harmful to body image, or as sources of "role models," than in-person environments? What are the dangers, when it comes to forming models of perfection, of every person having easy access to a broad communication platform? Why do social media algorithms tend to feed users increasingly extreme content?
-
- 12 min
- Kinolab
- 1965
Supercomputer Rule and Condensing Human Behavior
The city of Alphaville is under the complete rule of Alpha-60, an omnipresent supercomputer whose knowledge is vaster than that of any human. The machine, whose learning and knowledge model is deemed "too complex for human understanding," cements its rule by effectively outlawing emotion in Alphaville, with all definitions of consciousness centering on rationality. Words expressing curiosity or emotion are erased from human access, and asking "why" is replaced with saying "because." Lemmy is a detective who has entered Alphaville from an outside land to destroy Alpha-60, but in their conversation Alpha-60 immediately susses out the suspicious aspects of Lemmy's visit and character.
Can governing computers or machines ever be totally objective? Is such objectivity dangerous? Do humans need emotionality to define themselves and their societies, lest a focus on pure rationality open the way for computers to rule them? Can the actions of all humans throughout history be reduced to a pattern that computers can understand or manipulate? What are the implications of omnipresent technology in city settings?
-
- 7 min
- Slate
- 2021
Maine Now Has the Toughest Facial Recognition Restrictions in the U.S.
A law passed unanimously in Maine heavily restricts the contexts in which facial recognition technology can be deployed, placing significant guardrails around its use by law enforcement and allowing citizens to sue if they believe the technology has been misused. This is a unique step at a time when governments at every level, up to and including the federal government, have been reluctant to attach strict rules to facial recognition, despite the clear bias documented in its use.
How can tech companies do even more to lobby for stricter facial recognition regulation? Is a moratorium on facial recognition use by all levels of government the best plan? Why or why not? Does creating “more diverse datasets” truly solve all the problems of bias with the technology?
-
- 5 min
- Premium Beat
- 2020
Is Deepfake Technology the Future of the Film Industry?
This blog post explores what the combination of deepfake and computer-generated imagery (CGI) technologies might mean for filmmakers.
-
- 5 min
- MIT Technology Review
- 2020
Inside the strange new world of being a deepfake actor
This article details reactions to the deepfake documentary In Event of Moon Disaster.