All Narratives (328)
Find narratives by ethical themes or by technologies.
-
- 5 min
- Gizmodo
- 2021
Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms
An investigation concluded that bots played a role in the economic disruption of GameStop stock in early 2021. Essentially, the automated accounts helped spread material urging users to buy and hold GameStop shares as a check on wealthy hedge fund managers who had bet that the stock would crash. The holistic effect of these bots in this specific campaign, and thus a measure of how bots may generally be used to cause economic disruption in online markets through interaction with humans, remains difficult to gauge.
Do you consider this case study, and the use of the bots, to be “activism”? How can this case study be distilled into a general principle for how bots may manipulate the economy? How do digital technologies help both wealthy and non-wealthy people serve their own interests?
-
- 3 min
- MacRumors
- 2021
Facebook Weighing Up Legality of Facial Recognition in Upcoming Smart Glasses
Facebook’s collaboration with Ray-Ban on new “smart glasses” raises a host of questions about whether capabilities such as facial recognition should be built into the technology.
What are the “so clear” benefits and risks of having facial recognition algorithms embedded in smart glasses, in your view? What are the problems with “transparent technology” such as smart glasses, where other citizens may not even know that they are being surveilled?
-
- 7 min
- New York Times
- 2018
Facial Recognition Is Accurate, if You’re a White Guy
This article details Joy Buolamwini’s research on racial bias coded into algorithms, specifically facial recognition programs. When she audited facial recognition software from several large companies, including IBM and Face++, she found that the systems were far worse at correctly identifying darker-skinned faces. Overall, her findings show that facial analysis and recognition programs need external systems of accountability.
What does external accountability for facial recognition software look like now, and what should it look like? How and why does racial bias get coded into technology, whether explicitly or implicitly?
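To make the auditing method concrete, here is a minimal sketch of a disaggregated evaluation in the spirit of Buolamwini’s study: score a classifier separately on each annotated skin-type group and report the gap between the best- and worst-served groups. The sample fields and the `classify_face` callable are illustrative assumptions, not any vendor’s actual API.

```python
from collections import defaultdict

def audit_by_group(samples, classify_face):
    """Compute per-group accuracy for a face-analysis classifier.

    `samples` is an iterable of dicts with illustrative keys:
      'image'      - the input photo
      'label'      - the ground-truth attribute being predicted
      'skin_group' - an annotated skin-type group (e.g. Fitzpatrick I-VI)
    `classify_face` is the hypothetical model under audit: image -> label.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        group = s["skin_group"]
        total[group] += 1
        if classify_face(s["image"]) == s["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# The audit's headline number is the accuracy gap across groups:
# accuracies = audit_by_group(benchmark, vendor_model)
# gap = max(accuracies.values()) - min(accuracies.values())
```

An external accountability regime could, for example, require vendors to publish this kind of per-group breakdown on a shared benchmark rather than a single aggregate accuracy figure.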
-
- 5 min
- Time
- 2021
4 Big Takeaways From the Facebook Whistleblower Congressional Hearing
In 2021, former Facebook employee and whistleblower Frances Haugen testified that Facebook knew how its products harmed teenagers in terms of body image and social comparison, yet, protective of its profit model, the company made no significant attempt to ameliorate these harms. This article offers four key lessons about how Facebook’s model is harmful.
How does social quantification result in negative self-conception? How are the environments of social media platforms more harmful in terms of body image or “role models” than in-person environments? What are the dangers of every person having easy access to a broad platform of communication in terms of forming models of perfection? Why do social media algorithms want to feed users increasingly extreme content?
-
- 7 min
- The Verge
- 2020
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
PULSE is an algorithm that purports to reconstruct what a face looks like from a pixelated image. The problem: more often than not, the algorithm returns a white face, even when the person in the pixelated photograph is a person of color. Rather than actually sharpening the image, the algorithm generates a synthetic face whose downscaled version matches the pixel pattern. It is these synthetic faces that show a clear bias toward white people, demonstrating how thoroughly institutional racism makes its way into technological design. Diversity in data sets alone will not fully help until broader solutions for combating bias are enacted.
What potential harms could you see from the misapplication of the PULSE algorithm? What sorts of bias-mitigating solutions besides more diverse data sets could you envision? Based on this case study, what sorts of real-world applications should facial recognition technology be trusted with?
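To make the summary’s description of the mechanism concrete, here is a minimal sketch of the latent-search idea behind PULSE, written in PyTorch and assuming a pretrained face generator `G` (a StyleGAN-like model); the function name and parameters are illustrative stand-ins, not PULSE’s actual implementation.

```python
import torch
import torch.nn.functional as F

def pulse_style_upscale(low_res, G, latent_dim=512, steps=200, lr=0.1):
    """Search G's latent space for a synthetic face whose downscaled
    version matches the pixelated input. Note that this never recovers
    the true face: it invents one consistent with the pixels, so the
    output reflects whatever faces the generator produces most readily."""
    z = torch.randn(1, latent_dim, requires_grad=True)  # random starting point
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = G(z)  # synthesize a plausible high-res face
        # Downscale the candidate to the input's resolution and compare.
        small = F.interpolate(candidate, size=low_res.shape[-2:],
                              mode="bilinear", align_corners=False)
        loss = F.mse_loss(small, low_res)
        loss.backward()
        opt.step()
    return G(z).detach()
```

Because many distinct high-resolution faces downscale to the same few pixels, the search settles on whichever faces dominate the generator’s training data, which is why a more diverse data set helps but does not by itself remove the bias.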
-
- 7 min
- Wall Street Journal
- 2021
Google Built the Pixel 6 Camera to Better Portray People With Darker Skin Tones. Does It?
Google’s new Pixel 6 smartphone claims to have “the world’s most inclusive camera,” based on its purported ability to render darker skin tones more accurately in photographs, a form of digital justice notably absent from previous generations of computational photography across the phones of various tech monopolies.
How can “arms races” between different tech monopolies potentially lead to positive innovations, especially those that center equity? Why did it take so long to have a more inclusive camera? How can a camera be exclusive?