Themes (326)
Find narratives by ethical themes or by technologies.
How Black Storytellers Are Using XR and Afro-Futurism to Explore Ancestral Identity
- 5 min
- Indie Wire
- 2021
New virtual exhibits built with WebXR, a standard for delivering Extended Reality experiences through ordinary web browsers, allow Black artists and creators to present ancestral knowledge and stories while providing a new basis on which AI could be trained. This use of AI leads to an imagination free of the colonial or racist constructs that may otherwise be present in digital media.
How do artificial intelligence and augmented reality open doors for the expression of minority voices? How can digital art be used to make a specific statement or call for a cultural shift? What are the benefits of applying wisdom from across the globe, and from before the digital age, to the design and deployment of digital technologies?
How Self-Tracking Apps Exclude Women
- 10 min
- The Atlantic
- 2014
When the Apple Health app was first released, it lacked one crucial feature: the ability to track menstrual cycles. This exclusion of women from the accessible design of technology is not the exception but the rule, a consequence of the gender imbalance in technology workplaces, especially at the design level. Communities such as the Quantified Self offer spaces to help combat this exclusionary culture.
In what ways are women being left behind by personal data tracking apps, and how can this be fixed? How can design strategies and institutions in technology development be inherently sexist? What will it take to ensure glaring omissions such as this one do not occur in other future products? How can apps that track and promote certain behaviors avoid being patronizing or patriarchal?
Facebook algorithm called into question after whistleblower testimony calls it dangerous
- 6 min
- CBS News
- 2021
In light of Facebook whistleblower Frances Haugen's recent allegations that the platform irresponsibly breeds division and mental health problems, AI specialist Karen Hao explains how Facebook's algorithms serve or fail the people who use them. In particular, the profit motive and the lack of exact, comprehensive knowledge of how the algorithmic system works prevent meaningful change from being made.
Do programmers and other technologists have a responsibility to understand exactly how algorithms work and how they tag data? What are the specific consequences of algorithms that apply their own criteria to tag items? How do social media networks take advantage of human attention?
4 Big Takeaways From the Facebook Whistleblower Congressional Hearing
- 5 min
- Time
- 2021
In 2021, former Facebook employee and whistleblower Frances Haugen testified that Facebook knew its products harmed teenagers through body image issues and social comparison, yet, invested in its profit model, the company made no significant attempt to ameliorate these harms. This article draws four key lessons from the ways Facebook's model is harmful.
How does social quantification result in negative self-conception? How are social media environments more harmful than in-person environments in terms of body image and "role models"? What are the dangers, for forming models of perfection, of every person having easy access to a broad platform of communication? Why do social media algorithms feed users increasingly extreme content?
A Primer on Algorithms and Bias
- 7 min
- Farnam Street Blog
- 2021
This piece discusses the main lessons of two recent books on how algorithmic bias occurs and how it might be ameliorated. In essence, algorithms are little more than mathematical operations, but their lack of transparency and the flawed, unrepresentative data sets that train them make their pervasive use dangerous.
How can data sets fed to algorithms be properly verified? What would the most beneficial collaboration between humans and algorithms look like?
AI Memories and Self-Identification
- 14 min
- Kinolab
- 2016
Westworld, a western-themed amusement park, is populated by realistic robotic creatures known as "hosts," which are designed in a lab and constantly updated to seem as real and organic as possible. Bernard, an engineer at the park, recently oversaw an update that added "reveries," or slight fake memories, to the robots' code to make them seem more human. However, members of the board overseeing the park demonstrate that these reveries can lead robots to remember and "hold grudges" even after being ordered to erase their own memories, which can produce violent tendencies. Later, as Bernard and Theresa snoop on Ford, the director of the park, they learn shocking information, and a robot once again becomes a violent tool when Ford murders Theresa.
Is memory uniquely human? What is the role of memory in creating advanced AI consciousness? Does memory of trauma and suffering ultimately create AI that are hostile to humans? Even if we had the technological means to give AI emotions and memory, should we? And if we do, what ethics and morals must we follow to prevent traumatic memories, such as the memories of a fake dead son uploaded into Bernard? How can androids programmed to follow the directions of one person be used for violent ends? If robots are programmed not to hurt humans, how are they supposed to protect themselves from bad actors, especially if they believe themselves to be human? Should humans create humanoid replicant robots that lack inherently negative human traits, such as anxiety?