All Narratives (355)
Find narratives by ethical themes or by technologies.
- 3 min
- Kinolab
- 2018
The Digitization of Memory and its Consequences
Wade Watts lives in an imagined future in which the OASIS, a limitless virtual reality world, acts as a constant distraction from the real world for the majority of citizens. In this scene, his virtual avatar Parzival visits the Halliday Journals, a complete archive of the memories of James Halliday, the creator of the OASIS. These memories are digitized in their complete abstract form, and seem freely accessible to anyone.
How can technologies like the “cloud” be used to store abstract data such as consciousness and memories? What would the potential impact on human memory be if memories could easily be made fully digital? What are the dangers of intimate memories being potentially accessible to anyone in the digital world?
-
- 30 min
- Wired
- 2019
Inside China’s Vast New Experiment In Social Ranking
In China, “supercompanies” such as WeChat or Alipay aggregate massive amounts of varied data on their users. The Zhima Credit score system directly influences users’ agency by limiting the options available to them and determining with whom they can interact. The Chinese government has an interest in allying with large tech companies to build a social ranking system that can be used to control and suppress citizens. Although the United States has no “supercompanies” like those in China, the large US companies that collect user data certainly have the same potential to limit human agency.
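To make the mechanism concrete, here is a minimal sketch of how a composite score aggregated from varied behavioral data can gate what a user is allowed to do. The weights, thresholds, and service names are invented for illustration; Zhima Credit’s actual model is proprietary, although its published scores do run from 350 to 950.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    payment_history: float   # 0..1, share of bills paid on time
    network_score: float     # 0..1, average standing of social connections
    purchase_signal: float   # 0..1, "desirable" spending patterns

def composite_score(user: UserProfile) -> int:
    """Aggregate varied behavioral data into a single 350-950 score."""
    weighted = (0.5 * user.payment_history
                + 0.3 * user.network_score
                + 0.2 * user.purchase_signal)
    return int(350 + weighted * 600)

# The score then limits which options are open to the user.
SERVICE_THRESHOLDS = {
    "deposit_free_rental": 650,
    "visa_fast_track": 750,
}

def allowed_services(user: UserProfile) -> list:
    score = composite_score(user)
    return [s for s, cutoff in SERVICE_THRESHOLDS.items() if score >= cutoff]

print(allowed_services(UserProfile(0.8, 0.3, 0.5)))  # ['deposit_free_rental']
```

Note that the network term scores users partly on whom they associate with, which is how such a system shapes not just options but relationships.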
How does technologically instituted social credit help perpetuate social division? What level of privacy is appropriate when it comes to social standing? Where should the line be drawn in making decisions about people based on their digitally collected data?
-
- 5 min
- GIS Lounge
- 2019
When AI Goes Wrong in Spatial Reasoning
GIS, a relatively new form of computational analysis, often relies on algorithms that inherit biases from their training data, which is typically drawn from open data sources. This case study focuses on power-line identification data that is heavily centered on the Western world. The problem can be mitigated by approaching data collection with more intentionality: broadening the pool of collected geographic data, or adding artificial images that help the tool recognize a greater range of circumstances and thus become more accurate.
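Here is a minimal sketch of the two mitigations described above, rebalancing a geographically skewed training pool by padding under-represented regions with synthetic images. The dataset, region labels, and augment() helper are hypothetical stand-ins, not any particular GIS tool’s pipeline.

```python
import random
from collections import Counter

def augment(image):
    """Hypothetical stand-in for synthetic variation (rotation, lighting, pole styles)."""
    return image + "_synthetic"

def rebalance(samples, target_per_region):
    """Upsample under-represented regions with synthetic variants."""
    by_region = {}
    for region, image in samples:
        by_region.setdefault(region, []).append(image)
    balanced = []
    for region, images in by_region.items():
        balanced.extend((region, img) for img in images)
        # Pad with synthetic images until this region reaches the target.
        while sum(1 for r, _ in balanced if r == region) < target_per_region:
            balanced.append((region, augment(random.choice(images))))
    return balanced

# Power-line imagery heavily centered on the Western world:
pool = [("us", f"us_{i}") for i in range(8)] + [("sub_saharan", "ssa_0")]
balanced = rebalance(pool, target_per_region=8)
print(Counter(region for region, _ in balanced))
# Counter({'us': 8, 'sub_saharan': 8})
```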
What happens when the source of the data itself (the dataset) is biased? Can the ideas in this article (namely, intentionally broadening the training data pool and including composite data) find application beyond GIS?
-
- 5 min
- MIT Technology Review
- 2019
This is how AI bias really happens—and why it’s so hard to fix
An introduction to how bias is introduced into algorithms during the data preparation stage, which involves selecting the attributes you want the algorithm to consider. The piece underlines how difficult it is to ameliorate bias in machine learning, given that algorithms are not always perfectly attuned to human social contexts.
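One way the data preparation problem shows up in practice: an attribute selected for the model can act as a close proxy for a protected attribute that was deliberately dropped. The toy lending data below is invented for illustration.

```python
applicants = [
    # (zip_code, income_k, group, repaid) -- group is the protected attribute
    ("10001", 40, "A", True),
    ("10001", 38, "A", True),
    ("20002", 41, "B", False),
    ("20002", 39, "B", False),
]

# Data preparation step: the modeler drops the protected attribute...
features = [(zip_code, income) for zip_code, income, _group, _repaid in applicants]

# ...but in this sample zip_code perfectly predicts group membership, so a
# model trained on these features can still sort applicants by group.
group_by_zip = {z: g for z, _, g, _ in applicants}
print(group_by_zip)   # {'10001': 'A', '20002': 'B'}
print(features[:2])   # [('10001', 40), ('10001', 38)]
```

No fairness audit of the attribute list alone would catch this; the bias lives in the correlation between the selected attribute and the dropped one.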
How can the “portability trap” described in the article be avoided? Who should be involved in framing the problems that AI systems are meant to solve?
-
- 5 min
- Wired
- 2019
This dating app exposes the monstrous bias of algorithms
Monster Match, a game funded by Mozilla, shows how dating app algorithms reinforce bias: by combining personal and mass-aggregated data, they systematically hide a vast number of profiles from users’ sight, effectively caging users into narrow preferences.
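A minimal sketch of the collaborative-filtering dynamic the game demonstrates, assuming a simple likes-overlap similarity; real dating app internals are proprietary, and all names and numbers here are invented.

```python
# After a few swipes, the user is matched to "similar" users, and profiles
# those users never liked sink out of sight -- narrowing the pool.

def similarity(a_likes, b_likes):
    """Jaccard overlap between two users' liked-profile sets."""
    union = a_likes | b_likes
    return len(a_likes & b_likes) / len(union) if union else 0.0

def ranked_candidates(me_likes, others, all_profiles):
    """Score each unseen profile by likes from users who swipe like me."""
    scores = {p: 0.0 for p in all_profiles - me_likes}
    for other_likes in others:
        w = similarity(me_likes, other_likes)
        for p in other_likes & set(scores):
            scores[p] += w
    # Profiles no similar user liked score 0 and fall to the bottom --
    # effectively hidden, even though this user never rejected them.
    return sorted(scores, key=scores.get, reverse=True)

profiles = set(range(10))
me = {0, 1}                              # two early swipes
others = [{0, 1, 2}, {0, 1, 3}, {7, 8}]  # other users' liked sets
print(ranked_candidates(me, others, profiles)[:3])  # e.g. [2, 3, 4]
```

Two swipes are enough to push profiles 7 and 8 below every profile endorsed by a “similar” user, which is the caging effect the article describes.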
What are some subtle ways in which algorithms reinforce biases? Are machine learning algorithms equipped to handle the many confounding variables at play in something like dating preferences? Does online dating unquestionably give people more agency in finding a partner?
-
- 5 min
- Wall Street Journal
- 2019
Investors Urge AI Startups to Inject Early Dose of Ethics
Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial, and is becoming more popular. The trend stems from the need for consistent human oversight of algorithms.
How do we build an ethical guardrail around AI? How should tech companies approach gathering outside perspectives on their algorithms?