All Narratives (328)
Find narratives by ethical themes or by technologies.
-
- 5 min
- Time Magazine
- 2017
The Police Are Using Computer Algorithms to Tell If You’re a Threat
Chicago police deploy an algorithm that calculates a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and pre-emptively intervening against it. However, these numbers are inherently linked to human bias in both input and outcome, and could lead to unfair targeting of citizens, even as the system supposedly introduces objectivity to policing. (A minimal sketch of how such a score can inherit bias follows below.)
Is the police risk-score system biased, and does it mitigate or amplify human bias? Is it plausible to use digital technology to eliminate bias from American policing, or is this impossible? What might that look like? Does reliance on numerical data give police and tech companies more power or less?
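The kind of scoring the article describes is mechanically simple to sketch. The Python fragment below is a minimal illustration only, not the actual Chicago model (which is not public); every feature name and weight is an assumption, chosen to show how bias in an input such as arrest history passes straight through to a seemingly objective output.

```python
# Hypothetical sketch of a person-based risk score like the one the
# article describes. The real Chicago model is not public; every
# feature name and weight below is an illustrative assumption.

def risk_score(age: int, prior_arrests: int, prior_victimizations: int) -> float:
    """Combine inputs into a single score as a weighted sum."""
    score = 0.0
    score += max(0, 30 - age) * 1.5       # youth raises the score
    score += prior_arrests * 10.0         # reflects past policing decisions
    score += prior_victimizations * 5.0
    return score

# Two people of the same age differ only in arrest history (a record
# shaped by where police already patrol), yet their "objective" scores
# diverge sharply. Bias in the input becomes bias in the output.
print(risk_score(age=22, prior_arrests=3, prior_victimizations=1))  # 47.0
print(risk_score(age=22, prior_arrests=0, prior_victimizations=1))  # 17.0
```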
-
- 10 min
- The Washington Post
- 2019
FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches
Law enforcement agencies at the federal and state levels, notably the FBI and ICE, use state driver’s-license photo databases as repositories for facial-recognition searches. These capabilities allow DMVs to help law enforcement find suspects in crimes, undocumented immigrants, or even witnesses. States permit this practice with certain stipulations, feeding a concerning system of facial-recognition surveillance and a breach of public trust: there is no solidly established system for obtaining citizens’ consent to such monitoring. (A generic sketch of how such a database search works follows below.)
Does this case study of facial recognition make the US seem like a surveillance state or not? How can and should average citizens gain more agency over the use of DMV databases for facial recognition? Can the government use digital surveillance in any way that does not breach citizens’ trust?
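A facial-recognition search of the kind described typically reduces to comparing a probe image’s embedding against every stored embedding and returning the closest matches. The sketch below is a generic illustration, not the FBI’s or ICE’s actual system; the embeddings and database contents are assumed placeholders.

```python
# Generic sketch of a facial-recognition search over a photo database:
# compare a probe face embedding to every stored embedding and return
# the closest matches. The FBI/ICE systems are not public; embeddings
# and database contents here are assumed placeholders.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe: list[float], database: dict[str, list[float]], top_k: int = 5):
    """Rank every license-photo embedding by similarity to the probe."""
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(probe, item[1]),
                    reverse=True)
    return ranked[:top_k]

# Note that search() scans every record: each driver in the database is
# compared against the probe whether or not they are suspected of
# anything, which is the consent problem the article raises.
```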
-
- 7 min
- Vice
- 2019
This Horrifying App Undresses a Photo of Any Woman With a Single Click
A programmer creates an application that uses neural networks to remove clothing from images of women. Deepfake technology is being used against women systematically, despite a continued narrative that its use in the political realm is the most pressing issue.
How does technology amplify violations of sexual privacy? Who should regulate this technology, and how?
-
- 30 min
- Wired
- 2019
Inside China’s Vast New Experiment In Social Ranking
In China, “supercompanies” such as WeChat and Alipay aggregate massive amounts of varied data on their users. The Zhima Credit score system directly influences users’ agency by limiting the options available to them in their environment and determining with whom they interact. The Chinese government is interested in allying with large tech companies to build a social-ranking system that can be used to control and suppress citizens. Although the United States has no “supercompanies” like China’s, the large companies that collect user data in the US certainly have the same potential to limit human agency. (An illustrative sketch of score-gated access follows below.)
How does a social-credit system instituted by technology help perpetuate social division? What level of privacy is appropriate when it comes to social standing? Where should the line be drawn in making decisions about people based on their digitally collected data?
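The way a score can limit everyday options is mechanically simple. The sketch below is illustrative only; Zhima Credit’s actual rules are not public, and the services and thresholds shown are assumptions. The point is the mechanism: the same request succeeds or fails depending on a number attached to the person, not on the transaction itself.

```python
# Illustrative sketch only: Zhima Credit's actual rules are not public,
# and the services and thresholds below are assumptions.

SERVICE_THRESHOLDS = {
    "deposit_free_rental": 650,   # assumed cut-offs, for illustration
    "fast_track_visa":     700,
    "high_speed_rail":     550,
}

def allowed(score: int, service: str) -> bool:
    """Gate access to a service on the user's social-credit score."""
    return score >= SERVICE_THRESHOLDS[service]

print(allowed(620, "high_speed_rail"))      # True
print(allowed(620, "deposit_free_rental"))  # False: options shrink with the score
```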
-
- 10 min
- Quartz
- 2019
China embraces its surveillance state. The US pretends it doesn’t have one
A comparison of the surveillance systems in China and the US that target, and aid in the persecution of, ethnic minorities. Data on targeted people is tracked extensively and compiled into intuitive databases that can be abused by government organizations.
In what ways are the surveillance systems of the US and China similar? Should big tech companies be allowed to contract with the government on the scale that a company like Palantir did?
-
- 5 min
- GIS Lounge
- 2019
When AI Goes Wrong in Spatial Reasoning
GIS, a relatively new form of computational analysis, often relies on algorithms whose biases stem from biases in the training data drawn from open data sources; this case study focuses on the tendency of power-line identification data to be centered on the Western world. The problem can be mitigated by approaching data collection with more intentionality, either by broadening the pool of collected geographic data or by adding artificial images that help the tool recognize a greater variety of circumstances and thus become more accurate. (A sketch of both mitigations follows below.)
What happens when the source of the data itself (the dataset) is biased? Can the ideas presented in this article (namely, intentionally broadening the training-data pool and including composite data) find application beyond GIS?
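Both mitigations the article proposes, broadening the geographic pool and adding artificial images, amount to reshaping the training set before the model ever sees it. The sketch below is a minimal illustration; the region names, file paths, and both helper functions are hypothetical stand-ins for a real data pipeline.

```python
# Sketch of the two mitigations the article describes: broadening the
# geographic pool of training imagery and adding artificial examples.
# Region names, file paths, and both helpers are hypothetical stand-ins.

import random

def load_images(region: str) -> list[str]:
    """Stand-in for loading power-line imagery collected in one region."""
    return [f"{region}_img_{i}.png" for i in range(100)]

def synthesize(image: str) -> str:
    """Stand-in for generating an artificial variant (different pole
    styles, vegetation, lighting) the model would otherwise never see."""
    return image.replace(".png", f"_synthetic_{random.randint(0, 9)}.png")

# 1. Broaden the pool beyond Western imagery.
regions = ["us", "germany", "kenya", "india", "brazil"]
training_set = [img for region in regions for img in load_images(region)]

# 2. Add artificial images for under-represented circumstances.
training_set += [synthesize(img) for img in random.sample(training_set, 100)]
print(len(training_set))  # 600 candidate training images
```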