AI (143)
Find narratives by ethical themes or by technologies.
-
- 10 min
- Quartz
- 2019
China embraces its surveillance state. The US pretends it doesn’t have one
A comparison of surveillance systems in China and the US that target ethnic minorities and aid in their persecution. Data on targeted people is tracked extensively and compiled into intuitive databases that can be abused by government organizations.
In what ways are the surveillance systems of the US and China similar? Should big tech companies be allowed to contract with the government on the scale that a company like Palantir did?
-
-
- 7 min
- New York Times
- 2018
YouTube, the Great Radicalizer
YouTube’s algorithm suggests increasingly radical videos to its users, maximizing the amount of time they spend on the platform. This tendency toward inflammatory recommendations often leads to political misinformation.
What are the dangers of being offered increasingly radical videos on YouTube?
-
-
- 10 min
- The Washington Post
- 2019
FBI, ICE find state driver’s license photos are a gold mine for facial-recognition searches
Federal and state law enforcement officials, notably the FBI and ICE, run facial-recognition searches against state driver’s license photo databases. These capabilities allow DMVs to help law enforcement find criminal suspects, undocumented immigrants, or even witnesses. States permit this under certain stipulations, feeding a concerning system of facial recognition and breach of trust, with no solid, established system of citizen consent to such monitoring.
Does this case study of facial recognition make the US seem like a surveillance state or not? How can and should average citizens have more agency in DMV databases being used for facial recognition? Can the government use any digital surveillance in a way that does not breach citizen trust?
-
- 27 min
- Cornell Tech
- 2019
Quantifying Workers
A podcast about worker quantification in hiring, productivity measurement, and more. It dives into the discussion of why we should attempt to make algorithms fair, and warns specifically that algorithms can find “proxy variables” to approximate attributes such as race or gender even when the algorithm is supposedly controlled for these factors.
What are the dangers of having an algorithm involved in the hiring process? Is efficiency worth the cost in this scenario? Can humans ever be placed in a binary context?
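The “proxy variable” problem discussed in this episode can be sketched with a toy example: even when a protected attribute is excluded from a screening model’s inputs, a correlated feature can reconstruct it. The dataset, the `zip_region` feature, and the 90% correlation below are all invented for illustration, not drawn from the podcast.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def make_applicant():
    """Return (protected_group, zip_region) for one synthetic applicant."""
    group = random.choice("AB")
    # zip_region is a "neutral" feature, but it tracks the protected
    # attribute 90% of the time -- a proxy variable.
    matches = random.random() < 0.9
    zip_region = 1 if (group == "A") == matches else 0
    return group, zip_region

applicants = [make_applicant() for _ in range(10_000)]

# A screener that never sees `group`, only zip_region, can still
# recover group membership through the proxy:
def predicted_group(zip_region):
    return "A" if zip_region == 1 else "B"

accuracy = sum(predicted_group(z) == g for g, z in applicants) / len(applicants)
print(f"group recovered from proxy alone: {accuracy:.0%}")
```

Controlling for race or gender by deleting those columns does nothing here; the information survives in the correlated feature, which is why the episode argues fairness has to be audited in outcomes, not just inputs.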
-
-
- 5 min
- Time Magazine
- 2017
The Police Are Using Computer Algorithms to Tell If You’re a Threat
Chicago police use an algorithm to calculate a “risk score” for individuals based on factors such as criminal history and age, with the aim of assessing risk and intervening preemptively. However, these numbers are inherently linked to human bias in both input and outcome, and could lead to unfair targeting of citizens, even as the system supposedly introduces objectivity.
Is the police risk-score system biased, and does it mitigate or amplify human bias? Is it plausible to use digital technology to eliminate bias from American policing, or is this impossible? What might this look like? Does reliance on numerical data give police and tech companies more power or less?
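The mechanism the article describes, where a seemingly objective formula inherits bias from its inputs, can be sketched with a toy score. The weights and scenario below are invented for illustration and are not the actual Chicago model.

```python
# A deliberately simplified risk score: the formula itself looks neutral,
# taking only prior arrest count and age. (Invented weights, not Chicago's.)
def risk_score(prior_arrests: int, age: int) -> int:
    return 10 * prior_arrests + max(0, 30 - age)

# Two people with identical behavior, but one lives in an over-policed
# neighborhood where the same conduct produced more recorded arrests.
person_a = risk_score(prior_arrests=1, age=25)  # lightly policed area
person_b = risk_score(prior_arrests=4, age=25)  # heavily policed area

print(person_a, person_b)  # 15 45
```

The formula never mentions neighborhood or race, yet person B scores three times higher: the bias enters through the arrest data, not the arithmetic, which is the article’s core concern about “objective” scores.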
-
-
- 5 min
- Wired
- 2019
Taser Maker Says It Won’t Use Biometrics in Bodycams
Axon’s novel use of an ethics committee led to a decision not to deploy facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.
Should facial recognition ever be used in police body cameras, even if it does theoretically evolve to eliminate bias? How can citizens and governments have more power in limiting facial recognition and enforcing a more widespread use of ethics boards?