Privacy (134)
Find narratives by ethical themes or by technologies.
-
- 10 min
- The New York Times
- 2019
Facial Recognition Tech Is Growing Stronger, Thanks to Your Face
Databases of people’s faces are being compiled without their knowledge by companies and researchers, including social media companies and dating sites. Many of these databases are shared around the world, fueling the advancement of facial recognition technology.
How comfortable would you feel knowing that your face sits in various databases and is being used, in some cases, to train machine learning algorithms? As of right now, Google and Facebook, which are said to have the largest facial databases of all, do not share their data, but might they? And what would happen if they did?
-
- 28 min
- Cornell Tech
- 2019
Algorithms in the Courtroom
Pre-trial risk assessment is one attempted answer to mass incarceration, but the data often answers a different question than the one being asked: it measures a person’s riskiness before incarceration, not how dangerous they will be afterward. Technologies and algorithms deployed in contexts of social power differentials can be abused to compound injustice against, for example, people accused of a crime. Numbers are not neutral and can even act as a “moral anesthetic,” especially when the sampled data contains confounding variables that collectors ignore. Engineers designing technology do not always anticipate the ethical weight of decisions that ought to be political.
Would you rely on a risk-assessment algorithm to make life-changing decisions about another human? How can the culture of transparency that Robinson describes be created? How can we make sure that political decisions stay political rather than being answered, by default, by engineers? Can “fairness” be defined by a machine?
-
- 10 min
- MEL Beta
- 2019
After 15 Years, The Pirate Bay Still Can’t Be Killed
The continued existence of pirating websites such as The Pirate Bay demonstrates how digital technologies can be turned against institutions such as copyright, and keeps alive the ideal of a completely free and open internet.
Why are online communities so hard to shut down? Can the internet ever be entirely free and open, as the founders of TPB envision? What would be the consequences? Is the digital world decreasing the value of art forms such as recorded songs and films?
-
- 15 min
- MIT Tech Review
- 2019
Triton is the world’s most murderous malware, and it’s spreading
An attack in Saudi Arabia using malware known as Triton demonstrates that hackers, potentially even those backed by nation-states, are willing to spend considerable time and money to break into the growing number of targets in the industrial internet of things. Such cyberattacks could make workplaces unsafe and even cause catastrophes.
Are the gains in industrial convenience and productivity worth the increased risk of cyberattacks? In what ways can using an internet of things to control certain systems increase or decrease workplace safety, especially in more volatile settings?
-
- 2 min
- Kinolab
- 1990
Data Takes Over: Robots and Humans in the Workplace
With his homing signal activated, the android Data takes control of the USS Enterprise and its systems and blocks the human crew from stopping him. For further reading, see the narrative “Triton is the world’s most murderous malware, and it’s spreading.”
What dangers can AI pose within institutions and systems if it is remotely hijacked? Should AI ever be allowed to develop in such a way that it can block human control over a given system?
-
- 5 min
- Wired
- 2019
Taser Maker Says It Won’t Use Biometrics in Bodycams
Axon’s novel use of an ethics committee led to its decision not to deploy facial recognition on the body cameras it provides to police departments, on the basis of latent racial bias and privacy concerns. While this is a beneficial step, companies and government offices at multiple levels continue to debate when and how facial recognition should be deployed and limited.
Should facial recognition ever be used in police body cameras, even if the technology theoretically evolves to eliminate bias? How can citizens and governments gain more power to limit facial recognition and encourage wider use of ethics boards?