Privacy (134)
Find narratives by ethical themes or by technologies.
Facebook’s Face-ID Database Could Be the Biggest in the World. Yes, It Should Worry Us.
- 7 min
- Slate
- 2019
Discussion of Facebook’s massive collection of human faces and its potential impact on society.
Is Facebook’s facial recognition database benign, or a slow-bubbling volcano?
5 types of recommender systems and their impact on customer experience
- 15 min
- The App Solutions
Overview of recommender systems: information filtering algorithms designed to suggest content or products to a particular user.
How do information filtering algorithms work and learn? Are some types of recommender systems more generally ethical than others?
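As a companion to the question of how information filtering algorithms work, here is a minimal sketch of one common type the article surveys, an item-based collaborative filter. The ratings data, user names, and item names are invented for illustration; real systems use far larger matrices and learned models.

```python
# Illustrative sketch of item-based collaborative filtering.
# All data below is invented toy data, not from any real service.
from math import sqrt

# user -> {item: rating}
ratings = {
    "ana":   {"film_a": 5, "film_b": 3, "film_c": 4},
    "ben":   {"film_a": 4, "film_b": 4, "film_c": 5, "film_d": 2},
    "carol": {"film_b": 2, "film_c": 1, "film_d": 5},
}

def cosine(item_u, item_v):
    """Cosine similarity between two items, over users who rated both."""
    shared = [u for u in ratings if item_u in ratings[u] and item_v in ratings[u]]
    if not shared:
        return 0.0
    dot = sum(ratings[u][item_u] * ratings[u][item_v] for u in shared)
    norm_u = sqrt(sum(ratings[u][item_u] ** 2 for u in shared))
    norm_v = sqrt(sum(ratings[u][item_v] ** 2 for u in shared))
    return dot / (norm_u * norm_v)

def recommend(user, top_n=1):
    """Score each unseen item by its similarity to items the user rated,
    weighted by those ratings; return the highest-scoring candidates."""
    seen = ratings[user]
    all_items = {item for r in ratings.values() for item in r}
    scores = {}
    for candidate in all_items - seen.keys():
        scores[candidate] = sum(cosine(candidate, liked) * rating
                                for liked, rating in seen.items())
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ana"))  # the unrated item most similar to ana's favorites
```

Because recommendations here come only from co-rating patterns, the sketch also hints at the ethical questions above: the system learns whatever preferences (or biases) the rating data encodes.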
Why face recognition isn’t scary — yet
- 5 min
- CNN
- 2010
Algorithms and machines can struggle with facial recognition and need ideal source images to perform it consistently. However, its potential use in monitoring and identifying citizens is concerning.
How have the worries regarding facial recognition changed since 2010? Can we teach machines to identify human faces? How can facial recognition pose a danger when used for governmental purposes?
U.S. Companies Learn to Defend Themselves in Cyberspace
- 7 min
- Wall Street Journal
- 2019
Large firms in the United States are becoming far more resilient to cyberattacks, primarily through larger spending and higher prioritization of security. This is especially important as digital hacking escalates conflicts between global nations.
How might small businesses fit into this picture? How could cybersecurity development be more oriented toward the public good? How can tech corporations help the government in an age which seems to be tending toward digital mutually assured destruction?
Google workers listen to your “OK Google” queries – one of them leaked recordings
- 5 min
- Ars Technica
- 2019
Google records some audio and has language experts review it to improve the technology’s language skills. However, this raises privacy concerns, as recordings are sometimes captured by accident when users aren’t trying to use the Google Assistant.
Do you use a virtual assistant? What for? Are you always conscious of what you are asking, or do you say things that you think no one is hearing?
Preventative Policing and Surveillance Information
- 13 min
- Kinolab
- 2002
In the year 2054, the PreCrime police program is about to go national. At PreCrime, three clairvoyant humans known as “PreCogs” are able to forecast future murders by streaming audiovisual data which provides the surrounding details of the crime, including the names of the victims and perpetrators. Although there are no cameras, the implication is that anyone can be under constant surveillance by this program. Once the “algorithm” has gleaned enough data about the future crime, officers move out to stop the murder before it happens.
How will predicted crime be prosecuted? Should predicted crime be prosecuted? How could technologies such as the ones shown here be affected for the worse by human bias? How would these devices make racist policing practices even worse? Would certain communities be targeted? Is there ever any justification for constant civil surveillance?