Data Privacy (54)
The ability, especially of corporations or governments, to collect data that should not be publicly available.
- 5 min
- Inc
Clubhouse Is Recording Your Conversations. That’s Not Even Its Worst Privacy Problem
Clubhouse, a new, exclusive social network app that appeared during the coronavirus pandemic, has some frightening data collection practices, which this article outlines in detail. Essentially, although the company had not yet monetized the app at the time the article was written, it collects data not only on its own users but also on those users' contacts.
What are the consequences of social networks having detailed data on the personal networks of their users? What are the dangers of data collection that puts many different social networking platforms into conversation with one another? How do draws such as exclusivity pull attention away from irresponsible data-mining practices?
-
- 10 min
- Kinolab
- 2016
Cyber Blackmailing and Compromising Data
In this episode, Kenny’s life is upended after hackers use malware to access a compromising video of Kenny on his laptop. Under the threat of this humiliating video being sent to everyone in his contacts, Kenny becomes a puppet of the hackers, forced to keep his location services on and to be tracked and contacted through his smartphone wherever he goes. Along with the hackers’ other puppets, including a man named Hector who had an affair, he is forced to commit heinous acts such as a bank robbery and a fight to the death. Despite the victims’ compliance, the hackers release their information anyway, with devastating consequences for their personal lives.
Is anyone truly “alone” or “unwatched” when in the presence of their mobile computing devices? Whose responsibility is it to guard people against the dangers witnessed in this narrative? Do digital technologies need clearer and more thorough warnings about the possibilities of malware infecting a device? How can mobile computing devices and location tracking be manipulated to deprive people of autonomy? Are small individual steps such as covering up cameras enough to guard against these types of problems?
-
- 3 min
- MacRumors
- 2021
Facebook Weighing Up Legality of Facial Recognition in Upcoming Smart Glasses
Facebook’s collaboration with Ray-Ban on a new line of “smart glasses” raises a host of questions about whether capabilities such as facial recognition should be built into the technology.
What are the “so clear” benefits and risks, in your view, of building facial recognition algorithms into smart glasses? What are the problems with “transparent technology” such as smart glasses, where other citizens may not even know that they are being surveilled?
-
- 5 min
- Gizmodo
- 2021
CBP Facial Recognition Scanners Failed to Find a Single Imposter At Airports in 2020
Customs and Border Protection used facial recognition technology to scan travelers entering the U.S. at several ports of entry in 2020 and did not identify a single impostor. The scans are part of a larger program of using biometrics to screen those who enter the country, which raises concerns about data privacy, who may have access to this data, and how it may be used.
What bad outcomes are possible when the government holds extensive biometric data, including facial scans, on many people who try to enter the country? Why is the government able to get away with using biased technology to conduct facial scans at airports, for example? Are “facilitation improvements” worth aiming for if they mean using technologies that are not 100% effective and will disproportionately harm certain populations?
-
- 5 min
- BBC
- 2021
Facial recognition technology meant mum saw dying son
Facial recognition technology used by the South Wales Police can identify an individual from biometric data almost instantly, rather than in the ten days the process previously took; in one case, that speed allowed a mother to say goodbye to her son on his deathbed. The technology appears to have other positive impacts, such as identifying criminals earlier than they otherwise would be, but, as is usually the case, concerns abound about how it can violate human rights.
Who can be trusted with facial recognition algorithms that can give someone several possibilities for the identity of a particular face? Who can be trusted to decide in what cases this technology can be deployed? How can bias become problematic when a human is selecting one of many faces recommended by the algorithm? Should the idea of constant surveillance or omnipresent cameras make us feel safe or concerned?
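Several of the questions above turn on the fact that a facial recognition system typically does not return a single answer but a ranked list of possible identities, from which a human operator chooses. The sketch below is a rough, hypothetical illustration of that mechanism only, not a description of any system mentioned in these narratives; the gallery, embedding values, and ranking function are invented for illustration.

```python
import numpy as np

# Hypothetical gallery: identity -> precomputed face embedding.
# In a real system these vectors would come from a trained face-embedding model.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.4]),
    "person_b": np.array([0.2, 0.8, 0.5]),
    "person_c": np.array([0.7, 0.3, 0.6]),
}

def rank_candidates(probe_embedding, gallery, top_k=3):
    """Return the top_k gallery identities most similar to the probe face,
    ranked by cosine similarity. A human reviewer then picks from this list,
    which is one place selection bias can enter."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [(name, cosine(probe_embedding, emb)) for name, emb in gallery.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]

# Hypothetical embedding of an unidentified face captured by a camera.
probe = np.array([0.8, 0.2, 0.5])
for name, score in rank_candidates(probe, gallery):
    print(f"{name}: similarity {score:.2f}")
```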
-
- 5 min
- New York Times
- 2020
A Case for Facial Recognition
Decisions on whether law enforcement should be trusted with facial recognition are tricky, as Detroit city official James Tate argues. On one hand, the combination of the bias latent in the technology itself and the human bias of those who use it sometimes leads to over-policing of certain communities. On the other hand, with the correct guardrails, it can be an effective tool for getting justice in cases of violent crime. This article details the ongoing debate over how much use of facial recognition technology is appropriate in Detroit.
Who should be deciding on the guardrails surrounding the use of facial recognition technology? How can citizens have more control over when their face is being recorded or captured? Can there ever be enough guardrails to truly ensure that facial recognition technology can be used with no chance of bias?