Privacy (134)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Tech Review
- 2020
The Year Deepfakes Went Mainstream
The surge of the coronavirus pandemic made 2020 a landmark year for new applications of deepfake technology. Although the primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses of the technology that center on entertaining, harmless creations.
Should deepfake technology be allowed to proliferate to the point that users have to question the reality of everything they consume on digital platforms? Should users already approach digital media with such scrutiny? What counts as a “harmless” use of deepfake technology? What danger does the rise of convincing synthetic media pose to real people in the acting industry?
-
- 4 min
- TechCrunch
- 2021
Social media allowed a shocked nation to watch a coup attempt in real time
On the day of the January 6th insurrection at the U.S. Capitol, social media proved to be a valuable tool for documenting the horrors taking place within the Capitol building. At the same time, social media plays a large role in political polarization, as users can end up on fringe sites where content is tailored to their beliefs and is not always true.
How can social media platforms be redesigned or regulated to crack down more harshly on misinformation and extremism? How much can social media be valued as a set of platforms that “help tell the true story of an event” when they also allow mass denial of objective fact? Who should be responsible for shutting down fringe sites, and how should this happen?
-
- 12 min
- Wired
- 2018
How Cops Are Using Algorithms to Predict Crimes
This video offers a basic introduction to the use of machine learning in predictive policing and how it disproportionately affects low-income communities and communities of color.
Should algorithms ever be used in a context where human bias is already rampant, such as in police departments? Why does accomplishing a task with digital technology make the process seem more “efficient” or “objective”? What are the problems with police using algorithms whose inner workings they do not fully understand? Is the use of predictive policing algorithms ever justifiable?
-
- 5 min
- NPR
- 2020
Amazon, TikTok, Facebook, Others Ordered To Explain What They Do With User Data
After the FTC and 48 state attorneys general sued Facebook as a monopoly in late 2020, the FTC continued its push to hold tech monopolies accountable by demanding that large social media companies, including Facebook, TikTok, and Twitter, disclose exactly what they do with user data, in hopes of increased transparency. Pair with “Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general.”
Do you think that users, especially younger users, would trade their highly tailored recommender systems and social network experiences for data privacy? How much does transparency from tech monopolies help when many people are not familiar with how algorithms work? Should social media companies release the abstractions of users that they form from data?
-
- 4 min
- Reuters
- 2020
From hate speech to nudity, Facebook’s oversight board picks its first cases
Facebook has a new independent Oversight Board to help moderate content on the site, selecting individual cases from the many submitted to it and deciding whether removing the content is appropriate. The cases typically involve hate speech, “inappropriate visuals,” or misinformation.
How much oversight do algorithms or networks with a broad impact need? Who needs to be in the room when deciding what an algorithm or site should or should not allow? Can algorithms be designed to detect and remove hate speech? Should such an algorithm exist?
-
- 5 min
- Gizmodo
- 2020
Microsoft’s Creepy New ‘Productivity Score’ Gamifies Workplace Surveillance
The data privacy of employees is at risk under Microsoft’s new “Productivity Score” program, in which employers and administrators can use Microsoft 365 platforms to collect metrics on their workers in order to “optimize productivity.” This approach causes unnecessary stress for workers and effectively introduces a surveillance program into the workplace.
How are justifications such as “optimizing productivity” used as excuses to gather more data on people? How could such a goal be accomplished without the surveillance aspect? How does this approach fail to account for a diversity of working methods?