News Articles
The Year Deepfakes Went Mainstream
- 5 min
- MIT Tech Review
- 2020
With the surge of the coronavirus pandemic, 2020 became an important year for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses of the technology that center on entertaining, harmless creations.
Should deepfake technology be allowed to proliferate enough that users have to question the reality of everything they consume on digital platforms? Should users already approach digital media with such scrutiny? What is defined as a “harmless” use for deepfake technology? What is the danger posed to real people in the acting industry with the rise of convincing synthetic media?
Social media allowed a shocked nation to watch a coup attempt in real time
- 4 min
- TechCrunch
- 2021
On the day of the January 6th insurrection at the U.S. Capitol, social media proved to be a valuable tool for documenting the horrors taking place inside the Capitol building. At the same time, social media plays a large role in political polarization, as users can end up on fringe sites where content is tailored to their beliefs and is not always true.
How can social media platforms be redesigned or regulated to crack down more harshly on misinformation and extremism? How much can social media be valued as a set of platforms that “help tell the true story of an event” when they also allow mass denial of objective fact? Who should be responsible for shutting down fringe sites, and how should this happen?
Amazon, TikTok, Facebook, Others Ordered To Explain What They Do With User Data
- 5 min
- NPR
- 2020
After the FTC and 48 states charged Facebook with being a monopoly in late 2020, the FTC continued its push to hold tech monopolies accountable by demanding that large social media companies, including Facebook, TikTok, and Twitter, disclose exactly what they do with user data, in hopes of increased transparency. Pair with “Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general.”
Do you think that users, especially younger users, would trade their highly tailored recommender systems and social network experiences for data privacy? How much does transparency from tech monopolies help when many people are not fluent in how algorithms work? Should social media companies release the abstractions of users that they form using data?
Who Gets a Say in Our Dystopian Tech Future?
- 7 min
- The New Republic
- 2020
The story of Dr. Timnit Gebru’s termination from Google is inextricably bound up with Google’s irresponsible practices around the training data for its machine learning algorithms. The article argues that training natural language processing algorithms on massive data sets is ultimately a harmful practice: despite the environmental harms and the biases against certain languages it causes, machines still cannot fully comprehend human language.
Should machines be trusted to handle and process the incredibly nuanced meaning of human language? How do different understandings of what languages and words mean and represent become harmful when a minority of people are deciding how to train NLP algorithms? How do tech monopolies prevent more diverse voices from entering this conversation?
Facebook hit with antitrust lawsuit from FTC and 48 state attorneys general
- 5 min
- ABC News
- 2020
The United States government is pushing to break up the tech monopoly that is Facebook, hoping to restore some competition to the social networking and data-selling markets the company dominates. Facebook, of course, is resisting these efforts.
What role did data collection and use play in Facebook’s rise as a monopoly power? What would breaking up this monopoly accomplish? Will users achieve more data privacy if one large company does not own several platforms on which users communicate?
Tiny four-bit computers are now all you need to train AI
- 7 min
- MIT Technology Review
- 2020
This article details a new approach emerging in AI research: instead of using 16 bits to represent the numbers that train an algorithm, a logarithmic scale can reduce that to four bits, which is more efficient in both time and energy. This could allow machine learning models to be trained directly on smartphones, enhancing user privacy. Beyond that, however, the approach may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons.
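To make the logarithmic idea concrete, here is a minimal toy sketch (not the researchers’ actual method) of what 4-bit logarithmic quantization can look like: one bit for the sign and the rest selecting a power-of-two magnitude, so the representable values are spaced logarithmically rather than linearly. The function name and the exponent range are illustrative assumptions.

```python
import numpy as np

def log_quantize(x, min_exp=-7):
    """Illustrative sketch of logarithmic 4-bit quantization.

    One bit holds the sign; the remaining bits index a power-of-two
    magnitude, so representable magnitudes (2**-7 ... 2**0 here) are
    spaced logarithmically. min_exp is an assumed, illustrative bound.
    """
    sign = np.sign(x)
    mag = np.abs(x)
    # Snap each magnitude to the nearest power of two on a log2 grid,
    # mapping zeros to the smallest level to avoid log of zero.
    exp = np.clip(np.round(np.log2(np.where(mag > 0, mag, 2.0**min_exp))),
                  min_exp, 0)
    return sign * 2.0**exp
```

For example, 0.3 snaps to the nearest power of two, 0.25, while values whose magnitude is already a power of two pass through unchanged.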
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?