Machine Learning (83)
Find narratives by ethical themes or by technologies.
-
- 5 min
- MIT Technology Review
- 2019
When algorithms mess up, the nearest human gets the blame
Humans take the blame for failures of automated AI systems, protecting the integrity of the technological system and serving as a “liability sponge.” The role of humans in sociotechnical systems needs to be redefined.
Should humans take the blame for algorithm-created harm? At what level (development, corporate, or personal) should that liability be assigned?
-
- 7 min
- MIT Technology Review
- 2020
Tiny four-bit computers are now all you need to train AI
This article details a new approach emerging in AI research: instead of using 16 bits to represent each number used to train an algorithm, a logarithmic scale can cut that to four, which is more efficient in both time and energy. This may allow machine learning models to be trained directly on smartphones, enhancing user privacy; beyond that, it may not change much in the AI landscape, especially in terms of helping machine learning reach new horizons. (A toy numeric sketch of logarithmic quantization follows this entry.)
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?
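To make the idea concrete, here is a minimal Python sketch of 4-bit logarithmic quantization. It is an illustration under assumed parameters, not the scheme the researchers actually use: the function name `quantize_log4`, the bit layout, and the exponent range are all invented for this example.

```python
import numpy as np

def quantize_log4(x, min_exp=-7, max_exp=0):
    """Illustrative 4-bit logarithmic quantizer (assumed layout):
    1 sign bit plus 3 bits selecting one of 8 exponents
    (min_exp..max_exp). Each value snaps to the nearest signed
    power of two, so 16 codes span a wide dynamic range -- the
    advantage of a logarithmic scale over 16 evenly spaced steps."""
    sign = np.sign(x)                                # zero stays zero
    mag = np.maximum(np.abs(x), 2.0 ** min_exp)      # avoid log2(0)
    exp = np.clip(np.round(np.log2(mag)), min_exp, max_exp)
    return sign * 2.0 ** exp

# A 16-bit float stores these almost exactly; four bits keep only a
# coarse, logarithmically spaced approximation.
weights = np.array([0.3, -0.07, 0.9, 0.002])
print(quantize_log4(weights))   # [ 0.25  -0.0625  1.  0.0078125]
```

Rounding in log space preserves relative rather than absolute precision, which suits the skewed distributions of weights, activations, and gradients that training produces.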
-
- 7 min
- MIT Technology Review
- 2020
Why 2020 was a pivotal, contradictory year for facial recognition
This article examines several case studies from 2020 to discuss the widespread use, and the potential for limitation, of facial recognition technology. The author argues that the technology’s capacity for training and identification using social media platforms, in conjunction with its use by law enforcement, is dangerous for minority groups and protesters alike.
Should there be a national moratorium on facial recognition technology? How can it be ensured that smaller companies like Clearview AI are more carefully watched and regulated? Do we consent to having our faces identified any time we post something to social media?
-
- 5 min
- VentureBeat
- 2021
Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru
Relates the story of Google’s investigation of Margaret Mitchell’s account in the wake of Timnit Gebru’s firing from Google’s AI ethics division. With authorities on AI ethics clearly under fire, the Alphabet Workers Union aims to protect workers who bring ethical perspectives to AI development and deployment.
How can bias in tech monopolies be mitigated? How can authorities on AI ethics be positioned in such a way that they cannot be fired when developers do not want to listen to them?
-
- 7 min
- VentureBeat
- 2021
Salesforce researchers release framework to test NLP model robustness
New research and code released in early 2021 demonstrate that the training data for natural language processing (NLP) algorithms is not as robust as it could be. The project, Robustness Gym, lets researchers and computer scientists approach training data with more scrutiny, organizing the data and testing the results of preliminary runs through the algorithm to see what can be improved and how. (A generic sketch of this kind of robustness check follows this entry.)
What does “robustness” in a natural language processing algorithm mean to you? Should machines always be taught to automatically associate certain words or terms? What are the consequences of large corporations not using the most robust training data for their NLP algorithms?
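The kind of check the article describes can be sketched in a few lines of Python. The sketch below is not Robustness Gym’s actual API; `predict_sentiment` and the two perturbation functions are hypothetical stand-ins, meant only to show how meaning-preserving input changes can expose a brittle model.

```python
from typing import Callable, List, Tuple

def expand_contractions(text: str) -> str:
    """A meaning-preserving perturbation: expand common contractions."""
    for short, full in [("don't", "do not"), ("isn't", "is not")]:
        text = text.replace(short, full)
    return text

def add_typo(text: str) -> str:
    """Another perturbation: double the first letter of the last word."""
    words = text.split()
    if words:
        words[-1] = words[-1][0] + words[-1]
    return " ".join(words)

PERTURBATIONS = (expand_contractions, add_typo)

def robustness_report(predict: Callable[[str], str],
                      examples: List[str]) -> List[Tuple[str, bool]]:
    """Check whether each prediction survives every perturbation;
    a flipped label marks the model as brittle on that example."""
    report = []
    for text in examples:
        label = predict(text)
        stable = all(predict(p(text)) == label for p in PERTURBATIONS)
        report.append((text, stable))
    return report

# A deliberately brittle "model" that keys on the literal token "don't":
def predict_sentiment(text: str) -> str:
    return "negative" if "don't" in text else "positive"

for text, stable in robustness_report(predict_sentiment,
                                      ["I don't like it", "Great movie"]):
    print(text, "->", "stable" if stable else "NOT robust")
# I don't like it -> NOT robust  (expanding the contraction flips the label)
# Great movie -> stable
```

A real framework organizes such perturbations at scale and aggregates the results, but the failure mode it surfaces is the same: predictions that depend on surface forms rather than meaning.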
-
- 5 min
- MIT Technology Review
- 2020
The Year Deepfakes Went Mainstream
With the surge of the coronavirus pandemic, 2020 became an important year for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses of deepfakes that center on entertaining, harmless creations.
Should deepfake technology be allowed to proliferate to the point that users have to question the reality of everything they consume on digital platforms? Should users already approach digital media with such scrutiny? What is defined as a “harmless” use of deepfake technology? What danger does the rise of convincing synthetic media pose to real people in the acting industry?