All Narratives (328)
Find narratives by ethical themes or by technologies.
- 3 min
- TechCrunch
- 2021
Startups at CES showed how tech can help elderly people and their caregivers
This article presents several case studies of technologies introduced at CES that are specifically designed to help elderly people continue to live independently, mostly using smartphones and Internet of Things devices to monitor both the home environment and the physical health of the occupant.
What implications do these technologies have for the agency of the senior citizens whom they are meant to monitor? Does close surveillance truly equate to increased independence? Are there other downsides or tradeoffs to these technologies?
- 7 min
- VentureBeat
- 2021
Salesforce researchers release framework to test NLP model robustness
New research and code were released in early 2021 to demonstrate that the training data for natural language processing algorithms is not as robust as it could be. The project, Robustness Gym, allows researchers and computer scientists to approach training data with more scrutiny, organizing the data and testing the results of preliminary runs through an algorithm to see what can be improved and how (a minimal illustrative sketch of this kind of check follows the discussion questions below).
What does “robustness” in a natural language processing algorithm mean to you? Should machines always be taught to automatically associate certain words or terms? What are the consequences of large corporations not using the most robust training data for their NLP algorithms?
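The following is a minimal, self-contained sketch of the general idea behind such robustness checks: compare a model's accuracy on original examples against accuracy on slightly perturbed versions of the same examples. It does not use the Robustness Gym API; the toy classifier, the typo-style perturbation, and all names here are hypothetical stand-ins chosen purely for illustration.

```python
# Illustrative only: a toy keyword "sentiment model" and a simple character-swap
# perturbation, used to compare accuracy on original vs. perturbed examples.
# This does not reflect the actual Robustness Gym API.
import random


def toy_sentiment_model(text: str) -> str:
    """Hypothetical stand-in for an NLP model: keyword-based sentiment."""
    positive = {"good", "great", "excellent", "love"}
    words = set(text.lower().split())
    return "positive" if words & positive else "negative"


def swap_adjacent_chars(text: str, rng: random.Random) -> str:
    """Perturb the input by swapping two adjacent characters (a typo-style change)."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]


def accuracy(model, examples) -> float:
    """Fraction of (text, label) pairs the model classifies correctly."""
    correct = sum(model(text) == label for text, label in examples)
    return correct / len(examples)


if __name__ == "__main__":
    rng = random.Random(0)
    data = [
        ("the service was great", "positive"),
        ("I love this product", "positive"),
        ("this was a terrible experience", "negative"),
        ("nothing about it worked", "negative"),
    ]
    perturbed = [(swap_adjacent_chars(text, rng), label) for text, label in data]
    print("original accuracy :", accuracy(toy_sentiment_model, data))
    print("perturbed accuracy:", accuracy(toy_sentiment_model, perturbed))
```

The point of the sketch is only that headline accuracy can drop under small, systematic perturbations of the input; surfacing and organizing exactly those gaps across many "slices" of data is the kind of evaluation a framework like Robustness Gym is meant to support.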
- 5 min
- MIT Tech Review
- 2020
The Year Deepfakes Went Mainstream
Amid the surge of the coronavirus pandemic, 2020 became an important year for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses of the technology that center more on entertaining, harmless creations.
Should deepfake technology be allowed to proliferate to the point that users have to question the reality of everything they consume on digital platforms? Should users already approach digital media with such scrutiny? What counts as a “harmless” use of deepfake technology? What danger does the rise of convincing synthetic media pose to real people in the acting industry?
- 30 min
- University of Würzburg Graduate Schools
- 1982
Cyberspace and Internet Imaginations: “Burning Chrome” by William Gibson
Hardware specialist Automatic Jack is roped into a dangerous hacking scheme by his partner Bobby Quine while the two compete for the affections of Rikki. Their plan is to use deadly malware to break through the defenses of Chrome, a mysterious overlord of cyberspace who hoards massive amounts of wealth. They carry out the plan by entering cyberspace through a program and visualizing the data held within this digital network that connects people across the globe.
How can malware be used for good, and when should it be used for good? How do imaginations of the internet influence how people perceive this mysterious yet pervasive network? In what ways would making aspects of the internet into tangible images help people understand it better? How should the most powerful stakeholders in a given digital architecture be challenged? How might immersion into cyberspace give people more agency?
- 7 min
- Farnam Street Blog
- 2021
A Primer on Algorithms and Bias
This post discusses the main lessons from two recent books explaining how algorithmic bias occurs and how it might be ameliorated. Essentially, algorithms are little more than mathematical operations, but their lack of transparency and the flawed, unrepresentative data sets that train them make their pervasive use dangerous.
How can data sets fed to algorithms be properly verified? What would the most beneficial collaboration between humans and algorithms look like?
- 5 min
- Gizmodo
- 2021
CBP Facial Recognition Scanners Failed to Find a Single Imposter At Airports in 2020
Customs and Border Protection used facial recognition technology to scan travelers entering the U.S. at several points of entry in 2020 and did not identify any impostors. This is part of a larger program of using biometrics to screen those who enter the country, which raises concerns about data privacy, who may have access to this data, and how it may be used.
What harms could result from the government holding extensive biometric data, including facial scans, on many of the people who try to enter the country? Why is the government able to get away with using biased technology to conduct facial scans at airports? Are “facilitation improvements” worth pursuing if they rely on technologies that are not 100% effective and will disproportionately harm certain populations?