Find narratives by ethical themes or by technologies.
- 5 min
- Wired
- 2021
These Doctors Are Using AI to Screen for Breast Cancer
A computer vision algorithm created by an MIT PhD student and trained on a large, multi-year data set of mammogram images shows potential for use in radiology. The algorithm appears to identify breast cancer risk more reliably than older statistical models by tagging the data with attributes that human eyes have missed. This would allow screening and treatment plans to be customized for each patient.
Do there seem to be any drawbacks to using this technology widely? How important is transparency of the algorithm in this case, as long as it seems to provide accurate results? How might this change the nature of doctor-patient relationships?
- 7 min
- Wired
- 2021
This Site Published Every Face From Parler’s Capitol Riot Videos
An anonymous college student created a website titled “Faces of the Riot,” a virtual wall containing over 6,000 face images of insurrectionists present at the riot at the Capitol on January 6th, 2021. The site crawled videos posted to the right-wing social media network Parler and extracted the faces it found; the creator’s ultimate goal is for viewers to report any criminals they recognize to the proper authorities. While the creator put safeguards for privacy in place, such as using “facial detection” rather than “facial recognition,” and their intentions are supposedly positive, some argue that the implications for privacy, and the consequences of this technique spreading widely, could be negative.
Who deserves to be protected from having shameful data about themselves posted publicly to the internet? Should there even be any limits on this? What would happen if a similar website appeared in a less seemingly noble context, such as identifying members of a minority group in a certain area? How could sites like this expand the agency of bad or discriminatory actors?
- 3 min
- Politico
- 2021
Library of Congress bomb suspect livestreamed on Facebook for hours before being blocked
Live-streaming technologies are challenging to moderate and may negatively shape society’s perception of violent events. They also raise the question of how such content can be deleted once it has been broadcast and potentially copied many times by different recipients.
What are the ethical trade-offs of live-streaming through social media? Is it possible to remediate a socially undesirable broadcast? What actors are responsible for moderating live streaming on social media?
- 3 min
- TechCrunch
- 2021
Startups at CES showed how tech can help elderly people and their caregivers
This article presents several case studies of technologies introduced at CES that are designed specifically to help elderly people continue to live independently, mostly using smartphones and Internet of Things devices to monitor both the home environment and the physical health of the occupant.
What implications do these technologies have for the agency of the senior citizens whom they are meant to monitor? Does close surveillance truly equate to increased independence? Are there any other downsides or trade-offs to these technologies?
- 7 min
- VentureBeat
- 2021
Salesforce researchers release framework to test NLP model robustness
New research and code released in early 2021 demonstrate that Natural Language Processing models are often not as robust as they could be. The project, Robustness Gym, allows researchers and computer scientists to evaluate models with more scrutiny, organizing evaluation data and testing the results of preliminary runs to see what can be improved and how.
What does “robustness” in a natural language processing algorithm mean to you? Should machines always be taught to automatically associate certain words or terms? What are the consequences of large corporations not using the most robust training data for their NLP algorithms?
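To make the idea of “robustness” concrete, here is a minimal sketch of one common kind of robustness test: apply a meaning-preserving perturbation to each input and check whether the model’s prediction changes. This is a generic illustration, not Robustness Gym’s actual API; the model, perturbation, and function names are all hypothetical.

```python
def toy_sentiment_model(text: str) -> str:
    """Hypothetical stand-in classifier: counts simple cue words."""
    positives = {"good", "great", "excellent", "reliable"}
    negatives = {"bad", "poor", "awful", "unreliable"}
    words = text.lower().split()
    score = sum(w in positives for w in words) - sum(w in negatives for w in words)
    return "positive" if score >= 0 else "negative"

def perturb(text: str) -> str:
    """Meaning-preserving edit: swap a word for a synonym."""
    synonyms = {"great": "excellent", "bad": "awful"}
    return " ".join(synonyms.get(w, w) for w in text.split())

def robustness_report(model, examples) -> float:
    """Fraction of examples whose prediction survives perturbation."""
    stable = sum(model(x) == model(perturb(x)) for x in examples)
    return stable / len(examples)

examples = ["the results were great", "a bad experience overall"]
print(robustness_report(toy_sentiment_model, examples))  # prints 1.0
```

A real toolkit generalizes this pattern: it organizes evaluation data into slices, applies many perturbations and adversarial transformations, and reports where the model’s predictions are unstable.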
- 5 min
- MIT Tech Review
- 2020
The Year Deepfakes Went Mainstream
With the surge of the coronavirus pandemic, 2020 became an important year for new applications of deepfake technology. Although a primary concern about deepfakes is their ability to create convincing misinformation, this article describes other uses that center on entertaining, harmless creations.
Should deepfake technology be allowed to proliferate enough that users have to question the reality of everything they consume on digital platforms? Should users already approach digital media with such scrutiny? What is defined as a “harmless” use for deepfake technology? What is the danger posed to real people in the acting industry with the rise of convincing synthetic media?