News Article (130)
Find narratives by ethical themes or by technologies.
-
- 5 min
- Silicon Angle
- 2019
Empathic AI mirrors human emotions to help autistic children
Artificial companions assist developmentally disabled children, building on the principle that humans can form emotional connections with nonhuman objects. Robots can read and mirror human emotions with relative ease, which could have positive implications in workplace and educational settings.
Is it possible to develop an emotional connection with a robotic companion? How can robotic companionship improve behaviour? How does it compare to human company?
-
- 15 min
- MIT Tech Review
- 2019
Triton is the world’s most murderous malware, and it’s spreading
An attack in Saudi Arabia using malware known as Triton demonstrates that hackers, potentially including those backed by nation-states, are willing to spend considerable time and money to break into the growing number of targets in the industrial internet of things. Such cyber attacks could lead to unsafe workplaces and even catastrophes.
Is the large gain in industrial convenience and productivity worth the increased risk of cyber attacks? In what ways can using an internet of things to control certain systems increase and decrease workplace safety, especially in more volatile settings?
-
- 5 min
- MIT Technology Review
- 2019
This is how AI bias really happens—and why it’s so hard to fix
An introduction to how bias enters algorithms during the data-preparation stage, when developers select which attributes they want the algorithm to consider. Underlines how difficult it is to ameliorate bias in machine learning, given that algorithms are not always attuned to human social contexts.
How can the “portability trap” described in the article be avoided? Who should be involved in making decisions about framing the problems that AI is meant to solve?
-
- 5 min
- The New York Times
- 2019
How Biometrics Makes You Safer
In New York City, biometrics were used as one step in the investigation process, combined with human oversight, to help identify criminals and victims alike.
How does facial recognition technology facilitate challenging investigations? Do you believe police use of facial recognition is as transparent and benign as this article makes it seem? Where could bias enter this system of using facial recognition technology?
-
- 5 min
- Wired
- 2019
This dating app exposes the monstrous bias of algorithms
Monster Match, a game funded by Mozilla, shows how dating-app algorithms reinforce bias: by combining personal data with mass aggregated data, they systematically hide a vast number of profiles from users' view, effectively caging users into narrow preferences.
What are some implicit ways in which algorithms reinforce biases? Are machine learning algorithms equipped to handle the multiple confounding variables at play in things like dating preferences? Does online dating unquestionably give people more agency in finding a partner?
-
- 5 min
- Wall Street Journal
- 2019
Investors Urge AI Startups to Inject Early Dose of Ethics
Incorporating ethical practices and outside perspectives into AI companies to prevent bias is beneficial and becoming more popular, driven by the need for consistent human oversight of algorithms.
How can we build ethical guardrails around AI? How should tech companies approach gathering outside perspectives on their algorithms?