Professional Responsibility (52)
Find narratives by ethical themes or by technologies.
Technological Superhighways and Monopolistic Control
- 2 min
- Kinolab
- 2019

In an imagined future London, citizens across the globe are connected to the Feed, a device and network accessed constantly through a brain-computer interface. In this clip, Meredith must seek help from the leader of a developing country that lacks the Feed network, because the Feed has been hacked and is bringing down infrastructure with it.

How integrated should advanced technology be in our daily lives and basic amenities? What risks arise if the technology is hacked? What risks does technological monopolization pose beyond economic inequality? Should nearly all services rely on a small number of platforms under highly centralized control? Is the developing country of COM better off now because it never had the technology to begin with, and is therefore uncorrupted by it?
-
Resisting Realities and Robotic Murder
- 6 min
- Kinolab
- 2019

Eleanor Shellstrop runs a fake afterlife in which she conducts an experiment to prove that humans with low ethical sensibility can improve themselves. One of the subjects, Simone, is in deep denial upon arriving in this afterlife and does as she pleases after convincing herself that nothing is real. Elsewhere, another conductor of the experiment, Jason, kills a robot that has been taunting him since the start of the experiment.

What are the pros and cons of solipsism as a philosophy? Does it risk making us act immorally? How does the risk of solipsism apply to technologies such as virtual reality, a space where we know nothing is real except our own feelings and perceptions? Should virtual reality have ethical rules to keep solipsism from taking root there? Could that leak into our daily lives as well?

Is it ethical for humans to kill AI beings in fits of negative emotion, such as jealousy? Should this be possible on a whim? Should humans have total control over whether AI beings live or die?
-
AI ‘Emotion Recognition’ Can’t Be Trusted
- 7 min
- The Verge
- 2019

Examines the reliance on “emotion recognition” algorithms, which use facial analysis to infer feelings, and questions the credibility of their results given machines' inability to recognize abstract nuances.

Can digital artifacts correctly detect human emotions? Should machines read our emotions? Are emotions too complex for machines to understand? How is human agency affected when AI sorts emotions into discrete categories?
-
Robots and Racism
- 20 min
- UC Research Repository
- 2018

This 2018 study uses several experiments to demonstrate that human racial bias is imposed on robots as well: within the sampled group, racialised black robots were more likely to be perceived as threatening.

Do robots inherently acquire a socially constructed race as they become more humanoid? If so, could robots be used to counter human racial bias? More generally, could technology play a part in eradicating human biases?
-
This is how AI bias really happens—and why it’s so hard to fix
- 5 min
- MIT Technology Review
- 2019

An introduction to how bias enters algorithms during the data preparation stage, which involves selecting the attributes you want the algorithm to consider. Underlines how difficult it is to ameliorate bias in machine learning, given that algorithms are not always attuned to human social contexts.

How can the “portability trap” described in the article be avoided? Who should be involved in decisions about framing the problems that AI systems are meant to solve?
-
Man Machine Rules
- 7 min
- Mad Scientist Laboratory
- 2018

The combination of the profit motive for tech companies and the vague language of non-binding ethical agreements for coders means that stronger regulation is needed for the ethical deployment and use of technology. Argues that there must be clear demarcations between what is considered real and human versus fake and virtual, and that digital technologies should be regulated much like other technologies, such as guns, cars, or nuclear weapons.

How do we ensure the ethical use of AI by digital tech giants? Should there be an equivalent of the Hippocratic oath for the development of digital technology? How would you imagine something like this being put in place? Are the man-machine rules laid out at the end of the article realistic?