Human Control of Technology (67)
Find narratives by ethical themes or by technologies.
-
- 12 min
- Kinolab
- 1965
Supercomputer Rule and Condensing Human Behavior
The city of Alphaville is under the complete rule of Alpha-60, an omnipresent robot whose knowledge vastly exceeds that of any human. This robot, whose learning and knowledge model is deemed “too complex for human understanding,” cements its rule by effectively outlawing emotion in Alphaville, with every definition of consciousness centering on rationality. All words expressing curiosity or emotion are erased from human access; asking “why” is replaced with saying “because.” Lemmy is a detective who has entered Alphaville from an external land to destroy Alpha-60. In their conversation, however, Alpha-60 immediately susses out the suspicious aspects of Lemmy’s visit and character.
Can governing computers or machines ever be totally objective? Is this objectivity dangerous? Do humans need emotionality to define themselves and their societies, so that a focus on rationality does not allow computers to take over rule of our societies? Can the actions of all humans throughout history be reduced to a pattern that computers can understand or manipulate? What are the implications of omnipresent technology in city settings?
-
- 9 min
- Kinolab
- 1995
Self-Sustaining Programs
In this world, a human consciousness (“ghost”) can inhabit an artificial body (“shell”), producing beings who are at once edited humans and somewhat robotic bodies. The Puppet Master, a notorious villain in this world, is revealed to be not a human hacker but a computer program that has gained sentience and gone on to hack the captured shell. It challenges the law enforcement officials of Section 6 and Section 9, claiming that it is a life-form and not an AI: its existence as a self-sustaining program that has achieved singularity, it argues, is no different from human DNA as a “self-sustaining program.” The Puppet Master specifically cites reproduction and offspring, rather than copying, as the feature distinguishing living things from nonliving things. It also developed an emotional connection with Major, which led it to select her as a candidate for merging. It notes that it can die yet live on through the merging and, after Major’s death, on the internet.
Do you agree with the Puppet Master’s argument that self-sustaining programs are conceptually the same as human DNA? Why or why not? Has the externalisation of memory made it far more possible for robots to achieve singularity and exist as human-like figures in the world? Is memory the sole feature that helps humans build their identities? List all the comparisons made in this narrative between self-sustaining programs and human genetics and existence.
-
- 12 min
- Kinolab
- 1973
Simulated Humans and Virtual Realities
Simulacron is a virtual reality populated by 10,000 simulated humans who believe themselves to be sentient but are actually nothing more than programs. The identity units in Simulacron do not know or understand that they are artificial beings, and they behave under the assumption that they are real humans. “Real” humans can enter this virtual reality through a brain-computer interface and control the virtual identity units. Christopher Nobody, a suspect whom Fred is trying to track down, had the revelation that he was an identity unit, which led to a mental breakdown. In following this case, Fred meets Einstein, a virtual unit who desires to join the real world. As Einstein enacts the final stages of this plan, Fred discovers a shocking secret about his own identity. For a similar concept, see the narrative “Online Dating Algorithms” on the Hang the DJ episode of Black Mirror.
What purposes can virtual reality “laboratories” full of simulated humans serve in research fields such as sociology? Is it justifiable to make programs that believe themselves to be sentient humans, yet deny them access to the “real world”? How can the mental health of AI be safeguarded, especially during existential crises like the one Fred experiences?
-
- 7 min
- ZDNet
- 2020
Rebooting AI: Deep learning, meet knowledge graphs
Dr. Gary Marcus argues that deep machine learning as it currently exists is not maximizing AI’s potential to collect and process knowledge. Machine “brains,” he contends, should have more innate knowledge than they do, much as animal brains come equipped to process an environment. Ideally, this baseline knowledge would be used to collect and process information from “knowledge graphs”: semantic webs of information available on the internet that can be hard for an AI to process without translation into machine vocabularies such as RDF.
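The triple structure behind knowledge graphs can be sketched in a few lines. This is a hypothetical illustration, not code from the article; the example facts and the `objects` helper are invented for the sketch.

```python
# A knowledge graph stores facts as subject-predicate-object triples,
# the same basic shape RDF uses. These example facts are invented.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "located_in", "France"),
}

def objects(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("Paris", "capital_of"))  # {'France'}
```

An AI with this kind of queryable symbolic structure could, in principle, combine lookups like the one above with learned statistical representations, which is the hybrid direction Marcus advocates.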
Does giving a machine similar learning capabilities to humans and animals bring artificial intelligence closer to singularity? Should humans ultimately be in control of what a machine learns? What is problematic about leaving AI less capable of understanding semantic webs?
-
- 4 min
- VentureBeat
- 2020
Researchers Find that Even Fair Hiring Algorithms Can Be Biased
A study of the engine behind TaskRabbit, an app that uses an algorithm to recommend the best workers for a specific task, demonstrates that even algorithms designed to account for fairness and parity in representation can fail to deliver what they promise depending on context.
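One way to make the parity claim concrete is to compare the rate at which an algorithm recommends workers from each group. The groups, counts, and `selection_rates` helper below are hypothetical, invented purely to illustrate the kind of metric such a study examines.

```python
# Hypothetical demographic-parity check: the fraction of each group's
# candidates that the recommender surfaces. All numbers are invented.
def selection_rates(recommended, population):
    """Per-group fraction of candidates the algorithm recommended."""
    return {g: recommended.get(g, 0) / population[g] for g in population}

population  = {"group_a": 100, "group_b": 100}  # candidates per group
recommended = {"group_a": 30,  "group_b": 15}   # recommended per group

rates = selection_rates(recommended, population)
print(rates)  # {'group_a': 0.3, 'group_b': 0.15} -- far from parity
```

A system can pass such a check for one task category or city yet fail it for another, which is exactly the context dependence the study highlights.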
Can machine learning ever be deployed in a way that fully eliminates human bias? Is bias encoded into every trained machine learning program? What would the ideal circumstance look like when using digital technologies and machine learning to reach equitable representation in hiring?
-
- 7 min
- MIT Technology Review
- 2020
Tiny four-bit computers are now all you need to train AI
This article details an emerging approach in AI research: instead of using 16 bits to represent the numbers that train an algorithm, a logarithmic scale can reduce this to four bits, which is far more efficient in time and energy. This may allow machine learning algorithms to be trained on smartphones, enhancing user privacy. Beyond that efficiency, however, the approach may not change much in the AI landscape or push machine learning to new horizons.
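The core idea of a logarithmic rather than linear scale can be sketched as follows. This is a simplified illustration under assumed parameters (1 sign bit plus 3 exponent bits), not the actual scheme from the research the article covers.

```python
import math

# Logarithmic 4-bit quantization sketch: store only the nearest signed
# power of two, using 1 sign bit + 3 exponent bits (8 exponent levels).
# The exponent range is an assumption chosen for illustration.
EXPONENTS = range(-4, 4)  # 2^-4 ... 2^3

def quantize(x):
    """Snap x to the nearest signed power of two representable in 4 bits."""
    if x == 0:
        return 0.0
    sign = math.copysign(1.0, x)
    exp = min(EXPONENTS, key=lambda e: abs(abs(x) - 2.0 ** e))
    return sign * 2.0 ** exp

print(quantize(0.3))   # 0.25 (nearest representable power of two)
print(quantize(-5.0))  # -4.0
```

Because the representable levels cluster near zero, a logarithmic code preserves the many small values that dominate training gradients better than a linear 4-bit scale would.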
Does more efficiency mean more data would be wanted or needed? Would that be a good thing, a bad thing, or potentially both?