The Shape of Things to Come
Experts overwhelmingly agree: robots will soon overthrow humanity. The only question is whether they will merely enslave us [1] or destroy the human race entirely [2,3,4]. To better face our coming downfall, it is important to understand how machines will gain the power to destroy us. They have already bested us at chess [5] and game shows [6], and will soon surpass our ability to flip burgers [7].
But there is a large divide between the perfectly charbroiled hamburger and the nuclear annihilation of the human race. How do the robots cross it? Only if humans are stupid enough to create robots with the abilities and desires of a human being. And no one would be that stupid…except, of course, for scientists.
Robots are not a threat as long as we control the means of their production. One of biological life’s greatest tricks is the ability to procreate, providing a plentiful supply of organic beings to fight in the upcoming robot wars. As long as the robot population is kept to a reasonable size, we should win through sheer numbers.
Unfortunately for humanity, Benedict Arnold scientists have already created self-assembling robots [8]. These robots consist of simple cubes that function similarly to the cells of the body. Each cube contains identical machinery and the complete program to assemble the robot. The cubes use electromagnets to interact with other cubes, attaching or detaching as the situation requires. A complete robot consists of four cubes, and each cube can swivel (Figure 1A), giving the robot the ability to move and to bend over and pick up additional cubes (Figure 1B). When the robot encounters single cubes, it stacks them on top of each other to create an exact replica of itself (Figure 1C). Both robots are then capable of assembling further replicas of themselves. The entire process of a 4-cube robot finding 4 individual cubes and constructing an identical 4-cube robot takes 2.5 minutes.
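The arithmetic of that replication loop is what should worry you: each finished replica immediately starts building replicas of its own. Here is a toy simulation of the process; the function name and the cube-supply numbers are illustrative assumptions, while the 4-cube robot size comes from the article.

```python
ROBOT_SIZE = 4       # cubes per complete robot (from the article)
CYCLE_MINUTES = 2.5  # time for one robot to assemble one replica (from the article)

def replicate(robots: int, loose_cubes: int, cycles: int) -> tuple[int, int]:
    """Each 2.5-minute cycle, every robot that can find 4 loose cubes
    stacks them into a new robot. Returns (robots, cubes remaining)."""
    for _ in range(cycles):
        buildable = min(robots, loose_cubes // ROBOT_SIZE)
        robots += buildable
        loose_cubes -= buildable * ROBOT_SIZE
    return robots, loose_cubes

# One robot and a pile of 60 spare cubes, ten minutes later:
robots, cubes = replicate(robots=1, loose_cubes=60, cycles=4)
```

The population doubles every cycle until the cube supply runs out: 1, 2, 4, 8, 16 robots in ten minutes.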
Gaze upon these self-assembling cubes, and shudder.
Robots with Emotion
Alright, even if robots can replicate, we still have something they do not. We have the resolve, the determination, the blood, sweat, and tears of our emotional character. It is not in humans to be defeated: from a population that may have dwindled to 10,000 at one point in history [9], we now number over 7,000,000,000. The indomitable human spirit will not be overcome.
But some Judas Iscariot scientists are right now hard at work giving robots the ability to experience emotions from visual imagery [10]. As a starting point, the scientists chose three dimensions of visual experience theorized to affect a human’s moment-to-moment emotional experience: color, face-like patterns, and fractal patterns. Each dimension was broken into several parts (for example, color was broken into hue, saturation, and intensity). The robot scans the environment, obtains a value for each part of each dimension, and enters these values into an equation, with different emotions associated with different numerical outcomes (Figure 2). It then vocalizes this feeling to a human researcher (‘I feel surprised and a little angry’), who responds ‘yes’ or ‘no’, and the robot updates its beliefs about how it is supposed to feel in different situations.
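The scan-score-vocalize-update loop described above is, at heart, a simple online learner. Here is a minimal sketch under stated assumptions: the feature names, the weighted-sum scoring, and the yes/no update rule are all stand-ins of mine, not the paper’s actual equation or emotion set.

```python
# Assumed visual features (the article names hue, saturation, and intensity;
# the face-like and fractal features are my labels) and an assumed emotion set.
FEATURES = ["hue", "saturation", "intensity", "face_likeness", "fractal_dim"]
EMOTIONS = ["happy", "surprised", "angry", "calm"]

# One weight per (emotion, feature); the robot starts knowing nothing.
weights = {e: {f: 0.0 for f in FEATURES} for e in EMOTIONS}

def feel(scene: dict) -> str:
    """Score each emotion as a weighted sum of the scene's feature
    values and report the highest-scoring one."""
    scores = {e: sum(weights[e][f] * scene[f] for f in FEATURES)
              for e in EMOTIONS}
    return max(scores, key=scores.get)

def feedback(scene: dict, felt: str, approved: bool, lr: float = 0.1) -> None:
    """The researcher answers 'yes' or 'no'; nudge the vocalized
    emotion's weights toward or away from this scene accordingly."""
    sign = 1.0 if approved else -1.0
    for f in FEATURES:
        weights[felt][f] += sign * lr * scene[f]
```

Each ‘no’ from the researcher makes the robot less likely to report that emotion for similar scenes; each ‘yes’ makes it more likely. That is the whole trick.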
Teaching a robot proper emotion? Might as well just push the nuclear button right now.
Robots with Neural Tissue
The situation is looking dire. Self-assembling, emotional robots threaten our very existence. But we still have our ace card. We still have the creative edge associated with the wondrous lump of 100,000,000,000 cells we call a brain. Surely our ingenuity and imagination will allow us to triumph over the robot hordes.
Unless some Marcus Junius Brutus the Younger scientists are hard at work creating hybrid neural/machine interfaces [11]. This hybrid consists of rat neural tissue communicating directly with a mobile robotic platform. For the neural tissue, the neural cortex is removed from the fetus of a rat; individual neurons are disconnected from each other using enzymes, then placed on a multielectrode array and bathed in a nutrient-rich medium. Over time (about a week), these neurons reconnect, creating a densely interconnected network. The multielectrode array can measure the neuronal activity (receiving output signals from the brain) and can stimulate the neurons in turn (sending input signals into the brain). By attaching this output/input system to a mobile robotic platform, an intimate connection is created between a machine and a brain of ~100,000 neurons.
When the hybrid is placed inside a corral, it can drive around without bumping into the walls (Figure 3). The robot is outfitted with an ultrasonic sensor that sends an electrical signal to the brain when it detects an approaching wall. When the brain receives this wall signal, it responds with electrical activity that is relayed back to the robot within 100 ms. The robot interprets the brain’s electrical activity as a turn signal and turns to avoid the wall. Interestingly, the brain exhibits basic learning through repetition. Early in its life, the brain would not always send turn signals reliably when receiving a wall signal, or would send turn signals when no wall signal had been received. But over time, the robot improved its behavior, avoiding the wall more effectively. The robot improved its performance over time…do you hear the same ominous music I do?
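The sense-stimulate-act loop above can be sketched in a few lines. To be clear about what is assumed: the distance threshold, the `reliability` parameter standing in for the cultured network’s trained-up consistency, and all function names are mine; the real system stimulates ~100,000 living neurons through a multielectrode array, not a one-line stand-in.

```python
import random

WALL_THRESHOLD_CM = 30  # assumed distance at which the sonar emits a wall signal

def sonar_detects_wall(distance_cm: float) -> bool:
    """Ultrasonic sensor: emit a wall signal when a wall is close."""
    return distance_cm < WALL_THRESHOLD_CM

def brain_response(wall_signal: bool, reliability: float) -> bool:
    """Stand-in for the cultured network: it answers the wall signal with
    a turn signal only part of the time, improving as `reliability`
    rises with training."""
    return wall_signal and random.random() < reliability

def drive(distances_cm: list[float], reliability: float) -> int:
    """Run the loop over a sequence of sonar readings and count how many
    close-wall encounters the robot successfully turned away from."""
    avoided = 0
    for d in distances_cm:
        if sonar_detects_wall(d) and brain_response(True, reliability):
            avoided += 1  # turn signal relayed within ~100 ms; robot turns
    return avoided
```

The learning the article describes amounts to `reliability` climbing with repetition: a young culture misses wall signals (or fires spuriously), while a trained one answers nearly every time.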
I hesitate to point you towards more soul-crushing evidence of our coming ruin, but in case you desire more reason to give up the will to fight, here is a jumping-off point to some truly terrifying reading.
Foolish scientists! You have doomed us all.
1. The Matrix
5. Deep Blue
8. Zykov, V., Mytilinaios, E., Adams, B., & Lipson, H. (2005). Self-reproducing machines. Nature, 435(7038), 163-164.
9. Dawkins, Richard (2004). “The Grasshopper’s Tale”. The Ancestor’s Tale: A Pilgrimage to the Dawn of Life. Boston: Houghton Mifflin Company. p. 416. ISBN 0-297-82503-8.
10. Wong, A. S., Hong, K., Nicklin, S., Chalup, S. K., & Walla, P. (2013). Robot emotions generated and modulated by visual features of the environment. In IEEE Symposium on Computational Intelligence for Creativity and Affective Computing.
11. Warwick, K. (2010). Implications and consequences of robots with biological brains. Ethics and Information Technology, 12(3), 223-234. doi:10.1007/s10676-010-9218-6