
The Shape of Things to Come
Experts overwhelmingly agree: robots will soon overthrow humanity. The only question is whether they will merely enslave us [1] or entirely destroy the human race [2,3,4]. To better face our coming downfall, it is important to understand how machines will gain the power to destroy us. They have already bested us at chess [5] and game shows [6], and will soon surpass our ability to flip burgers [7].
But there is a large divide between the perfectly charbroiled hamburger and the nuclear annihilation of the human race. How will the robots cross this divide? Only if humans are stupid enough to create robots with the abilities and desires of a human being. And no one would be that stupid…except, of course, for scientists.
Self-Assembling Robots

Figure 1. A) One of the four cubes that make up a robot. The inner machinery allows each cube to swivel, providing the robot’s range of movement. B) The robot can assume a number of positions using each cube’s ability to swivel. C) The process by which a new robot is constructed: the first robot stacks the cubes of the second robot until, where once there was one, there are now two.
Robots are not a threat as long as we control the means of their production. One of biological life’s greatest tricks is the ability to procreate, providing a plentiful supply of organic beings to fight in the upcoming robot wars. As long as the robot population is kept to a reasonable size, we should win through sheer numbers.
Unfortunately for humanity, Benedict Arnold scientists have already created self-assembling robots [8]. These robots consist of simple cubes that function similarly to the cells of the body. Each cube contains identical machinery and the complete program to assemble the robot. The cubes use electromagnets to attach to and detach from other cubes as the situation requires. A complete robot consists of four cubes, and each cube can swivel (Figure 1A), providing the ability to move and to bend over to pick up additional cubes (Figure 1B). When the robot encounters single cubes, it stacks them on top of each other to create an exact replica of itself (Figure 1C). Both of these robots are then capable of assembling more replicas of themselves. The entire process, in which a 4-cube robot finds four individual cubes and constructs an identical 4-cube robot, takes 2.5 minutes.
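One replica every 2.5 minutes means the robot population doubles on a fixed schedule. A toy calculation (assuming an unlimited supply of spare cubes, which the real experiment did not have, so this is purely illustrative) shows how fast that compounds:

```python
# Toy model of self-replication: every 4-cube robot builds one replica
# per 2.5-minute cycle, so the population doubles each cycle.
# Assumes unlimited spare cubes -- an assumption, not the experiment.
REPLICATION_MINUTES = 2.5

def robot_population(minutes: float, start: int = 1) -> int:
    """Number of robots after `minutes` of uninterrupted replication."""
    cycles = int(minutes // REPLICATION_MINUTES)
    return start * 2 ** cycles

if __name__ == "__main__":
    for t in (0, 2.5, 25, 60):
        # One hour of doubling already yields millions of robots.
        print(f"after {t:>4} min: {robot_population(t)} robots")
```

Under this (admittedly generous) assumption, a single robot becomes over a thousand in 25 minutes and over sixteen million within an hour.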
Gaze upon these self-assembling cubes, and shudder.
Robots with Emotion

Figure 2. An example of a robot learning emotions. On the left are the 8 possible emotions the robot can feel. The colored bars correspond to the values associated with the parts of the dimensions that the robot extracts from its visual field, shown on the right. At the start (leftmost graph), values are randomly assigned to each part of each dimension, but as the robot receives feedback from a human, it updates its beliefs about how it is supposed to feel given the values it extracts from the visual environment.
Alright, even if robots can replicate, we still have something they do not: the resolve, the determination, the blood, sweat, and tears of our emotional character. It is not in humans to be defeated; from a population that may have dwindled to 10,000 at one point in history [9], we now number over 7,000,000,000. The indomitable human spirit will not be overcome.
But some Judas Iscariot scientists are right now hard at work giving robots the ability to experience emotions from visual imagery [10]. As a starting point, the scientists chose three dimensions of visual experience theorized to affect a human’s moment-to-moment emotional experience: color, face-like patterns, and fractal patterns. Each dimension was broken into several parts (color, for example, into hue, saturation, and intensity). The robot scans the environment, obtains a value for each part of each dimension, and enters these values into an equation whose different numerical outcomes correspond to different emotions (Figure 2). It then vocalizes the resulting feeling to a human researcher (‘I feel surprised and a little angry’), who responds ‘yes’ or ‘no’, and the robot updates its beliefs about how it is supposed to feel in different situations.
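The loop above — extract feature values, score each emotion, vocalize, update on yes/no feedback — can be sketched schematically. The feature names, the weighted-sum “equation”, and the simple additive update below are my assumptions for illustration; the paper’s actual model differs in its details:

```python
import random

# Schematic sketch of the emotion-learning loop, NOT the paper's model.
# Eight candidate emotions; five illustrative visual-feature parts.
EMOTIONS = ["joy", "surprise", "anger", "fear",
            "disgust", "sadness", "anticipation", "trust"]
FEATURES = ["hue", "saturation", "intensity", "face_like", "fractal"]

# One weight per (feature, emotion), randomly initialized as in Figure 2.
weights = {(f, e): random.random() for f in FEATURES for e in EMOTIONS}

def felt_emotion(scene: dict) -> str:
    """Score each emotion as a weighted sum of the scene's feature
    values; the robot 'feels' the highest-scoring emotion."""
    scores = {e: sum(weights[(f, e)] * scene[f] for f in FEATURES)
              for e in EMOTIONS}
    return max(scores, key=scores.get)

def feedback(scene: dict, emotion: str, correct: bool, lr: float = 0.1):
    """Human says 'yes' or 'no': nudge the vocalized emotion's weights
    toward (yes) or away from (no) the current scene's features."""
    sign = 1.0 if correct else -1.0
    for f in FEATURES:
        weights[(f, emotion)] += sign * lr * scene[f]
```

After enough rounds of feedback, scenes with similar feature values come to reliably evoke the emotion the human approved — which is the gist of the learning curve in Figure 2.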
Teaching a robot proper emotion? Might as well just push the nuclear button right now.
Robots with Neural Tissue

Figure 3. The brain (consisting of rat neural tissue on a multielectrode array) is held in the scientist’s hand while the body (a Miabot robot) waits below.
The situation is looking dire. Self-assembling, emotional robots threaten our very existence. But we still have our ace card. We still have the creative edge associated with the wondrous lump of 100,000,000,000 cells we call a brain. Surely our ingenuity and imagination will allow us to triumph over the robot hordes.
Unless some Marcus Junius Brutus the Younger scientists are hard at work creating hybrid neural/machine interfaces [11]. The hybrid consists of rat neural tissue communicating directly with a mobile robotic platform. To create the neural tissue, the cortex is removed from a rat fetus; individual neurons are disconnected from each other using enzymes, placed on a multielectrode array, and bathed in a nutrient-rich medium. Over about a week, these neurons reconnect, creating a densely interconnected network. The multielectrode array can measure the neuronal activity (receiving output signals from the brain) and can stimulate the neurons in turn (sending input signals into the brain). By attaching this output/input system to a mobile robotic platform, an intimate connection is created between a machine and a brain of roughly 100,000 neurons.
When the hybrid is placed inside a corral, it can drive around without bumping into the walls (Figure 3). The robot is outfitted with an ultrasonic sensor that sends an electrical signal to the brain when it detects an approaching wall. When the brain receives this wall signal, it responds with electrical activity that is relayed back to the robot within 100 ms. The robot interprets the brain’s electrical activity as a turn signal and turns to avoid the wall. Interestingly, the brain exhibits basic learning through repetition. Early in its life, the brain would not always send turn signals reliably when it received a wall signal, or would send turn signals when no wall signal had been received. But over time, the hybrid improved its behavior, avoiding the wall more effectively. The robot improved its performance over time…do you hear the same ominous music I do?
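The sense → stimulate → respond loop can be sketched with the cultured brain stubbed out as an unreliable function, mimicking the early, pre-learning behavior described above. The threshold, reliability parameter, and function names here are all illustrative assumptions, not the experiment’s actual values:

```python
import random

# Sketch of the hybrid's control loop. `cultured_brain` is a stand-in
# stub for the multielectrode recording, not real neural data.
def cultured_brain(wall_signal: bool, reliability: float) -> bool:
    """Return True for a 'turn' burst. With probability (1 - reliability)
    the culture misfires -- a spurious burst, or silence when a wall
    signal arrived -- as the early recordings did."""
    if random.random() < reliability:
        return wall_signal           # correct: turn iff a wall was sensed
    return not wall_signal           # unreliable response

def drive_step(distance_cm: float, reliability: float = 0.9,
               threshold_cm: float = 30.0) -> str:
    """One ~100 ms cycle: sense, stimulate the culture, act on its reply."""
    wall_signal = distance_cm < threshold_cm      # ultrasonic sensor fires
    turn = cultured_brain(wall_signal, reliability)
    return "turn" if turn else "forward"
```

With `reliability` near 1.0 (the post-learning regime), the robot turns exactly when a wall is close; lowering it reproduces the early wall-bumping behavior.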
I hesitate to point you toward more soul-crushing evidence of our coming ruin, but in case you desire more reason to give up the will to fight, here is a jumping-off point to some truly terrifying reading.
Foolish scientists! You have doomed us all.
1. The Matrix
2. I Have No Mouth, and I Must Scream
5. Deep Blue
6. Watson
8. Zykov, V., Mytilinaios, E., Adams, B., & Lipson, H. (2005). Self-reproducing machines. Nature, 435(7038), 163-164.
9. Dawkins, Richard (2004). “The Grasshopper’s Tale”. The Ancestor’s Tale: A Pilgrimage to the Dawn of Life. Boston: Houghton Mifflin Company. p. 416. ISBN 0-297-82503-8.
10. Wong, A. S., Hong, K., Nicklin, S., Chalup, S. K., & Walla, P. (2013). Robot emotions generated and modulated by visual features of the environment. In IEEE Symposium on Computational Intelligence for Creativity and Affective Computing.
11. Warwick, K. (2010). Implications and consequences of robots with biological brains. Ethics and Information Technology, 12(3), 223-234. doi:10.1007/s10676-010-9218-6
Reblogged this on Whats in a brain? and commented:
I’ve recently joined this great group of neuroscience grad students from NeuWrite San Diego. This week, in honor of Halloween, there will be a (semi) related post every day. I especially like this one about the possibility of robots overthrowing humanity.
That last one is really cool, but I’m glad to see the paper has a strong discussion of the ethical issues involved if we continue making robots along these lines that exhibit more complex behaviors. What I’m most curious about, though, is why on earth the robot cares to learn. What does the neural culture gain from not bumping into walls? What’s the reward, or the punishment?
I am no expert on neuronal learning at this level, but the authors speculate that the initial learning they witness is based on ‘habitual process’. I assume they mean that these responding neurons were active in response to input from the beginning, so there were some innate connections that predisposed the neurons to respond. Through continued electrical input, these predispositions were strengthened and became more regular (Hebbian learning, which they bring up, definitely touches on this sort of relationship). So there is no real reward/punishment for the culture as a whole in the same way that a human would experience a reward. Neurons are simply set up to work in certain ways, and this system appears to tap into that neuronal machinery to accomplish a goal, although a goal that the culture is likely unaware of and doesn’t care about. If I think too deeply about that, I’m sure I would end up with some very depressing thoughts about human behavior and determinism.
They speculate that some form of reinforcement learning (which you bring up) would be a more effective teaching method, but write ‘One major problem with this is deciding what exactly the culture regards as a reward and what as a punishment’. I like that they state such a deep question (which I think has deep implications for humans) in one simple sentence and then move briskly along to the next section.
Happy Halloween!
Robots will continue to improve and will replace some human jobs, but there are some very deep and fundamental reasons it’s very hard to mimic human intelligence. We can actually learn a lot from what is hard for AI about human cognition. Not entirely the same kind of work, but our lab here at UCSD uses neuroscience to help improve design of robots, and robots as an opportunity to study perception and social cognition.
Here’s a CNN story, “Why zombies, robots, clowns freak us out”: http://www.cnn.com/2012/07/11/health/uncanny-valley-robots/ . There is also a neuroimaging paper in which Saygin et al. cite an H.P. Lovecraft story. Happy Halloween!!