The Language of Arrival

Lots of sci-fi movies begin with strangely shaped vessels landing on Earth, but very few end with a (female!) linguist helping to save humanity by learning to speak the language of the vessels’ occupants.

As only an occasional viewer of science fiction movies, I was pleasantly surprised by Arrival’s scarcity of explosions and abundance of thoughtful moments, not to mention its treatment of the “aliens” as cognitively advanced beings and its beautiful rendering of their visual language. And as a grad student who studies the temporal dynamics of how the human brain processes language, I found that the movie left a striking impression.

After watching the film, I quickly laid hands on its inspiration, a novella by Ted Chiang called Story of Your Life. Both the book and the film involve the exploration of an alien communication system that is radically (pun intended) different from human language. To my eye, the novella dives deeper in a few places where the film is wanting, but both encourage us to reconsider our own stories by taking a different vantage point on our own lives.

A warning: SPOILERS AHEAD! There’s no way to talk about the most interesting parts of Arrival without spoiling this film, so I won’t even try.

Learning the language of the heptapods

After the aliens land in oblong spaceships across the world, the U.S. government recruits a linguist, Louise, to decipher their language. As a field linguist, Louise has experience working with under-studied languages, though nothing can compare to the complex visual communication system she is about to uncover.

The aliens, who have seven limbs, are dubbed heptapods. Over time, Louise discovers that the heptapods communicate using intricate written symbols, which she terms logograms, and that these symbols do not correspond to their spoken communication system.

These logograms are complex, containing many meaningful parts, and they are produced all at once. In the novella, the individual units of meaning within a logogram (Chiang calls them semagrams) can be arranged in all sorts of configurations, meaning they must be perceivable from any angle.

Meanwhile, as Louise begins to understand the language of the heptapods, she starts to experience memories that initially appear to be “flashbacks” from her life, mostly of Louise’s daughter at various ages. Eventually, we come to understand that Louise’s daughter will die at a young age, and that these memories are “flash-forwards,” not flashbacks. As Louise learns the heptapod language, she begins to “remember” the future; the way she thinks has apparently started to change based on the new language she is learning.

Incremental language processing

One of the most fascinating things about the heptapods’ communication system is that it’s non-linear. Their logograms are intended to be interpreted all at once – and they can be quite complicated. Chiang’s novella describes a logogram as being able to contain an arbitrary amount of information (words, sentences, paragraphs).

An implication of the film is that the heptapods are able to entertain all of the intricacies of one of these logograms at once. In the novella, heptapods have a series of eyes going all the way around their heads – they can see in 360 degrees at once, and don’t appear to distinguish between “forwards” and “backwards” as they have full perception in any direction.

In English, and in all known human languages, processing is thought to be incremental: it proceeds bit by bit as the signal (e.g., speech) unfolds. To study real-time language processing, researchers often track people’s eye movements while they look at a collection of objects or a scene and listen to spoken language. For example, imagine you are looking at a computer display of four objects: a pirate ship, a treasure chest, a cat, and some bones. Then you hear a sentence that begins: “The pirate …” At this point, people are more likely to look towards the two pirate-related items (the chest and the ship) than towards the other items. The sentence continues: “The pirate chases…” and now people most often look at the object the pirate is most likely to chase: the ship. Even though the word ship hasn’t been spoken yet, knowledge of pirates, of chasing, and of how typical sentences go can lead you to look at the most predictable referent [1].
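To make the incremental-prediction idea concrete, here is a toy Python sketch. It is not a model from [1]: the word-object association scores are invented, and a simple normalized score stands in for the probability of looking at each object.

```python
# Toy sketch of incremental prediction in a visual-world-style display.
# After each word, every on-screen object is re-scored by how well it fits
# the sentence so far; the normalized scores stand in for the probability
# of looking at each object. All association values below are made up.

DISPLAY = ["ship", "chest", "cat", "bones"]

# Hypothetical word-object association strengths.
ASSOCIATIONS = {
    "pirate": {"ship": 0.9, "chest": 0.8, "cat": 0.1, "bones": 0.2},
    "chases": {"ship": 0.9, "chest": 0.3, "cat": 0.4, "bones": 0.1},
}

def predicted_looks(words_so_far):
    """Combine each object's fit with every word heard so far, then normalize."""
    scores = {obj: 1.0 for obj in DISPLAY}
    for word in words_so_far:
        for obj in DISPLAY:
            scores[obj] *= ASSOCIATIONS[word][obj]
    total = sum(scores.values())
    return {obj: round(score / total, 2) for obj, score in scores.items()}

print(predicted_looks(["pirate"]))            # ship and chest lead
print(predicted_looks(["pirate", "chases"]))  # ship pulls ahead
```

After “pirate,” the ship and the chest lead the other objects; after “chases,” the ship pulls ahead, before the word ship is ever heard, mirroring the pattern in the looking data.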

Eye-tracking studies are just one way to measure how people understand language in real time. Another way is to record the ongoing electroencephalogram (EEG)—that is, electrical activity that can be recorded at the scalp—while people listen to spoken language or read words one at a time. Brain responses that are time-locked to words or other stimuli are called event-related potentials (ERPs).
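The computation at the heart of an ERP is simple averaging. Below is a minimal sketch using synthetic data; the sampling rate, stimulus timing, and epoch window are arbitrary choices for illustration.

```python
# Minimal ERP sketch: cut the continuous EEG into epochs time-locked to each
# stimulus onset, then average across epochs so activity unrelated to the
# stimulus tends to cancel out. All data here are synthetic.

import numpy as np

fs = 250                                      # sampling rate in Hz (assumed)
eeg = np.random.randn(60 * fs)                # one minute of fake single-channel EEG
word_onsets = np.arange(2 * fs, 55 * fs, fs)  # pretend one word per second

pre, post = int(0.1 * fs), int(0.8 * fs)      # 100 ms before to 800 ms after onset
epochs = np.stack([eeg[t - pre : t + post] for t in word_onsets])

erp = epochs.mean(axis=0)                     # the event-related potential
print(epochs.shape, erp.shape)                # (n_words, n_samples) -> (n_samples,)
```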

ERP studies have shown that brain activity reflects the relationship between the meaning of the ongoing context and the meaning of new input (e.g., a word) [2-3]. When people read or listen to language, they continuously update their model of what’s being said, and they are sensitive to the predictability of an upcoming word [2].
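Classic ERP studies like [2] quantified predictability with cloze norms: the proportion of people who continue a sentence fragment with a given word. The sketch below illustrates the same notion with a bigram model’s surprisal; this is my own simplification for illustration, not the method of those studies.

```python
# Toy illustration of word predictability as surprisal: -log2 P(word | context).
# ERP studies typically estimate predictability with human cloze norms; a
# bigram model over a tiny made-up corpus is used here just to show the idea.

import math
from collections import Counter

corpus = "the pirate chases the ship . the pirate buries the treasure .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    """Surprisal in bits of `word` given the preceding word."""
    p = bigrams[(prev, word)] / unigrams[prev]
    return -math.log2(max(p, 1e-6))  # floor avoids log(0) for unseen pairs

print(surprisal("the", "pirate"))  # expected continuation -> low surprisal
print(surprisal("the", "cat"))     # unexpected continuation -> high surprisal
```

In studies like [2], less predictable words elicit larger brain responses, which is one sign that comprehenders are predicting as they go.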

It seems safe to say that for humans, as soon as information is available, the language processor (i.e., your brain) begins to make use of it – but there is still a funnel that the information goes through: time. In Arrival, the heptapods have managed to do away with this constraint.

Language, thought, and the boundaries of human cognition

The beautiful implication of the heptapods’ language is that with whole sentences or paragraphs of language available for consideration at one time, heptapods might have a completely different way of thinking. With everything available at once, there is no forwards or backwards – directionality doesn’t exist. Though they still exist in time, time has a different meaning for them.

This is one of the points that the short story explores much more deeply. In Story of Your Life, Louise describes the heptapods’ language as existing without causality, or at least with a different kind of causality. Rather than one event causing the next (even a human’s decision to speak a particular word causing another event), entire sequences of events are perceived as a larger whole, with the goal of an entire life being to satisfy some greater purpose.

The specific words we use help us to break up our thoughts into communicable chunks, and in some cases they help us break up the world into categories. Human languages can do this in different ways – for example, some languages have as few as five color words [4] while a language like English uses many more words to break up the spectrum of colors.

Many language researchers have considered how the language people speak may relate to the way they think. But most studies comparing different languages show only small effects, if any, of language on human cognition and behavior. Sticking with the color example, Russian speakers label different types of blue in different ways: lighter blues are ‘goluboy’ and darker blues are ‘siniy’. One study found that Russian speakers were faster to discriminate between two shades from different categories (one ‘goluboy’ and one ‘siniy’) than between two shades within a category (two ‘siniy’ or two ‘goluboy’ colors) [5]. English speakers did not show this difference, presumably because all shades of blue are ‘blue’ in English (tautologically enough). [For more on Arrival and the relationship between language and thought, check out this great post by another linguistically-inclined NeuWriter.]
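The logic of that comparison is easy to sketch: average reaction times for cross-category pairs against within-category pairs. The trial data below are fabricated purely to show the shape of the analysis; see [5] for the real design.

```python
# Schematic of the Russian-blues comparison: mean reaction time for
# cross-category trials (one 'goluboy' and one 'siniy' shade) versus
# within-category trials (two shades sharing a label). Fabricated data.

from statistics import mean

# (category of shade 1, category of shade 2, reaction time in ms)
trials = [
    ("goluboy", "siniy",   510), ("siniy",   "goluboy", 525),
    ("goluboy", "goluboy", 580), ("siniy",   "siniy",   570),
    ("goluboy", "siniy",   500), ("goluboy", "goluboy", 595),
]

cross  = [rt for a, b, rt in trials if a != b]
within = [rt for a, b, rt in trials if a == b]

# For Russian speakers, cross-category pairs were discriminated faster;
# English speakers showed no such advantage.
print(f"cross-category:  {mean(cross):.0f} ms")
print(f"within-category: {mean(within):.0f} ms")
```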

Though languages can differ in vocabulary, grammar, and other properties, one thing that remains the same across all human languages, whether they are spoken out loud, written down in symbols, or signed, is that they are sequential. Let’s imagine that we really could teach humans to communicate using the heptapods’ holistic logograms. Could it really change how we think?

Looking forwards (and backwards)

It seems preposterous to claim that using a different linguistic system could allow us to know the future. There’s too much uncertainty in how others (let alone how we ourselves) might act. But as the research described above suggests, prediction on the scale of a few hundred milliseconds, or even a few seconds, isn’t laughable at all. And indeed, our brains must plan out our physical actions (speaking, grasping an object) prior to the actions themselves.

Imagine that instead of the scale of milliseconds, the resolution of our perception of time expanded to the scale of a few decades. Or as Louise from Story of Your Life puts it:

“Occasionally… I experience past and future all at once; my consciousness becomes a half-century-long ember burning outside time. I perceive—during those glimpses—that entire epoch as a simultaneity.”

In Story of Your Life, the idea not of predicting, but of purposefully enacting, one’s life events evokes determinism (perhaps pessimistically) but also mindfulness (perhaps more optimistically). And the tension between the two world views, one of direction and causality and the other of a larger-picture harmony, is hard to resolve.

As we head into the new year, I’ll opt for the optimistic lens. Perhaps another reminder from this film is that each individual plays only a small role in the cosmos, but we are all a part of a large story. Though we can’t know the future, we can all take the time to try and imagine life through the lens of another person—or heptapod.

References

1. Borovsky, A., Elman, J. L., & Fernald, A. (2012). Knowing a lot for one’s age: Vocabulary skill and not age is associated with anticipatory incremental sentence interpretation in children and adults. Journal of Experimental Child Psychology, 112, 417-436.

2. Kutas, M., & Hillyard, S. A. (1984). Brain potentials during reading reflect word expectancy and semantic association. Nature, 307, 161-163.

3. Metusalem, R., Kutas, M., Urbach, T.P., Hare, M., McRae, K., & Elman, J.L. (2012). Generalized event knowledge activation during online sentence comprehension. Journal of Memory and Language, 66(4), 545–567.

4. Roberson, D., Davies, I., & Davidoff, J. (2000). Color categories are not universal: Replications and new evidence from a stone-age culture. Journal of Experimental Psychology: General, 129(3), 369-398.

5. Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. PNAS, 104(19), 7780-7785.

Featured image credit: Wired.com