Singing in the [B]rain

Music has always wielded a disconcerting power over me. In times of overwhelming emotions, listening to a sad song or playing Chopin’s Nocturne in E-flat on the piano has propelled me into cathartic fits of sobbing. Songs with escalating intensity and complexity (such as San Fermin’s Parasites) have made me feel as if a wave of dopamine – the “rewarding” neurotransmitter – is flooding through my system. Attending a Philip Glass concert has made me “glimpse eternity”, somehow driving me into such a state of disorientation that I was convinced I would die in that concert hall. And more times than I can count, my attraction to people has skyrocketed upon hearing them sing or demonstrate almost any form of musical prowess, even if the attraction had not existed a moment before.

Why does music send me and so many others on such an emotional roller coaster? As a matter of fact, why is music so distinct from other forms of complex auditory stimuli? To quote Buddy the Elf, is singing “just like talking, only longer and louder”, or are the neural processes underpinning music and language distinct? If you’ve been wondering what on earth is going on inside your head when you are experiencing music and why you just can’t stop listening to it, read on!

[Image: basilar membrane]

This schematic depicts the basilar membrane deep inside the inner ear and its similarity to a xylophone. Like a xylophone's bars, it is narrower and stiffer at its base and longer and more flexible at its apex, which is why high-frequency sound waves displace the membrane near its base and low-frequency sounds displace it near its apex.
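This place-to-frequency "xylophone" map can even be written down. The Greenwood function is a standard empirical model (not discussed in the article itself) relating position along the human basilar membrane to the frequency that best displaces it. A minimal sketch, using the commonly cited human parameters:

```python
# The Greenwood function: an empirical model (assumed here, not from the
# article) of the human basilar membrane's tonotopic map.
#   f(x) = A * (10**(a * x) - k)
# where x runs from 0 (apex) to 1 (base), A ~ 165.4 Hz, a ~ 2.1, k ~ 0.88.

def greenwood_frequency(x):
    """Approximate best frequency (Hz) at fractional distance x from the apex."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> ~{greenwood_frequency(x):,.0f} Hz")
```

Reassuringly, the two ends of the membrane come out near ~20 Hz and ~20 kHz, the rough limits of human hearing.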

The Sound of Music

Pitch, loudness, timbre, rhythm: these are just a few of the dimensions that make up music [1]. Each component is processed in its own “compartment” of the brain, and when they come together, we perceive a cohesive piece of music.

Pitch, which is our perception of the frequency of sound (pressure) waves, is one of the best understood aspects of music and sound perception in general. Deep inside our ear, there is a membranous structure that is displaced when it is hit by sound waves. These sound waves are like a mallet and the membrane is like a xylophone (see the figure above); a sound at a particular frequency (pitch) displaces the membrane at a specific spot, and this begins a cascade of signaling from the ear to the brainstem and through the rest of the brain’s auditory pathway. The signal ultimately (in a fraction of a second) reaches the auditory cortex, which is similarly organized according to pitch. Thus, when a particular part of the auditory cortex becomes active, this is essentially your brain processing the pitch of a sound. Of course, what is unique to music versus other sounds is the importance of the relations between pitches; this is what allows us to recognize the Star Wars theme song regardless of whether it starts on a C or an E-flat, as long as the intervals between pitches are preserved (interestingly, animals do not seem to share this ability to recognize transposed melodies as the same [1]). This more cognitive aspect of music processing may take place in later stages of the auditory pathway, when information about individual pitches is combined in more frontal regions of the brain [1].
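The idea that a tune is defined by the intervals between its notes, rather than the notes themselves, can be made concrete in a few lines of code. This is an illustrative sketch, not anything from the cited studies; pitches are written as MIDI note numbers, where one step equals one semitone:

```python
# Transposition-invariant melody matching: compare interval sequences,
# not absolute pitches. (Illustrative example only.)

def intervals(melody):
    """Return the list of semitone steps between successive notes."""
    return [b - a for a, b in zip(melody, melody[1:])]

def same_melody(m1, m2):
    """Two note sequences count as 'the same tune' if their intervals match."""
    return intervals(m1) == intervals(m2)

# A simple motif in MIDI note numbers, starting on C4 (60)...
motif_in_c = [60, 60, 67, 67, 69, 69, 67]        # "Twinkle, Twinkle"
# ...and the same motif transposed up three semitones to start on E-flat.
motif_in_eflat = [n + 3 for n in motif_in_c]

print(same_melody(motif_in_c, motif_in_eflat))   # True: same tune, new key
print(same_melody(motif_in_c, [60, 62, 64, 65, 67, 69, 71]))  # False: a scale
```

Our brains do this effortlessly; by the evidence in [1], most other animals' brains apparently do not.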

[Image: music in the brain]

After sound comes through the ear (through the purple structure called the cochlea, which contains the membrane depicted above), it makes many stops through the brainstem and lands in the auditory cortex. From there, the signal continues to the frontal cortex and elsewhere for further, more complex processing.


But simply hearing the notes won’t give us the full music experience; if that were the case, songs like Drop It Like It’s Hot would have virtually no musical appeal. Music also enlists our perception of rhythm and, when it comes to popular music in particular, a “beat”. A lot of different brain structures govern our appreciation of Snoop Dogg and his catchy beats, such as the cerebellum, basal ganglia, and premotor and supplementary motor areas [1]. Interestingly, all of these brain regions are also involved in movement; perhaps this is why we sometimes find ourselves spontaneously tapping our feet or bopping our heads in time with the music [2]. One interesting study measured brain activity using functional magnetic resonance imaging (fMRI) while subjects listened to different rhythms, some that sounded like they aligned to a “beat” and some that did not. The basal ganglia and supplementary motor areas were particularly active in the perceived presence of a beat, even though the subjects were only listening to the rhythms [2]. This link between the auditory and motor systems as an integral part of music perception is especially interesting because it seems universal in humans (although a study of a cockatoo that dances to the beat of the Backstreet Boys’ Everybody suggests that this may not be entirely unique to humans [3]). Moreover, the late Oliver Sacks has written about the utility of this link for potentially “jumpstarting” the motor system, such as in Parkinson’s patients who undergo music therapy [4]. Even though pitch and rhythm are just two examples of components of music, they clearly illustrate how music in the brain is processed through a unique ensemble of systems.

Let the Music Do the Talking

You might still be unconvinced that the brain processes music any differently than it does language. After all, pitch can be important for understanding language, especially in tonal languages like Mandarin or Cantonese. Moreover, language certainly possesses some rhythm, even if it is less strict than in music. Although music and language almost certainly do share similar mechanisms, there are also distinct differences. Perhaps the most convincing evidence that they are not identical comes from cases of individuals with amusia, who have normal hearing and language but cannot perceive the harmonic relationships within music; in contrast, patients with aphasia have impaired language but their perception of music is left intact [5]. Moreover, language and music seem to be biased to opposite sides of the brain, with the former eliciting more activity in the left hemisphere and the latter in the right [1]. On the other hand, music and language possess similar syntax; for instance, unexpected words in sentences and unexpected chords in chord progressions elicit similar neural signatures as measured by EEG [6]. A conciliatory viewpoint proposed by Aniruddh Patel suggests that representations in language and music – say, of words and notes – are stored in distinct modules of the brain, and this is why one can be impaired while the other is spared. At the same time, there may be “processing regions”, such as in the frontal cortex, that are shared between language and music [5]. Thus, although music certainly has a lot in common with language neurologically speaking, it is still unique inside the brain.

I’ve Got the Music in Me

Neuropsychology studies of patients whose tastes for music drastically change when their brains change have given fascinating insight into music’s unique representation inside the brain. In his book Musicophilia, Oliver Sacks describes a patient named Tony Cicoria (see video below) who was struck by lightning and made a seemingly complete recovery, except that he was left with an extreme case of “musicophilia”. Although he previously had not been particularly musical, after the incident he developed an intense craving for music of all sorts, taught himself to play piano, and began to compose incessantly. Although it is not known what exactly the lightning strike did to his brain (bizarrely, he was completely unchanged except for his taste for music, and his MRI scans looked normal), Sacks speculates that the lightning may have set off brief seizures in his temporal lobes – the part of the brain near our “temples” that is especially important for hearing and language – that may have produced lasting functional changes not apparent in anatomical brain scans [4].

In Musicophilia, Sacks describes many other cases of individuals with temporal lobe seizures, damage, or degeneration, particularly to the left hemisphere of the brain, who develop various other manifestations of musical fervor. On the other hand, he also describes people who have lost appreciation for music following strokes or other brain damage, mostly to the right hemisphere. Case studies of patients like these have inspired the hypothesis that a sort of equilibrium exists between the temporal lobes of the left and right hemispheres, and that perturbations to the temporal lobes (as in Tony Cicoria and these other patients’ cases) can disrupt this equilibrium. Although the differences between the “left and right brains” are often overstated (see a discussion of the two hemispheres in a recent NeuWrite post), some differences do exist. For instance, as previously mentioned, language processing is usually biased to one “dominant” side over the other (which for most people is the left side), whereas music is typically biased to the non-dominant side (usually the right) [1]. Thus, the hypothesis states that the dominant hemisphere normally has a “foot on the brakes” of the non-dominant hemisphere, but when the dominant side is damaged, the brakes are partially released. In the cases of musicophilia described above, it is possible that damage to patients’ left hemispheres unleashed some of their non-dominant hemispheres’ repressed cravings for music [4]. Although this is a difficult hypothesis to test, it’s a nice thought that listening to music may be giving our non-dominant hemisphere a chance to take the wheel.

Please Don’t Stop the Music

Now that we have a basic understanding of how our brain processes music, where does the whole “roller coaster” experience come from? Why does listening to music we love give us the “chills”?

First, I’m sure we can all agree that music is profoundly emotional – you may have your favorite music that calms you down and helps you focus, your “pump-up jams” that get you in a party mood, or that one song that reminds you of a particular break-up. In fact, there has been considerable research on the neuroscience of music-evoked emotions in the last 20 or so years. In particular, the amygdala (an important emotional hub), hippocampus (particularly involved with memory but also implicated in social attachment), and ventral (lower) striatum, including the nucleus accumbens, have emerged as key players in the experience of emotions through music [7]. 

[Image: brain regions involved in music]

Some of the key brain regions involved in music. Emotional brain centers are shown in the right panel [1].

Still, the involvement of these emotional brain centers doesn’t explain our unique experience of “chills” during music that we love. To address this, Robert Zatorre and his team designed an experiment with two primary questions: a) do people really get a rush of dopamine – a neurotransmitter that acts as a signal between neurons and is especially associated with reward-related behavior – when they are listening to music they really like? and b) does this rush happen at the time of, or in anticipation of, the music? The researchers used a combination of two brain-scanning techniques – positron emission tomography (PET) to assess the amount of dopamine release in different brain regions, and fMRI to measure when changes in activity occurred in brain regions of interest – as well as various physiological measurements including skin conductance, heart rate, and patterns of autonomic nervous system activity that signify “chills”. Subjects listened to musical excerpts, some of which were from their favorite music and some of which were favorites of other subjects but were neutral to them (a clever control so that all subjects heard the same excerpts).

[Image: figure from Salimpoor et al.]

Figure from Salimpoor et al. (2011). Activity in the right nucleus accumbens peaked during subjects’ peak experience of music they liked, whereas activity in the right caudate peaked earlier, putatively in anticipation of subjects’ peak of pleasure [8].


Sure enough, the researchers found that more dopamine was released during songs rated as pleasurable than during ones rated as neutral, specifically in the regions deep within the brain often referred to as the striatal reward system for their role in pleasurable and reinforcing behaviors. Moreover, specific regions within that system were differently engaged as the music unfolded over time. At the time subjects experienced pleasurable chills, activity peaked in the right nucleus accumbens (which is implicated in music-evoked emotions, see above). Meanwhile, activity in the right caudate peaked during the ~15 seconds prior to the apex of subjects’ pleasurable experience, suggesting that this region is involved in the anticipation of pleasure while listening to music [8]. Perhaps that helps explain why Americans, on average, spend over 4 hours a day listening to music [9] – it’s truly rewarding, according to our brains.

So the next time your favorite song comes on the radio or you get the goosebumps at a concert, imagine the whole orchestra of processes in your head working together to give you your experience of music. You don’t have to be listening to a Mozart symphony to be exercising your brain; a little Taylor Swift will do just fine (and maybe even give you a bigger dopamine rush). Embrace your inner musicophiliac – you can blame it on your brain.

For further fun reading that is accessible to all audiences, check out Musicophilia by Oliver Sacks and This is Your Brain On Music by Daniel Levitin.

References:

  1. Levitin, DJ & Tirovolas, AK (2009). Current advances in the cognitive neuroscience of music. The Year in Cognitive Neuroscience 2009: Ann. N.Y. Acad. Sci., 1156, 211–231.
  2. Grahn, JA & Brett, M (2007). Rhythm and beat perception in motor areas of the brain. Journal of Cognitive Neuroscience, 19(5), 893–906.
  3. Patel, AD et al. (2009). Experimental evidence for synchronization to a musical beat in a nonhuman animal. Current Biology, 19(10), 827–830. http://dx.doi.org/10.1016/j.cub.2009.03.038
  4. Sacks, O (2007). Musicophilia: Tales of Music and the Brain. New York: Alfred A. Knopf, Inc.
  5. Patel, AD (2003). Language, music, syntax and the brain. Nature Neuroscience, DOI: 10.1038/nn1082.
  6. Patel, AD, Gibson, E, Ratner, J, Besson, M, & Holcomb, PJ (1998). Processing syntactic relations in language and music: an event-related potential study. Journal of Cognitive Neuroscience, 10(6), 717-33.
  7. Koelsch, S (2014). Brain correlates of music-evoked emotions. Nature Reviews Neuroscience, 15, 170–180.
  8. Salimpoor, VN et al. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14(2), 257-62.
  9. http://www.billboard.com/biz/articles/news/digital-and-mobile/6121619/how-and-how-much-america-listens-have-been-measured-for
