QUANTA

Tuesday, April 19, 2011


To Tug Hearts, Music First Must Tickle the Neurons

By PAM BELLUCK

The other day, Paul Simon was rehearsing a favorite song: his own “Darling Lorraine,” about a love that starts hot but turns very cold. He found himself thinking about a three-note rhythmic pattern near the end, where Lorraine (spoiler alert) gets sick and dies.

“The song has that triplet going on underneath that pushes it along, and at a certain point I wanted it to stop because the story suddenly turns very serious,” Mr. Simon said in an interview.

“The stopping of sounds and rhythms,” he added, “it’s really important, because, you know, how can I miss you unless you’re gone? If you just keep the thing going like a loop, eventually it loses its power.”

An insight like this may seem purely subjective, far removed from anything a scientist could measure. But now some scientists are aiming to do just that, trying to understand and quantify what makes music expressive — what specific aspects make one version of, say, a Beethoven sonata convey more emotion than another.

The results are contributing to a greater understanding of how the brain works and of the importance of music in human development, communication and cognition, and even as a potential therapeutic tool.

Research is showing, for example, that our brains understand music not only as emotional diversion, but also as a form of motion and activity. The same areas of the brain that activate when we swing a golf club or sign our name also engage when we hear expressive moments in music. Brain regions associated with empathy are activated, too, even for listeners who are not musicians.

And what really communicates emotion may not be melody or rhythm, but moments when musicians make subtle changes to those musical patterns.

Daniel J. Levitin, director of the laboratory for music perception, cognition and expertise at McGill University in Montreal, began puzzling over musical expression in 2002, after hearing a live performance of one of his favorite pieces, Mozart’s Piano Concerto No. 27.

“It just left me flat,” Dr. Levitin, who wrote the best seller “This Is Your Brain on Music” (Dutton, 2006), recalled in a video describing the project. “I thought, well, how can that be? It’s got this beautiful set of notes. The composer wrote this beautiful piece. What is the pianist doing to mess this up?”

Before entering academia, Dr. Levitin worked in the recording industry, producing, engineering or consulting for Steely Dan, Blue Öyster Cult, the Grateful Dead, Santana, Eric Clapton and Stevie Wonder. He has played tenor saxophone with Mel Tormé and Sting, and guitar with David Byrne. (He also performs around campus with a group called Diminished Faculties.)

After the Mozart mishap, Dr. Levitin and a graduate student, Anjali Bhatara, decided to try teasing apart some elements of musical expression in a rigorous scientific way.

He likened it to tasting two different pots de crème: “One has allspice and ginger and the other has vanilla. You know they taste different but you can’t isolate the ingredient.”

To decipher the contribution of different musical flavorings, they had Thomas Plaunt, chairman of McGill’s piano department, perform snatches of several Chopin nocturnes on a Disklavier, a piano with sensors under each key that recorded how long he held each note and how hard he struck it (a measure of how loud each note sounded). The note-by-note data was useful because musicians rarely perform exactly the way the music is written on the page — rather, they add interpretation and personality to a piece by lingering on some notes and quickly releasing others, playing some louder, others softer.

The pianist’s recording became a blueprint, what researchers considered to be the 100 percent musical rendition. Then they started tinkering. A computer calculated the average loudness and length of each note Professor Plaunt played. The researchers created a version using those average values so that the music sounded homogeneous and evenly paced, with every eighth note held for an identical amount of time, each quarter note precisely double the length of an eighth note.

They created other versions too: a 50 percent version, with note lengths and volume halfway between the mechanical average and the original, and versions at 25 percent, 75 percent, and even 125 percent and 150 percent, in which the pianist’s loud notes were even louder, his longest-held notes even longer.
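A rough sense of how such versions can be constructed, sketched here in Python: each note’s measured duration and key velocity is blended linearly between the mechanical average and the pianist’s original values, with blend levels above 1.0 exaggerating his deviations. The Note structure and the simple linear blend are illustrative assumptions, not the lab’s actual code.

from dataclasses import dataclass

@dataclass
class Note:
    duration: float   # seconds the key was held
    velocity: float   # how hard the key was struck (proxy for loudness)

def average_performance(notes):
    # The "mechanical" version: every note gets the mean duration and velocity.
    n = len(notes)
    mean_dur = sum(nt.duration for nt in notes) / n
    mean_vel = sum(nt.velocity for nt in notes) / n
    return [Note(mean_dur, mean_vel) for _ in notes]

def blend(original, level):
    # level = 0.0 gives the averaged version, 1.0 the original performance,
    # and values above 1.0 exaggerate the pianist's deviations (125%, 150%).
    flat = average_performance(original)
    return [Note(f.duration + level * (o.duration - f.duration),
                 f.velocity + level * (o.velocity - f.velocity))
            for o, f in zip(original, flat)]

# Example: a 50 percent version, halfway between mechanical and expressive.
performance = [Note(0.42, 64), Note(0.38, 72), Note(0.55, 80)]
half_expressive = blend(performance, 0.5)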

Study subjects listened to them in random order, rating how emotional each sounded. Musicians and nonmusicians alike found the original pianist’s performance most emotional and the averaged version least emotional.

But it was not just changes in volume and timing that moved them. Versions with even more variation than the original, at 125 percent and 150 percent, did not strike listeners as more emotional.

“I think it means that the pianist is very experienced in using these expressive cues,” said Dr. Bhatara, now a postdoctoral researcher at the Université Paris Descartes. “He’s using them at kind of an optimal level.”

And random versions with volume and note-length changes arbitrarily sprinkled throughout made almost no impression.

All of this makes perfect sense to Paul Simon.

“I find it fascinating that people recognize what the point of the original version is, that that’s their peak,” he said. “People like to feel the human element, but if it becomes excessive then I guess they edit it back. It’s gilding the lily, it’s too Rococo.”

The Element of Surprise

Say the cellist Yo-Yo Ma is playing a 12-minute sonata featuring a four-note melody that recurs several times. On the final repetition, the melody expands, to six notes.

“If I set it up right,” Mr. Ma said in an interview, “that is when the sun comes out. It’s like you’ve been under a cloud, and then you are looking once again at the vista and then the light is shining on the whole valley.”

But that happens, he said, only if he is restrained enough to save some exuberance and emphasis for that moment, so that by the time listeners see that musical sun they have not already “been to a disco and its light show” and been “blinded by cars driving at night with the headlights in your eyes.”

Dr. Levitin’s results suggest that the more surprising the moments in a piece, the more emotion listeners perceive, provided those moments seem logical in context.

“It’s deviation from a pattern,” Mr. Ma said. “A surprise is only a surprise when you know it departs from something.”

He cited Schubert’s E-Flat Trio for piano, violin and cello as an example: a “march theme that’s in minor and it breaks out into major, and it’s one of those goose-bump moments.”

The departure “could be something incredibly slight that means something huge, or it could be very large but that’s actually a fake-out,” Mr. Ma said.

The singer Bobby McFerrin, who visited Dr. Levitin’s lab and walked through several experiments, said in a video of that visit that “one of the things that I have found valuable to me in a performance, whether I’m performing or someone else is, is a certain element of naïveté,” as if “as we’re performing we’re still discovering the music.”

In an interview, the singer Rosanne Cash said the experiments showed that beautiful compositions and technically skilled performers could do only so much. Emotion in music depends on human shading and imperfections, “bending notes in a certain way,” Ms. Cash said, “holding a note a little longer.”

She said she learned from her father, Johnny Cash, “that your style is a function of your limitations, more so than a function of your skills.”

“You’ve heard plenty of great, great singers that leave you cold,” she said. “They can do gymnastics, amazing things. If you have limitations as a singer, maybe you’re forced to find nuance in a way you don’t have to if you have a four-octave range.”

The Musical Brain

The brain processes musical nuance in many ways, it turns out. Edward W. Large, a music scientist at Florida Atlantic University, scanned the brains of people with and without experience playing music as they listened to two versions of a Chopin étude: one recorded by a pianist, the other stripped down to a literal version of what Chopin wrote, without human-induced variations in timing and dynamics.

During the original performance, brain areas linked to emotion activated much more than with the uninflected version, showing bursts of activity with each deviation in timing or volume.

So did the mirror neuron system, a set of brain regions previously shown to become engaged when a person watches someone doing an activity the observer knows how to do — dancers watching videos of dance, for example. But in Dr. Large’s study, mirror neuron regions flashed even in nonmusicians.

Maybe those regions, which include some language areas, are “tapping into empathy,” he said, “as though you’re feeling an emotion that is being conveyed by a performer on stage,” and the brain is mirroring those emotions.

Regions involved in motor activity, everything from knitting to sprinting, also lighted up with changes in timing and volume.

Anders Friberg, a music scientist at KTH Royal Institute of Technology in Sweden, found that the speed patterns of people’s natural movements — moving a hand from one place to another on a desk or jogging and slowing to stop — match tempo changes in music that listeners rate as most pleasing.

“We got the best-sounding music from the velocity curve of natural human gestures, compared to other curves of tempos not found in nature,” Dr. Friberg said. “These were quite subtle differences, and listeners were clearly distinguishing between them. And these were not expert listeners.”
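One way to picture the idea, as a rough sketch rather than Dr. Friberg’s published model: assume the smooth, bell-shaped velocity profile of a simple point-to-point hand movement (the minimum-jerk curve from motor-control research, used here purely as an assumption) and reuse its shape as a beat-by-beat tempo curve, so a phrase gently speeds up and relaxes the way a natural gesture does.

def min_jerk_velocity(tau):
    # Normalized velocity of a minimum-jerk point-to-point movement,
    # tau in [0, 1]: zero at the ends, peaking smoothly in the middle.
    # A stand-in for a "natural gesture" velocity curve, not Friberg's model.
    return 30 * tau ** 2 * (1 - tau) ** 2

def tempo_curve(n_beats, base_bpm=60.0, peak_bpm=76.0):
    # Rescale the gesture's velocity shape into a beat-by-beat tempo,
    # producing a phrase that accelerates and then relaxes.
    peak = min_jerk_velocity(0.5)
    tempos = []
    for i in range(n_beats):
        tau = i / max(n_beats - 1, 1)
        v = min_jerk_velocity(tau) / peak   # normalized 0..1
        tempos.append(base_bpm + v * (peak_bpm - base_bpm))
    return tempos

print(tempo_curve(9))   # 60 -> ~76 -> 60, a gently shaped arc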

The Levitin project found that musicians were more sensitive to changes in volume and timing than nonmusicians. That echoes research by Nina Kraus, a neurobiologist at Northwestern University, which showed that musicians are better at hearing sound against background noise, and that their brains expend less energy detecting emotion in babies’ cries.

Separately, the Levitin team found that children with autism essentially rated each nocturne rendition equally emotional, finding the original no more emotionally expressive than the mechanical version. But in other research, the team found that children with autism could label music as happy, sad or scary, suggesting, Dr. Levitin said, that “their recognition of musical emotions may be intact without necessarily having those emotions evoked, and without them necessarily experiencing those emotions themselves.”

A Matter of Time

The ability to keep time to music appears to be almost unique to humans — not counting Snowball the cockatoo, which dances in time to “Everybody,” by the Backstreet Boys, and became a YouTube sensation. Both the Levitin and the Large studies found that the timing of notes was more important than loudness or softness in people’s perceptions of emotion in music.

This may be a product of evolutionary adaptation, said Dr. Kraus, since “a nervous system that is sensitive and well tuned to timing differences would be a nervous system that, from an evolutionary standpoint, would be more likely to escape potential enemies, survive and make babies.”

Changes in the expected timing of a note might generate the emotional equivalent of “depth perception, where slightly different images going to your two eyes allows you to see depth,” said Joseph E. LeDoux, a neuroscientist at New York University.

And musical timing might relate to the importance of timing in speech. “The difference between a B and a P, for example, is a difference in the timing involved in producing the sound,” said Aniruddh D. Patel, a music scientist at the Neurosciences Institute in San Diego. “We don’t signal the difference between P and B by how loud it is.”

Michael Leonhart, who played trumpet and produced for Steely Dan, said he thought “the ears of most people have started to become less sensitive to dynamics” as music recordings crank up the volume and “the world has become a louder place.”

Subtle timing differences, on the other hand, are critical, Mr. Leonhart said, citing a triplet figure in the beginning of Steely Dan’s song “Josie.”

“The tendency is to start rushing it, to get excited,” Mr. Leonhart said. But the key is “to lay it back, don’t rush, make sure it’s not ahead of the snare drum. It changes the slingshot effect of where things snap and pop.”

Mr. Simon plays with timing constantly, surfing bar lines. He squeezes lyrics like “cinematographer” — six short notes — into the space of a two-syllable word, and will “land on a long word with a consonant at the end, so that you really hear the word,” he said. “My brain is working that way — it’s dividing up everything. I really have a certain sense of where the pocket of the groove is, and I know when you have to reinforce it and I know when you want to leave it.”

Musicians like Mr. Simon consider slight timing variations so crucial that they eschew the drum machines commonly used in recordings. Dr. Levitin says Stevie Wonder uses a drum machine because it has so many percussion voices, but inserts human-inflected alterations, essentially mistakes, so beats do not always line up perfectly.
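In programming terms, that sort of humanizing amounts to nudging perfectly quantized onsets slightly off the grid. The sketch below is a generic illustration under that assumption, not a description of any particular artist’s or producer’s method.

import random

def humanize(onsets_sec, max_shift_ms=12.0, seed=0):
    # Shift each programmed onset by a small random amount (in seconds),
    # so the beats no longer sit exactly on the grid.
    rng = random.Random(seed)
    return [t + rng.uniform(-max_shift_ms, max_shift_ms) / 1000.0
            for t in onsets_sec]

# Sixteenth notes at 120 beats per minute: one onset every 0.125 seconds.
grid = [i * 0.125 for i in range(16)]
loose = humanize(grid)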

And Geoff Emerick, a recording engineer for the Beatles, said: “Often when we were recording some of those Beatles rhythm tracks, there might be an error incorporated, and you would say, ‘That error sounds rather good,’ and we would actually elaborate on that.

“When everything is perfectly in time, the ear or mind tends to ignore it, much like a clock ticking in your bedroom — after a while you don’t hear it.”

Unknown, Maybe Unknowable

Of course, science has not figured out how to measure other elements of musical expression, including tone, timbre, harmonics and how audience interaction changes what musicians do. While there may be some consensus about what makes music expressive, performers say it is hardly immutable.

“Every day I’m a slightly different person,” Mr. Ma said. “The instrument, which is sensitive to weather and humidity changes, will act differently. There’s nothing worse than playing a really great concert and the next day saying, ‘I’m going to do exactly the same thing.’ It always falls flat.”

Ms. Cash, who on a recent road trip listened to multiple versions of Chopin nocturnes and quizzed herself on which pianist she preferred, learned a lot about musical flexibility after developing polyps on her vocal cords in 1998.

“Because of these little polyps I’ve had to learn how to resing some of our songs, use breath where I used to use force, use force where I used to go delicate,” she said.

“The World Unseen,” on her album “Black Cadillac,” “gained some curves and some sweetness that I didn’t realize was there,” she said. “We recorded that really late at night, a live track, and it wasn’t that good of a vocal. The producer said he wanted to get a better vocal so we did it a few more times, but we kept going back to that live version. I keep it in a certain part of my voice. If I do it too breathy it sounds cloying. If I hit it too hard, it sounds like rock.”

But thinking things through goes only so far. For one melody, Mr. Simon started out using the words “going home,” he said. “But I said I’m not going to write ‘going home.’ Nothing interesting about that,” he said. “Then I stumbled on this word, ‘Kodachrome,’ which of course, had no meaning.”

In Dr. Levitin’s lab, Mr. McFerrin gamely tried several experiments, including seeing how long he could hold his hand in ice water while listening to different types of music (an effort to find out if music can ameliorate pain). He described a story by Hermann Hesse in which a violinist, granted his wish to be the best musician he can be, vanishes as soon as he starts to play.

“He completely disappears into the music,” Mr. McFerrin says on the video. “And I think that’s actually a big key to a successful creative moment for me, is when I disappear, and maybe the audience disappears into the music and becomes so engaged in the music that you forget that you’re even there.”

As Ms. Cash put it: “Some things you can break down, and some things are ineffable. Some things are just part of that mystery where all creative energy comes from. It’s part of the soul. Music is an ever-moving blob of mercury.”


Source and/or read more: http://goo.gl/IZanf
