
How is music able to convey and trigger such range and depth of emotion? Why does it elicit joy, sorrow, consolation and the chills?

Employing research and theoretical models from neuroscience, psychology and musicology, we examine the extraordinary ways that primal and conditioned listening combine to such complex emotive effect.

Examples from pop, jazz, rock, film, global, traditional and classical forms are presented under the light of nostalgia, visual imagery, emotional contagion, rhythmic entrainment, aesthetics, expectation and the extra-musical.


Why Music Moves Us

Professor Milton Mermikides

14 September 2023

 

A World of Music

Despite the wide diversity of human activity, we are yet to discover a single culture on Earth that does not engage with music in some way. It is such an essential part of human culture that it seems to precede many other cultural developments, even aspects of language. For example, there exists a tribe – the Pirahã of the Brazilian Amazon – who have no words for numbers or colours and no creation myth, but who have a highly developed and active music culture. Music-making also stretches back millennia. We have unambiguous examples of musical instruments dating back at the very least 40,000 years (predating money, agriculture, and writing). One example is a flute from Southern Germany carved from the tusk of a woolly mammoth – which shows admirable if rather foolhardy commitment to music-making. Given that our species emerged from Africa, not Germany; that musical instruments – as musicians know too well – are delicate; that almost any object can be used for music; and that the voice – which leaves no musical trace – is likely our first instrument, the invention of music – if such a thing exists – is likely long lost in deep history. And so, music is both ancient and universal – it has antiquity and ubiquity. It is also persistent and shows no sign of abating: although its fashions, styles and cultural usage may change over time, music itself has never been abandoned for a moment. It resists, evades, and transforms in the face of cultural and religious oppression, and always seems to accompany every cultural turn. Music has no clear beginning and no end, regardless of the culture, age, and political systems it finds itself in.

Not only is music embedded in the history of our species, it runs through our personal lives as well. It marks every significant event, from a prelude of rhythms in the womb and the first connection to a mother's song, through every birthday and wedding, to our final send-off. Every day in between is drenched in music: as self-administered medicine to alter our moods, as manipulation from adverts and films, and as an understanding friend we turn to when needed. And we have a lot of it. We make more music than any of us could ever listen to in our lifetimes. In 2022 it would have taken over two centuries to listen to all the music on Spotify – which is itself just a captured fraction of all music that is and has been made but unrecorded. And as if that wasn't quite enough, in 2023, thanks to AI technology, the number of songs on Spotify doubled. We dedicate attention – and, for some of us, the majority of our lives – to its training. An orchestral musician will typically have a dozen (or many more) years of hard graft in their fingers – making a doctor or lawyer's training feel closer to GCSEs. And still, very few musicians feel like they have completed the task or mastered their craft.

This all begs the question of why. Why is music so important to us? Why spend so much of our time, resources, and digital storage on it? What does it communicate that text and speech alone cannot? Why do we constantly turn to it to connect to other humans even when separated by language, geographical space, and historical time? Why do we use it to foster a sense of belonging, and turn to it like an old friend to celebrate our brightest moments and console us in our darkest? And why – some might ask – despite music's universality and the strong evidence that musical and aural training supports social, linguistic, and cognitive development, is it so often treated in the educational system as an extracurricular pastime, first in line for budgetary cuts?

Despite its ever-presence, and the resources and attention we pay to it, music remains – at least to many of us – a mystery fraught with paradoxes. How can it feel both universal and deeply personal? If it's a language, then what is it saying, and why does it evade simple translation? Music may well be the food of love, but unlike food or love, there is no simple evolutionary explanation for its existence. Intriguing hypotheses abound, but we may never fully answer such questions. Nonetheless, even though we have only recently been able to capture sound, music has left permanent traces on the archaeological and cultural history of the world, deep in the biological structures of our brains, and in a sense of personal meaning in our lives. Whatever the explanation, it seems we do, always have, and always will make music.

To begin to explain music's hold over us, we must start with open minds and ears, and from a long time ago.

In short: Music is universal and ancient, an essential part of being human, but why?
 

An Endless Lullaby

Our species, Homo sapiens, emerged around 300,000 years ago in Africa and spread with a wanderlust across the planet. This long history and geographical diversity have led to an epic unsupervised musical experiment: What happens if you sprinkle humans across the planet for thousands of years, with long periods of relative isolation and ruptures of cultural collision? What music do we make? How diverse, similar, or translatable is that music? Are there any common features? While we may not understand much or any of each other's spoken and written languages, is there something in each other's music that speaks directly?

Let's examine this with one example of countless we could choose. Around 1969, the Swiss-French ethnomusicologist Hugo Zemp embarked on one of what would become a lifetime of expeditions to discover, capture, analyse and celebrate global music forms. He travelled to the Solomon Islands, in the Oceania region of the Pacific. This cluster of volcanic islands has been inhabited for some 30,000 years but became isolated with the rising sea level. The first settlers and later migrant waves (particularly the Austronesian Lapita people) lived in relative seclusion for millennia. European navigators made only brief contact in the late 16th century; the isolation was finally ruptured by trading and a series of sometimes violent and bitter colonial interactions from the late 18th century, and by the islands becoming a battle zone between Allied and Japanese forces in World War II.

What music did Zemp find there, in this – for the most part – isolated archipelago? On the northern island of Malaita, he recorded on tape a woman called Afunakwa singing an ancestral Baegu lullaby known as 'Rorogwela'.

I happen to find this major pentatonic melody – featuring distinctive minor 7th melodic leaps, a melodic arch, subtle glides, tender sibilance and a natural half-spoken voice – impossibly beautiful. More so when the Baegu lyrics are translated (a language now spoken by only a few thousand). It tells of a young child being consoled by an older sibling, now that their parents are dead. To paraphrase some of the lyrics: "Do not cry, do not cry – I will carry you" and "Our parents now look over us with this endless lullaby. Do not cry – I will carry you". Even without a translation, the sense of bittersweet loss and consolation is somehow evident in this little melody. Rorogwela was released in 1973 on UNESCO's Musical Sources collection, to only niche (i.e., academic) interest. That is, until the French electronic duo Deep Forest sampled it in the early 90s on their eponymous album. The track 'Sweet Lullaby' used Afunakwa's singing (without apparent credit), recontextualised with rainforest samples, electronic drums, and harmonic backing. The album was a huge hit, selling over 3 million copies and winning awards, including in the then relatively new category of 'World Music'. The album also includes samples of singing and water-drumming of the Baka people of the Central African region and opens with a Spinal Tap-esque narration about "little men and women" in "the jungle". This conflation of references led to the Norwegian saxophonist Jan Garbarek recording a contemporary jazz version of the melody with the misguided title 'Pygmy Lullaby'. Given that the distance between the Solomon Islands and the Baka people is a good 10,000 miles over land and sea – nearly half the Earth's circumference – it's quite an accomplishment to be so wrong.

Attempts to properly credit and adequately compensate Afunakwa and her culture were little and late. An attempt to find her in 2007 discovered she had died in the 1990s. But despite what may be a story of failure in accreditation, ethics, public knowledge, and the reputation of World Music award boards, we should also see a near miracle of success. The ability of this fragile little melody – which is itself about fragility and survival – to survive through generations with only a chain of memories and the air to cradle it is quite staggering. That so many others can now hear some of its expressivity – even when decontextualised from language and culture – is a testament to music's remarkable ability to endure and communicate.

In short: For much of our histories, music has survived through aural retelling and so relies on being memorable. Despite the geographical and cultural diversity of music making, some common features and emotional expressions emerge.

 

Plucked from Thin Air

No discussion of music should ignore its primary medium: sound. Even with this reasonable starting point, music evades easy categorisation, as it can be experienced silently, as any earworm sufferer will tell you. And some (such as the later Beethoven, and the percussionist Evelyn Glennie) manage to circumvent hearing disability to create music of exquisite power. Still, understanding how sound works, and how our brain processes it, is essential in appreciating its expressive power. Explanations for sound production are easy to find, but few can adequately capture just how extraordinary sound – and our ability to process it – actually is. Here's the quick version. If any object is struck or otherwise excited, it moves – often in an oscillating motion (a frying pan, tuning fork, or guitar string are quite readily visualizable). In the right environment (like air or water) this vibration will nudge nearby molecules together and apart in sympathy with this back-and-forth motion. As the object (let's say the prongs of a tuning fork) moves outwards, it squashes the molecules in the air together and the air pressure increases (known as an area of compression); as the prong pulls away, there are fewer air molecules (rarefaction). The rate at which this cycle of high to low pressure occurs is known as its frequency (which is connected to our perception of pitch), and the degree of displacement is its amplitude (which we might perceive as loudness). The actual displacement of each molecule is minuscule, but this cycle of alternating high and low pressure propagates outwards, like a Newton's cradle, transferring this pressure wave in all directions at high speed for some distance. The analogy often given is of a pebble dropped in a body of water: the displacement of water propagates outwards although the water itself hardly moves towards us. In terms of this analogy, hearing is the near-impossible act of observing the waves on the shoreline and, from just this, working out what objects have been dropped and where. The reality of listening is even more extraordinary – sound waves are hurtling towards us face-on at about 343 metres per second. At this speed, we can sense waves from about 15 metres down to 2 cm in length. Even more dumbfounding is the sensitivity to amplitude: even for loud sounds it is in the order of micrometres (µm – a thousandth of a millimetre), far thinner than the width of a human hair. In the quietest sounds we can hear, the molecules hardly move past their own diameter. The fact that we can not only perceive this invisible blast of microscopic waves but routinely decode it is worth a moment of marvel.
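The arithmetic behind those wavelength figures is simple: wavelength is the speed of sound divided by frequency. Here is a minimal sketch of that calculation, assuming the conventional 20 Hz to 20 kHz limits of human hearing:

```python
# Wavelength = speed of sound / frequency: the arithmetic behind the
# "about 15 metres down to 2 cm" range quoted above.

SPEED_OF_SOUND = 343.0  # metres per second, in air at around 20 degrees C

def wavelength(frequency_hz: float) -> float:
    """Return the wavelength in metres of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

print(wavelength(20))      # ~17.15 m: the lowest pitch we can hear
print(wavelength(20_000))  # ~0.017 m (about 1.7 cm): the highest
```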

And yet this is what we routinely do. This superpower of being able to sonically scan our environment is an evolutionary price worth paying, and over millions of years our bodies and brains have bent, morphed and evolved to develop extraordinary listening skills: our ears are shaped to amplify and funnel sound waves to the ear canal. The eardrum (or tympanic membrane) – about the thickness of a piece of paper – vibrates in sympathy, retracing the pattern of air pressure. This oscillation in turn is transferred and amplified by the ossicles – the three smallest bones in the human body – to the cochlea of the inner ear, at which point tiny hair cells transduce the movements of a sound wave into electrical signals to the auditory nerve. Just as a microphone converts acoustic vibrations to an electrical signal, our bodies make this conversion. And that sound wave exists momentarily in our minds not as an acoustic wave but as an analogous wave of electrical patterns, just as a microphone cable, record or digital file can carry the information. Professor Nina Kraus and her team at the Auditory Neuroscience Lab (aka Brainvolts) have even been able to reverse the hearing process by converting these brain signals of sound back to sound itself. In a healthy brain, the original sound is clearly recognisable. Large areas of our brain are employed in decoding this tiny wave – as Kraus says, the hearing brain is 'vast'. Processing this constant sonic onslaught – its frequency content, how it changes, testing it against a bank of memories, predicting its behaviour – lights up diverse regions of the brain, even more so when we listen to music rather than everyday sound; and when we make music, the brain lights up like fireworks.

So, all the sound and music that has ever existed, from Bach to the Beatles to Billie Holiday, and all that can be imagined, can be described in this little wiggly line. But as emotionally and informationally potent as the soundwave is, it is temporary and evaporates into thin air, dead in a matter of seconds. Of course, we can capture elements of the sound using symbols in text or music notation, but the sound itself is quickly gone. Across most of Earth's history, sound is lost beyond recovery: the dinosaurs, asteroids, Neanderthals, and all musicians from Pythagoras to Paganini, to Bach and Schubert, are silent forever. But in the middle of the 19th century, we managed to trap this wave and capture it through recording.

It now seems that the first to succeed in this act of sonic domestication was not Edison but the French printer, bookseller, and inventor Édouard-Léon Scott de Martinville (1817-1879). Inspired by the new-fangled craze of photography, Scott designed and patented in 1857 the phonautograph – a sound-writing machine. It traced the motions of a stylus onto a soot-covered surface, capturing airborne sound waves for the first time in human history. Quite charmingly, although he broke new ground by being able to capture sounds, he was not able to play them back. He didn't even seem to consider that possibility, seeing these rather as visual transcriptions. These recordings remained unheard until 2007, when David Giovannoni and the team at First Sounds managed to use contemporary technology to recover Scott's voice from the literal ashes. These tracings include an 1860 wobbly but charming rendition of the French folksong Au Clair de la Lune, as well as some scales and jaunty comic opera. To hear the first recording of a human voice feels like eavesdropping through time; not even Scott had that privilege.

Since Scott, we have domesticated the soundwave completely: capturing it as electrical current, solidifying it into a spiral on wax and vinyl, storing it as patterns of magnetism on tape (as Zemp did with Afunakwa's voice) and slicing it into numerical values, storable and highly editable, as digital data. We can transfer it now silently through the air as digital data or, in an extraordinary act of invention, piggybacked onto radio waves, which can travel as light beyond the Earth itself – and, in the case of NASA's Golden Record aboard Voyager 1, beyond our solar system. On this disc lie waveforms of a broad selection of our music, from Bach to Chuck Berry and the Solomon Islands, in the hope that extraterrestrial beings can also make sense of music and that it is truly universal.

Although now emancipated from its acoustic frailties, sound retains its intimate hold over us. To listen feels like a direct human connection. For most of our history, this connection was limited to our local environment, but it now extends beyond time and space.

In short: The medium of sound is complex and rich, and our bodies and vast regions of our brain are dedicated to processing it. Until only recently it could not be stored, so it retains a sense of close connection across time and space.

 

The Sonic Canvas

Although sound waves carry sound and music, it is not intuitive to understand what they contain just by looking at them, or how our brains make any sense of them. Looking at the shape of a sound wave, we do get an impression of its volume over time and some rhythmic patterns, but not much of its content. A sound wave represents amplitude over time: how the molecules in the air (and our microphone or eardrum) move over a given duration. How our brains process this information, however, is through a sort of decryption. We decode this pattern of movement by deconstructing it into its composite frequencies – which we perceive as a series of high and low pitches. These frequencies have varying amplitudes (which we perceive as loud to inaudible). In short, we draw out from the soundwave its frequency content: what pitches are happening when, and how loud. It is somewhat analogous to perceiving a piano chord first as one object, and then in terms of individual notes of varying volumes. What we do is convert the sound wave's back-and-forth movement (which can be represented on an oscillogram) into a spectrum of different pitches at various loudness levels over time (which we can represent on a spectrogram). We can demonstrate this using the simplest wave – a sort of sonic atom – the sine wave. Sine waves, with their distinctively smooth motion, have the feature of containing only one frequency. Perfect sine waves can only be generated electronically, but a tuning fork gets pretty close, and since it focuses on one pitch, it is a useful tool for tuning. Here's a sound wave – an oscillogram – of a tuning fork. It gets louder (more 'fat') as it's brought to the microphone and dies away.
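This oscillogram-to-spectrum decomposition is exactly what a Fourier transform performs. As a minimal sketch (using NumPy, with a 440 Hz sine wave standing in for the tuning fork):

```python
# Synthesise one second of a 440 Hz sine wave (a "sonic atom") and ask the
# Fourier transform which frequencies it contains: a single peak at 440 Hz.

import numpy as np

sample_rate = 44_100                       # samples per second
t = np.arange(sample_rate) / sample_rate   # one second of time points
signal = np.sin(2 * np.pi * 440 * t)       # the oscillogram: amplitude over time

spectrum = np.abs(np.fft.rfft(signal))     # amplitude at each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

print(freqs[np.argmax(spectrum)])          # -> 440.0: the only component present
```

Repeating this analysis over short successive windows of the signal yields the spectrogram: frequency content as it changes over time.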

Other than sine waves, all sounds are associated with not one but a collection of frequencies; this is in fact how we can tell apart different instruments even when they play the same note – each has an identifying signature frequency pattern. Depending on their relationship with the lowest or central pitch, these additional frequencies are known as harmonics, overtones, partials or – in the case of speech – formants. Pitched musical instruments, say a trumpet or violin, tend to have layers of regular frequencies ('harmonics') above their lowest and loudest 'fundamental' frequency. More percussive and 'noisy' instruments have less clear fundamentals and more complex frequencies (partials). At its most extreme, white noise – like white light – contains all perceptible frequencies. A piano note starts with a noisy impact but settles into a simple pattern of harmonics. The pattern of frequencies, and how they change over time, all contribute to the timbre of a sound. They allow us to recognise the instrument – or agent – of the sound, its fundamental pitch, and its internal colourations. For example, here is the spectrogram of a solo female voice singing three separate pitches (A3, C4, Bb3) – we can see the up-down contour of the lowest white stripe as we travel horizontally. Each of the notes is sung on a different syllabic sound – or phoneme ("She-ah-day") – indicated by the subtle patterning above each pitch. Note the blast of white noise at the initial 'sh' sound.
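We can make the idea of a harmonic 'recipe' concrete with a toy additive-synthesis sketch (the amplitude recipes below are illustrative assumptions, not measurements of any real instrument):

```python
# Stack whole-number multiples of a 220 Hz fundamental with different
# amplitude recipes: same pitch, different timbre.

import numpy as np

sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate
fundamental = 220.0

def tone(harmonic_amplitudes):
    """Sum sine waves at 1x, 2x, 3x... the fundamental frequency."""
    return sum(a * np.sin(2 * np.pi * fundamental * (n + 1) * t)
               for n, a in enumerate(harmonic_amplitudes))

bright = tone([1.0, 0.8, 0.6, 0.5, 0.4])  # strong upper harmonics
mellow = tone([1.0, 0.3, 0.05])           # energy mostly in the fundamental
# Both tones share the 220 Hz fundamental, so we hear the same pitch;
# the differing harmonic patterns are heard as a difference of timbre.
```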

Our brains are incredibly adept at decoding these spectrograms in real time: we can hear a pattern of frequencies and integrate them together, recognising them as a vocal. We can also tell what it is saying or singing based on how the frequencies change. And we can segregate a vocal from background noise, or from a piano, which we recognise as its own sonic agent. We accomplish this because our brain contains what are called tonotopic surfaces. These are essentially biological spectrographs, converting the incoming soundwave into spectrograms with areas devoted to a gradient of pitches. Specific regions of our brains are hit by a piercing high sound, others by low sweeps, and this information is coordinated into a powerful sensory experience. Music exploits this innate visceral analysis of the frequency spectrum, trusting us to integrate and segregate components, but also forcing us to hear novel spectra born of instrumental and harmonic combination. Ravel was a master of such orchestration, sometimes integrating the orchestra into an extraordinary spectral force, our inner ears fusing traditional instruments into impossible imaginary instruments. Entire music movements (such as the aptly named Spectralists) treat composition as an act of painting compelling acoustic brushstrokes on this sonic canvas. Electronic musicians have few limitations in this domain, assaulting our listening brains with extraordinary sonic trajectories, and even sneaking visual images onto this sonic canvas, drawing images as spectrograms which are decoded back to a single oscillating waveform.

What's essential to note is that despite the brain's extraordinary ability to identify, integrate and segregate this sonic canvas, this is an active and subjective process. Music can integrate sounds that don't emerge from the same agent and, conversely, a solo piano piece can sound like many agents. Furthermore, our ability to recognise sounds is on the knife edge of perception. Our expectations can influence what we hear as much as the actual sounding objects. If you've mistakenly heard your name called out, misheard lyrics, or fallen prey to auditory illusions such as the McGurk effect (where the perception of 'f' and 'b' sounds can be switched by visual cues) you will know that our listening experience involves both perception and imagination. Listening – particularly to music – exists at the intersection of actual sounding events being passed to the brain (bottom-up or afferent listening) and our knowledge base, expectations, and imaginations (top-down or efferent listening). We are at once hearing the objective sounds in the outside world and constructing them into a subjective and vivid imaginary landscape.

In short: The brain decodes the oscillations of a sound wave into a spectrum of frequencies of different amplitudes, which can be presented on a 'spectrogram'. We integrate and segregate these frequencies to recognise 'agents' (like a voice among other sounds) as well as the content of each sound (its pitch, rhythm, and timbre). Music relies on – and exploits – this ability. Listening is not passive; it is a combination of afferent (bottom-up) and efferent (top-down) processing.

 

The Music of Language  

Music is often referred to as the 'universal language', and it does feel like it is somehow speaking to us, even when there are no words in the music. Language and music share characteristics in that they are particulate: they make sense from building blocks from which elaborate hierarchical structures may be built and repurposed (e.g., phonemes within words within phrases within sentences vs. notes and rhythms within motifs, melodies, and compositions). However, music stands apart from conventional language in that it is somehow – as the French anthropologist Claude Lévi-Strauss said – 'intelligible but untranslatable' (1964), or, as the 19th-century music critic Eduard Hanslick put it, 'a language we speak and understand but are unable to translate'. Why music is untranslatable becomes clear with a moment's thought. Although music and language share many features, music lacks referential symbols to the outside world. If we want to refer to, say, a pineapple, a child, or clouds, there will be words in English, French, Greek and most other languages that will point to those things unambiguously. The words will differ between languages, but it is usually possible to translate successfully between those languages with minimal informational loss. There is, however, no musical word for pineapple. We could name a piece after a pineapple and try to capture it musically by evoking a sense of its tropical zestiness or its Fibonacci spiralling, but music lacks that specific literal connection to the outside world. As it happens, music is perfectly capable of building such a referential vocabulary. There are more than enough pitches, chords, rhythms, and instrumental timbres to map onto meaningful words, just as conventional languages use vocal sounds to construct meaning. In fact, composers since J.S. Bach have adopted such an idea with the musical cryptogram, which maps alphabetic letters to note names so they can spell out words as melodies (often as a form of tribute or creative constraint). In the German cryptogram, 'BACH' spells out a chromatic four-note motif (Bb-A-C-B) and 'GRESHAM' spells (G-D-E-Eb-B-A-E). Morse code is readily available for such translation, with its mapping of rhythms to alphanumeric characters. The British composer and electronicist Delia Derbyshire, for example, embedded hidden messages in her music using Morse code rhythms painstakingly spliced on tape. And if you are familiar with Lalo Schifrin's Mission Impossible theme tune, you may be surprised to learn that the iconic long-long-short-short rhythmic motif was derived from the Morse code for M (dash-dash) and I (dot-dot) (MI = Mission Impossible).
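Both tricks are simple mappings, and a toy sketch makes them concrete (the German note-name table below covers only the letters A to H; full cryptogram schemes extend it in various ways):

```python
# Letter-to-note cryptograms and Morse-to-rhythm mappings as lookup tables.

GERMAN = {"A": "A", "B": "Bb", "C": "C", "D": "D",
          "E": "E", "F": "F", "G": "G", "H": "B"}

def spell(word: str) -> list[str]:
    """Map letters to note names, keeping only letters the scheme covers."""
    return [GERMAN[ch] for ch in word.upper() if ch in GERMAN]

print(spell("BACH"))  # ['Bb', 'A', 'C', 'B'] - the famous chromatic motif

MORSE = {"M": "--", "I": ".."}
rhythm = ["long" if s == "-" else "short" for s in MORSE["M"] + MORSE["I"]]
print(rhythm)         # ['long', 'long', 'short', 'short'] - Mission Impossible
```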

Still, this is generally not how we use or communicate through music; the Mission Impossible rhythm works primarily in an expressive sense as well as (or despite) its literal reference. We can gain some idea of this non-literal expressiveness by observing how we use spoken language: there is a quasi-musical nature to how we speak. Prosody – the intonation, timbre, rhythm, stresses, and pitch contours of language – can radically change the meaning of what we say. This is not just at the level of pronunciation (e.g., ENtrance vs. enTRANCE) or context (such as the use of parenthetical 'asides' like this one) but in terms of post-lexical expression – a meaning beyond words. We can, for example, say "thank you" politely, enthusiastically, or sarcastically. A request given with evenly paced low pitches can reveal a seething anger. A question can be made to prompt a yes-no answer, an explanation, or no response at all, just through the contours of speech melody. We naturally talk to children with an exaggerated form of speech melody (known as IDS – infant-directed speech), as if to teach them the hidden musical rules of spoken language.

This language of speech finds its way into music. We respond to the speech contours of anger, tenderness, and exuberance in music, despite the lack of referential meaning. There is grammatical structure, with phrases, sub-clauses, and internal repetition; we hear the tension of an insistent question which requires a response, and we react to meaningful pauses. The link from language to music can run surprisingly deep. For example, if we compare the speech characteristics of French and English, we find French is more rhythmically and melodically even, compared to the bouncier rhythms and pitches of English. The neuroscientist Aniruddh Patel and his collaborators revealed compelling evidence that this distinction is echoed in a comparison of a large body of works by 19th-century French and English composers. The music of their language, it seems, made it into the language of their music.

Language seems to be 'in the ears' of musicians and listeners when engaging with music. This is usually entirely intuitive; however, there are moments of conscious translation, such as Steve Reich translating fragments of speech into string motifs in Different Trains, or Elgar immortalising his friend Dorabella's stuttering speech and laughter in Variation X of the Enigma Variations.

It is as if there is a musical wrapping that adds pragmatic and emotional meaning to language, and this expressive musical layer of speech can be peeled away from the literal meaning and set free in music. Perhaps this is why music sounds meaningful even if we don’t know the specific message. Music somehow adopts the pitch, rhythm, timbre, and grammatical structure of language to become the primary materials for communication.

In short: There are musical qualities to spoken language, communicating pragmatic meaning, emotion and structure, which appear to be shared and harnessed by music. Unlike conventional languages, music lacks referential links (sonic symbols) to the external world; it is a uniquely abstract language.

 

The Language of Music

The previous section saw music as emerging from language: an expressive layer which has developed into a sort of artistic prosody or melodic poetry. It may be true that music is linked to spoken language, but it is not a subsidiary of it. Neuroscientific research shows that there are resources in the brain used by both language and music, but also areas just for language and others favoured for musical processes. This is supported by several case studies of patients who through brain damage have lost language (aphasia) without affecting musical processing, and others who have lost musical awareness (amusia) without affecting language ability. Music has its own brain estate, and musicians' brains do develop differently from those of non-musicians – particularly in the corpus callosum, the strip of tissue that connects the left and right hemispheres of the brain: a fitting metaphor for the marriage of logic and emotion that music seems to involve.

Music seems to have a language of its own, with its own abstract vocabulary. We can shed some light on the language of music using another moment in musical history.

Béla Bartók was a Hungarian composer who was not only one of the most productive and influential composers of the 20th century and an accomplished pianist, but also a dedicated scholar and documenter of folk music, and a founding father of comparative musicology and contemporary ethnomusicology. His music was a visceral, uncompromising, and gorgeous blend of 20th-century modernism and Eastern European folk melody, rhythm, harmony, and timbre.

Bartók – often together with his kindred spirit Zoltán Kodály – dedicated many years to transcribing and recording folk music on location: at first by ear, but then – in order to capture deeper musical nuance – travelling with a bulky wax-cylinder recording device. He recorded, transcribed, analysed, and published around 10,000 Hungarian, Serbian, Slovak, Romanian, and Bulgarian folk melodies. The influence of these – sometimes with direct quotations – is integral to his musical language.

Let’s take just one example. In November 1907, Bartók travelled to what is now Western Slovakia and recorded a Slovakian girl named Matilda Kolárová singing a haunting modal melody ‘I Know a Little Forest.’

Bartók made a quick on-site melodic transcription (pictured above) but over the years produced additional transcriptions capturing the recording's pitch and rhythmic detail. Bartók had a huge respect for folk music and wanted to capture its core essence, as well as its expressive details. Notation is not a soundwave, and it is conventionally far simpler than a spectrogram, reducing rhythm and pitch to simple geometric objects in a matrix of fixed grid points: a series of abstract shapes and vectors, Platonic ideals rather than the messy brushstrokes of a spectrogram. Timbre is often drastically reduced to a handful of articulations and volume levels, if not ignored entirely. A geometric conception (on paper or in our minds) is hugely 'lossy' in terms of sonic detail; what it can reveal, however, is the abstract essence of musical objects, the core syntax of musical language which is retained from performance to performance. Bartók later borrowed the melody of 'I Know a Little Forest' in a solo piano piece, For Children No. 34, 'Románc' (BB 53). The melody is in a distant key from the original (in fact a tritone away – as far as we can get); it is played on a piano, not sung; the melody has been inventively harmonised; and fragments of the melody are ingeniously used as additional material. Yet it is still undeniably and instantly recognisable, showing the power of geometric musical conception. Expression and variation are still possible, but there is a permanence to these musical objects; they survive retelling, different instruments, and interpretations. And these geometric musical objects get caught on paper and in the head.

Notation may be hugely limited in terms of sonic information and expressive intent, and subject to stylistic misreading and bias, but it seems to reflect to some extent how we remember and conceive certain musical material – as geometrical shapes in an abstract space – even if we don't read notation. As a demonstration of notation's ability to preserve musical objects, even in the absence of a sonic record, let's take an example from an anthology of Lithuanian melodies (Anton Juszkiewicz's Melodje ludowe litewskie). Number 157 is listed as 'Tu, manu seserėlė' ('Oh, My Sister'), a traditional wedding song.

The notation contains no timbral or tempo information and only the barest melodic skeleton, yet a simple vocal reading reveals a haunting beauty, and one that may be strangely familiar, like someone you pass in the street you know that you know.

A copy of this anthology was in the hands of the young Igor Stravinsky when he was composing The Rite of Spring, and several of these Lithuanian folk melodies are featured. They are harmonised and orchestrated with radical invention, rhythms squashed and stretched, and the melodies at times truncated, spliced and heavily decorated, but they remain recognisable with their haunting quality intact. Here is the opening bassoon melody of The Rite of Spring alongside the Tu, manu seserėlė melody.

Like our memory for faces, we have a remarkable memory and thirst for these melodic objects and can recognise them in a range of stylistic contexts, keys, and tempos. That's why we can sing Happy Birthday in any key or tempo (usually at the same time). These musical objects survive retelling and interpretation (that's why we have Rorogwela, after all) and also survive the transition to notation and back. Thanks to such documentation, we are gifted with a priceless canon of works from before the recording era. These objects also provide a reference point for interpretation and individual expression: they are malleable and can retain their identity, allowing different performers to provide unique perspectives on the same material. As robust as a musical object is (such as a melody or motif), there must be a point when it is so distorted that it loses its identity – or adopts another. However, the limits are surprisingly broad when the tones are particularly in love with each other. Take, for example, the opening of George Gershwin's wonderful I Got Rhythm variations for piano and orchestra. It is almost exclusively built on a simple and familiar four-note melody, but it is varied to ludicrous degrees: not just transposed in pitch and displaced in time, but stretched vertically into alien keys, flipped upside down and backwards, and expanded in duration so extremely that it becomes an accompaniment to its myriad other transformations.
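Treating a motif as a set of points in pitch-time space makes these transformations easy to state precisely. A minimal sketch (the four-note motif here is a stand-in, not Gershwin's actual material):

```python
# A melody as geometric object: (beat, MIDI pitch) points whose identity
# survives the classic transformations named above.

motif = [(0, 60), (1, 62), (2, 64), (3, 67)]  # a stand-in four-note motif

def transpose(notes, semitones):
    """Shift every pitch up or down; the contour is untouched."""
    return [(t, p + semitones) for t, p in notes]

def invert(notes, axis):
    """Flip the motif upside down around an axis pitch."""
    return [(t, 2 * axis - p) for t, p in notes]

def retrograde(notes):
    """Play the motif backwards in time."""
    end = max(t for t, _ in notes)
    return sorted((end - t, p) for t, p in notes)

def augment(notes, factor):
    """Stretch durations - extreme factors turn a motif into accompaniment."""
    return [(t * factor, p) for t, p in notes]

print(transpose(motif, 6))  # a tritone away, shape intact
print(invert(motif, 60))    # upside down
print(retrograde(motif))    # backwards
```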

So, music exists not just alongside conventional language but in an abstract language all of its own, a geometric space (in notation and our minds) where musical objects have robust identities but can interlock (and infect our minds) like Tetris blocks, break apart and recombine to produce endless musical material and experiences.

In short: A geometric representation of music, as objects in pitch and time, loses huge amounts of sonic information but can reduce musical material to an essential, recognisable core. This abstract musical space has proved remarkably useful in the storage of music as notation, and in how we perceive, interpret, and create music.

 

The Emotions of Music

Music psychologists have long been interested in how the abstract language of music intersects with emotional experience, researching the musical features that correspond to specific emotions in a listener.

The emotion of 'fear', for example, tends to be elicited and conveyed by fast tempo, unstable rhythmic and motivic content, varied articulations, a wide pitch range, low drones, minor modality, and dissonance. 'Tenderness', on the other hand, is generally presented with major-mode consonance, a slow tempo, legato articulations, and a far narrower pitch range.

What of other ‘in-between’ or more complex emotions or other combinations of musical features? A natural place to start is to take existing psychological models of emotion and see how they relate to musical parameters.

The circumplex model below takes some common emotional states and maps them out over two dimensions: 1) their level of arousal – how active or passive the emotion is ('excited' is more active than 'content', for example); and 2) the 'valence' of the emotion – how pleasant or unpleasant it is ('calm' is more positively valent than 'bored'). A range of emotions can be placed across these two dimensions.

We can then map musical parameters to these axes and see how they line up with these emotions, as sketched below. For example, we could equate tempo and loudness with the level of arousal, and the valence axis with a spectrum from major key to minor key (perhaps with neutral shades in between). That way, 'happy' is loud, in a major key and at a medium tempo; 'bored' is slow with neutral tonality; and 'angry' is allegro, at moderate volume, in a minor key. As silly as this is, it does admittedly work quite well (at least for making music for a cooking reality show). However, its limitations appear quickly. For example, 'loudness' is not the same as 'tempo' – we can have fast and quiet music, after all – so there would now need to be an additional dimension (making the circle into a sphere). How do we disambiguate a major key elaborated with chromatic notes from a minor key that isn't? What about sharp attacks and onsets (which are again different from loudness)? Music is also dynamic, changing colour and texture, and allows multiple contrasting layers. How do these appear on our map? What of the myriad other emotions (envy as against anger, tranquillity as against contentment, etc.)? There are countless musical parameters that change, interact and contrast over the course of a piece of music, and these maps struggle to capture this dynamic and complex aspect of the musical experience. There are in fact valuable and illuminating music-emotion maps revealing important aspects of musical expression. However, an important – and one of the most beautiful – aspects of music is that we do not react to it in a linear or binary fashion, where the surface emotional descriptions of music have a corresponding effect. We do not experience or use music this way. Dissonant, angry music can be a joyful, cathartic experience, for example. A transition from a minor key to a parallel major key may be more uplifting than a major key throughout. Furthermore, some of the most devastating pieces of music are in a major key, especially when associated with a sense of loss. If grief may be defined as 'love with nowhere to go', then perhaps music gives grief a home, a safe place for it to rest in refuge from the pain of loss.
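A toy sketch of this two-axis mapping (the specific numbers and thresholds are illustrative assumptions, not values from the literature):

```python
# Map an emotion's position on the circumplex (arousal, valence, each in
# the range -1..+1) to crude musical settings: arousal drives tempo and
# loudness, valence drives major/minor mode.

def music_parameters(arousal: float, valence: float) -> dict:
    tempo_bpm = 60 + 70 * (arousal + 1)   # -1 -> 60 bpm, +1 -> 200 bpm
    loudness = "loud" if arousal > 0 else "soft"
    mode = ("major" if valence > 0.2
            else "minor" if valence < -0.2
            else "neutral")
    return {"tempo_bpm": round(tempo_bpm), "loudness": loudness, "mode": mode}

print(music_parameters(arousal=0.8, valence=0.9))    # 'excited': fast, loud, major
print(music_parameters(arousal=-0.6, valence=-0.4))  # 'bored': slow, soft, minor
```

Note how the sketch immediately exhibits the limitation described above: tempo and loudness are yoked to a single arousal value, so fast-and-quiet music is unreachable without adding a dimension.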

In short: Musical parameters can be linked to specific emotions, we might recognise a loud, fast, dissonant piece as being ‘angry’ and a soft, consonant, slow piece as ‘serene’. However, music contains many parameters and is dynamic, contrastive and multi-layered, so such simple mappings have limitations.

 

EVERBEAM – Illuminating Musical Emotion

Music's emotional power comes not from one single mechanism or another but through a complex combination of pathways. At its core, the brain is just trying to help us survive. It perceives the outside world, draws upon memory and predictive faculties to pay attention to what's important, and evaluates what is a positive opportunity, what is a danger, and how best to act. We are constantly and dynamically updating these internal predictions (through a mechanism of active inference) and thus gain a more accurate model of the external world. Emotions (surprise, fear, confusion, interest, anger, disgust, etc.) help motivate us to modify predictions and act optimally in a range of situations. There is no convincing neuroscientific evidence that music has unique emotions, but it is exceptionally adept at triggering a huge range, dynamism and combination of responses in the perceptual, emotional and motor regions of the brain, inducing powerful experiences. Some of these access points are presented below:

Neuroscience and music psychology have made great progress in identifying how music hijacks the brain's insatiable predictive activity to induce emotional responses. Most satisfyingly, they recognise these mechanisms not as singular events but as parallel processes, spread across diverse regions of the brain related to hearing, logical prediction, movement, visual imagery, memory and emotion, allowing for a range of visceral and aesthetic listening experiences. These mechanisms were first introduced by the music psychologist Patrik Juslin and colleagues under the acronym BRECVEMA, presented in order of each mechanism's development in the human lifespan. I prefer to use the more elegant EVERBEAM (suggested by one of my past undergraduate students at the University of Surrey, James Tate), as it is more memorable and better reflects the lasting power of music.

Emotional Contagion

This mechanism relies on the survival incentive of enhancing group cohesion and social interaction, such as between a parent and child. In music, it involves an emotional reaction triggered by recognising an emotion in another person (or 'agent') and mimicking it internally – as if it's contagious. This is readily evoked by listening to singers, as we are so primed to recognise emotion in others' voices, but it can come from gesture, and when musical instruments imitate vocal expression in soaring violin vibrato, slide-guitar weeping, and tender synthesizer tones.

Visual Imagery

Relying on our vital ability to imagine and plan movement through space, this mechanism is induced when music conjures up visual images, which in turn induce an emotional state. This might include visualization in the programmatic sense – pastoral scenes and open vistas – but also an abstract space through which the music travels, such as soaring, falling or abrupt turns. Vaughan Williams's The Lark Ascending manages to capture both a programmatic and an abstract sense of soaring.

Episodic Memory

This is connected to our brain's capacity for memory retrieval and self-identity and involves music as a ‘retrieval cue’ to a specific emotionally charged moment in one’s life. The music triggers a set of memories which involve a range of associated emotions, particularly that of nostalgia. The relationship between the music ‘cue’ and the emotional trigger can be entirely arbitrary or contain relevant - even if ‘retro-fitted’ - meaning. Studies suggest that typically these emerge from around the mid-teens, but by pairing music with emotional moments in one’s life there is always an opportunity to create more of these.

Rhythmic Entrainment

Our brains contain circuitry in the cerebellum and sensorimotor regions to facilitate movement during activity such as walking, running and physical work with others. Musical rhythm taps into this particular sense, compelling us to move in a coordinated, efficient manner. Heart rate and/or respiration can become synchronised with the pulse in music, and it is intriguing that the range of a typical metronome (40-200 beats per minute) matches quite well that of the human heart. (Breath rate is typically in the 1- or 2-bar phrase range, depending on the level of arousal.) This mechanism induces emotion through a sense of uplift during rhythmically active music, through group cohesion, or through soothing at slow tempos, like being gently rocked to sleep. It is, however, a complex mechanism which induces nuanced emotion when we listen to ambiguous, complex, and modulating rhythms.

Brain Stem Reflex

This is the fastest musical mechanism we possess; it develops in humans prior to birth and is hardly impacted by our cultures. It engages the oldest part of our brains and is linked to our need to recognise important changes in our environment: a sudden noise (or sudden silence), faint approaching footsteps, a low rumble, high activity, dissonance (and other spectrographic gestures). Music routinely exploits this mechanism with sudden changes of dynamic and a sense of 'environmental' change; it is linked to controlling a sense of excitement, alertness and physiological responses like heart and breath rate, with accompanying emotional impacts including surprise, anxiety, alertness, and humour.

Evaluative Conditioning

Evaluative conditioning relies on our ability to associate events or objects with positive or negative outcomes, and is a highly cultural mechanism, linking musical and sonic material through repeated exposure to various emotive or cultural associations: violins to sophistication or love, an improvising saxophone to a city street, a distorted electric power chord to irreverence, 8-bit audio to geekiness. There is nothing inherent in the sound that makes these associations – they are all learned through experience – but they play crucial roles in the recognition of musical style and emotional associations. They are far more powerful and subconscious than we may imagine, as advertisers and film composers well know. Within seconds, music can place us in a historical time, geographical location, and cultural space. This can be incredibly nuanced: a grand piano and a wobbly-tuned upright piano with a noisy pedal can have distinct emotive associations – of sophisticated grandeur and warm homeliness respectively – despite their instrumental similarity.

Aesthetic Judgement

Aesthetic judgement stands apart from the other musical mechanisms as it is not so much triggered by the musical content itself as by an aesthetic appreciation of it. We can, for example, be moved with awe, amusement, admiration and disbelief by the skill, creativity and accomplishment of a performer or composer, and by the perception of aesthetic beauty within the music: the music as an object of art. We can be emotionally affected by contemplating such things as the skill it took J.S. Bach to write a six-part fugue on a chromatic subject, an 8-year-old girl's ability to play and sing a bossa nova with such stylistic awareness, or Django Reinhardt's ability to play with such expression, fluency, and influence despite his damaged left hand. Such thoughts can contribute to our appreciation and emotional experience of the music itself.

Musical Expectancy

Musical expectancy is a mechanism connected to areas of the brain associated with the processing of symbolic language and pattern-seeking (such as the left perisylvian cortex, 'Broca's area', and the dorsal region of the anterior cingulate cortex). Emotions and experiences may be induced because a piece of music confirms, delays, or subverts a listener's expectation. A listener's expectations may be built by familiarity with similar musical content and logical sequencing: for example, if we hear a scale pattern of up-up-down-up-up-down-up (see the black dots in a)) we may expect the next note to ascend (grey dots). A rhythmic cycle of two notes separated by 300ms and 500ms (b) can set up an internal virtual 'predictive grid' of 8 notes at 100ms divisions (c), and this grid prepares likely scenarios for future events. We are remarkably adept at making and updating these predictions (based on surprises and new information), and we do so with pitch, rhythm, structure, and other simultaneous parameters. Listeners conduct this live analysis – often entirely subconsciously – building complex abstract schemata. Emotional responses are elicited when predictions are confirmed or thwarted, and these are not simple binary satisfaction/dissatisfaction distinctions but nuances of surprise, pleasure, delight, and befuddlement. How we process these multiple layers is of intense research interest; some fascinating findings are presented by Vuust, Kringelbach and colleagues at the Centre for Music in the Brain.
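The arithmetic behind that 100ms grid is just a common divisor. A minimal sketch:

```python
# Two inter-onset intervals of 300 ms and 500 ms imply a common underlying
# pulse at their greatest common divisor, dividing the 800 ms cycle into
# 8 grid points - the internal 'predictive grid' described above.

from math import gcd

intervals_ms = [300, 500]
pulse_ms = gcd(*intervals_ms)            # -> 100 ms
cycle_ms = sum(intervals_ms)             # -> 800 ms
print(pulse_ms, cycle_ms // pulse_ms)    # -> 100 8
```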

 

Extra-Musical Information

Juslin's BRECVEMA model is well established, but it has vast, complex implications and continues to be interrogated and tested. Still, I like to add one more mechanism to the mix (one I don't feel is adequately covered under Aesthetic Judgement), and that is the emotional impact of extra-musical information. This might include lyrical content and its intersection with the music, but also information about the music more generally. We might, for example, enjoy hearing Villa-Lobos's New York Skyline Melody as a conventional piece of music with its Latin rhythms and strangely alluring jagged melody. However, learning that the melody is a faithful transcription of a photograph of the New York skyline itself adds a layer of emotional meaning to already beautiful music. Music beyond music.

In short: Music induces emotions through a number of mechanisms, including Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Emotional Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, Aesthetic Judgement and Extra-Musical Information. These engage a wide range of brain regions, from the innate to the cultural to the personal, and combine to elicit countless complex musical experiences and emotions.

The Multiplying Chills

All these mechanisms combine into a heady mix – lighting up the whole brain like a Christmas party – and are also associated with a cocktail of hormones: oxytocin (the 'love drug'), particularly when feeling vocally and rhythmically bonded in a group; serotonin and endorphins from pleasurable musical sensations; adrenaline from high musical activity; and, crucially, dopamine, which is particularly linked to anticipation and prediction. Why do our brains reward anticipation and prediction? The answer is that we have evolved to hunt prey and potential partners and to scan the environment for dangers. If we were only rewarded when eating or procreating, we would not get far as a species; we must also enjoy and crave the anticipation and prediction process that leads to our desired goals.

With such a rich array of mechanisms and physiological responses to music, it is no surprise that music can create profound and rare sensations, which brings us to the experience of frisson, or musical chills. This is a sensation that lasts a few seconds, can manifest in 'waves', and induces some combination of goosebumps (raised hair follicles), a cold sensation, and shivers down the neck and spine. Studies have shown that in a general sample of people just over half have experienced chills; however, that figure is about 90% in music students. This suggests either that the chills experience increases with music exposure or – and I like my hypothesis – that the chills are a Siren's call to musicianship. You may be pleased to know that with the Gresham College audience the figure is over 96%, but I suspect there's some selection bias going on. It's a little more common in females than males, and interestingly more common in those who aren't thrill-seekers (perhaps skydiving blunts the musical experience). The type of music that induces chills is wildly varied in content, but common themes (echoed in the audience) include sudden change, textural or harmonic contrast, novel expectation, a sense of anticipation, build and 'coming together', a surprising dissonance and a cathartic point of release, and high volume and intensity, particularly with violins or unified vocals. Quieter 'haunting' passages are also reported to induce chills, but these are often in anticipation of, or contrast to, louder sections. These are not universal features, however, and the musical styles involved are wildly varied.

How and why does music evoke such a physiological response (one which is otherwise triggered by disgust, fear, excitement, fingernails on a blackboard, rapid temperature change, as well as orgasm and feelings of awe and reverence)? There are two compelling hypotheses. The music psychologist David Huron suggests that chills are caused because some of our mechanisms (like Brain Stem Reflex) are fast and 'pessimistic', in that they exhibit a bias towards fearful signals and danger, so initial reactions include an element of fear and tension. Our more leisurely appraisal responses, however, can assess the outcome of an event more realistically and positively – it's only music, after all. This rapid change from a state of fearful tension to a pleasurable resolution – particularly if providing some endorphin rush – triggers this physiological response. A suggested analogy is the initial fear and shock of a surprise birthday party giving way to the happy laughter of realization (that is, if you happen to like parties).

An alternative – and possibly compatible and parallel – explanation is Panksepp's Separation Distress theory. This connects frisson with the distressed cry of a child. When babies cry, they emit energy in a frequency band to which we are particularly sensitive, in the region of 1-6kHz (especially 3-4kHz – the very top of a grand piano and violin). Our sensitivity may well be adaptive. This 'distress call' triggers an urgent need to attend, console and reconnect to the dependant, and may also induce a cold chill. Incidentally, this might be consistent with the higher reported chills in female listeners, if they have historically taken on the childcare role. An adult scream, high violins, the operatic squillo technique and the 'acoustic ping' of emotive singing also occupy this frequency range. You can see this on the following spectrogram of Merry Clayton's isolated vocal performance on The Rolling Stones' 'Gimme Shelter', which – for me at least, particularly in isolation – creates reliable chills.
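As a quick check of that piano claim, the standard equal-temperament formula for the n-th piano key puts the instrument's top note at about 4.2 kHz, right at the top of this especially sensitive region:

```python
# Frequency of piano key n under equal temperament, with A4 = key 49 = 440 Hz.

def key_frequency(n: int) -> float:
    return 440.0 * 2 ** ((n - 49) / 12)

print(round(key_frequency(88)))  # key 88 is C8, the top note: ~4186 Hz
```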

In short: Music involves a heady mix of several expressive mechanisms and hormones and can trigger intense emotions. A remarkable physiological experience is that of frisson - chills evoked by music. Hypotheses for its experience include Huron’s Contrastive Valence theory (the close following of tense anticipation with pleasurable relief) and Panksepp’s Separation Distress theory (related to the particular frequency band present in a baby’s cry, to which we are particularly sensitive).

 

Summary: Why Music Moves Us

Music has the power to elicit and convey 'everyday' emotions such as calmness, humour, excitement, irritation, and joy, but also to evoke profound and rare emotional moments which evade easy description. Such a complex mix requires a complex explanation, involving our relationship with sound, language, and social interaction. But here is the most succinct version I can offer:

Music moves us in such diverse and profound ways because it interweaves three broad aspects of our being. 1) Music engages with our most primal and deeply rooted responses to sound, which our brains and bodies have evolved to process: how we react to loudness, dissonance, rhythmic pulses, and surprise, triggering our insatiable and intuitive need to coordinate with others, predict, calculate, learn, and survive. It reflects what we are as a species. 2) Music relies on a shared, vast network of cultural knowledge, conditioning and associations, and an internal library of melodies, stylistic understanding, and societal connections. It reflects where we are in history, society, and culture. And finally, 3) Music is entangled with our own individual histories, memories, experiences, loves, pains, and dreams. It changes our brains, and so our experiences and lives – lives that will never be lived again, to which music brings order, solace and meaning. And so, music reflects who we are as individuals.

 

© Professor Milton Mermikides 2023

 

 

References and Further Reading

Music in the Brain (Latest neuroscience of predictive coding in music listening)

Vuust, P., Heggli, O.A., Friston, K.J., & Kringelbach, M.L. (2022). Music in the brain. Nat Rev Neurosci, 23, 287–305. https://doi.org/10.1038/s41583-022-00578-5

Music and sound processing in the brain:

Kraus, N. (2021). Of sound mind: How our brain constructs a meaningful sonic world. The MIT Press.

Musical expectancy:

Huron, D. (2008). Sweet Anticipation: Music and the Psychology of Expectation. MIT Press.

Music and Language:

Patel, A. D. (2010). Music, language, and the brain. Oxford University Press.

A short overview of the history and concept of music:

Cook, N. (2000). Music: A very short introduction. Oxford University Press.

Music and Emotions (short read):

Juslin, P. N., & Sloboda, J. A. (Eds.). (2011). Handbook of music and emotion: Theory, research, applications. Oxford University Press. [Chapters 21 & 22 particularly]

Music and Emotions (In-depth but accessible):

Juslin, P. N. (2019). Musical Emotions Explained: Unlocking the Secrets of Musical Affect (online edition). Oxford University Press. https://doi.org/10.1093/oso/9780198753421.001.0001

An Introduction to World Music:

Titon, J. T., & Cooley, T. J. (Eds.). (2016). Worlds of music: An introduction to the music of the world’s peoples (Sixth edition). Cengage Learning.

Understanding of Frequency (accessible):

Mainwaring, R. (2022). Everybody Hertz: The amazing world of frequency, from bad vibes to good vibrations. Profile Books.

 

Bibliography:

Alluri, V., Toiviainen, P., Jääskeläinen, I. P., Glerean, E., Sams, M., & Brattico, E. (2012). Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage, 59(4), 3677–3689. https://doi.org/10.1016/j.neuroimage.2011.11.019

Cook, N. (2000). Music: A very short introduction. Oxford University Press.

Feld, S. (2000). A Sweet Lullaby for World Music. Public Culture, 12(1), 145–171.

Huron, D. (2008). Sweet Anticipation: Music and the Psychology of Expectation. MIT Press.

Juslin, P. N. (2019). Musical Emotions Explained: Unlocking the Secrets of Musical Affect (online edition). Oxford University Press. https://doi.org/10.1093/oso/9780198753421.001.0001

Juslin P. N. (2013). From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of Life Reviews, 10(3), 235–266. https://doi.org/10.1016/j.plrev.2013.05.008

Juslin, P. N., & Sloboda, J. A. (Eds.). (2011). Handbook of music and emotion: Theory, research, applications. Oxford University Press. [Chapters 21 & 22 particularly]

Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), 559–621. https://doi.org/10.1017/S0140525X08005293

Kraus, N. (2021). Of sound mind: How our brain constructs a meaningful sonic world. The MIT Press.

Levitin, D. J. (2008). This is your brain on music: Understanding a human obsession (Paperback ed). Atlantic Books.

Margulis, E. H. (2014). On repeat: How music plays the mind. Oxford University Press.

Mermikides, M. (2010). Changes over Time: Theory & Practice. PhD Thesis. University of Surrey.

Mithen, S. (2006). The singing Neanderthals: The origin of music, language, mind and body (Paperback Edition). Phoenix.

Patel, A. D. (2010). Music, language, and the brain. Oxford University Press.

Schlaug, G., Jäncke, L., Huang, Y., Staiger, J. F., & Steinmetz, H. (1995). Increased corpus callosum size in musicians. Neuropsychologia, 33(8), 1047–1055.

Schneider, A. (2004). Ice-age musicians fashioned ivory flute. Nature.

Taruskin, R. (1980). Russian Folk Melodies in “The Rite of Spring.” Journal of the American Musicological Society, 33(3), 501–543. https://doi.org/10.2307/831304

Titon, J. T., & Cooley, T. J. (Eds.). (2016). Worlds of music: An introduction to the music of the world’s peoples (Sixth edition). Cengage Learning.

Vuust, P., Heggli, O.A., Friston, K.J., & Kringelbach, M.L. (2022). Music in the brain. Nat Rev Neurosci, 23, 287–305. https://doi.org/10.1038/s41583-022-00578-5

 

Media resources:

Rorogwela Lullaby: https://www.youtube.com/watch?v=eGjgLrWbIfQ

Catalogue of Bartók’s recordings, transcriptions and compositional reuse of folk music: http://bartok-nepzene.zti.hu/en

Tu, manu seserėlė “Oh My Sister” Lithuanian Wedding Song. https://www.youtube.com/watch?v=YDz4HCqTRPs&t=23s

Interactive sine wave resource: https://jackschaedler.github.io/circles-sines-signals/sincos.html

Hidden spectrograms in electronic music: https://twistedsifter.com/2013/01/hidden-images-embedded-into-songs-spectrographs/

An animation of brain responses listening to Astor Piazzolla:

https://www.youtube.com/watch?v=D9EcsYTEcg8
