We’ve all heard the saying, “Where words fail, music speaks” – and now, there’s a study to prove it.
New research from Harvard University shows that music carries a set of unique codes and patterns, which are in fact universally understood.
Conducted by a team of Harvard researchers, the study – titled Universality and Diversity in Human Song – set out to explore whether music from different cultures shares certain qualities.
In the abstract, it explains: “Music is often assumed to be a human universal, emerging from an evolutionary adaptation specific to music and/or a by-product of adaptations for affect, language, motor control, and auditory perception.
“But universality has never actually been systematically demonstrated, and it is challenged by the vast diversity of music across cultures.”
To carry out the research, the team examined ethnographic data and gathered more than a century’s worth of music from 315 different cultures – a milestone in helping us understand our relationship with sound.
The findings show that a song’s acoustic features, such as tonality, ornamentation and tempo, allow listeners to identify its meaning regardless of the culture it comes from.
So, whether you’re listening to a dance track, love song, healing song or lullaby, it seems a song’s psychological purpose can be easily identified.
“Music is in fact universal,” the study concludes. “It exists in every society (both with and without words), varies more within than between societies, regularly supports certain types of behaviour, and has acoustic features that are systematically related to the goals and responses of singers and listeners.
“But music is not a fixed biological response with a single prototypical adaptive function: It is produced worldwide in diverse behavioural contexts that vary in formality, arousal, and religiosity.
“Music does appear to be tied to specific perceptual, cognitive, and affective faculties, including language (all societies put words to their songs), motor control (people in all societies dance), auditory analysis (all musical systems have signatures of tonality), and aesthetics (their melodies and rhythms are balanced between monotony and chaos).”
The team behind the study said: “We propose that the music of a society is not a fixed inventory of cultural behaviours, but rather the product of underlying psychological faculties that make certain kinds of sound feel appropriate to certain social and emotional circumstances.”
Published last November, the study was led by Samuel Mehr, a fellow of the Harvard Data Science Initiative and research associate in psychology, Manvir Singh, a graduate student in Harvard’s Department of Human Evolutionary Biology, and Luke Glowacki, a Harvard graduate who is now a professor of anthropology at Pennsylvania State University.
Meanwhile, another study, led by Alan Cowen at the University of California, Berkeley, investigated how many emotional experiences music could evoke by having 1,591 participants from the United States and China listen to 2,168 musical samples.
Researchers found that both cultures recognised 13 distinct categories of emotion: amusing; annoying; anxious or tense; beautiful; calm, relaxing or serene; dreamy; energising; erotic or desirous; indignant or defiant; joyful or cheerful; sad or depressing; scary or fearful; and triumphant or heroic.
Cowen said: “Music is a universal language, but we don’t always pay enough attention to what it’s saying and how it’s being understood.
“We wanted to take an important first step toward solving the mystery of how music can evoke so many nuanced emotions.”
In future, the researchers hope that their work will complement traditional medicine by helping psychologists and psychiatrists to develop better therapies using the power of music.