Thursday, January 26, 2023

Are we hardwired for happy vs sad music? I doubt it

Blogger Tales of Whoa says this:

A paper in Psychological Studies showed that newborns, when played music judged by listeners as "happy" or "sad," responded differently -- and that it seems to be independent of tempo ("happy" music generally having a faster rhythm than "sad" music). Newborns listening to the tunes judged as "happy" showed greater focus, calmer facial expressions, reduced heartbeat, and less movement of the hands and feet; "sad" music produced no such effect.
So the hallmarks of a happy piece of music -- things like being in a major key, less harmonic dissonance, and wide pitch contours -- are markers we either learn prenatally, or else are (amazing as it may seem) hard-wired into our neural network.

Color me skeptical, as I tweeted him; I'm expanding on that here.

First, the whole idea that music in a minor key rather than a major key is sad is culturally learned.

Second, within the West, the major/minor system is only 500 years old. Before that came the church modes, and before those, the classical Greek modes. Gregorian chant, in the Catholic tradition, might be called "serene," if one were using emotion words. Neither "happy" nor "sad" need apply.

And, even beyond that, in modern classical music, mapping major and minor onto "happy" and "sad" doesn't exactly square up, and is simplistic enough to exclude other emotions entirely.

Third, even within major/minor, what about the time before equal temperament? Hold on to that.

Fourth, what about non-Western systems with more than, or fewer than, 12 tones? And without major/minor structures?

The only things I see as "hardwired" are tempo, rhythm, and the original Pythagorean ratios, of which only the octave is universal, and only the perfect fourth and fifth survive in modern Western music.
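To put numbers on those Pythagorean ratios, here's a minimal sketch (my illustration, not from the post) comparing the just ratios to their 12-tone equal-tempered approximations; only the octave survives equal temperament exactly:

```python
import math

# Just (Pythagorean) ratios vs. 12-tone equal temperament, where each
# semitone is a ratio of 2**(1/12). Differences are given in cents
# (1200 cents = one octave), the standard logarithmic unit for intervals.
just_intervals = {
    "octave": (2, 1, 12),         # 2:1, spans 12 semitones
    "perfect fifth": (3, 2, 7),   # 3:2, spans 7 semitones
    "perfect fourth": (4, 3, 5),  # 4:3, spans 5 semitones
}

for name, (num, den, semitones) in just_intervals.items():
    just = num / den
    tempered = 2 ** (semitones / 12)
    cents_off = 100 * semitones - 1200 * math.log2(just)
    print(f"{name}: just {just:.4f}, tempered {tempered:.4f}, "
          f"tempered is off by {cents_off:+.2f} cents")
```

The equal-tempered fifth comes out about 2 cents flat of 3:2 and the fourth about 2 cents sharp of 4:3; the octave alone is exact, which is part of why "what counts as in tune" has itself shifted over the centuries.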

And, we haven't even discussed differences in what is "harmonic" or "dissonant" across all these different musical systems. Or, in the West, what's harmonic today vs 300 years ago.

Grokking the study also shows me that it is, first of all, per the above, "WEIRD." Western music only; I assume the "raters" of "happy" vs "sad" were educated. Industrialized, to be in a neonatal ward. Presumably rich and democratic.

It also shows me that it's not really double-blinded, among other problems. And the sample size is tiny: just 32 newborns. Two "culls" may have been done for good reasons, but they arguably also eliminated any blinding or double-blinding.

Worse yet? In his March 9 post, he arguably undercut himself, while also, even more arguably, undercutting his own musical background by continuing to limit his musical discussion mainly to the modern Western major/minor scale system.

Several pullouts are needed. The first is from his overview of the study he cites:

(W)hat this team discovered is something startling; there's a tribe in the Amazon which has had no exposure to Western music, and while they are fairly good at mimicking the relationships between pairs of notes, they seemed completely unaware that they were singing completely different notes (as an example, if the researchers played a C and a G -- a fifth apart -- the test subjects might well sing back an A and an E -- also a fifth apart but entirely different notes unrelated to the first two).

Followed by a second, this one from the study itself:

The results suggest the cross-cultural presence of logarithmic scales for pitch, and biological constraints on the limits of pitch, but indicate that octave equivalence may be culturally contingent, plausibly dependent on pitch representations that develop from experience with particular musical systems.
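In practice, "logarithmic scales for pitch" means an interval is a frequency ratio, so transposing both notes by the same factor preserves the interval while changing every absolute pitch. A quick sketch of that C/G vs A/E example, assuming equal temperament and A4 = 440 Hz (my assumptions, for illustration, not the study's):

```python
import math

A4 = 440.0  # assumed reference pitch in Hz

def note_freq(semitones_from_a4):
    """Frequency of a note n equal-tempered semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

def interval_cents(f_low, f_high):
    """Size of the interval between two frequencies, in cents (a log scale)."""
    return 1200 * math.log2(f_high / f_low)

# Played: C4 up to G4. Sung back: A3 up to E4. Different notes, same interval.
c4, g4 = note_freq(-9), note_freq(-2)
a3, e4 = note_freq(-12), note_freq(-5)

print(round(interval_cents(c4, g4)))  # 700 cents: a perfect fifth
print(round(interval_cents(a3, e4)))  # also 700 cents, on entirely different notes
```

This is exactly what the Amazonian singers preserved (the ratio) and what they ignored (the absolute pitches).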

Then, there's his reaction:

It makes me wonder if our understanding of a particular kind of chord structure isn't hardwired, but is learned very early from exposure

There you go.

I even asked on Twitter if this didn't undercut the piece from January and haven't heard back.

And, I haven't even talked about whether there's really that great a degree of constancy, across cultures around the world, in what counts as "happy" vs "sad" emotionally.

==

Update, March 29: He's at it again, now talking about a study that claims AI has let us hear amino acids. No such thing. Rather, an AI music program has riffed off their quantum vibrational frequencies to create musical notes, then "interpreted" how things such as folding might sound. He admits as much himself:

What's cool is that the musical note that represents each amino acid isn't randomly chosen. It's based on the amino acid's actual quantum vibrational frequency. So when you listen to it, you're not just hearing a whimsical combination of notes based on something from nature; you're actually hearing the protein itself.

Since our ears don't hear at the quantum level, we're not hearing the vibrations of proteins at all.
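For a rough sense of scale (my back-of-envelope figures, not from the study): molecular vibrational frequencies are on the order of terahertz, while human hearing tops out around 20 kHz, so any "protein music" has to transpose those frequencies down by dozens of octaves before anything is audible:

```python
import math

hearing_max_hz = 2.0e4   # ~20 kHz, rough upper limit of human hearing
vibration_hz = 1.0e13    # order-of-magnitude molecular vibrational frequency (assumption)

octaves_down = math.log2(vibration_hz / hearing_max_hz)
print(f"about {octaves_down:.0f} octaves of downward transposition needed")
```

Whatever comes out of that mapping is a sonification, i.e., a representation, not the protein "itself."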

And, technology is not science, so ultimately, this is about technology, not science.
