Every Song Ever

Twenty Ways to Listen in an Age of Musical Plenty

Ben Ratliff




We are listening in the time of the cloud. First there was a person making up a song, as ritual or warning or memorial. Then there was a person singing an old song that someone had made up. Then there was music in the church and the concert hall and bar and bordello; then the wax cylinder, gramophone, radio, cassette, CD player, downloadable digital file. And then there was the cloud. Now we can hear nearly everything,* almost whenever, almost wherever, often for free: most of the history of Western music and a lot of the rest.

We know all that music is there. Some of us know, roughly, how to encounter a lot of it. But once we hear it, how can we allow ourselves to make sense of it? We could use new ways to find points of connection and intersection with all that inventory. We could use new features to listen for and new filters to listen through. Even better if those features and filters are generated more from the act of listening itself than from the vocabulary and grammar of the composer.

* * *

The most significant progress in the recent history of music has to do with listening. How we listen to music could be, for perhaps the first time in centuries, every bit as important to its history and evolution as what the composer intends when writing it.

By “how we listen to music” I am not referring to a change in our neural processing of music. (This is not a scientific book.) I mean a change in how we build a conscious framework or a rationale to listen to all kinds of music. Culture is built on ready availability, and we have suddenly switched from being a species that needed to recognize only a few kinds of songs—because only a few kinds were readily available to us, through the radio, or through record stores, if we were lucky enough to live near one—to a species with direct and instant access to hundreds of kinds, thousands of kinds, across culture and region and history. Listeners have become much more powerful. Perhaps we should use that power to learn how to listen to everything.

Here’s an image from real life. A teenage boy, on a bus in the Bronx, in a puffer vest and bright kicks and a close haircut, just old enough to have figured out how to dress with authority, listening to a song by Jeremih, phone to ear. Maybe he bought the song; more likely he found a way to download it for free, or is streaming it from YouTube or Spotify. The song is about luxuriant sex, as are most songs by Jeremih. The teenager listens with near boredom and absolute confidence. The position of the phone in his palm, the angle of his hand and wrist, the focus of his eyes as he surrounds himself with the song’s information—this is all part of his creativity. He is engaging, identifying with the song; he has a sense of dominion over the song and the medium. He can take that song or leave it. There are a million others like it. He’s got the power. He’s the great listener of now.

He can listen to more, or he can listen to less. He can hear a musician perform twenty times without paying admission or traveling anywhere, through live streams on screens. If he finds his way to the right free software, he can time-stretch a song while keeping it at the same pitch, and turn its emotional experience upside-down, as has been done to records by Justin Bieber and to the Jackson 5. He can fuse elements of two different songs—say, a Biggie Smalls rap and a children’s television-show theme—and can learn, when boomeranging it through social media, that a lot of people (mostly young people) really, really like stark musical juxtapositions.

In the store where he bought his sneakers he might have heard a digital playlist on shuffle, playing a Don Omar reggaeton track after a Latin freestyle hit from the 1980s. On the bus, he can stream the same five Drake mp3s from the cloud without owning anything he’s hearing, or he can listen just as easily to recent field recordings of Saharan music, possibly made on a cell phone. At home, he can watch television shows that use recorded music pulled from any tradition of the last hundred years in order to give extra meaning to a scene or a character; if he likes what he hears, his cloud-based playlists might appear to follow no associative logic of sound or style. Later on, if he becomes more engaged with music, he can—let’s say—train as a violist and feel moderately sure that he will work with electronic-music composers or singer-songwriters or Berklee-trained guitar improvisers or rappers from South Africa. He can walk out of whatever styles of music raised him, and into others as yet unknown to him, where he has complete access because listening gave it to him. He doesn’t have to wait for music to define him. He can define it.

Music is everywhere. It has gained on us as our waking life turns into one long broadcast, for better and for worse—often for worse. But we have gained on it, too, learning how and when we want to absorb it. The unit of the album means increasingly little to us, and so the continent-sized ice floes of English-language culture that were Beatles and Michael Jackson records are melting into the water world of sound. (For efficiency we’ll download just one song and ignore the other twelve, but we could likely have them all for free: we have a new assumption that music is ours to take, just as soon as it is ready to be sold to us.) We might get our cues about what to listen to from our Facebook feed, or from sources that use music as almost neutral content in a mediated environment—talent shows, talk radio, football-game ad spots. Background-music services have been vastly improved, thanks to the information yielded by our online listening activity. Pandora’s so-called Genome recommendation model reminds us that there is more to be heard within a similar style, based on that style’s small or large characteristics. Other sophisticated music-data algorithms, such as those created for Spotify and other clients by music-data companies like the Echo Nest, profile your taste in music as a condition related to who you are in general—where you live, how old you are, how you are likely to vote. With these advances we can essentially be fed our favorite meal repeatedly. We develop a relationship of trust with—what? Whom? A team of programmers? Our own tastes, whatever that means, translated into a data profile?

This all sounds very bad. It probably is very bad. Infinite access, unused or misused, can lead to an atrophy of the desire to seek out new songs ourselves, and a hardening of taste, such that all you want to do is confirm what you already know. But there is possibly something very good, too, about the constant broadcast and the powers of the shuffle and recommendation effects. There is a possibility that hearing so much music without specifically asking for it develops in the listener a fresh kind of aural perception, an ability to size up a song and contextualize it in a new or personal way, rather than immediately rejecting it based on an external idea of genre or style. It’s what happens in the moment of contextualization that matters: what you can connect it to, how you make it relate to what you know.

* * *

The old way of “correct” listening involved more preconditioning. It meant not only knowing where a piece of music came from historically and nationally, but physically: an oboe sounded like an oboe. A celesta sounded like a celesta. A viola like a viola. These machines had right and wrong ways to strike the ear. One understood those sounds by imagining those instruments within an ensemble or orchestra arranged on stage and facing the audience. A certain language of rhythms and harmonies, signposts and cues, became consensual within a culture.

But since the 1970s—when studio recording suddenly advanced beyond the limitation of eight tracks, synthesizers and then samples became common, and various extremes of volume or experimentation in progressive rock and jazz and electronic music developed their own traditions in popular culture—the listening experience has been changing. You often don’t know what you’re hearing. Pierre Schaeffer, the French composer, saw that coming in the 1950s. “The lessons of the linguists must be borne in mind,” Schaeffer wrote, speaking of the failure of Western notation to encompass all music. “A foreign language cannot be reduced to the familiar patterns of our mother tongue. We have no doubt that other civilizations probably have other instruments and other ideas, a solfège of their own, perhaps more refined than ours.”

And that’s what listening can be today: an encounter with civilizations other than your own, perhaps on a daily or weekly basis, no matter who you are. Older listeners might feel it more intensely: having grown up with predigital sounds, some feel that nearly everything they hear through the channels of popular culture is strange or even unknowable. But even younger listeners feel something like this, too. Even if they’ve used GarageBand, even if they’ve used digital editing programs to make a YouTube video, they may still be disoriented by the intensity, the sounds and swells and curves, of a Max Martin or Maybach Music or DJ Mustard production, or all that flows from those headwaters. Sounds are running ahead of our vocabularies for describing them. Oh, we have a general idea—those sounds come from digital sources—but perhaps we don’t expect the frequencies of those sounds, or how they will be arranged.

The feelings of disorientation, of not knowing what process makes what sound, of not really understanding what “producers” do, are question marks now built into our hearing. We have not been thinking so much about the old definable coordinates. We have been thinking, when we hear something that is new to us, more about affect and magic. We are redefining our terms every time a new piece of music arises in the shuffle rotation, because there is a greater chance that we will be surprised by its juxtaposition with what came before, if only in volume: the very loud mastering of the Black Eyed Peas, let’s say, coming after the dun-colored restraint of a Waylon Jennings record from the mid-’70s.

In many cases, having rapidly acquired a new kind of listening brain—a brain with unlimited access—we dig very deeply and very narrowly, creating bottomless comfort zones in what we have decided we like and trust. Or we shut down, threatened by the endless choice. The riches remain dumb unless we have an engaged relationship with them. Algorithms are listening to us. At the very least we should try to listen better than we are being listened to.

* * *

To a certain way of thinking, understanding Beethoven’s or Bach’s use of melody, harmony, rhythm, tone color, and compositional structure might have taught you how to listen well in 1939, when Aaron Copland published his popular book What to Listen for in Music. Copland called this the “sheerly musical plane” of listening—the state of being alive to what he called music’s “actual musical material,” “the principles of musical form.” It was an ideal of listening according to an imagined sense of what the composer would have wanted you to understand. But Beethoven and Bach, even combined—and great as they still are—do not prepare or condition you for the range of music that in 2015 is already, or could easily be, part of your consciousness. It is up to you to come up with reasons for engaging as a listener that can encompass Beethoven and Bach as well as Beyoncé, Hank Williams, John Coltrane, Drake, Björk, Arvo Pärt, Umm Kulthum, and the Beatles. They don’t all come from one tradition, and their principles of form are different. They’re not all standing on one sheerly musical plane.

Perhaps those reasons for engagement could be articulated in a language that isn’t specifically musical, or identified with composers and players, as Copland would have wanted, but rather a language that refers to generalized human activity. Therefore, perhaps not “melody,” “harmony,” “rhythm,” “sonata form,” “oratorio.” Perhaps, instead, repetition, or speed, or slowness, or density, or discrepancy, or stubbornness, or sadness. Intentionally, these are not musical terms per se. You know what repetition is even if you’ve never had the first thought about how a song is written. You know because you experience it in your average day or week. Why is it all right to categorize music this way? Because it has to be all right. Music and life are inseparable. Music is part of our physical and intellectual formation. Music moves: it can’t do anything else. The same goes for us. Everything has a tone and a pitch, and rhythms—or pulses, at least—surround us. We build an autobiography and a self-image with music, and we know, even as we’re building them, that they’re going to change. Most human beings impose their wills on the world partly with and through music, even if they are not musicians. The way they hear—you can call it taste, if you want—is in how they move and work and dress and love.

Repetition, for instance: repetition in music works best when the quality of repeated tones and their patterning remind you of breathing or walking or running. Crucially, the effect of repetition depends not on one figure being repeated identically and unaccompanied, but on a relative change moving against a relative constant, which is really the key to life’s riddle of time and gratification. Once you establish that, you can hear it in a piece of music by Rihanna and then make connections to other examples of musical repetition: James Brown, and Steve Reich, and Cortijo y su Combo. All those entities may belong to different radio or streaming-service playlists. But so what? When the first order of business is to sort music out by genre or structure or language—to determine whether a song is indie-folk or classical or R&B or whatever—that’s a direct route to the bottomless comfort zone.

And so, back to the question. We can listen to nearly anything, at any time. How are we going to get to it? How are we going to access it, and how can we listen to it with purpose—meaning, how can we pay just enough attention to it so that it could change our lives? And again: How are we going to listen better than we are being listened to?

This book is a series of essays about different things to listen for in music, now that the circumstances have changed since Copland’s time. Nobody can love everything, of course; the urgent thing, now that we have so much catching up to do, seems to be how to access a strategy of openness, a spirit of recognition. It means rolling the microscope back from issues of form and genre to find general associative qualities that have to do with the actual experience of listening, such that you might perform your own version of “If you like X, you’ll like Y,” in which X and Y may have been conceived centuries apart, for totally different audiences, and yet they’re both in front of us, equally accessible. I am attempting to respond to a situation of total, overwhelming, glorious plenty.

* * *

This new kind of thinking about listening—if it is new—will be speculative and somewhat subjective. It uses “I,” “we,” and “you” in a generalized way. (Of course, the “we” might have a little more to do with me than you, and the “you” might also have little to do with you. It’s a rhetorical conceit.) It talks about some very simple notions, such as repetition, and a few that are more abstract and intuitive, such as what I call “linking,” and that might require a little more squinting and imagining on the part of the reader. Listening—reacting to music and putting yourself in its spaces—is an abstract and intuitive job.

But no reaction to music is universal. The old way, of learning to listen through the lessons and aims of the composers, could be speculative, too. The journalist and music-appreciation writer Henry Krehbiel, a democratizing force for general audiences around the end of the nineteenth and the beginning of the twentieth century, speculated all the time. In How to Listen to Music: Hints and Suggestions to Untaught Lovers of the Art, from 1897, he wrote statements such as “the lifeblood of music is melody,” or “the vile, the ugly, the painful are not fit subjects for music.” Some might only partially agree with him now. Some would say he was entirely wrong.

I am not going to give you an algorithm for finding new music to know and love. It’s not my business to anticipate what you might like. I am suggesting a strategy of openness, and a spirit in which to hear things that may have been kept away from you. The suggestions I’m offering for how to hear are based on certain kinds of affinities between pieces of music. The affinities are not based in genre, because genre is a construct for the purpose of commerce, not pleasure, and ultimately for the purpose of listening to less. (I sometimes use words and phrases that have to do with formal structure and genre in this book, but where it is possible I try not to. Most of all, I am trying not to use those terms as boundaries or to confer value.) This book is about listening for pleasure, and about listening to more.

Copyright © 2016 by Ben Ratliff