Several years ago I got an internship at a recording studio. Most of the work I did during my time there was in service of the recording of a rather elaborately constructed pop album by a large group, featuring a number of orchestral instruments as well as modern electronic sounds. The group’s leader went so far as to compose his work using classical notation, and the results were better for it. I truly enjoyed the company of the studio’s owner (also the album’s producer) and everyone involved in the band.
At some point during my time there, the owner/producer presented to me and another intern his philosophy on Groove, and on machine-driven rhythms vs. those created by people. Rhythmic patterns, he said, are a purely mathematical phenomenon, sure; they can be expressed through points on a grid and translated into rhythm by a machine with infallible accuracy; but to reach beyond science and achieve true Groove—for a drum beat and everything following it to really Swing and be Funky, for them to connect with your gut in a visceral way—the beats have to land loosely, somewhere outside the rigidly divided hit-points of a given tempo, walking a tightrope between sloppiness and excessive (inhuman) tightness. This is true for any performer taking part in a bodily-inclined musical venture, but the rhythm section has to take the lead, and the drums are the most important part. After the talk, he played us a few examples: OutKast, J Dilla, earthy rap songs of the early 2000s. We listened very carefully to the loosely programmed drums of each song and discussed them at length. It was agreed that the best-sounding drums tended to land slightly late on the meter, that the kick had a greater margin of error than the snare, and that sometimes (though rarely) it was even better for the snare to come in slightly early, depending on where the other hits landed. The hi-hats needed to be the tightest in any scenario. Finally all on the same page, we set out to apply this concept to the pop album.
As was/is standard, all of the music was recorded and produced using Pro Tools software, which organizes recordings and all relevant data into “sessions”. These can grow very large, depending on how much raw audio is involved. The sessions for this particular project were huge and took 15-20 minutes each to load. While a Pro Tools session loads into memory, a window pops up to display the loading progress of its various elements: audio files (usually in the dozens but sometimes in the hundreds), effects processors, edit points. Let's focus on that last item: one song in particular—most of them exceeded five minutes in length, but “Song X” was much more of a straightforward pop song and clocked in at around four—contained roughly 20,000 edits.
Now most of you who are not musicians—hell, many of you who are—will probably not think of music in terms of raw data and edit points. It helps to think of a song as a short film: just as an editor organizes the raw images of a film shoot into a satisfying whole, part of a music producer’s job is to shape the raw output of musicians into a finished song. Generally speaking, that requires a fair amount of editing: for example, you record a dozen or more takes of the guitar and stitch together the best parts of each one. Still, though, a guitar track made of twelve takes of guitar is only a dozen edits, maybe fourteen. TWENTY THOUSAND edits in the aforementioned four-minute “Song X” is more than 80 edits PER SECOND. How can something like this happen? Well: The core takes of “Song X” were recorded as a basic rock quartet—guitar, bass, drums, vocals. A tempo was set in Pro Tools and a metronome was used to make it easier on everyone involved. The orchestral instruments and layers of synthesizer were overdubbed later. Now, keep in mind that a drum set is technically a composite of several different individual percussion instruments: in this case, a hi-hat, a couple differently-sized cymbals, two toms, snare, and kick. (Bear with me, this is super-important.) Each of those instruments is recorded by its own microphone to maximize your options in terms of mixing and production. On top of that, the whole set is recorded by two “ambient” mics, AND our producer wanted the snare and the kick to be miked from different positions to capture the discrete timbres of each. As a result, the drums took up ten audio tracks.
Once everything had been recorded, the producer decided that the drumming was too imprecise or not Funky enough or both—remember, human imperfection was only desirable to a point—and needed to be overhauled. With that in mind, we used a program within Pro Tools called Beat Detective, which finds each point of impact within each drum track and isolates it into its own region. (Did the drummer hit a snare four times in eight seconds? That’s four different regions and four different edit points. “Song X” used a basic rock beat at about 100 beats per minute, meaning about 400 edits for the entire song just for the snare.) Once all the drum hits were isolated, we had Beat Detective automatically align the regions to a grid. Even though he was using a metronome, the drummer had been playing fluidly, with human arms; he was never going to have 100% perfect timing. However, thanks to Beat Detective, the patterns he played had become digitally perfect, snapped to a robotic frame.
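For the programmers in the audience: the grid-snapping at the heart of that "align to grid" step can be sketched in a few lines of Python. This is only an illustration of the idea, not anything resembling Pro Tools' actual internals; the function names and the 100 bpm quarter-note example are my own invention.

```python
# A rough sketch of "conform to grid": snap detected hit times (in seconds)
# to the nearest grid line for a given tempo. Purely illustrative; the names
# and structure here are invented, not Beat Detective's real machinery.

def grid_times(bpm, subdivisions_per_beat, num_beats):
    """Grid positions in seconds: one slot per subdivision of each beat."""
    slot = 60.0 / bpm / subdivisions_per_beat
    return [i * slot for i in range(num_beats * subdivisions_per_beat)]

def quantize(hit_times, grid):
    """Snap each detected hit to its nearest grid position."""
    return [min(grid, key=lambda g: abs(g - t)) for t in hit_times]

# A drummer playing quarter notes at 100 bpm (0.6 s apart), slightly loose:
loose_hits = [0.02, 0.61, 1.17, 1.83]
perfect_hits = quantize(loose_hits, grid_times(bpm=100, subdivisions_per_beat=1, num_beats=4))
# perfect_hits now sits on the grid: 0.0, 0.6, 1.2, 1.8 (to within float error)
```

Every region the drummer played, however loose, ends up on one of those mathematically exact slots: the "robotic frame" described above.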
We took an 8-bar (or 16, I can’t recall) segment of those ten tracks of sliced and quantized drums, reverse-engineered a Groove (that is, manually nudged each hit further down the timeline, one by one, until it sounded plausibly loose: with great care and no small expense of time), and looped that segment throughout the song. We then ran Beat Detective on every other instrument we could, using the drum track as a template with which to tighten up the rest of the band. For instruments on which the algorithm did not work, we reproduced its effects manually as needed. The result was a four-minute pop song with 20,000 edits: one in which the human element had been stripped out almost entirely to make way for a different, reverse-engineered and ultimately false appearance of a human element. The original drummer was nowhere to be found on the finished product; only his timbre remained, sitting at the bottom of the pile like a marionette, with our engineers’ voices collectively farting from his mouth. This “sampling” and manipulation of a band on the molecular level is a widespread practice, reaching back at least as far as Def Leppard’s Hysteria album (look it up), but being involved in it on such an intimate and extensive level was mind-blowing.
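That manual nudging amounts to an algorithm of its own, and the producer's rules from earlier (everything lands a touch late, the kick gets the widest margin, the hi-hat stays tightest) could in principle be written down as one. A hypothetical sketch, with every millisecond value invented for illustration; we did this by ear, not by formula:

```python
import random

# Invented per-instrument looseness, in milliseconds: hits land a touch late,
# the kick gets the widest margin of error, the hi-hat stays tightest.
# These numbers are made up for illustration, not measured from any record.
LATE_BIAS_MS = {"kick": 12, "snare": 8, "hihat": 2}
JITTER_MS = {"kick": 10, "snare": 5, "hihat": 1}

def humanize(quantized_times, instrument, seed=0):
    """Nudge grid-perfect hit times slightly late, with per-instrument slop."""
    rng = random.Random(seed)  # seeded so the "performance" is repeatable
    bias = LATE_BIAS_MS[instrument] / 1000.0
    jitter = JITTER_MS[instrument] / 1000.0
    return [t + bias + rng.uniform(-jitter, jitter) for t in quantized_times]

# Loosen a grid-perfect kick pattern; with these numbers every hit lands
# between 2 and 22 milliseconds after its grid line.
kick_hits = humanize([0.0, 0.6, 1.2, 1.8], "kick")
```

The irony, of course, is the round trip: a machine erases the drummer's timing, then a person (or a function like this one) painstakingly puts a counterfeit of it back.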
Let’s shift focus then to an old soul song: a cover of the Beatles’ “Tomorrow Never Knows” by Junior Parker. I wrote about it on my Found Music blog in 2011: Parker’s version “is condensed from the original's wild acid ride into a minimal soul gem. The arrangement here is so spare and light it will carry you away if you close your eyes: one bass note, a couple guitar licks, two or three drum hits, an electric piano you might not even notice the first time around. Parker's smooth voice is on top of all this, singing the Timothy Leary-inspired lyrics like they’re a lullaby.” I also pointed out a rhythmic mistake that the drummer makes about two and a half minutes in. About that: listen closely to those drums. For most of the song it’s nothing more than a metronomic tick of a cymbal, quarter notes exactly on the beat; every now and then you hear a stick popping against the snare rim but that’s as crazy as it gets. However—at that 2-and-a-half minute mark, as Junior’s singing, “But listen to the color of your dream / It is not leaving”—just after “leaving”, there’s a sudden leap of syncopation, two cymbal ticks crowded together, still technically on the beat but unrepeated anywhere in the song. It has to be a mistake, right? Nobody’s improvising; the entire band is locked into their respective parts. Something unforeseen happened there. The drummer’s hand slipped and he recovered like a pro. No one noticed or cared, and the song got pressed to wax.
Placing this second example in the context of the first, I have to think that if Parker’s version of “Tomorrow Never Knows” were recorded in the present day, the survival of such a mistake would be unthinkable. Someone would either run Beat Detective on the drum track or simply nudge the errant hit back into its proper place, and that would be the end of it. Why, though? What the hell? As I said, no human musician is going to be able to play with 100% accuracy. The producer of "Song X" was correct when he explained his philosophy of Groove as something best achieved by humans. So why do we now work so hard on erasing humanity from music? Why did I have to completely remove the human element from that pop song and replace it with an imitation? I get it to a certain extent: our current culture of instant gratification means that any imperfection in the music is like an air bubble in a syringe, a hazardous obstacle to be removed before injection. But when Junior Parker's drummer makes that split-second fuck-up, he is reminding us of his humanity, of the existence of a life dedicated to our entertainment. Though we may not realize it, this mistake connects us with him, with all of humanity, in a very real way. Humans are not perfect; music performed perfectly is not human, and fundamentally fails at being good music.
I don't mean to say that music is only good if it's sloppily rendered. Sloppiness (or even the minor imprecision of Groove) in classical music actively removes me from my enjoyment; likewise, I listen to a lot of wonky digital shit that is by definition only ever performed by machines. The best of all that, however, always has bits of human intelligence shining through: the audible shift and coughing of an orchestra's players; the slumping, live-recorded beats of hip-hop producers like Madlib or J Dilla; little quirks of composition that a machine cannot yet conjure alone. Listen to RZA’s production for Ghostface Killah’s “Maxine”, in which he reproduced an old funk song using a live band and edited the new recording like a sample. You can hear digital clicks when the loops turn over. That has to be intentional; those clicks take seconds to fix, and you don’t spend as much time and money in the studio as he does without taking note of such things.
I make music almost entirely on a computer, using bits of pre-recorded sound and rarely playing a traditional instrument. In a lot of ways it's more programming than composing. In that setting, a mistake can be a misplaced kick sound; a badly-parsed loop; a chunk of sound that goes on for too long. Such slips of the brain frequently result in a better sound than was intended, and in any event I've trained myself to listen closely to what I've done before correcting anything. It's a way of preserving a sense of humanity within purely electronic music, in hopes of connecting with an audience I may never meet.
I listen to music in part to remind myself that I'm alive. Music, then, should be a reflection of that life--of all life. Everyone makes mistakes. Music should make mistakes too.
by Mark Sanders