Every couple of months, I see another post in my Facebook feed about a band that was cut off by an 18-wheeler or skidded on a patch of black ice and rolled their van into a ditch. Some members are injured, and they’re launching a Kickstarter campaign to pay for medical bills and to get back on their feet.
My heart (and often, money) goes out to them. But if you need to crowdfund your hospital costs, you were never on your feet to begin with. After many years as a touring artist myself, I’m honestly surprised that the person in that ditch has never been me.
Touring is, of course, the most ancient business model available to artists — and in many ways, it remains a vital part of their livelihood, even while the surrounding industry undergoes major upheaval to accommodate the new paradigm of streaming music. In response to the shift in revenue sources, standard recording contracts now intrude into the numerous nonrecording aspects of an artist’s career. But the advice given to the creative generators of this multibillion dollar industry is still one that would be recognizable to a medieval troubadour: Go on tour.
And yet from a business standpoint, it’s hard to find a model more unsustainable than one that relies on a single human body. This is why we have vice presidents, relief pitchers and sixth men. When applied to music’s seemingly limitless streaming future, the only scarce resource left is the artists themselves. You would think the industry would protect such an important piece of its business model, but in fact, the opposite is true.
The contribution of live touring to the music industry’s bottom line is enormous, and the number is only growing. Consider Taylor Swift: According to Billboard, her live show grossed $30 million in 2013, with another $10 million in merchandise sold. And depending on whom you believe, she made anywhere from $500,000 to $6 million from her catalog on Spotify that year. While she is certainly making money in retail sales and digital downloads, both of those metrics are spiraling downward as people migrate away from the concept of owning music at all. Nielsen recently released numbers indicating substantial drops in both CD and digital-track sales, which are down almost $100 million year over year from 2014; streaming music continues to grow, but the revenue it generates isn’t close to making up the difference, yet.
This means that the bulk of Swift’s income rides on her ability to get to venues safely and perform. It also makes her much-examined decision to pull her 2014 release “1989” from Spotify the financial equivalent of her taking a few months off. Regardless of how you look at it, the health of her singing voice is far and away the single most important aspect of her business.
Record labels have followed the money and addressed these changes in the contracts they offer to recording artists. In the predigital era, labels profited only from the physical recordings they funded, but as that income began dwindling, a new logic was applied to the artist-label relationship. Labels argued that by promoting the recordings they owned, they were also promoting the artist’s career as a whole, and were entitled to profit from the full spectrum of an artist’s revenue streams — the “360 deal,” named for the totality of its coverage.
But labels do not take on the additional risks associated with their additional profits. Instead of protecting the health of their revenue-generating engine, they simply point to an artist’s independent-contractor status, which releases them from any liability they would be on the hook for if artists were labeled employees. Rather than sparking a labor dispute, these 360 deals quickly became the new normal. As a result, administrators, support staff and office spaces are insured against the risks of doing business, while the company’s income generators — the creators of their master recordings — are on their own.
Artists today are not only touring more to make up for their own lost recording-sales revenue; they’re also being compelled to do so by the labels that stand to profit. This makes it a great time to be a fan of live music: From the rise of electronic dance music to the regular resurrections of the Grateful Dead, a major musical event is never far away. But the physical price that artists pay for this easy access is steep. Last summer, Foo Fighters’ Dave Grohl was forced to cancel shows when he fell from a stage in Sweden and broke his leg. Other artists with 2015 tour-date cancellations on account of injuries, surgeries and other health issues included Sam Smith, Miranda Lambert, Steve Aoki, Little Big Town, Meghan Trainor, Nickelback, the Black Keys and Kelly Clarkson.
That’s a lot of injuries — and millions of dollars lost. The European shows canceled by Foo Fighters alone, including a headlining slot at the Glastonbury Music Festival, cost the band nearly $10 million in fees and travel expenses. And of all the instruments on a given tour, the vocal cords are the most vulnerable to the harsh environment the road virtually guarantees; basically anything that inconveniences the ordinary traveler becomes a business risk for the singer. Regardless of the circumstances, the singer has to call on this small, unprotected instrument to deliver on a daily itinerary that can extend from a morning drive-time radio show to the meet-and-greet after the performance.
From royalty rates to basic safeguards against the standard hazards of doing business, recording artists begin the negotiating process with a deck that is stacked against them. This lopsided balance of power allows labels to treat all artists as replaceable until proven otherwise, and both sides know that there is always a long line of hopefuls outside auditions for “The Voice” or “America’s Got Talent” to undercut a young artist’s bargaining power.
The question of why recording artists have been unable to organize and collectively bargain the way other artists have — actors and screenwriters, for example — is one that has dogged them since the dawn of the record deal. Musicians do have a union, the American Federation of Musicians, but it’s not a particularly strong one; it primarily represents members of symphonies, and it hasn’t been on a national strike in 70 years. Recording artists are not really considered core members, because their tenures within the union tend to be shorter than those of lifelong pit musicians and orchestra members. Music is also a traditionally decentralized, live art form with an ingrained renegade spirit. Hollywood, by contrast, has a single dominant hub.
Perhaps musicians’ renegade spirit is what ultimately will save the next generation of recording artists, who are increasingly forgoing record deals altogether and going it alone. As true independents, they work the margin between the technology that makes recordings cheaper to create and a public that is steadily buying fewer of them. Without a label taking a bite out of multiple revenue sources, the numbers can actually work. Others are coming together in groups centered on advocacy and pressing for changes to the laws that dictate royalty payments in the new streaming economy — something that could mean all the difference when injury, accident or age brings a touring musician’s career to a halt. But in the meantime, the vans and buses roll on.
Mike Errico is a writer, recording artist and professor of songwriting at NYU’s Clive Davis Institute of Recorded Music.
T Bone Burnett is an award-winning singer, songwriter and producer, whose numerous recognitions include 13 Grammy awards, an Oscar and a Golden Globe. He is a member of the Content Creators Coalition’s Advisory Board.
Music runs through America’s soul and makes us who we are — as individuals, as communities, as a nation.
It fuels all the other creative arts, as I have learned working on music-infused films such as “O Brother, Where Art Thou?” and television shows such as “True Detective.”
And it has driven the incredible boom in digital media that seems destined to define our age. Facts don’t lie — musical artists blanket the lists of “top most followed” on Facebook and Twitter, and “always-with-us” access to music is a big part of why smartphones and mobile broadband are the fastest-spreading technologies in human history.
But this brave new digital world has a dark side, too — and it is the responsibility of everyone who loves and cares about music to acknowledge and deal with this uncomfortable truth.
Too much of the emotional, cultural and economic value that music creates is simply lost now, slipping through the digital cracks in some cases, outright hijacked by bad actors and online parasites in others.
Artists, fans and responsible music and technology businesses alike all know this. When my friend Taylor Swift spoke up for the value of our work and the righteous claim of all artists to be paid for what they do, she was celebrated and applauded — not just by her colleagues, but also by teenagers who care about the people who create the music that means something to them and businesses such as Apple that fundamentally want to do what’s right.
How bad is the problem? Consider this: In 2014, sales of vinyl records generated more revenue than all of the ad-supported on-demand streams on services such as YouTube. I’m not running down vinyl — it is still the best-sounding, most durable medium we have for listening to music, by far. But why should a technology most people consider outdated generate more revenue than an Internet service with more than 100 million American users? That’s just wrong.
Just two decades ago, a music superstar was born when her record went gold, selling 500,000 units. Today, experts say it takes 100 million streams to match that kind of success. Even the most relentless year-round touring schedule or advertising licensing deals can’t match the income that a hit record once produced.
For small and up-and-coming artists, the income collapse has been even more severe; copies of one-penny royalty checks are rampant on the Internet. These artists are struggling American small businesses, and the deck is stacked against them.
So what’s causing this gap between the value artists create and the price today’s world puts on their work?
Part of it is that the legal mess of U.S. copyright law has anchored royalties for music creators far below fair market value. In some cases, such as satellite radio, the law actually says they can pay below-market rates for music. In others, such as AM/FM radio, it’s even more absurd — when music is played on traditional radio, artists and their labels get paid nothing at all (songwriters receive AM/FM royalties, but no one else does), even though corporate radio chains earn billions selling ads around our work. That’s a legally sanctioned slap in the face to everyone who ever picked up an instrument or sang into a microphone. It is a corrosive economic dust bowl in which giant corporations grow rich on others’ work while music creators try to survive on scraps.
But the problem runs even deeper than that. In the digital marketplace, everyone seems to have found a way to make a living off music except the creators who actually record the songs. Websites put up illegal copies of music — or turn a blind eye while others do — then sell ads micro-targeted at everyone who comes to listen. Eventually, a site may be forced to pull down the unlicensed (and for the artists and labels, completely unpaid) copy, but in the meantime, its owners have cashed in.
For more legitimate sites, creators are pressured to accept a Hobson’s choice between licensing their music at desperately low royalty rates or wading into the legal quicksand and sending thousands or millions of “takedown” notices under a broken and antiquated law called the Digital Millennium Copyright Act.
Fortunately, creators have begun to band together and speak out — the roster of those demanding reform is a who’s who of the music business, from Elvis Costello to Annie Lennox, from REM to Chuck D, and hundreds more. Congress is reviewing the copyright laws, and this time, we will be heard, and there will be no more backroom deals or giveaways. Powerful new legislation called the Fair Play Fair Pay Act is being championed by leaders in both parties who care about music and the people who make it. That would be a vital step forward — a milestone of progress in a debate that has been running in Congress since Frank Sinatra lobbied Paul McCartney, Ella Fitzgerald, Bruce Springsteen and others to join him in fighting for a radio performance right nearly 30 years ago.
Music is an important part of who we are, an indelible record of what we care about and how we live.
And if we let that slip away — whether through legal gridlock, cultural apathy or technological drift — we will have lost something irreplaceable and fundamental to our lives.
Turn the knob, have your perception analyzed
Desperate to get their music on the radio at all costs, record labels are employing powerful software to artificially sweeten it, polish it, make it louder — squeezing out the last drops of its individuality
There was once a little-watched video on Maroon 5’s YouTube channel (now deleted, but visible here and here) which documents the tortuous, tedious process of crafting an instantly-forgettable mainstream radio hit.
It’s fourteen minutes of elegantly dishevelled chaps sitting on leather sofas, playing $15,000 vintage guitars next to $200,000 studio consoles, staring at notepads and endlessly discussing how little they like the track (called “Makes Me Wonder”), and how it doesn’t have a chorus. Even edited down, the tedium is mind-boggling as they play the same lame riff over and over and over again. At one point, singer Adam Levine says: “I’m sick of trying to engineer songs to be hits.” But that’s exactly what he proceeds to do.
Note: This article originally appeared in the March 2008 edition of Word Magazine. That was a long time ago—before YouTube started to usurp radio as the place where people discovered music, before music streaming services, before the vinyl revival and before audiophile digital music players like Neil Young’s Pono.
The finished “Makes Me Wonder” came in three versions: Album, Clean (with the word ‘fuck’ removed from the chorus) and Super Clean (with ‘fuck’ removed more thoroughly, and ‘God’ removed from the second verse). It was a spectacular hit, number one in Panama, Croatia, Cyprus, South Korea and Hungary, as well as in many larger countries. Why? Because it was played on the radio over and over and over again.
When you turn on the radio, you might think music all sounds the same these days, then wonder if you’re just getting old. But you’re right, it does all sound the same. Every element of the recording process, from the first takes to the final tweaks, has evolved with one simple aim: control. And that control often lies in the hands of a record company desperate to get their song on the radio. So they’ll encourage a controlled recording environment (slow, high-tech and using malleable digital effects).
Every finished track is then coated in a thick layer of audio polish before being market-tested and dispatched to a radio station, where further layers of polish are applied until the original recording is barely visible. That’s how you make a mainstream radio hit, and that’s what record labels want.
To be precise, “Makes Me Wonder” was particularly popular on U.S. radio stations playing the ‘Hot Adult Contemporary’ format, which is succinctly described within the radio industry as: “A station which plays commercial popular and rock music released during the past fifteen or twenty years which is more lively than the music played on the average Adult Contemporary station, but is still designed to appeal to general listeners rather than listeners interested in hearing current releases.”
Playlists of Hot Adult Contemporary stations are determined by a computer, most likely running the Google-owned Scott SS32 radio automation suite, which shuffles the playlist of 400 to 500 tracks, inserts ads and idents and tells the DJ when to talk. The playlist is compiled after extensive research. Two or three times a year, a company like L.A.-based Music Research Consultants Inc arrives in town, hires a hotel ballroom or lecture theatre and recruits 50 to 100 people, carefully screened for demographic relevance (they might all be white suburban housewives aged 26–40). They’re each given $65 and a perception analyzer—a little black box with one red knob and an LED display. Then, they’re played 700 seven-second clips of songs. If they turn the knob up, the song gets played. If they turn it down, it doesn’t.
If a station needs more up-to-date information (bearing in mind that they’re “designed to appeal to general listeners rather than listeners interested in hearing current releases”) they can run a ‘call-out test,’ where people from the right demographic are cold-called and interrogated about 30 seven-second clips played over the phone.
So Maroon 5’s job is clear. Just as a modern politician’s job is to deliver seven-second soundbites, their job is to deliver seven-second audio clips which will encourage young-ish people with a high disposable income to turn a little red knob at least 180 degrees clockwise. No wonder they look so stressed.
Fortunately, there are armies of producers, engineers, software programmers and statisticians lining up to help our heroes craft the perfect innocuous-but-shiny-sounding, research-ready pop hit. “It’s like digital photography,” says the prolific producer John Leckie, who has worked on Radiohead’s The Bends, the first Stone Roses album and A Storm In Heaven by The Verve. “Twenty years ago, if I showed you a picture of me standing next to the Pope, you’d believe it, and think I’d met the Pope. Today, you’d assume it was Photoshop.”
John’s career started as a tape operator at Abbey Road, where he witnessed Phil Spector recording All Things Must Pass with George Harrison. Phil wanted a big sound, so he filled the studio with musicians. The album was recorded pretty much live in one room with three drummers, two bassists, two pianists, two organists, six guitarists and horns, playing together onto six tracks of an eight-track recorder. Vocals took up the last two tracks.
For many people, this was a golden age. Recording a group of musicians playing together in an acoustically pleasant space is a tremendously difficult business. It’s all about where you place the microphones to capture the instrument sounds, but also the room sounds. Recording engineers at Abbey Road wore white coats and spent years as apprentices before they knew enough to do the job properly. When you listen to a record made the old way—like the Buena Vista Social Club album—you’re hearing a recording of a room. Which happens to have some musicians playing in it.
In the early 70s, recording started to change. Four tracks turned into eight, then 16, then 24, then 48. Engineers looked for ways to get more control over the sound. They started to create dead rooms, with very dry acoustics. Microphones were moved much closer to instruments, which were recorded one by one. With a clean, pure sound on tape, they could add artificial room sounds afterward using echo chambers. There was an explosion in audio creativity, as people were able to experiment endlessly. Records like Tubular Bells or Queen albums would never have been possible in the 60s. The white-coated engineers were replaced with experimental producers like Trevor Horn.
The music sounded exciting and different and strange. If you stick your head really close to an acoustic guitar, or someone singing, or a piano, you’ll hear strange, unexpected things. The aggressive click of plectrum on metal. The ambient resonance of piano strings. The new studios could capture all this.
Compare an acoustic track from Neil Young’s Harvest (1972) with one from Johnny Cash’s American IV (2002):
Rick Rubin’s recordings of Cash are extraordinarily intimate and affecting. But they don’t sound anything like Johnny Cash sitting in your living room playing some songs. They sound like you’re perched on Johnny Cash’s lap with one ear in his mouth and a stethoscope on his guitar.
When people talk about a shortage of ‘warm’ or ‘natural’ recording, they often blame digital technology. It’s a red herring, because copying a great recording onto CD or into an iPod doesn’t stop it sounding good. Even a self-consciously old-fashioned recording like Arif Mardin’s work with Norah Jones was recorded on two-inch tape, then copied into a computer for editing, then mixed through an analogue console back into the computer for mastering. It’s now rare to hear recently-produced audio which has never been through any analogue-digital conversion—although a vinyl White Stripes album might qualify.
Until surprisingly recently—maybe 2002—the majority of records were made the same way they’d been made since the early 70s: through vast, multi-channel recording consoles onto 24 or 48-track tape. At huge expense, you’d rent purpose-built rooms containing perhaps a million pounds’ worth of equipment, employing a producer, engineer and tape operator. Digital recording into a computer had been possible since the mid 90s, but major producers were often sceptical.
By 2000, Pro Tools, the industry-standard studio software, was mature and stable and sounded good. With a laptop and a small rack of gear costing maybe £25,000 you could record most of a major label album. So the business shifted from the console—the huge knob-covered desk in front of a pair of wardrobe-sized monitor speakers—to the computer screen. You weren’t looking at the band or listening to the music, you were staring at 128 channels of wiggling coloured lines.
“There’s no big equipment any more,” says John Leckie. “No racks of gear with flashing lights and big knobs. The reason I got into studio engineering was that it was the closest thing I could find to getting into a space ship. Now, it isn’t. It’s like going to an accountant. It changes the creative dynamic in the room when it’s just one guy sitting staring at a computer screen.”
“Before, you had a knob that said ‘Bass.’ You turned it up, said ‘Ah, that’s better’ and moved on. Now, you have to choose what frequency, and the slope, and how many dBs, and it all makes a difference. There’s a constant temptation to tamper.”
What makes working with Pro Tools really different from tape is that editing is absurdly easy. Most bands record to a click track, so the tempo is locked. If a guitarist plays a riff fifty times, it’s a trivial job to pick the best one and loop it for the duration of the verse.
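To make concrete just how trivial that edit is, here is a sketch in Python, with NumPy arrays standing in for audio and a made-up "best take" metric (a real producer would simply choose by ear):

```python
import numpy as np

def comp_riff(takes, bars=4):
    """Pick the 'best' of several one-bar riff takes and loop it.

    `takes` is a list of 1-D NumPy arrays, each exactly one bar long;
    the click track guarantees they all line up on the same grid.
    'Best' here is a stand-in metric: the take whose peak level sits
    closest to the average of all takes.
    """
    peaks = np.array([np.abs(t).max() for t in takes])
    best = int(np.argmin(np.abs(peaks - peaks.mean())))
    # Grid-aligned takes make "loop it for the verse" literally a tile.
    return np.tile(takes[best], bars)
```

Because the tempo is locked, looping the chosen riff for the duration of the verse really is a one-line operation.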
“Musicians are inherently lazy,” says John. “If there’s an easier way of doing something than actually playing, they’ll do that.” A band might jam together for a bit, then spend hours or days choosing the best bits and pasting a track together. All music is adopting the methods of dance music, of arranging repetitive loops on a grid. With the structure of the song mapped out in coloured boxes on screen, there’s a huge temptation to fill in the gaps, add bits and generally clutter up the sound.
This is also why you no longer hear mistakes on records. Al Kooper’s shambolic Hammond organ playing on “Like A Rolling Stone” could never happen today because a diligent producer would discreetly shunt his chords back into step. Then there’s tuning. Until electronic guitar tuners appeared around 1980, the band would tune by ear to the studio piano. Everyone was slightly off concert pitch, but everyone was listening to the pitch of the same instrument, so they were musically in tune with one another.
Today, the process of recording performances, then editing them together into what the band and producer consider a finished track, is just the start. Record companies need to ensure they’ll get that perfect seven-second snippet for the radio testing session, so they’ve added yet more polishing processes.
Once the band and producer are finished, their multitrack—usually a hard disk containing Pro Tools files for maybe 128 channels of audio—is passed on to a mix engineer. L.A.-based JJ Puig has mixed records for Black Eyed Peas, U2, Snow Patrol, Green Day and Mary J Blige. His work is taken so seriously that he’s often paid royalties rather than a fixed fee. He works from Studio A at Ocean Way Studios on the Sunset Strip. The control room looks like a dimly-lit library. Instead of books, the floor-to-ceiling racks are filled with vintage audio gear. This is the room where Frank Sinatra recorded “It Was A Very Good Year” and Michael Jackson recorded “Beat It.”
And now, it belongs to JJ Puig. Record companies pay him to essentially re-produce the track, but without the artist and producer breathing down his neck. He told Sound On Sound magazine: “When I mixed The Rolling Stones’ A Bigger Bang album, I reckoned that one of the songs needed a tambourine and a shaker, so I put it on. If Glyn Johns [who produced Sticky Fingers] had done that many years ago, he’d have been shot in the head. Mick Jagger was kind of blown away by what I’d done, no-one had ever done it before on a Stones record, but he couldn’t deny that it was great and fixed the record.”
When a multitrack arrives, JJ’s assistant tidies it up, re-naming the tracks, putting them in the order he’s used to and colouring the vocal tracks pink. Then JJ goes through tweaking and polishing and trimming every sound that will appear on the record. Numerous companies produce plugins for Pro Tools which are digital emulations of the vintage rack gear that still fills Studio A. If he wants to run Fergie’s vocal through a 1973 Roland Space Echo and a 1968 Marshall stack, it takes a couple of clicks.
Some of these plugins have become notorious. Auto-Tune, developed by former seismologist Andy Hildebrand, was released as a Pro Tools plugin in 1997. It automatically corrects out-of-tune vocals by locking them to the nearest note in a given key. The L1 Ultramaximizer, released in 1994 by the Israeli company Waves, launched the latest round of the loudness war. It’s a very simple-looking plugin which neatly and relentlessly makes music sound a lot louder (a subject we’ll return to in a little while).
When JJ has tweaked and polished and trimmed and edited, his stereo mix is passed on to a mastering engineer, who prepares it for release. What happens to that stereo mix is an extraordinary marriage of art, science and commerce. The tools available are superficially simple—you can really only change the EQ or the volume. But the difference between a mastered and unmastered track is immediately obvious. Mastered recordings sound like real records. That is to say, they all sound a little bit alike.
In a typical week, 30% of the U.S. Top 40 has been mastered at Sterling Sound in New York, which has seven studios working round the clock. There aren’t many mastering engineers in the world. The Strokes recorded Is This It on an old Apple Mac in Gordon Raphael’s basement studio. But it was mastered by Greg Calbi, who also did Born To Run and Graceland.
The business of mastering is infinitely complicated. Mastering engineer Bob Katz has written a 400-page book on mastering techniques, which ends with a poem about the art of mastering:
“I see:/a world which recognizes craft and training/
in audio itself which is not disdaining…”
The mastering engineer’s principal tool is compression. (Audio compression is completely unrelated to data compression, which is what turns a CD track into an MP3 file.) It’s a simple-but-complicated audio technique. The loudest parts of a track are made quieter, which means you can turn the overall level up without getting distortion, so it sounds louder. Why are TV ads so much louder than TV programs? Because their soundtracks are heavily compressed. Why are commercial radio stations much louder? Because they’re heavily compressed.
Bands, producers and record labels have always wanted to make loud records, for radio play and jukeboxes. At Motown, they realized that tambourines can cut through almost anything else. If you’ve got someone shaking a tambourine somewhere on a track, everyone in the pub can hear it when it comes on the jukebox.
With vinyl, there were clear physical restrictions on how wide the grooves could be, and how many grooves you could fit on a 7-inch single. Mastering engineer Bob Ludwig created an ultra-loud master of Led Zeppelin II, but his version was pulled when it skipped on a record player owned by Atlantic boss Ahmet Ertegün’s daughter (if your copy has “RL” scratched in the run-out groove, it’s his master, and worth a bit on eBay).
Radio testing makes loudness more important than ever before. Your seven-second sample has to cut through when played down the phone to a mum with a screaming kid in the background. Software like Waves L1 (which has now evolved to L3) takes a track and slams every millisecond to the maximum level. With multiband compressors, the track is split into three frequency bands. The bass, mid and treble are all independently made as loud as possible. That’s why you can still hear all the words on a Girls Aloud single playing on a transistor radio half a mile away.
Loudness is hugely controversial. In interviews, mastering engineers are always clear that they’d never push a track too far, that it’s all Some Guy’s fault. But 1,275 people have signed an online petition to get Red Hot Chili Peppers’ Californication remastered because: “The music should not be mastered simply to make all of the songs sound as loud as possible when broadcast on radio.”
Excessive loudness doesn’t hurt sales. (What’s the Story) Morning Glory was one of the loudest CDs ever released until Iggy Pop broke the record with his unlistenably distorted 1997 remastering of The Stooges’ Raw Power.
So the track has been recorded, edited, mixed and mastered. It’s burned on CD and in the shops. Does the polishing stop? Not quite. Just as labels compete to get their music on the radio, so radio stations compete to sound loudest and brightest. Radio stations have always used compressors to help their programming sound clearer and cut through interference.
Now that radio stations are entirely digital, they can go much further. Commercial stations now routinely edit songs themselves, trimming intros, chopping out boring bits, editing in station idents and—I’m not making this up—speeding up songs which they think are too slow or boring for their demographic. Some stations routinely play every track at +3%.
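A naive version of that speed-up is a one-function sketch (the function name is invented; broadcast processors actually use time-stretching so the pitch doesn't rise, which this makes no attempt at):

```python
import numpy as np

def speed_up(signal, factor=1.03):
    """Play a track `factor` times faster by naive resampling: keep the
    sample rate, but read the waveform at 1.03x speed via linear
    interpolation. This also raises the pitch, by roughly half a
    semitone at +3%.
    """
    n_out = round(len(signal) / factor)
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)
```

At +3%, a 3:30 song comes back about six seconds shorter, freeing up airtime for another ad break every few songs.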
Of course, not everyone does it like this, although most commercial releases will have at least the final layer of mastering polish. There are plenty of people who reject the polishing process, but they’re not getting much U.S. mainstream radio play: Aberfeldy recorded their debut album Young Forever in mono, using a single microphone to record the five-piece band playing through battery-powered amplifiers. The White Stripes famously recorded Elephant on 8-track tape at Toe Rag studios, and the album was mastered by veteran vinyl cutter Noel Summerville (who mastered the Clash’s Combat Rock).
When old-school producers and engineers talk about modern music, they’re convinced that better-recorded music would save the music industry from itself. Producer Joe Boyd wrote of the Buena Vista Social Club album (4m copies worldwide): “Its success is usually ascribed to the film or the brilliant marketing. But I am convinced that the sound of the record was equally if not more important.” Beautifully recorded records by Norah Jones, Bob Dylan and others have certainly shifted units. But the Red Hot Chili Peppers’ brutally mastered Californication has sold 15m copies worldwide.
Why does most music sound the same these days? Because record companies are scared, they don’t want to take risks, and they’re doing the best they can to generate mainstream radio hits. That is their job, after all. And as the skies continue to darken over the poor benighted business of selling music, labels are going to cling to what they know more fiercely than ever.
So is that it? Have we arrived? Will records continue to increase in loudness and homogeneity until literally everything sounds like Californication? Optimistic engineers dream of a day when the world’s music listeners spontaneously rebel against over-processed music. The Loudness War will end and people will stop buying Black Eyed Peas records. A new era of high-fidelity recording will be born, and men in white coats will once again stride confidently through acoustically-lively studios placing their vintage microphones with care.
Pessimistic engineers can see an endless war against fidelity, as ever-more-sophisticated technology makes pop music louder and shinier than ever. As hi-fi systems are abandoned for earbuds and mobile phones, there will be no reason to make nice-sounding records. Worse still, the technology behind systems like the Waves Ultramaximizer could easily be built into an iPod, automatically remastering all those dull old Neil Young records into BIG LOUD IN-YOUR-FACE BANGERS.
In reality, technology might save the recording process. At the moment, Pro Tools operates at twice (or four times) the resolution of a CD. A great deal of quality is lost as those huge files are squished down to the CD format, before being further squished into MP3s on your iPod. In a very few years, we’ll have 1-terabyte iPods, easily capable of handling thousands of recordings in their original high-definition form. At the same time, every part of the signal chain—from earbuds to digital-to-analogue converters—is improving and getting cheaper. Studio software is also constantly developing, so perhaps mastering and compression can become more subtle and less abrasive. It’s quite possible that we’ll look back at the first years of this century as a crude interval of low-fidelity sound. And maybe the record industry will even persuade us to re-buy all those old records yet again.
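The arithmetic behind those huge files is straightforward. A sketch, assuming "hi-res" means the common studio setting of 96 kHz / 24-bit against CD's 44.1 kHz / 16-bit:

```python
def audio_size_mb(minutes, sample_rate, bit_depth, channels=2):
    """Size of uncompressed stereo PCM audio, in megabytes."""
    return minutes * 60 * sample_rate * (bit_depth // 8) * channels / 1e6

# A 4-minute track: CD quality vs a typical hi-res studio session
cd = audio_size_mb(4, 44100, 16)     # about 42 MB
hires = audio_size_mb(4, 96000, 24)  # about 138 MB
```

Roughly a threefold difference per track, which is why hi-res distribution had to wait for storage and bandwidth to catch up.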
Tom Whitwell is a digital product consultant in South London. He is gradually re-purchasing his iTunes library on overpriced vinyl, and designs open source music hardware at Music Thing Modular. Follow Tom: Linkedin | Tumblr