Production Q & A #2

The three questions in this post came from people who wanted to remain anonymous, and the more I think about it, the more I understand their reasons. So I think from here on out, any questions I field for the Q&A will only be posted anonymously. Hopefully that will encourage more people to submit questions as well. So, let’s get to it.

1.  Why do I need a better soundcard if I don’t record anything?  Is it really worth spending more than $1000 on?

In the past, when most people were still using hardware synths and drum machines, not to mention other “real” instruments, it was easy to justify the expense of a really nice soundcard. In fact, I often said it was one of the first things people should upgrade, because having really good A/Ds (analog-to-digital converters) made a very noticeable difference in the quality of anything you recorded.

These days, with so much of the production process happening entirely “in the box” for some people, the advantages can seem less tangible. It’s entirely possible to achieve 100% professional results with nothing but the built-in audio interface in your laptop or desktop, after all. However, I do think there are still some good reasons for going with a separate, reputable soundcard:

– Lower latency, typically. For the most part, a professional soundcard is going to offer you lower latency with less CPU overhead. I’ll be the first to admit that some producers place too much emphasis on judging a card by ridiculously low latencies, but to a certain extent it can help. For people performing with virtual instruments in real time, the benefits are certainly noticeable up to a point. For me, anything less than 128 samples is fine for all my needs, live or in the studio, but most of the really good soundcards can go still lower without much increase in CPU overhead.

Just remember that sound travels roughly 1 foot through the air per millisecond. Musicians have been jamming for years and staying in time while standing 10 feet or more away from each other, so you don’t need super low latencies to get your point across (there’s a quick sketch after this list that puts buffer sizes into milliseconds and feet).

– Stability. Drivers for higher-end soundcards tend to be updated more often, and tend to be of higher quality in terms of how often (or not!) they crash. Cheap gaming soundcards might claim to be of the best quality, but more often than not it’s their owners who are running into the most issues and posting for help on forums.

– Imaging. Good converters can do amazing things compared to just-okay ones. Your music will sound like it has a better sense of space, depth (front-to-back imaging), and left-to-right localization (how easy it is to accurately tell where an instrument sits in the stereo spread). Just remember that this only affects what YOU hear though; the actual audio is going to be recorded the same no matter what D/A you’re using.

The benefit comes from the fact that this increase in clarity can help us better judge how much reverb is too much, or when we’ve panned instruments just a little too close together, or whether we’ve applied too much of that magical stereo widener plug-in and the mix now sounds off-balance. Basically, when you can hear better, you can make more accurate decisions when writing and mixing your music. And those decisions can even affect how people with lesser playback systems hear what you release.
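To put the latency numbers from the first point into perspective, here’s a quick back-of-the-envelope sketch (plain Python, with an assumed 44.1kHz sample rate) that converts a buffer size into milliseconds and into the equivalent distance through the air:

```python
# Buffer size -> added latency in milliseconds, and the equivalent distance
# from your speakers, using sound travelling at roughly 1 foot per millisecond.
SAMPLE_RATE = 44100  # assumed; use whatever your interface is actually set to

for buffer_samples in (32, 64, 128, 256, 512):
    latency_ms = buffer_samples / SAMPLE_RATE * 1000
    distance_ft = latency_ms  # ~1 ft per ms through air
    print(f"{buffer_samples:>4} samples ~ {latency_ms:4.1f} ms, "
          f"like standing {distance_ft:4.1f} ft from your speakers")
```

Even a fairly conservative 256-sample buffer works out to under 6ms, which is closer to your speakers than most people sit anyway.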

At what point does this increase in quality start to outweigh the cost of upgrading? If you’re looking for your first soundcard, or even just something portable to use live, then I think you should realistically be looking to spend around $200-300 at least. You’ll get a noticeable improvement over the stock soundcard (many of which aren’t THAT bad these days), and you’re likely not spending more than you can actually hear.

What I mean by that is it makes no sense to spend $2000 on a soundcard if you’re still using $200 speakers in an untreated room. All of these things work together, and I think most people will go through phases where they get the best results upgrading everything over time in cycles. Speakers, soundcard, acoustics, speakers, soundcard, acoustics, etc. Or you have a lot of money and get top-notch stuff right off the bat, boo hoo for you. 🙂

After the $200-300 price range, I think you’re realistically going to have to spend $500-1000 on a card to get any real, noticeable increase in audio quality. For most producers, this is probably as much as they’ll ever spend on an audio interface, and unless you’re putting a lot more money into your monitoring chain and acoustic treatment, spending more might not yield that great a difference in terms of pure audio quality.

It’s the law of diminishing returns: the more you spend past this point, the smaller the improvements in how things sound. Sure, a $3000 interface will probably sound better than a $1500 one, but if the difference is only around 5%, is that worth another $1500? I think at that point you’re in one of two scenarios: either you do this for a living and the difference is noticeable and useful to your job, or the rest of your studio is already where you want it, and then the price difference makes more sense. Either way, for 95% of musicians, spending more than $1000-1500 on an audio interface is probably not going to net you any huge advantages.

 

2.  Why do some people say normalizing my audio files is bad, and others say it’s not a big deal?

The thing to realize about digital audio is that any time you perform ANY operation on an audio file, you are almost always destructively altering it. That is to say, you are in some way (often to a degree smaller than you can imagine) permanently altering the file in a manner that is irreversible. Digital audio processing involves math, and that math often produces numbers with more precision than we can reasonably store, so things get rounded.

Now, there are a lot of ways we’ve learned to minimize the extent of this, through things like floating-point processing or dithering, for instance. But from a theoretical standpoint, operations like normalization are a form of destructive processing, and to many people that’s something to be avoided whenever possible.
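Here’s a tiny illustration of what “destructive” means in practice. It’s just a sketch using plain NumPy and a handful of made-up 16-bit sample values, not any particular editor’s gain tool:

```python
# Scaling integer samples down and then back up does not return the exact
# original values, because each step has to round to whole sample values.
import numpy as np

original = np.array([1001, -2307, 15000, -32768, 7], dtype=np.int16)

gained   = np.round(original * 0.5).astype(np.int16)  # turn it down ~6dB
restored = np.round(gained * 2.0).astype(np.int16)    # turn it back up ~6dB

print(original)   # [  1001  -2307  15000 -32768      7]
print(restored)   # [  1000  -2308  15000 -32768      8]  <- three samples now off by one
```

The differences are only a single step in the lowest bit, which is exactly the kind of rounding error normalization introduces: real, but usually far too small to hear.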

But let’s step back for a second and look at it from a practical standpoint. Say you’ve rendered or exported your latest song, and you realize that the highest peak is at -5dBFS. Of course this means that you’re not using all of the available bit resolution of your audio file, which to most people simply means that it sounds kind of quiet.

In this scenario, the safest and least intrusive method of raising the volume so we are using all of the file’s bit depth is to normalize it (with caveats, to be explained shortly). We simply apply 5dB of gain to every sample, raising the overall volume with only some tiny rounding errors down at the very lowest bit as a downside. In comparison, what are the other ways we can raise the volume of the audio?

Well, the usual suspects are limiting, compression, or clipping, and I don’t think anyone would argue that these alter the original audio less than normalizing does. So from a practical standpoint, normalizing is the safest, cleanest-sounding way to raise the audio level so it uses as much of the format’s available resolution as possible.
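If you want to see how little is actually going on under the hood, here’s a minimal sketch of peak normalization in plain NumPy. The function name and the target_dbfs parameter are just for illustration, not any specific editor’s feature: measure the highest peak, work out one gain factor, and multiply every sample by it.

```python
import numpy as np

def normalize(audio, target_dbfs=0.0):
    """Scale float samples (full scale = 1.0) so the highest peak hits target_dbfs."""
    peak = np.max(np.abs(audio))
    target = 10 ** (target_dbfs / 20)  # 0 dBFS -> 1.0, -0.3 dBFS -> ~0.966
    return audio * (target / peak)

# A mix peaking at -5 dBFS gets multiplied by 10^(5/20), roughly 1.78 --
# every sample, loud or quiet, by the same constant amount.
mix = np.random.uniform(-1.0, 1.0, 44100) * 10 ** (-5 / 20)
louder = normalize(mix)
print(20 * np.log10(np.max(np.abs(louder))))  # ~0.0 dBFS
```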

There is one downside though, and that is the fact that most of the time, normalized files really do use the full available scale, which means they peak at exactly 0dBFS. This can lead to a problem called Inter-sample Modulation Distortion (you’ll also see it discussed under the name “inter-sample peaks”). Google it for the details, but the short version is that when you have several consecutive samples sitting at 0dBFS, the reconstructed analog waveform between those samples can actually swing above full scale, and some digital-to-analog converters will produce small amounts of distortion trying to reproduce it. The issue is less common these days than it was early in the digital era, but it’s still something to be aware of. There’s a small demonstration below, and read on for when it actually matters.
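Here’s a small, self-contained demonstration of the effect (NumPy/SciPy, not any particular metering plug-in). The test signal is deliberately a worst case, a sine at exactly a quarter of the sample rate, offset so every stored sample lands below the true waveform peak, but it shows how samples normalized to 0dBFS can hide a reconstructed peak well above full scale:

```python
import numpy as np
from scipy.signal import resample

fs = 44100
n = np.arange(1024)

# Sine at fs/4 with a 45-degree phase offset: every stored sample sits at ~0.707,
# but the continuous waveform between samples reaches 1.0.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x = x / np.max(np.abs(x))  # "normalize" the sample values so they peak at 0 dBFS

# Upsample 8x to approximate the waveform a D/A's reconstruction filter produces.
y = resample(x, len(x) * 8)

print("peak of the samples:      %+.2f dBFS" % (20 * np.log10(np.max(np.abs(x)))))
print("peak between the samples: %+.2f dBFS" % (20 * np.log10(np.max(np.abs(y)))))
# The second figure comes out around +3 dBFS: the analog side has to swing past
# full scale, and that's where some converters start to misbehave.
```

Real music rarely overshoots by a full 3dB like this worst case, but fractions of a dB over full scale are common on loud material, which is why leaving a little headroom helps.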

I usually tell people to think about WHEN they are normalizing before they decide whether to do it. For instance, if you’re working with audio files in your DAW, then normalizing is probably not a problem at all, because that audio file is not the final product going to the listener. It’s still going to pass through the audio processing of the DAW, be turned down by track or master faders, be affected by master channel effects, etc. Basically, the normalized audio file is never going to be played back at full scale (true 0dBFS), so Inter-sample Modulation Distortion (IMD) is not a concern. Your D/A will never see the 0dB level the raw audio file is sitting at.

However, if you’re mastering your own music, or generating other files meant to be listened to immediately afterwards (typically on CD), then you should rethink normalizing. At the very least, use a normalizing tool that allows you to set the final output level to something other than 0dB. Setting it to -0.3dBFS will change almost nothing in terms of perceived loudness, but it’s just low enough to avoid almost all instances of IMD.
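With the normalize() sketch from earlier in this answer (again, normalize() and target_dbfs are just the illustrative names from that sketch, not a real tool’s options), that’s nothing more than a different target:

```python
# -0.3 dBFS corresponds to a peak of 10 ** (-0.3 / 20), about 0.966 of full scale:
# effectively the same loudness, but a whisker of headroom left for the D/A.
final_master = normalize(mix, target_dbfs=-0.3)
```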

So, to summarize: when you’re writing your track, normalizing audio files is fine; it’s probably the cleanest way to boost the volume of your audio for whatever reason you need to. When you’re generating the end product, something meant to be listened to by others with no further processing, then make sure you only normalize if you can manually set the final output level to something other than 0dBFS.

 

3. Why are Macs better than PCs?

HA!  I’m not touching that one, nice try!  Silly blog trolls….  🙂

——————————-

As always, I hope some people find all this useful.  Feel free to send me more questions, discuss this Q & A in the comments, or pass this on to anyone you feel might be interested.  I’m leaving tomorrow to play the Photosynthesis Festival I’ve been blogging about recently, so if I don’t reply right away I’ll do so as soon as I get back next week.

Don’t forget you can sign up for email or twitter notifications of new postings, and please click that “Like” button in the upper right of the page if you enjoyed reading this. Thanks, and I’ll have some new posts soon!

7 Replies to “Production Q & A #2”

  1. Very much appreciated. Here is a Q: Best way to keep frequency-rich (at times, droney) ambient music parts separated in a mix? I hear beautiful droning music where one hears every layer as if it was packaged together and simply rearranged by the artist. I wonder if you could give tips for mixing such sound-rich music successfully. Thx for _any_ advice & thx for this series 🙂

  2. Good stuff about too-hot levels in digital audio, Tarakith. With 24-bit resolution nearly standard now, there’s NO reason to record hot to get detail. And I find that most CD/DVD players sound MUCH better if you leave a dB of headroom in your mixes. Compress or limit all you want to gain apparent loudness, but give the consumer-level DACs some room to breathe.

    Jasonswe,
    One trick for keeping drones and pads out of each other’s way is to use a parametric EQ to scoop out 2-3dB of competing frequencies in one track, and add the same amount to another track fighting for the same spot. This leaves a nice sonic “hole” that lets details of each track come through.
    For example, if you have a nice shiny pad or lead, emphasize the 5-7kHz range, and take the same frequencies out of your bass/string drones. I know cutting highs from anything hurts, but your ear will soon readjust to the new tone, and you’ll get better separation in your mixes.
    Another trick is to use nearly all-“wet” reverb on slow attack/release material to push it back in the track. It doesn’t have to be a big hall or room either; even short nonlinear or early-reflection type programs will do the trick.
    Lastly, once you’ve recorded your stereo layers, split them into two mono tracks and physically move the left or right side back 10-20 milliseconds. Do the same with the competing part’s other side and they’ll step on each other less, and it will open up the stereo field nicely.
    Good luck!

  3. Thanks a lot for the tips Xenobot! I’ve read about the cutting of frequencies before but I did not notice much difference when applying this technique. I have recently treated the room here and so I am hoping that my perspective during mixdowns will improve slightly. I appreciate very much all of the advice and time you took to post an answer! I will try these things you suggest! Have a wonderful Summer!

  4. You make many good points about normalizing. There is no “it is good” or “it is bad”. It depends.

    There is one downside, though, that I think is worth mentioning and which you didn’t cover. Normalizing raises the level of everything equally. The good, and the bad.

    Many people normalize to “make things louder”, but this makes the bad parts louder too. For example: the noise floor of a recording. If you have a recording that’s really quiet and you normalize it to make it louder, you not only make the parts you wanted to hear louder, but ALL of the ambient noise in the room as well. By normalizing quiet recordings you can take a quiet passage and render it unusable with white-noise-sounding static.

  5. I’ve heard Clint’s argument a few times recently, but it is a little moot. When you raise the level of everything equally, the relationship between the audio and the “bad parts” (noise) doesn’t change. Turning the volume knob on your stereo does the same thing. Your signal-to-noise ratio doesn’t change, so I don’t think the point is a valid one. You cannot make a recording ‘unusable with white-noise-sounding static’ by normalizing unless there is already a bunch of static, which would make the track unusable in the first place.

  6. I think Clint’s point is worth mentioning to some extent though, and yes, it’s something I probably should have touched on in the post so people are aware of it. But as Davis said too, if the noise floor of the recording is so loud that after normalization it’s clearly audible and ruining the recording, then that would likely have been an issue no matter how you chose to raise the overall level.

    It’s probably one of those things to keep in mind while you’re actually capturing the source material in the first place. Record at a fairly decent level so the audio isn’t captured super quiet, and it won’t be an issue later on. I’m not saying you need to record so hot that you’re worrying about clipping, but it might be worth avoiding recordings that peak at -50dB too 🙂

  7. The more time I spend practicing and producing, the more productive I am, rather than just reading and scouring the internet for more equipment or samples. Though once in a while I stumble upon little jewels of knowledge like this. Thanks for the intel!
