Production Q&A #7

1. Why can’t I get my latency down to 0?

Despite the wealth of information online about what latency is and how it affects computer-based musicians, it’s still the most common issue I see people struggling to come to grips with. Quite often it’s blamed for issues that have nothing to do with latency, or people have unrealistic expectations about the settings in their DAW that relate to it.

At its simplest, latency is nothing more than the delay between when you initiate an action and when you hear the result (typically expressed in milliseconds). This could be playing a note on your MIDI keyboard to trigger a software instrument in your DAW, turning a knob on your MIDI controller to change an effect parameter, or how responsive your guitar feels when using software-modeled amps and effects.

But what causes this delay, and how can we minimize it? Or more importantly in my mind, what can we consider an acceptable latency?

In the simplest of terms, audio latency is created as your soundcard sends data to and from your computer. After an analog signal is converted to digital information, the soundcard stores chunks of this data in packets (buffers) to send to the computer. It’s the size of these packets that determines your latency. The size of each packet is typically user-adjustable in your DAW, expressed as a number of “samples”. So a setting of 512 means there are 512 samples in each packet of data.

The larger the packet size (i.e. the more samples it contains), the more reliable the data transfer, and the less CPU strain on your computer. Of course, a larger packet size also means that your latency increases, as the soundcard drivers have to wait longer to fill a packet before sending it to the computer (and vice versa, from the computer to the soundcard). So setting your buffer size becomes a trade-off between lower latency and lower CPU overhead.
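The arithmetic behind this trade-off is simple: one buffer’s worth of latency is just the buffer size divided by the sample rate. A quick sketch (assuming a 44.1 kHz sample rate; real-world round-trip figures also include converter and driver overhead on top of the raw buffer math):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """One-way latency in milliseconds for a given buffer size."""
    return buffer_samples / sample_rate * 1000.0

# One-way and raw round-trip latency for common buffer sizes at 44.1 kHz.
for size in (64, 128, 256, 512, 1024):
    one_way = buffer_latency_ms(size)
    print(f"{size:5d} samples -> {one_way:5.1f} ms one-way, "
          f"~{2 * one_way:5.1f} ms round trip")
```

At 256 samples the raw round trip works out to about 11.6 ms; the extra millisecond or two you see in a measured figure comes from the converters and drivers.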

There is no such thing as zero latency; all soundcards need to transfer data this way.

Luckily most soundcards today can operate and remain stable at pretty low latencies. I typically keep my soundcard buffers set at 256 samples, which gives me a round-trip audio latency of about 13ms. For me at least, this is a perfect trade-off between having responsive control over my software instruments and maintaining a reasonable CPU overhead. Certainly many soundcards can go lower than this, but honestly I rarely find I need to do that myself.

It’s easy to fall into the trap of thinking you have to set your latency as low as possible, but it’s important to keep this all in perspective too. The average speed of sound in normal conditions is about 1 foot per millisecond. So having a latency of around 10ms is roughly equal to the time it takes sound to leave a speaker and reach your ears from ten feet away.

Bands and acoustic musicians have been performing with unparalleled sync amongst themselves for centuries at these types of distances (or even greater). Blaming your soundcard for not being able to achieve ridiculously low latencies as the reason your playing is sloppy seems a bit excessive, no? There’s nothing wrong with using lower latencies, but at some point it almost becomes academic IMVHO. Find a balance between responsive playing, and a manageable CPU load, and don’t worry if you can’t set your latency any lower.


2. People always say I should layer my drums to get really fat sounds, but the more layers I add, the worse it sounds.

Well, like a lot of things when it comes to making music, more doesn’t always equal better. When you layer sounds that share the same frequency ranges, some of those frequencies will cancel each other out, and some will sum to form louder frequencies. After a while, you’ve added so many layers that you end up with an undefined bunch of mush instead of a slamming drum sound.

Generally I find that two to three layered samples work well. The key is not only to choose great-sounding drum samples in the first place, but also ones that complement each other well. For instance, when layering kick drums, I’ll often use a really deep, subby kick to provide the oomph, and a brighter kick with more click and beater-head sound to provide the character.

The other thing to pay attention to is that the samples are lined up as closely as possible so you don’t get flamming. This is when you hear a very short delay between the attack of one drum layer and the other. Instead of the two sounds combining to form something new, they end up sounding like just sloppily layered drums. Huge pet peeve of mine! Take the time to slide one of the samples forward or backward a couple of milliseconds at a time until you find the spot where the samples join to form a single, cohesive sound.
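If your DAW nudges by samples rather than milliseconds, the conversion is just the time multiplied by the sample rate. A minimal sketch (assuming a 44.1 kHz project):

```python
def ms_to_samples(ms: float, sample_rate: int = 44100) -> int:
    """Convert a nudge amount in milliseconds to a whole-sample offset."""
    return round(ms / 1000.0 * sample_rate)

# A 2 ms nudge at 44.1 kHz is about 88 samples. Offsets much beyond
# ~10 ms start to be heard as a distinct flam rather than one hit.
print(ms_to_samples(2))   # 88
print(ms_to_samples(10))  # 441
```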


3. What’s the best synth for making deep house type of music?

Error. Question does not compute. Error. 🙂

Honestly, I don’t think I’ve ever seen a synth marketed at only one genre. Any good, well-rounded, multi-purpose synth should have the facilities to let you sculpt whatever kinds of sounds you want, regardless of the intended genre.

Except for NI’s Massive, only Dubstep people use that one (I kid, I kid!).

Instead of worrying about which synth other people use in their songs, focus on learning the synths you have at your disposal.  🙂


Well, that’s it for this week.  As always, if you have any questions you want me to answer on the blog, drop me an email or post it in the comments.  Just a quick reminder that all of my Production Guides are now in nicely formatted PDF versions too.  Great for E-readers or iPads if that’s your thing.

And if you haven’t already, please hit the “Like” button in the upper right side of this page, or consider a small $1 donation.  Thanks!

Best and Worst Part of Making Music?


Throwing the questions back to the readers of my blog once again.  This time I’m curious: what’s your favorite part of the music making process?  What’s your least favorite part?

As usual, I’ll post my answers after a few other people have chimed in.

Production Q&A #6

1. In your mixing tutorials you’ve always emphasized that one should listen to the mix on several different systems to see how it sounds. How can I translate those impressions to tweaks at my mix position to make the mix more portable? Is it just finding a balance between what I hear at the mix position and what I remember from listening on the alternate systems?

In general I think that’s the basic idea. What I usually recommend is that when you think your mixdown is sounding good in the studio, take it to different locations and listen to it. Bring a notebook and make notes about what you hear. For instance, some instruments might stand out too much, or not be heard clearly enough. Is the overall track too bass-heavy, or too bright sounding?

When you hear something that sounds wrong to you in one location, pay attention to that when you go to your other listening locations. The goal is to try and average all these notes you’re taking so that it sounds as good as possible in as many locations as possible. Try and pay attention to how you need to make it sound in the studio, so that it also sounds good in your car, living room, iPod, etc.

Over time you’ll start seeing trends and can compensate automatically when you’re doing the mixdown. For example, if you find that you consistently think the high hats end up being too loud when you listen to the track outside your studio, then you know you need to mix the high hats quieter than you normally would back in the studio.

It takes time, but with enough practice you’ll be able to adapt what you hear when you’re writing to how you know it will likely sound elsewhere.


2. How do you organize your samples?

I try and keep it as logical as possible, so that I don’t have to hunt for too long when I’m looking for a sample. In general I have my samples organized into Drums, Synths, Field Recordings, and a random Misc folder. Then I’ll have separate folders within each of these to further break things down by type.

So my Drums folder will have separate folders for kicks, snares, percussion, cymbals, etc. The Synths folder will have Pads, Leads, Basses, etc. Field Recordings will be broken into nature and city samples most of the time.

I also use file renaming apps to keep the individual samples named and numbered. I find it’s easier for me to just have all my samples named and numbered the same way, i.e. Snare01.wav, Snare02.wav, etc. For things like high hats, I put the numbering first so that the closed and open high hat samples are always located next to each other. This short and simple naming scheme also has the benefit of making sure I can always see the sample name in those apps and gear that only display a few characters of it.
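If you’d rather script the renaming than use an app, the same scheme only takes a few lines of Python. A rough sketch (the folder and base names are just examples, and it assumes the new names don’t collide with any files already in the folder):

```python
import os

def rename_samples(folder: str, base_name: str, ext: str = ".wav") -> list[str]:
    """Rename every .wav in a folder to BaseName01.wav, BaseName02.wav, ...

    Files are numbered in alphabetical order of their original names.
    Assumes the generated names don't clash with existing files.
    """
    files = sorted(f for f in os.listdir(folder) if f.lower().endswith(ext))
    renamed = []
    for i, old in enumerate(files, start=1):
        new = f"{base_name}{i:02d}{ext}"
        os.rename(os.path.join(folder, old), os.path.join(folder, new))
        renamed.append(new)
    return renamed
```

Run it on a copy of the folder first; unlike most renaming apps, a script has no undo.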

For renaming the files on OSX, I personally like Name Changer, which is free.

If anyone has one on Windows that they like, please share it in the comments.


3. How do you keep your gear clean, get rid of all the finger prints and grime, dust, etc?

It’s a bit Suzy Homemaker of a question, but if anyone has seen my studio then I guess they know I like things tidy 🙂 I keep a couple micro-fibre cloths in the studio for dusting and keeping my gear and laptop clean. The micro-fibre cloths are great because they grab dirt and dust without having to use a liquid dusting spray. Great for cleaning computer monitors too.

For getting off finger prints and other grime, just a tiny bit of warm water on the cloths is usually enough to get it off. This is great for the laptop track pad too, as it’s much easier to use when it’s nice and clean I find. I’ve also heard of people using “Magic Erasers” for cleaning trackpads and computer keyboards. You don’t even need to use water with them supposedly.

A quick note about micro-fibre cloths though. Once you’re done and you go to wash them, be sure you NEVER put them in the clothes dryer with other clothes. Dry them separately or air dry them. If you put them in the dryer, all the little fibers will grab the lint from your other clothes, and then they’ll leave lint behind when you use them, instead of picking it up.

Finally, when I’m not using my gear, it all gets covered with custom studio dust covers. Not the greatest website, but Larry’s rates are really cheap, and he does custom sizes for anything you need.


4. Why don’t you write tutorials for more advanced users?

Mostly because I find that by the time someone gets to a more advanced stage, they already have a pretty specific workflow figured out. Plus, a lot of the more advanced guide ideas I can think of would require very specific combinations of gear, so not everyone would have access to them.

So while they might give a few people a new idea, in general not as many people seem to respond to them as to my beginner tutorials. We’ll see though; I still do them now and then, as you can see with my recent Octatrack sampling video.


5. Finally, a quick tip I find useful.

I’ve started putting PDFs of all the manuals for my gear on my iPad, instead of using the hard copies. While I generally prefer having a physical manual when possible, it’s definitely been nice having all my manuals in one place wherever I am. Plus it speeds up looking for something too, since most manuals have an index or a table of contents that lets you instantly jump to a topic.

Might not be for everyone (then again not everyone reads manuals anyway!), but I’ve found it one of the more useful things I’ve done in the studio lately.


Well that’s it for this time, as always please feel free to send me any more questions you might have, or post them in the comments.  Thanks everyone!

5 More Questions

The last time I did a blog post asking my readers for their opinions on various topics (HERE), it seemed to go over pretty well.  It was a great way to get opinions from a really wide range of musicians, and not have them skew towards any one thing, as typically happens on manufacturer-specific forums. So let’s try it again and see where it leads this time.  I’ll add my own answers after a few other people have replied first.

1. If you could be proficient at any instrument, what would it be and why?  Bonus points if you don’t say piano!

2. What’s the one piece of music technology that you wish someone would invent? Again, let’s avoid the obvious brain-to-audio converter.

3. What’s your favorite MIDI controller (hardware or app) and why?

4. If you could spend one day in the studio with any musician, who would it be?

5. What are your musical goals for 2012?

Thanks, and I look forward to reading everyone’s answers!

What Plug-In?

As I was cruising different forums this morning, it was pretty apparent that some people had gotten a bit of money for the holidays, and were looking to do some shopping.  I found it kind of funny though that so many people were looking to plug-ins to solve all of their music writing dilemmas.  Certainly some plug-ins are better than others for different tasks, and some have a sound we tend to prefer compared to others.  But what struck me was how many people were assuming that there was some magical plug-in that would instantly solve what in reality were nothing but simple audio engineering issues.

Some examples:

– What plug-in do I need to spread sounds around in the stereo field?

Instead of relying on a plug-in on your master channel, or applied during mastering, just use the pan control on each track in the DAW to spread sounds out.  This gives you greater control over which sounds are placed where, and can sound much more natural than some of the stereo “widener” plug-ins.  Additionally, by placing different instruments this way, you also free up room in your mixdown for all the sounds to be heard clearly.

Keep in mind that most of these widener plug-ins work by using phase-shifts or short delays to create a Haas effect.  So while it might sound good in headphones, you can run into issues on mono systems and other playback environments.  One of the most common things I find myself adjusting in mastering is compensating for when people overdo these types of effects, leaving the center of the stereo field with a big gaping hole where the main instruments should be.  If you do use plug-ins like this, you only need a little to achieve a lot.
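For the curious, the Haas trick itself is nothing exotic: delay one channel by a few milliseconds and the sound appears wider. A toy numpy sketch (for illustration only, not how any particular plug-in implements it):

```python
import numpy as np

def haas_widen(mono: np.ndarray, delay_ms: float = 15.0,
               sample_rate: int = 44100) -> np.ndarray:
    """Fake stereo width by slightly delaying one channel (Haas effect).

    Returns a (2, n) array: row 0 is the dry signal, row 1 is the
    delayed copy. Summing the rows back to mono comb-filters the
    sound, which is why these effects can fall apart on mono systems.
    """
    delay = round(delay_ms / 1000.0 * sample_rate)
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    return np.vstack([mono, delayed])
```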


– I just got some top-notch drum sample packs from “Producers X,Y,Z”, what plug-in do I need to make them work in my mixdown?

One of the great things about buying a well-produced set of samples is that most of the time all of the processing has already been done for you!  More often than not, those drum sounds were already compressed and EQ’d to sit well in a mix.  You’re paying not just for the source samples themselves, but also for the preparation that went into them.

It’s a classic case of people applying processing to sounds by default, without first perceiving a real need for it.  So try your new sounds in a song without applying any EQ or compression to them; I bet you’ll be surprised at how good they sound as is.


– I heard a producer with 20 years more experience than me do a live set over the weekend, and his tracks sounded so much better than mine.  What plug-in should I put on my master channel to fix this?

While the performer you heard might have some type of processing applied on the master out of their set, it’s worth keeping in mind that there’s no magic “make me sound better” plug-in (or hardware box).  The artist sounds better than you because they most likely ARE better than you.  They have years more experience and a studio full of hand-picked gear that suits THEIR needs (and might not fit YOURS).  Not to be a downer, but it’s usually a bit unrealistic to expect you’ll sound as good as a professional with decades of experience when you’ve only been doing this for a year or two.

So instead of approaching the situation looking for a magic solution that will solve all your problems, look at it as a goal post to reach, a milestone to set and achieve in the future.  Try and find other sets online by the same artist, and compare your productions or live sets to theirs.  Analyze the specific differences, and try to identify areas in your own productions that you need to work on improving.  Break it down so you can approach it one piece at a time, instead of trying to improve everything all at once.  Baby steps, as they say.  And keep your chin up, we’ve all been there in the past.  🙂


– Every time I try and spread cold butter on my toast, all I do is break into the bread instead of leaving a thin layer of butter.  What plug-in will fix this for me?

Vintage Warmer, obviously  🙂

Apologies to anyone if some of those look familiar; I’m not trying to call anybody out, just using some random examples to make the point.  In a lot of ways I can understand people looking at their problems this way; after all, in the last couple of years some amazing plug-ins have been released that have really changed what we think is possible in terms of audio processing (Melodyne, anyone?).

But sometimes it’s worth stepping back and getting a little old-school when it comes to looking at a problem too.  You might be surprised to realize you already have the tools and the means to solve an issue you’ve run into, and at the same time you might save some money for something you really have a need for.

Just for fun though, reply in the comments with what you would consider your favorite plug-in and why.  You have to pick only ONE though, no putting a list or multiple options.


Well, this will likely be the last blog post I do before the end of the year, unless I suddenly get inspired in the next couple of days (and considering my Octatrack just got dropped off by UPS minutes ago, fat chance).  I just want to once again thank everyone who’s followed my ramblings over the last year, shared the blog with their friends, or left some insightful comments of their own.  I’ve got a lot of new ideas for the coming year, so be sure to check back often.

Special thanks to those of you who were kind enough to donate a couple bucks last week too. It helps a lot, so you have my sincere appreciation.

Now, who else is looking forward to a killer 2012?

Production Q&A #5

Well it’s taken a few weeks, but I’ve finally had enough people submit some questions that I can write up another Production Q&A. So, let’s get to #5 then:

1. Reverbs and delays are often used on aux channels to give some space and depth to a mix.  Seeing as how you are not actually putting these effects on any one track, what is a good way to audition reverbs/delays for use as a send track?

Generally I’ve always found the easiest way to audition send effects like reverbs is to turn up the send amount on my snare track while I try different reverb settings.  More often than not, that’s one of the instruments I’m going to use a reverb on anyway, and since it’s typically a nice, short sound without a lot of tone (mostly noise), it’s easier to hear the actual reverb character.  High hats can work well for this too, unless you have a really busy high hat pattern playing.

One downside of this method is that it’s easy to get too used to the sound of that much reverb on that particular sound.  So before you settle on a send amount, pull the send all the way down to zero and listen to the track without any of the reverb for a bit, just to reset your ears.  Then go back and adjust your actual send amounts on a track-by-track basis.

And keep in mind that a little bit of reverb can go a long way; you don’t always need to completely drench everything in the effect for it to do its job.


2. Is it better to worry about the mixdown while you’re working on a track, or focus on it after everything is written?

In general I don’t think there’s a right or wrong way to approach this; I know people who are getting great mixdowns with both methods.  I typically tell people who are just starting out to wait until everything is written before they worry too much about the final mixdown though.  It’s too easy when you’re writing a song to get wrapped up in any one sound, and when you’re focused on it that much, it’s hard to be objective about the overall balance of all the sounds in the track.

I think the same advice applies if you’re getting close to the end of the writing process, and while you’re happy with all the elements in the song, something just doesn’t sound right or it lacks that cohesion you wanted.  It can be worthwhile to save a new copy of the song, reset all your volume faders and remove all your dynamic effects (compression, EQ, etc).  Then try doing the mixdown again from scratch, focusing first only on the volumes of everything, and then turning to dynamics processing if you hear a need for it.

But there’s nothing wrong with just making things sound good as you write the track too. Most people are probably doing this to some extent already, just so it doesn’t sound like crap while they’re working on the tune anyway.  I think as you get more experience and learn how things from your studio translate in the real world, it gets easier to just mix as you go.  I personally rarely go back and redo a mixdown at the end of the writing process in my own music these days, as I’m pretty comfortable knowing how things will translate elsewhere (very handy in my line of work! :)).  It helps that I don’t work very fast either, so there are a lot of chances for me to come back to a song in progress with fresh ears and hear something that might be a bit off.


3. How can I get more people to give me feedback about my tracks?  I post them everywhere but no one ever leaves any comments!

Well, everyone wants people to listen to their music, so you have to keep in mind you’re one of thousands of people posting a new song each day.  A few general tips that might help:

– If you want people to spend their precious time listening to your song, then return the favor and proactively listen to and comment on some of theirs first.  It’s just common courtesy on most forums these days; if your first post is something like “hey let me know what you think of my new song!”, chances are no one is going to bother listening.  Why should they take the time if you couldn’t?

– Sort of on that note, be a part of the community you’re trying to get feedback from.  Don’t just post new songs and hope people will take the time to comment; get to know the people who frequent the forum.  Spend time contributing in some way so people know who you are.

– Don’t resort to sneaky tactics to get comments, like misleading subject titles or false links. Those might work once, but it’s like crying wolf; people will remember next time you post. Or my new favorite, people putting “Free Download!” in their subject lines.  “Free” works great at getting people’s attention if you’re well known and there’s already a demand for your music.  But if you’re a nobody (relatively speaking), it doesn’t mean anything to most people.  They EXPECT an up-and-coming producer looking for comments to make the track available for free.

– Catchy artwork or a funny tag line can go a long way toward making your track stand out.  Do something to make people curious enough to listen, just avoid going overboard per my point above.  And for heaven’s sake, if you’re giving out MP3s, take the time to at least fill in all the ID3 tag info too!


4. What’s the most common mistake you see when people send you songs for mastering?

If I had to really narrow it down to one thing, I would say not proofing the mixdown file before they send it to me.  Mistakes happen, and more than a few times I’ve had people realize after I’m done with the mastering that they had a part muted accidentally, or the song didn’t end in the right place (loop braces set before the end of the song, for instance), or maybe they mistakenly sent an earlier version of the song.

I try and spot the more obvious issues and bring them to the producer’s attention before I start the mastering, but I can’t know everything they intended with the song.  Missing parts, or an effect that’s not turned on is a difficult thing to try and spot when you’re not involved in the creation of the song.

A lot of this just comes down to people rushing to get the track done, which is understandable when you’re excited about something new you created.  But if possible, I definitely recommend people render their mixdown, and then wait a day (or more) before they send it for mastering.  The next day, go back and listen to the mixdown file you made the day before (not the DAW project!) and make sure you’re totally happy with how things sound and that nothing is wrong.

It’s not easy to take that day off, but it would solve SOOOO many issues for people if they just took this one step.  Even if you’re going to master it yourself, giving yourself the time to listen again with fresh ears before you start will definitely make any issues that much more obvious.


Well that about wraps it up for this Q&A then, thanks again to everyone that submitted questions.  If anyone has anymore, please post them in the comments or send me an email and I’ll be happy to address them next time.


On a sort of related note, I posted this on my Facebook page earlier today, but thought it might be worth posting here too:

On average lately, I get about 20-40 emails a day from people asking me to listen to their newest track and tell them what I think. I’m pretty accessible and love to help people when I can, but I hope people realize there is just no way I can listen to that many songs every single day and still find time to work on paying customers’ tracks (much less my own music, on the rare occasions I can find time for that anymore). Please try and understand if I don’t reply to you, or say that I don’t have the time. It’s not me being rude, it’s just the honest truth.

If you truly want my opinion, consider having even just one track mastered, as then I can spend the time working with you and answering your questions more fully. I love my job and I’m lucky to be in this position, but it still requires long hours and hard work everyday, just like all jobs do.

Thanks for your patience and understanding, as well as all your continued support! 🙂

Peace and beats,


5 Questions

So, this time I want to turn things around and learn a little about everyone who follows the blog.  Here’s a few questions if anyone wants to answer and share a little about themselves, just post your answers in the blog comments:

1. What is your all-time favorite piece of gear (hardware or software) for making music?

2. Best concert or club night you’ve ever been to?

3. Who’s the one artist or musician you look up to the most, the one who inspires you?

4. What’s the one aspect of writing or performing music that you find most challenging?

5. Realistically, where do you see your music taking you?  What do you eventually want to get out of it? (hookers and coke don’t count this time)


Thanks to anyone that can take the time to share!  I’ll post my answers after a few other people chime in.



Production Q&A #4

Before I start with this week’s Q&A, just a quick note about the blog notifications going forward.  If you like the blog and the things I post, please take a second RIGHT NOW to sign up for email notifications of new posts (on the right hand side of the page).  Or follow me on Twitter, RSS, or Facebook via the icons at the top of the screen.  This is the last time I’m going to announce new blog posts on the various forums I visit, unless the topic directly has something to do with one of those forums.  Going forward, new posts will only be announced via one of the methods above.  Sorry, but it’s starting to come across as a little spammy according to some people, and I don’t want to make that impression.  Thanks!

Right then, here’s this week’s questions:

1. Can you detail your process for getting big, warm bass, big kick drum, mixing them together and keeping them big without the inevitable frequency conflicts?

I think that a lot of times people struggle with this because they’re trying to fit a round peg into a square hole (or maybe that’s a sine wav into a square wav?).  By that I mean, more often than not, when you choose the right sounds that complement each other in the first place, they fit together in the mix quite easily.  So I usually tell people to think up front about what sounds they want to use.

If you want a deep 808-style kick in your song, then obviously you need to be careful about what kind of bassline you write.  Either choose a sound that sits a little higher in the frequency spectrum, or write a bassline that doesn’t sound while the kick is playing.  That’s one reason off-beat basslines (one AND two AND three AND, etc.) are so popular in dance music; they don’t interfere with the kick.

And the opposite is true as well.  If you listen to dubstep or drum and bass where really deep and powerful basslines are more important, more often than not the kick is really bright and short.  That way it can cut through the mix still, and not get drowned out by the bassline.

Of course, even if you do pay attention to this stuff, there are just times when you need to get a little surgical to get everything to sit together perfectly on the low end.  Side-chaining the bassline to the kick is a popular trend these days; it just pulls the level of the bassline down some when the kick hits. Done right, it can be a pretty transparent way of getting things to gel nicely.  Alternatively, sometimes you can use EQ to notch out each sound so that things don’t clash too much.  A few dB of reduction at the frequency where the kick and bassline clash can be useful in some cases.
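Conceptually, that side-chain ducking is just a gain curve driven by the kick. A toy numpy sketch (a real compressor adds attack and release smoothing; this is only the core idea):

```python
import numpy as np

def sidechain_duck(bass: np.ndarray, kick_env: np.ndarray,
                   depth: float = 0.7) -> np.ndarray:
    """Pull the bass level down whenever the kick envelope is hot.

    kick_env is a 0..1 envelope following the kick (1 = kick at full
    level). With depth=0.7, the bass dips to 30% of its level at the
    kick's peak and is untouched between hits.
    """
    gain = 1.0 - depth * np.clip(kick_env, 0.0, 1.0)
    return bass * gain
```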


2. Why isn’t stem mastering used more?  Does it sound worse than regular mastering?

I think historically stem mastering was frowned upon by mastering engineers for a few reasons.  First, because a lot of times it just meant that the client was having trouble making up their mind, at a time when they need to really be getting everything nailed down and ready to release.  If they can’t make up their mind if they like a mixdown or not, then likely they’re going to be the same way with the mastering process.  Ultimately, it can just mean the client will be difficult to work with.

The second reason is that the mastering engineer is supposed to be looking at the big picture, the album as a whole and how it all fits and sounds together.  When they’re stuck having to deal with a lot of stems, it’s more difficult to bounce between songs and get a feel for the songs and how they’ll fit into an album.  You can’t get that overview of everything when there’s still so many details to focus on.

From a client perspective, stem mastering takes longer compared to normal mastering, so it’s usually more costly to go this route.  Clients rarely like paying more money after all 🙂

I think today things are a little different, since so much of the mastering and production process in general is singles-driven.  A lot of people only get one song mastered at a time, especially in the dance community, so some mastering engineers are more open to the idea of stem mastering.  There are still some MEs who swear it doesn’t belong in the mastering process, but I know that I personally am fine with it, provided the client is willing to pay the extra cost involved.

As for whether it sounds worse, I don’t think so.  On one hand it gives the ME a lot more flexibility in how they can fix any issues or make improvements, so you could say it could sound better.  On the other hand, it also means the ME is going to be making a lot more of the decisions about how the final product sounds, so there’s less of the uniqueness that the artist brings to the table.

Generally I think stem mastering is one of those things best left to the ME to decide.  If they hear issues that just can’t be sorted well in normal mastering, and they aren’t sure the producer has the tools or experience to handle it on their end, then perhaps stem mastering is the way to go.


3. How important is the acoustic treatment of the studio, in particular the treatment of bass?  Is it fundamental for the production, or is it something that’s just nice to have if you can?

I’m pretty biased on this, but I think that adding acoustic treatment to your studio can be one of the best things you can do for your music making.  I’d go so far as to say it can sometimes be even more important than what kind of monitors you use.  Everything you do, every decision you make in the production process is going to be affected by what you hear in your studio.  When you have all sorts of audio reflections interfering with that, or your room is a shape that just doesn’t allow you to hear things like they will sound on other systems, it can be a real problem.

I always tell people who are asking for monitor recommendations to split their monitor budget in half.  Spend half on the monitors themselves, and half on some acoustic treatment, and overall you’ll end up with a much better investment of that money.

And the good news about acoustic treatment is that it often doesn’t take a lot to make a big difference, especially when we talk about early reflections.  There’s also a lot of DIY info available on how to make your own and save some money.  Here’s a good place to start when it comes to understanding acoustic treatment, or how to build your own:

Check these sites as well:

Finally, if you do decide to purchase acoustic treatment instead of making your own, I highly recommend GIK Acoustics:



Well, that’s it for this week’s Q&A session, hope some people found this useful.  As always, if you have a question you want me to answer, send me an email or post it in the comments below.

Production Q&A #3

Before I start the Q&A this week, I just wanted to take a second to thank EVERYONE for all the kind words and well-wishes about my studio appearing in Electronic Musician magazine this month.  Your support and positive comments just help reinforce my view that I’m lucky enough to be in a position to help so many talented musicians.  Thank you so much to all my friends, clients, and the hundreds of people I’ve met around the world so far on this incredible journey.

With that, on to Q&A #3:


What’s the best way to keep frequency-rich ambient music parts separated in a mix? I hear beautiful droning music where one hears every layer as if it were packaged together and simply rearranged by the artist. I wonder if you could give tips for mixing such sound-rich music successfully.

I think the key to this is to realize that when you hear these dense, frequency-rich ambient pieces from artists, often it’s the combination of all the parts interacting that gives it that full sound.  There’s a fine line between dense in a good way, and muddy and cluttered, especially when you’re talking about ambient drones and pads and the like.

For me personally, I tend to focus on how all the sounds interact while I’m actually creating the song.  I make the adjustments to each sound to get them to fit together as part of the process of shaping the sounds, either through synthesis or during the mixdown.  More often than not, I tend to start out with a single pad sound that will define not only the feel of the music, but also its tonal ebbs and flows.

By that, I mean that the first part I record will dictate not only the key of the song, but also how it progresses in terms of feel and mood.  I can build up to higher notes to accent peaks in the song, or use deep and low notes to make things more introspective and anticipatory.  So when I record this first part, I tend to pick a large and full sounding synth preset/sound/multi/etc.

Once the main element is recorded and I’m happy with the flow of the piece from start to finish, then I’ll start adding other pads and ambient elements to complement the first one. It’s here that I focus on making each part fit together, so that they work together to create a texture, and hopefully aren’t fighting each other.  I’ll use filters to remove deep lows or highs that might clash, or even different EQs to bring out certain frequencies I want to accent.

The other thing that I think is important, is to not only think about what parts of the sounds to focus on, but when they should be playing too.  A trippy ambient piece I do (like Dualate for instance) might have 5-6 pad and texture sounds in it, but it’s rare they are playing all at once.  You have to use each distinct sound in a way that supports the overall feel of the piece, without adding clutter to it.

Finally, as I mentioned, oftentimes I’ll use EQs to isolate things even more in the mixdown.  The key here is to use only as much as you need, and not to cut or boost things arbitrarily.  All too often radical EQ shapes can detract from the feel and texture of the sound itself, so only remove or highlight those parts that really need it.  Like many other aspects of music making, many times less really is more, even with ambient music.



What are some tips for doing music production as a career?  Is it even a viable option anymore?

I’m going to repost something I said over at the forums last week, as I think this answers it perfectly given my thoughts on the matter right now.  I’ll probably go more in-depth into this topic in a separate blog post in the future too, as I get a lot of questions from people about it still.  With that….

I’m two years into making all of my income from mastering and doing the odd mixdown, though it took 8+ years of doing it on the side and making the right connections (building my client list) before I could make it my sole income.  Even then, I make enough to live well, but it’s nothing compared to the bio-tech job I gave up to do this.

Best advice I can give is to stop thinking about it from the standpoint of a musician or producer and instead approach it from the standpoint of a businessman.  Making money from any art is never easy for the majority of artists, and it will often require some tough choices and business decisions time and again to succeed.  Some classes in being an entrepreneur, or even basic accounting, will serve you better than you’d realize, especially early on.  A lot of people don’t realize how much of a difference there is between making some money doing your hobby, and actually making a living off it.  As producers, we tend to focus on the artistic or technical aspects of running a business, but those are only a very small part of it.  If you’re not skilled in those areas already, you’re never going to succeed; but where you really need to focus now is how to grow a successful business.

Also, don’t get caught up in thinking it’s about having gear x, y, z, as that really only impresses the other people you’re likely competing against for business anyway (i.e., other producers, studios, musicians).  Your average customer only cares about how they are treated, and how good the end result is.  You’ll get far more bang for your buck networking and really putting the effort into your people skills.  It’s one thing to get a new customer; keeping them coming back or recommending you to others is an entirely different story.  On that note, a referral is worth far more than any advertising you’ll waste your money on too.  So you need to focus not only on retaining your customers, but on doing such a great job exceeding their expectations that they help advertise your services to their friends.

Be flexible and willing to adapt, but at the same time don’t try and do everything at once.  Find one aspect of the business you really enjoy and strive to be the best in that.  I think early on I tried too hard to go after every market I could, and once I pulled back and just focused on servicing one small group of musicians, the business really took off.  I still take the odd job here and there for extra money, but I’m no longer spreading myself thin going after every little thing that comes my way.

Finally, have patience and determination.  Running my own business is far harder than any other job I’ve had, and I’ve had some pretty high stress ones.  Being your own boss is great, but that also means you’re the only one who can pay you 🙂  I think the best advice I got when I first started doing this full time was this:

“90% of running your own business is GETTING the work, 10% of it is DOING the work”

I find that’s definitely the case.  Good luck!

How did you make that first lead sound in your song “Slope Lifter”?

Link to song


The main synth I used for that was Synplant, which is one of my main synths these days.  Like most of my patches in Synplant, I was just planting random seeds until I heard something I liked, then used the mod-wheel to fine tune it even more.  I love this way of working btw, forget knobs and filters and envelopes, just plant seeds until it sounds good and then record. 🙂  Here’s the patch if anyone with Synplant wants it:

Tenax Primo.synp

To play the synth sound, I created sort of a cheat rack since I happened to be tired that night, but feeling musical with the urge to write.  I used a few of Live’s MIDI devices to create an evolving, random arpeggio that always played in key.  You can view the whole effects chain here, just click on the image for a larger picture:

The first device is a Chord, which turns single notes I play into 4 different ones covering a full octave.  This then feeds a Random device to change up the chord voicings, before going into an Arpeggiator which turns the chords into single note riffs.  Finally, I use a Scale device to make sure that no matter what that crazy shit beforehand spits out, it’s always in the same key.  To complete my laziness, I copied this same Scale device to any new MIDI track I was using softsynths on, just to make sure anything else I played in the song was in this same key.  All of this fed Synplant, which was then sent to a Filter Delay.
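If you want to play with the same idea outside of Live, here’s a rough Python sketch of that Chord → Random → Arpeggiator → Scale chain using plain MIDI note numbers (60 = middle C).  The intervals, random range, and scale below are my own assumptions for illustration, not the actual settings from the song:

```python
import random

C_MINOR = {0, 2, 3, 5, 7, 8, 10}  # pitch classes of C natural minor

def chord(note):
    """Chord device: turn one played note into four spanning a full octave."""
    return [note, note + 3, note + 7, note + 12]

def randomize(notes, spread=2):
    """Random device: nudge each voice up or down a couple of semitones."""
    return [n + random.randint(-spread, spread) for n in notes]

def arpeggiate(notes):
    """Arpeggiator: play the chord as an ascending single-note riff."""
    return sorted(notes)

def snap_to_scale(notes, scale=C_MINOR):
    """Scale device: force every note down onto the nearest scale tone."""
    def snap(n):
        while n % 12 not in scale:
            n -= 1
        return n
    return [snap(n) for n in notes]

riff = snap_to_scale(arpeggiate(randomize(chord(60))))
print(riff)  # four ascending notes, always in key
```

Every run spits out a different riff, but the Scale step at the end guarantees it always lands in key, which is the whole point of the trick.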

I mapped Synplant’s Effect and Release controls to Live’s generic X-Y pad for the device, and I used this to quickly record automation of the Release and Effect getting longer and more prominent as the part played.  Right before the sound stops for good at the end of the drop, I scaled these back to make the drop more sparse right before the drums kicked back in.

I rarely, if ever, do this sort of MIDI device pre-effecting when I play and record my synth parts, but in this case I liked the randomness it added to the synth as I played it.  A fun tip, especially useful when you’re making music out and about without a MIDI controller like I was.

Just add water, duh.  🙂

Production Q & A #2

The three questions in this post came from people who wanted to remain anonymous, and the more I think about it, the more I see their reasons.  So I think from here out, any questions I field for the Q&A will only be posted anonymously.  Hopefully that will get more people to submit questions as well.  So, let’s get to it.

1.  Why do I need a better soundcard if I don’t record anything?  Is it really worth spending more than $1000 on?

In the past, when most people were still using hardware synths and drum machines, not to mention other “real” instruments, it was easy to justify the expense of a really nice soundcard.  In fact, I often said it was one of the first things people should upgrade, because having really good A/Ds (analog-to-digital converters) made a very noticeable difference in the quality of anything you recorded.

These days, with so much of the production process happening entirely “in the box” for some people, the advantages can seem less tangible.  It’s entirely possible to achieve 100% professional results with nothing but the built-in audio interface in your laptop or desktop after all.  However, I do think there are still some good reasons for going with a separate, reputable soundcard:

– Lower latency, typically.  For the most part, a professional soundcard is going to offer you lower latency with less CPU overhead.  I’ll be the first to admit I think some producers place too much emphasis on judging a card by ridiculously low latencies, but to a certain extent it can help.  For those people performing with virtual instruments in real time, the benefits are certainly noticeable to a point.  For me, anything less than 128 samples is fine for all my needs, live or in the studio, but most of the really good soundcards can go even lower without too much increase in CPU overhead.

Just remember that sound travels roughly 1 foot through the air per millisecond.  Musicians have been jamming for years and staying in time standing 10 feet or more away from each other.  So you don’t need super low latencies to get your point across.
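To put some numbers on that, the buffer-size math is simple enough to sketch in a few lines of Python (a rough estimate only; real round-trip latency also includes converter and driver overhead on top of the buffers themselves):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One-way delay contributed by a single audio buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

# At 44.1kHz, a 128-sample buffer adds about 2.9ms each way,
# and a 512-sample buffer about 11.6ms:
print(round(buffer_latency_ms(128), 1))  # 2.9
print(round(buffer_latency_ms(512), 1))  # 11.6

# By the 1-foot-per-millisecond rule of thumb, that 512-sample
# buffer is like standing roughly 12 feet from your speakers.
```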

– Stability.  Drivers from higher-end soundcards tend to be updated more often, and to be of higher quality in terms of how often (or not!) they crash.  Cheap gaming soundcards might claim to be of the best quality, but more often than not it’s their owners who are the ones running into the most issues and posting for help on forums.

– Imaging.  Good converters can do amazing things compared to just-OK ones.  Your music will sound like it has a better sense of space: depth (front-to-back imaging) and left-to-right localization (how easy it is to accurately tell where an instrument is in the stereo spread).  Just remember that this only affects what YOU hear though, the actual audio is going to be recorded the same no matter what D/A you’re using.

The benefit comes from the fact that this increase in clarity can help us better know how much reverb is too much, or when perhaps we’ve panned instruments just a little too close together, or if we’ve applied too much of that magical stereo widener plug-in and the mix now sounds off-balance.  Basically, when you can hear better, you can make more accurate decisions when writing and mixing your music.  And those decisions can even have an effect on how people with lesser playback systems hear what you are releasing.

At what point does this increase in quality start to outweigh the cost of upgrading?  If you’re looking for your first soundcard, or even just something portable to use live, then I think you should realistically be looking to spend around $200-300 at least.  You’ll get a noticeable increase over the stock soundcard (many of which aren’t THAT bad these days anyway), and you’re likely not spending more than you can hear anyway.

What I mean by that, is that it makes no sense to spend $2000 on a soundcard, if you’re still using $200 speakers in an untreated room.  All of these things work together, and I think most people will go through phases where they get the best results upgrading everything over time in cycles.  Speakers, soundcard, acoustics, speakers, soundcard, acoustics, etc.  Or you have a lot of money and get top-notch stuff right off the bat, boo hoo for you.  🙂

After the $200-300 price range, I think you’re realistically going to have to spend $500-1000 on a card to get any real noticeable increase in audio quality.  For most producers, this is probably as much as they’ll ever spend on an audio interface, and unless you’re putting a lot more money into your monitoring chain and acoustic treatment, spending more might not really yield that great of a difference in terms of pure audio quality.

It’s the law of diminishing returns, the more you spend past this point, the less the differences will be in terms of how much better things sound.  Sure a $3000 interface will probably sound better than a $1500 one, but if the difference is only around 5% better, is that worth another $1500?  I think at that point you’re in one of two scenarios.  You do this for a living and the difference is pretty noticeable and useful to your job, or the rest of your studio is up to where you want it, and then the price difference makes more sense.  Either way, for 95% of most musicians, I think spending more than $1000-1500 on an audio interface is probably not going to net you any huge advantages.


2.  Why do some people say normalizing my audio files is bad, and others say it’s not a big deal?

The thing to realize about digital audio is that any time you perform ANY operation on an audio file, you are almost always destructively altering it.  That is to say you are in some way (often smaller than you can imagine) permanently altering the file in a manner that is irreversible.  Digital audio processing involves math, and this kind of math often produces results with more precision than we can reasonably store 100% accurately, so things get rounded.

Now there are a lot of ways that we have learned to minimize the extent of this, through things like floating-point processing, or dithering for instance.  But from a theoretical standpoint, operations like normalization are a form of destructive processing, and to many people that should be avoided whenever possible.
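As a toy example of that rounding in action, scale a single 16-bit sample down and back up and the lowest bit is already gone.  This is deliberately contrived (real DAWs process internally at much higher precision, which is exactly why the damage is usually inaudible), but it shows why the operation is technically irreversible:

```python
def apply_gain_16bit(sample, gain):
    """Scale one 16-bit integer sample, rounding back to the nearest integer."""
    return max(-32768, min(32767, round(sample * gain)))

original = 10001
halved = apply_gain_16bit(original, 0.5)    # 5000 after rounding
restored = apply_gain_16bit(halved, 2.0)
print(original, restored)  # 10001 10000 -- the lowest bit didn't survive
```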

But, let’s step back for a second and look at it from a practical standpoint.  Say you’ve rendered or exported your latest song, and you realize that the highest peak is at -5dBFS.  Of course this means that you’re not using all of the available bit resolution of your audio file, which to most people just simply means that it sounds kind of quiet.

In this scenario, the safest and least intrusive method of raising the volume so we are using all of the file’s bit-depth is to normalize it (with caveats, to be explained shortly).  We simply apply 5dB of gain to every sample, raising the overall volume with only some tiny rounding errors in the very lowest bit as a downside.  In comparison, what are the other ways that we can raise the volume of the audio?

Well, the usual suspects are limiting, compression, or clipping, and I don’t think that anyone would argue that these alter the original audio less than normalizing does.  So from a practical standpoint, normalizing is the safest, and cleanest sounding way to raise the audio level to use as much of our recorded format’s available storage capabilities.

There is one downside though, and that is the fact that most of the time, normalized files really do use ALL of the available dynamic range, which means that they peak at exactly 0dBFS.  This can lead to a problem called Inter-sample Modulation Distortion.  Google it for the details, but the short version is that when you have multiple consecutive samples playing back at 0dBFS, some digital-to-analog converters will actually produce small amounts of distortion.  The issue is less common these days than it was early on in the digital era, but it’s still something to be aware of.  Read on.

I usually tell people to think about WHEN they are normalizing, before they decide to or not.  For instance, if you’re working with audio files in your DAW, then normalizing is probably not a problem at all, because that audio file is not the final product going to the listener.  It’s still going to pass through the audio processing of the DAW, be turned down by track or master faders, or affected by master channel effects, etc.  Basically, the normalized audio file is not going to ever be played back at full scale (true 0dB), so Inter-sample Modulation Distortion (IMD) is not a concern.  Your D/A will never see that 0dB reading that the raw audio file is recorded at.

However, if you’re mastering your own music, or generating some other files meant to be listened to immediately afterwards (typically on CD), then you should rethink normalizing.  At the very least, use a normalizing tool that allows you to set the final output level to something other than 0dB.  Setting it to -0.3dBFS will change almost nothing in terms of how loud it’s perceived, but it’s just low enough to avoid almost all instances of IMD.

So, to summarize.  When you’re writing your track, normalizing audio files is fine; it’s probably the cleanest way to boost the volume of your audio files for whatever reasons you have.  When you’re generating the end product, something meant to be listened to by others with no other processing, then make sure you only normalize if you can manually set the final output level to something other than 0dBFS.
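For the curious, peak normalization with an adjustable ceiling boils down to just a few lines.  This is a bare-bones sketch assuming floating-point samples in the -1.0 to 1.0 range, not a stand-in for your editor’s normalize function:

```python
def normalize(samples, target_dbfs=-0.3):
    """Scale samples so the highest peak sits at target_dbfs (default -0.3dBFS)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples  # pure silence: nothing to normalize
    target_linear = 10 ** (target_dbfs / 20)  # convert dBFS to linear amplitude
    gain = target_linear / peak
    return [s * gain for s in samples]

# A file peaking around -5dBFS (0.562 linear) gets roughly +4.7dB of gain:
quiet = [0.1, -0.562, 0.3]
loud = normalize(quiet)
print(round(max(abs(s) for s in loud), 3))  # 0.966, i.e. -0.3dBFS
```

Note that every sample is multiplied by the same gain factor, which is why normalizing changes the level without touching the dynamics.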


3. Why are Macs better than PCs?

HA!  I’m not touching that one, nice try!  Silly blog trolls….  🙂


As always, I hope some people find all this useful.  Feel free to send me more questions, discuss this Q & A in the comments, or pass this on to anyone you feel might be interested.  I’m leaving tomorrow to play the Photosynthesis Festival I’ve been blogging about recently, so if I don’t reply right away I’ll do so as soon as I get back next week.

Don’t forget you can sign up for email or twitter notifications of new postings, and please click that “Like” button in the upper right of the page if you enjoyed reading this.  Thanks, and I’ll have some new posts soon!