Octatrack Preview

Ok, so I lied about yesterday’s blog post being the last one of the year, oops.  As I mentioned then, I just received an Elektron Octatrack, and I’ve since been getting tons of people contacting me with questions about it and wanting to see some video.  So I knocked this together really quickly before I start my mastering work for the day; hope it holds people over until I can get more in-depth on the Octatrack and record some better live-performance-related videos.

Enjoy, and post any questions you might have in the comments section.

EDIT – I should also mention that the Elektron forums are a great source of info if anyone wants to learn more about any of the Elektron products. I’m a moderator there, and there are lots of really friendly and helpful people ready to answer questions: http://elektron-users.com

What Plug-In?

As I was cruising different forums this morning, it was pretty apparent that some people had gotten a bit of money for the holidays, and were looking to do some shopping.  I found it kind of funny though that so many people were looking to plug-ins to solve all of their music writing dilemmas.  Certainly some plug-ins are better than others for different tasks, and some have a sound we tend to prefer compared to others.  But what struck me was how many people were assuming that there was some magical plug-in that would instantly solve what in reality were nothing but simple audio engineering issues.

Some examples:

– What plug-in do I need to spread sounds around in the stereo field?

Instead of relying on a plug-in on your master channel, or one applied during mastering, just use the pan control on each track in the DAW to spread sounds out.  This gives you greater control over which sounds are placed where, and it can sound much more natural than some of the stereo “widener” plug-ins.  Additionally, by placing different instruments this way, you free up room in your mixdown for all the sounds to be heard clearly.

Keep in mind that most of these widener plug-ins work by using phase-shifts or short delays to create a Haas effect.  So while it might sound good in headphones, you can run into issues on mono systems and other playback environments.  One of the most common things I find myself adjusting in mastering is compensating for when people overdo these types of effects, leaving the center of the stereo field with a big gaping hole where the main instruments should be.  If you do use plug-ins like this, you only need a little to achieve a lot.
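To make the mono issue concrete, here’s a minimal sketch of the Haas trick in Python (numpy assumed; the numbers are illustrative, not any plug-in’s actual algorithm).  A short delay on one channel reads as width in stereo, but summed to mono it becomes a comb filter, and any frequency sitting on one of the nulls simply vanishes:

```python
# A toy Haas "widener": duplicate a mono source and delay one side slightly.
# Illustrative only, not any plug-in's actual algorithm.
import numpy as np

sr = 44100
delay = int(sr * 0.015)                  # 15 ms, well inside the Haas range

# Pick a tone that lands on one of the comb-filter nulls, f = 3 / (2 * delay):
freq = 3 / (2 * delay / sr)              # roughly 100 Hz for a 15 ms delay
t = np.arange(sr) / sr
src = np.sin(2 * np.pi * freq * t)

left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])   # the "width" trick

mono_sum = 0.5 * (left + right)          # what a mono playback system hears

def rms(x): return float(np.sqrt(np.mean(x ** 2)))

ss = slice(delay, None)                  # ignore the initial transient
drop_db = 20 * np.log10(rms(mono_sum[ss]) / rms(src[ss]) + 1e-12)
print(f"{freq:.1f} Hz tone drops {abs(drop_db):.1f} dB in the mono sum")
```

Run it and that 100 Hz tone all but disappears from the mono sum, which is exactly the “gaping hole” problem described above.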

 

– I just got some top-notch drum sample packs from “Producers X,Y,Z”, what plug-in do I need to make them work in my mixdown?

One of the great things about buying a well-produced set of samples is that most of the time all of the processing has already been done for you!  More often than not, those drum sounds were already compressed and EQ’d to sit well in a mix.  You’re paying not just for the source samples themselves, but also for the preparation that went into them.

It’s a classic case of applying processing to sounds by default, without first hearing a real need for it.  So try your new sounds in a song without applying any EQ or compression to them; I bet you’ll be surprised at how good they sound as is.

 

– I heard a producer with 20 years more experience than me do a live set over the weekend, and his tracks sounded so much better than mine.  What plug-in should I put on my master channel to fix this?

While the performer you heard might have some type of processing applied on the master out of their set, it’s worth keeping in mind that there’s no magic “make me sound better” plug-in (or hardware box).  The artist most likely sounds better than you because they ARE better than you.  They have years more experience, and a studio full of hand-picked gear that suits THEIR needs (and might not fit YOURS).  Not to be a downer, but it’s usually a bit unrealistic to expect you’ll sound as good as a professional with decades of experience when you’ve only been doing this for a year or two.

So instead of approaching the situation looking for a magic solution that will solve all your problems, look at it instead as a goal post to reach, a milestone to set and achieve in the future.  Try to find other sets online by the same artist, and compare your productions or live sets to theirs.  Analyze the specific differences, and try to identify areas in your own productions that you need to work on improving.  Break it down so you can approach it one piece at a time, instead of trying to improve everything all at once.  Baby steps, as they say.  And keep your chin up, we’ve all been there in the past.  🙂

 

– Every time I try and spread cold butter on my toast, all I do is break into the bread instead of leaving a thin layer of butter.  What plug-in will fix this for me?

Vintage Warmer, obviously  🙂

Apologies to anyone if some of those look familiar; I’m not trying to call anybody out, just using some random examples to make the point.  In a lot of ways I can understand people looking at their problems this way; after all, in the last couple of years some amazing plug-ins have been released that have really changed what we think is possible in terms of audio processing (Melodyne anyone?).

But sometimes it’s worth stepping back and getting a little old-school when it comes to looking at a problem too.  You might be surprised to realize you already have the tools and the means to solve an issue you’ve run into, and at the same time you might save some money for something you really have a need for.

Just for fun though, reply in the comments with what you would consider your favorite plug-in and why.  You have to pick only ONE though, no putting a list or multiple options.

——————-

Well, this will likely be the last blog post I do before the end of the year, unless I suddenly get inspired in the next couple of days (and considering my Octatrack just got dropped off by UPS minutes ago, fat chance).  I just want to once again thank everyone who’s followed my ramblings over the last year, shared the blog with their friends, or left some insightful comments of their own.  I’ve got a lot of new ideas for the coming year, so be sure to check back often.

Special thanks to those of you who were kind enough to donate a couple bucks last week too. It helps a lot, so you have my sincere appreciation.

Now, who else is looking forward to a killer 2012?

Production Q&A #5

Well it’s taken a few weeks, but I’ve finally had enough people submit some questions that I can write up another Production Q&A. So, let’s get to #5 then:

1. Reverbs and delays are often used on aux channels to give some space and depth to a mix.  Seeing as how you are not actually putting these effects on any one track, what is a good way to audition reverbs/delays for use as a send track?

Generally I’ve found the easiest way to audition send effects like reverbs is to turn up the send amount on my snare track while I try different reverb settings.  More often than not, that’s one of the instruments I’m going to use a reverb on anyway, and since it’s typically a nice, short sound without a lot of tone (mostly noise), it’s easier to hear the actual reverb character.  Hi-hats can work well for this too, unless you have a really busy hi-hat pattern playing.

One downside of this method is that it’s easy to get too used to the sound of that much reverb on that particular sound.  So before you settle on a send amount, pull the send all the way down to zero and listen to the track without any of the reverb for a bit, just to reset your ears a little.  Then go back and adjust your actual send amounts on a track-by-track basis.

And keep in mind that a little bit of reverb can go a long way; you don’t always need to completely drench everything in the effect for it to do its job.

 

2. Is it better to worry about the mixdown while you’re working on a track, or focus on it after everything is written?

In general I don’t think there’s a right way or wrong way to approach this; I know people who are getting great mixdowns with both methods.  I typically tell people who are just starting out to wait until everything is written before they worry too much about the final mixdown though.  It’s too easy when you’re writing a song to get wrapped up in any one sound, and when you’re focused on it that much, it’s hard to be objective about the overall balance of all the sounds in the track.

I think the same advice applies if you’re getting close to the end of the writing process, and while you’re happy with all the elements in the song, something just doesn’t sound right or it lacks that cohesion you wanted.  It can be worthwhile to save a new copy of the song, reset all your volume faders, and remove all your mixing effects (compression, EQ, etc.).  Then try doing the mixdown again from scratch, focusing first only on the volumes of everything, and then turning to dynamics processing if you hear a need for it.

But there’s nothing wrong with just making things sound good as you write the track too. Most people are probably doing this to some extent already, just so it doesn’t sound like crap while they’re working on the tune.  I think as you get more experience and learn how things from your studio translate in the real world, it gets easier to just mix as you go.  I personally rarely go back and redo a mixdown at the end of the writing process in my own music these days, as I’m pretty comfortable with knowing how things will translate elsewhere (very handy in my line of work! :)).  It also helps that I don’t work really fast, so there are a lot of chances for me to come back to a song in progress with fresh ears and hear something that might be a bit off.

 

3. How can I get more people to give me feedback about my tracks?  I post them everywhere but no one ever leaves any comments!

Well, everyone wants people to listen to their music, so you have to keep in mind you’re one of thousands of people posting a new song each day.  A few general tips that might help:

– If you want people to spend their precious time listening to your song, then return the favor and proactively listen and comment on some of theirs first.  It’s just common courtesy these days on most forums; if your first post is something like “hey let me know what you think of my new song!”, chances are no one is going to bother listening.  Why should they take the time if you wouldn’t?

– Sort of on that note, be a part of the community you’re trying to get feedback from.  Don’t just post new songs and hope people will take time to comment, get to know people who frequent the forum.  Spend time contributing in some way so people know who you are.

– Don’t resort to sneaky tactics to get comments, like misleading subject titles or false links. Those might work once, but it’s like crying wolf; people will remember next time you post. Or my new favorite: people putting “Free Download!” in their subject lines.  “Free” works great at getting people’s attention if you’re well known and there’s already a demand for your music.  But if you’re a nobody (relatively speaking), it doesn’t mean anything to most people.  They EXPECT that an up-and-coming producer looking for comments is going to make the track available for free.

– Catchy artwork or a funny tag line can go a long way toward making your track stand out to people.  Do something to make them curious enough to listen, just avoid going overboard per my point above.  And for heaven’s sake, if you’re giving out MP3s, take the time to at least fill in all the ID3 tag info too!
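If you’d rather script that last step than click through a tagging app, here’s a minimal sketch using Python’s mutagen library; the filename and tag values are just placeholders:

```python
# Fill in basic ID3 tags on an MP3 before sharing it.
# Requires the mutagen library (pip install mutagen); filename is hypothetical.
from mutagen import MutagenError
from mutagen.mp3 import MP3
from mutagen.easyid3 import EasyID3

audio = MP3("my_new_track.mp3", ID3=EasyID3)
try:
    audio.add_tags()        # create an ID3 header if the file has none
except MutagenError:
    pass                    # header already exists, nothing to do

audio["artist"] = "Your Artist Name"
audio["title"] = "Track Title"
audio["album"] = "Name of the EP or Album"
audio["genre"] = "Downtempo"
audio["date"] = "2011"
audio.save()
```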

 

4. What’s the most common mistake you see when people send you songs for mastering?

If I had to really narrow it down to one thing, I would say not proofing the mixdown file before they send it to me.  Mistakes happen, and more than a few times I’ve had people realize after I’m done with the mastering that they had a part muted accidentally, or that the song didn’t end in the right place (the loop brace was set before the end of the song, for instance), or that they mistakenly sent an earlier version of the song.

I try to spot the more obvious issues and bring them to the producer’s attention before I start the mastering, but I can’t know everything they intended with the song.  Missing parts, or an effect that’s not turned on, are difficult things to spot when you weren’t involved in the creation of the song.

A lot of this just comes down to people rushing to get the track done, which is understandable when you’re excited about something new you created.  But if possible, I definitely recommend people render their mixdown, and then wait a day (or more) before they send it for mastering.  The next day, go back and listen to the mixdown file you made the day before (not the DAW project!) and make sure you’re totally happy with how things sound and that nothing is wrong.

It’s not easy to take that day off, but it would solve SOOOO many issues for people if they just took this one step.  Even if you’re going to master it yourself, giving yourself the time to listen again with fresh ears before you start will definitely make any issues that much more obvious.
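For what it’s worth, a little script can flag the purely mechanical mistakes (a truncated render, a too-quiet file) even though only your ears can catch a muted part.  A quick sketch, assuming Python with numpy and soundfile, and a hypothetical filename:

```python
# A tiny "proofing" helper for a rendered mixdown: length, peak level,
# and trailing silence.  It won't hear a muted part, but it catches the
# mechanical mistakes like a render that stopped too early.
import numpy as np
import soundfile as sf

data, sr = sf.read("mixdown_24bit.wav", dtype="float64")   # hypothetical file
mono = np.max(np.abs(data), axis=1) if data.ndim > 1 else np.abs(data)

peak_db = 20 * np.log10(max(mono.max(), 1e-12))
length_s = len(mono) / sr

# Treat everything after the last sample above -60 dBFS as trailing silence.
loud = np.nonzero(mono > 10 ** (-60 / 20))[0]
trailing_s = (len(mono) - 1 - loud[-1]) / sr if len(loud) else length_s

print(f"{length_s:.1f} s long, peak {peak_db:+.1f} dBFS, "
      f"{trailing_s:.2f} s of trailing silence")
```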

 

Well that about wraps it up for this Q&A then, thanks again to everyone that submitted questions.  If anyone has any more, please post them in the comments or send me an email and I’ll be happy to address them next time.

———————-

On a sort of related note, I posted this on my Facebook page earlier today, but thought it might be worth posting here too:

On average lately, I get about 20-40 emails a day from people asking me to listen to their newest track and tell them what I think. I’m pretty accessible and love to help people when I can, but I hope people realize there is just no way I can listen to that many songs every single day and still find time to work on paying customers’ tracks (much less my own music, on the rare occasions I can find time for that anymore). Please try to understand if I don’t reply to you, or say that I don’t have the time. It’s not me being rude, it’s just the honest truth.

If you truly want my opinion, consider having even just one track mastered, as then I can spend the time working with you and answering your questions more fully. I love my job and I’m lucky to be in this position, but it still requires long hours and hard work every day, just like all jobs do.

Thanks for your patience and understanding, as well as all your continued support! 🙂

Peace and beats,

Tarekith

Maschine

Well, as followers of my blog no doubt know, I’ve long been a huge fan of “grooveboxes” for making music.  I’ve owned most of them over the years, and always found their self-contained nature inspiring and downright fun to play.  So I was pretty interested a couple of years ago when Native Instruments announced Maschine, which was touted as a more modern take on the traditional groovebox concept.

But, for a lot of different reasons at the time, I just was never able to get one.  Partly because I wasn’t sure I really wanted to revisit the MPC-style workflow it offered, and partly because funds were allocated to music gear elsewhere.  However, as more and more people I knew started using it, and talking about how it completely changed the way they wrote music, I once again started to get interested.

A few weeks ago I was finally in a position to get some time with one, so I’ve been spending quite a bit of time learning it inside and out, and seeing how it would fit into my studio (and perhaps live sets).  I’m not going to go into the details of what it does and doesn’t do, or necessarily how it works; there are plenty of reviews and YouTube videos out there that cover that.  Instead I’ll just cover the things I really liked about it for my uses, and the things that I didn’t like so much, with a brief summary at the end explaining my overall thoughts in more detail.

Likes:

– The hardware is solid. Lightweight as it’s mostly plastic, but it feels like it would take some abuse.  The drum pads are the best I’ve ever used, and setting up the sensitivity to be exactly what I wanted was a cinch. The knobs are the same as on the S4, S2, and Kontrol X1, which is to say solid and smooth feeling.  I think NI has some of the best-feeling knobs on their MIDI controllers (tied with Akai), so it was nice to see the same ones here.  The LCDs are dimmable, easy to read, and do a lot to make you forget this is just a MIDI controller.

– MIDI Map mode on the Maschine is well done. The editor is easy to use and works inline with the MIDI in and out, so you can edit your mappings while controlling whatever host software you’re setting it up for.  You only have limited control over what names are shown on the LCD displays, primarily for the main drum pads.  Still, it was dead easy to make my own custom mapping for Traktor Pro 2, complete with visual button and pad feedback (they light up when pressed or when controls are latched, for instance).

– The sounds.  NI has always had really good drum sounds in their gear; I was impressed with what came with Battery 3 back when I owned it.  The quality of the sounds in Maschine is equally good.  It doesn’t come with a ton of non-drum instruments, but for the most part they are good as well.  What makes this a really nice deal now is that Maschine also comes with the new version of Komplete Elements, which is fully mapped to the Maschine hardware.  So there are a lot of sounds you can access with Maschine, and this is before you start adding your own plug-ins as of the 1.5 OS update.

I did also get to take advantage of two of the three available Maschine add-on packs, Transistor and Vintage Heat.  In general I didn’t think these sounds were as good as what came with Maschine by default, but partly that was just down to my preferred style of music.  A lot of the content in those two packs was geared more toward hip hop and RnB styles, though certainly people could use them in whatever styles they wanted.  Not bad sounds, just not great either IMVHO.

– Flexible pad assignments.  In Maschine speak, each pad can be assigned a sound, and these sounds can be individual samples, an effect (internal or plug-in), or even multi-sampled sounds that you can play chromatically.  So for instance, in a kit with 16 pads, you could have each pad be a different grand piano if you wanted; they don’t just have to be individual samples.  This makes having only 16 sounds per kit MUCH more flexible.

– Effects, lots of them.  Not just the number that come with Maschine, but how you can apply them.  You can apply up to three effects per drum pad, then three more to all 16 pads that make up a kit (or Group), and then up to three more effects on the master channel.  If that’s not enough, you can also create pads that are dedicated to just holding three effects on their own and treat those like send effects, complete with comprehensive routing options.  I’m probably missing some more, but needless to say there are a lot of places in the signal path where you can apply effects in Maschine.  They’re not my favorite sounding effects (very clean usually), but there’s definitely enough on offer to do just about anything you need.

– Stable.  At least in my testing, I never ran into any crashes or other issues; everything worked exactly as it should once all my software was up to date (see below for more on this).

 

Dislikes:

– NI Service Center.  This is the software NI uses to keep all of their software and drivers up to date on your computer.  It’s supposed to make things easier by putting all your updates in one place, but a lot of the time it makes things more difficult.  My complaints aren’t Maschine specific, but it still drove me nuts.  For example, often I would launch Service Center to check for an update, only to be told that Service Center itself needed to update.  Ok, fine, it would download itself, install, and relaunch.  Then I’d have a list of updates I was told I needed, many of which were drivers for hardware I didn’t own (say the S4, or Audio Kontrol 1) or manuals in other languages.  There’s no way to exclude these from the list of updates you’re told you need.

Now, maybe I just got Maschine at a time when a lot of updates were coming out, but I swear I felt like I spent half my time with Maschine trying to keep everything up to date and restarting my computer.  New version of the Maschine drivers?  Restart.  New version of Guitar Rig Player for Komplete Elements?  Restart.  New version of all the drivers I didn’t really need but had to install so Service Center wouldn’t flag them as out of date? Restart.  Oh look, another 0.1 version update got released today, repeat the whole process over.

Granted I also have Traktor Pro and a Traktor Audio 6 soundcard on my system, but even with just the Maschine options I felt like this was a frustrating process.  It was easier to just download them manually from the Native Instruments website.

– You really can’t ignore the computer.  At least not all of the time.  I had so many people tell me how they were able to use Maschine like a hardware groovebox, and that you could just totally ignore the computer software.  Well, not really.  For basic beat creation and coming up with simple patterns, sure.  But there are a lot of things that can only be done in the software, like saving a new version of your project, saving presets or custom kits, loading new (blank) projects, assigning Macro controls, etc.  Also, while there is some sequencer editing functionality in just the hardware, in general it’s MUCH, MUCH faster to use the piano roll editor in the software.

I don’t want to overstate this, as there’s definitely a LOT you can do with just the hardware.  But at the same time, it’s not at the point where I could just sit with it on my lap and ignore the computer side of things when making a complete song.

– Browsing sounds.  One of the downsides of the large library is that it can take a while to browse for just the right sounds you need.  NI has done a good job of tagging all the samples and sounds, like with most of their plug-ins, but there’s still something like 700 kicks alone to go through.  It’s not bad to have a lot of choices, don’t get me wrong, but it did seem like I was spending a lot of time searching for the sounds I needed.  When it came to the multi-sampled instruments, the process was even worse, as some of these would take a few seconds to load each time.  I’m really not sure how this could be improved, but just be aware of it.

– Not a lot of performance-based options.  You can’t record song-length automation, you can’t build up a song structure by selecting patterns on the fly (though you can select scenes, which is still kind of limited), etc.  I like grooveboxes primarily because they let me perform my grooves in realtime, and I just didn’t get the feeling that Maschine was designed with that in mind.

– Plug-in hosting is still a bit hit or miss.  For simple plug-ins and the ones that come with Komplete Elements, most of the controls you want to access are already assigned to the first 8 Macro knobs.  Otherwise, you might be scrolling through pages and pages of parameters on the hardware trying to find what you’re looking to edit.  Again, probably not totally NI’s fault, but it does mean that once again you’re back at the computer with the mouse.

 

So really, not a huge list of complaints compared to the things I liked about it.  Why am I ultimately getting rid of it then?  To be honest, I think it has less to do with Maschine, and more to do with the way I like to work.  I was hugely impressed with Maschine early on; I think NI did a great job of creating a new way of writing music that leverages some of the best ways of working with hardware and software.  But for me, it was almost a case of being too middle-ground to really get inspired by.

It was similar enough to hardware grooveboxes that I really wanted to like it more, but the fact that I had to keep reverting to the computer for some tasks started to make me wonder why I didn’t just use the computer in the first place.  The hardware does a pretty good job of letting you focus on banging out simple patterns and ideas, but once you want to do any sort of detailed editing or arranging, it’s back to mousing in a piano roll editor.  Call me weird, but I really kept wishing for a MIDI list editor I could access right on the hardware.  It would have kept fixing the odd bum note focused on the hardware.

I think the other thing is that I’ve never really been hugely attracted to working in the typical MPC sort of workflow.  Using a 4×4 grid to play melodies just feels weird to me, and simply chaining together patterns to create a song structure just isn’t my thing.  I like lots of fills and transitional elements that lead to different sections of my song, and I found creating these on the hardware pretty tedious.

Don’t get me wrong, I think it’s a great bit of gear, and I can see why so many people like it.  At the end of the day though, I think I just have a pretty specific way of creating my music that Maschine doesn’t fit into easily.  I’m sure I could find ways of using it, but it’s never going to be the centerpiece of my studio like it’s intended to be.  So for now I’ll wish it a fond farewell, and move on to something else.

If anyone is interested in buying it (like new, all original items, even the stickers), I’ll sell it for $499 via PayPal and cover the shipping to the lower 48.  If you’re overseas, you’ll have to cover shipping costs.  Drop me an email if you’re interested.

The Live PA Interview

A few weeks back I was interviewed by Ali Berger, a student at Tufts University in Boston.  He was working on a paper for a class called “Sketch Studies Today”, and wanted to pick my brain about different aspects of how I do my live sets, specifically my “Wired Roots” live set that I posted on YouTube.  You can find more details on that set HERE.

Over the course of a couple weeks we traded emails back and forth, and I thought the discussion was something others might like to read as well.  Some of the posts that have drawn the most interest on my blog are the ones discussing different aspects of Live PAs, so I figured there would be some interest in this too.  As usual, if anyone else has any questions on this topic, please add them in the comments and I’ll be happy to answer those.

INTERVIEW

Ali:  First, when you wrote Wired Roots, did you make a conscious effort to keep the sounds or musical features consistent across the set? From the blog, it definitely sounds like you think of it as a single piece of music. How much do you define a theme (in terms of inspiration, not necessarily a musical theme) and other constraints/parameters before you start writing the set?

Tarekith: I definitely had a very clear goal and sound I wanted to achieve across the whole set with something like Wired Roots.  The actual sound was largely determined by the gear I was using, in this case the two Elektron boxes.  But in terms of the overall feel of things, and how it all flowed together, I knew right from the start that I wanted it to be a sort of mellow, downtempo set, with just enough energy to keep people from getting bored.

For my hardware live sets, I often have a plan of attack before I start writing anything.  The point of live sets like this is to progress in a logical way, so I spend a lot of effort sort of pre-planning where in the set I want the peak song to be, where I want things to be more chill and laid back, how I want to start and end, etc.  As I write the individual songs, it’s not uncommon for me to move them around a lot in the set so I can maintain this sort of flow.

For instance, say a song was originally in a position in the set where I was planning on having a bit of a breather, and things were more minimal.  Then while writing that song, I get a great idea and now the song is more upbeat than I intended.  I’ll move things around so that the overall flow of the set as a whole reflects the intent I originally had.

Ali: Is your choice of hardware at all related to the theme/central idea of the set, or do you choose particular gear combinations for other reasons?

Tarekith: Well, these days I’m pretty much a minimalist when it comes to gear, so often it’s just whatever gear I happen to have at the time.  Sometimes I’ll buy gear just to see how it works for a live set; in fact the Monomachine was one of these kinds of purchases.  Sadly, as much as I liked it, Wired Roots showed me that it just wasn’t as flexible as the Machinedrum for performance-based music, so I sold it to fund other gear.

Other times the sound of the set itself will be based completely on the gear I want to use.  Again, with the Elektron stuff, I know they are pattern-based boxes that really shine doing loopy electro techy sounds, so rather than fight that, I’ll write the set with the gear in mind.  I’ll embrace the repetitive nature of them when I write the songs.

Ali: How much do you plan the structure of the set, and how much do you improvise? Does this change a lot as you practice the set? What drives the choices you make during the performance? (That might be a tough/broad question.)

Tarekith: I think the overall structure of the set is definitely planned well in advance, and for the most part I stick with that.  If I feel the songs I’m writing for it are really strong but pulling the set in another direction though, I’m not against altering my original goals either.  You have to be flexible when it comes to writing music; trying to force creativity to be something it’s not just leads to frustration in my experience.

In terms of practice, a lot of times the live sets I record and post online are the first complete run through of the set.  It’s one thing to play the same set over and over when you have a crowd to interact with and make it exciting, but doing that at home just gets boring. You start losing the urge to be spontaneous, and fall back on things you know work.

So a lot of the set is improvised, as it’s when you take risks that you run into the best “happy accidents”.  Besides, if you make a mistake in a live set, it’s not a big deal most of the time.  It’s over and done with before most people notice, and as long as you don’t do it too often, no one cares.  It helps remind people that you’re really doing something live, and not just pantomiming a preplanned set (*cough* Glitch Mob *cough*).

Does improvisation drive the progression of the set?  Definitely.  In Wired Roots you have to remember that each “song” is really only a 4 bar loop, and I’m controlling the song structure and how the sounds evolve live on the fly.  If I hit on something that’s really grooving, I’ll let it play longer, and when I can tell that something just isn’t working, I move on to something else more quickly than I might have.

Ali: When do you consider the set finished? When you finish all the patterns, when you make a final recording?

Tarekith: Whew, tough question.  I think I’ve learned over the years that I tend to always plan out sets to be more complicated than they end up.  For instance, Wired Roots was originally supposed to be a little more glitchy, with more fills and things programmed into it.  But as I started to get all the patterns written and organized, I inevitably reached a point where I realized that adding more to the set really wasn’t going to make it better.  I could easily spend a lot more time fine-tuning things, but at the end of the day most people would never notice.  I think this is true of writing music in general for me; I just tend to suddenly KNOW that it’s done.

Some of it is boredom and wanting to move on too.  Writing an entire live set is A LOT of music to write, and sometimes I just want to finish it and move on to the next project.  It’s always a gut feeling though, a little light bulb going off that tells me “right, you’re done, wrap up the loose ends and get this recorded”.

Ali: Are there any influences you’d point to for where this set came from? (other artists)

Tarekith: Not so much.  Mostly it was just exploring what the MnM could do as that was a new purchase, and I wanted to see how it paired with the MD for a live performance.

Ali: In the process of writing the set, did you use any additional MIDI controllers like a keyboard, or was it purely step editing/live recording with the step keys?

Tarekith: No, it was all done directly on the MD and MnM.  I like the focus working with the least amount of gear possible gives me, and it helps familiarize me with the way it works.  Since those are the only tools I’ll have on stage to perform, it’s a good way to get more comfortable with how they work on all fronts, in case I get an idea while performing.

Ali: What kind of audiences might this set be for (besides the people listening to the recording or watching the video)? Dancing, seated? Watching you, or not? I ask because people in the class were curious how this would translate to an in-person audience. Would you consciously do anything differently if there were people there? And what are the audiences like in general for downtempo sets you do in person?

Tarekith: Umm, good question.  I guess in this case thoughts about the audience weren’t really a factor in how I wrote the set; it was mainly for my own enjoyment (in this specific case).  If there were more people there, or if I knew for sure people were going to be watching this particular set, I probably would have had more material prepared, just to give me more flexibility in how it progressed based on people’s feedback. In some cases you can tell when people just aren’t feeling a particular section, so it’s nice to have more material than you need so you can skip to something different if needed.

Most of my downtempo sets are for more relaxed crowds, either at art galleries, lounges, or chill out tents where I don’t need to make people dance.  I have plenty of more clubby and uptempo sets prepped in case the venue or crowd dictates that kind of approach.

Ali: To what degree do you expect/want your audience to know what you’re doing when you play a live set? People were curious about who you had in mind when you did the commentary (other producers/live PAs, or audience members who you wanted to educate, etc).

Tarekith: The commentary in the YouTube videos was strictly for other electronic musicians and performers.  A lot of people had questioned whether my sets were truly done live, so I wanted to offer up some sort of proof, if you will.  But I’ve found that a lot of other musicians like to see how other people perform; it seems to be a common question I get a lot.  Most producers these days seem to start out in their bedrooms alone, so they don’t understand the process of taking studio work live, or creating music JUST for live performance.

I got into electronic music almost solely with it being a live performance type of deal.  Most of my friends were DJs, and if I wanted to play at parties with them, I wanted it to be my music and not just records written by other people.  So all of my early music making was spent creating material solely with a live setting in mind.

In terms of whether I care if people know what I’m doing or not, well… not really.  There are plenty of live acts out there where the performers focus more on putting on a good visual show versus truly creating something unique on the fly.  Some people wear a big mouse head, or jump around like a rabid monkey trying to avoid a swarm of mosquitos, but that’s just not my thing.  I grew up in the early rave days, and even a club scene, where the performer or DJ was often hidden off to one side and people were only concerned about what they heard.  Sure, you might have a couple musicians watching, curious about what was going on, but for the most part people were happy to just dance or enjoy the music without that visual interaction.

In a lot of respects I think that’s been one of the worst things to happen to electronic music, trying to make a visual show out of something that just doesn’t inherently lend itself to spontaneous gestures the lay person can understand.  Let’s face it, a lot of today’s live acts dumb down their live sets merely so it looks good, and as a result there’s a lot less improvisation and truly on-the-fly creation.  Too much of it is performers just exaggerating a big filter sweep with a knob or touchscreen, because it’s the one motion just about anyone can correlate to a sound they hear. It’s become pantomime.

Most of what we do is music meant for dark rooms, for people to get lost in their own mindsets as they listen to it.  You don’t need to be looking at a stage for that to happen, so I don’t concern myself about it.  The lack of rock star egos is what used to set this kind of music apart, and the second that sort of mentality crept in, it just got compartmentalized and lost its edge.

Of course, that’s just my opinion 🙂

Ali:  One more question for the paper: why do a live set over a DJ set? If you don’t expect people to know either way, and in fact that doesn’t really matter to you, is it just a personal preference, or do you believe there are advantages to the live set that allow you to provide a better experience for people than a DJ set might?

Tarekith: Because I enjoy the act of playing live, and I’d rather have the chance to show people my own music than someone else’s.  Don’t get me wrong, I DJ a lot too, been doing it almost as long as I’ve been playing live.  But in general I prefer the more hands on aspect of performing my own music, versus DJing most of the time.

Ali: And one thing I’m curious about as a producer/live PA. I’m planning to take this winter break and finally work up a hardware live set, since I’ve always wanted to do one and I’ve been making old-school acid techno and electrofunk tunes lately. Now that I’m free of complex song structures and sound design it makes sense to use a hardware sequencer, a sampler, and a few synths instead of needing Ableton’s audio loops to organize everything. The main thing I’ve been wrestling with, though, is how to get smooth transitions.

Tarekith:  LOL, if you knew how many sleepless nights I spent trying to answer that question myself back in the day!

Ali: My setup will likely be an EMX for sequencing and some percussion, a small synth for 303 basslines, and an Akai S2000 for drum samples, all running into a Roland hard disk recorder/mixer (since it has some built-in effects). I know you’re a big fan of the RAM machine loops–I spend about a page on that in the paper–but I know you haven’t always had the MD for live sets. How did you do things before that?

Tarekith: In general I’ve tended to gravitate towards gear that had some sort of facility to enable me to do this.  Early on it was the Roland MC505 which had a function called Megamix.  Basically it let you grab a phrase or track from one pattern, and insert it into your current pattern, all in real time.  So I’d grab a phrase one at a time from the next pattern in my set (with each pattern basically being a song), and in this way I could introduce elements from the next song before I actually switched to it.

After that, I was using an Emu Command Station, and actually worked with the programmers to implement a better version of this feature that they called XMIX.  Same concept, just a bit more flexibility in how it worked.  For a while I was also using the Roland SP808ex, which is a phrase sampler.  So I could have pre-recorded loops, or grab samples on the fly from my other hardware to play while I switched patterns on them.  Same basic concept that I still use with the UW aspect of the Machinedrum.

I’ve even done the rather simple method of just holding a long droning note on a keyboard while loading a new song too.  Done sparingly, it works just fine.  Lots of ways to tackle this issue really.  If you’re using multiple pieces of hardware, especially with built-in sequencers, then you can just switch each piece of gear one at a time.  For instance, while it’s muted or has the volume down, switch to the next pattern on your 303 device, then raise the volume.  While that’s playing, maybe you drop the volume on the EMX and then switch to your next song on that, and bring the volume back up. Etc.

Thanks to Ali Berger for allowing me to repost the interview, you can find out more about his own music and live set on his blog:

http://aliberger.tumblr.com/

—————————

On a different note, I’m pleased to say that I had over 10,000 visitors to the blog last month alone.  Glad to see that people continue to find interest in what I write, and have helped pass on the site to others they think might be interested as well.

Unfortunately, while the number of visitors has increased, the number of people donating to help support the blog has dropped drastically.  If a few people a month send me just $1, it really helps to offset my hosting costs.  I’m not looking to get rich or anything, but if you’re feeling charitable and can spare $1, please click on the donate button up on the right hand side of the screen.  Thanks everyone, much appreciated!

Sound Quality: Live versus Logic

As a lot of people know, I tend to frequent a lot of various music-related forums throughout the day.  Every now and then (approximately every 18 minutes) I end up running into a thread discussing which DAW sounds better.  Or as some people like to say, which DAW has the best sounding “Summing Engine”.  Now, I’ve looked into this in the past and posted my results about it, but there’s new people getting into music production every day, so it’s time to revisit the topic I think.

Originally I had planned to do a huge, comprehensive test among all of the latest DAWs I could get my hands on.  But I’m super busy with the mastering business lately, and realistically I don’t have the time to learn the intricacies of each DAW to make sure that I’m doing the test as fairly as possible.  And besides, the test is easy enough for anyone to run on their own.  So I’m only going to focus on the two DAWs I know and use the most (which also happen to have the most heated debates about inherent sound quality), Ableton Live and Apple Logic Pro.

So for this comparison I’m going to be using Apple Logic Pro v9.1.5 in 32-bit mode, and Ableton Live v8.2.6.  The basic premise of the test is pretty simple: I’ll use the same set of audio stems in each application, and then compare the rendered results.  For those of you who’d like to use these same stems in your own DAW of choice, you can download all the 24-bit stems here:

http://tarekith.com/assets/SoundQuality/LogicLiveStems.zip

Because I don’t have time at the moment to write new material for a test like this, I just used some stems from one of my recent songs.  I kept the stems at 17 bars to keep the file sizes smaller, with a short click at the very beginning to assist in lining up the files for comparison later on.  I did run the test on song-length stems as well, and got the same results as with these shorter files, for those that are curious.

Step one was to import all the stems into Logic, and lower each track fader to be exactly -3dB.

Next I made sure to change Logic’s pan law to “-3dB (Compensated)” in the project settings.  This way Logic is using the same pan law that Ableton Live uses (Live does not allow you to change the pan law).
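For anyone curious what that setting actually means, here’s a rough sketch of a constant-power, “-3dB (Compensated)” pan law in Python; a simplification of the idea, not Logic’s or Live’s actual code:

```python
# Rough sketch of a constant-power, "-3 dB compensated" pan law.
# A simplification of the idea, not Logic's or Live's actual code.
import numpy as np

def pan_gains(pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    theta = (pan + 1.0) * np.pi / 4.0          # map pan to 0..pi/2
    left, right = np.cos(theta), np.sin(theta)
    # Plain constant power puts the center at -3 dB per channel (gain ~0.707).
    # The "compensated" variant scales everything up by 3 dB, so the center
    # sits at unity gain and a hard-panned channel lands at +3 dB.
    return np.sqrt(2.0) * left, np.sqrt(2.0) * right

for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    l, r = pan_gains(p)
    print(f"pan {p:+.1f}:  L gain {l:.3f}  R gain {r:.3f}")
```

The point of matching the two pan laws is simply that any difference in center-channel gain between the DAWs would show up as a level offset, not a “summing” difference.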

Next, I bounced all of the stems into a single stereo 24-bit wav file.

Now to do the same in Ableton Live.  First step is to drag all of the stems into Live, MAKE SURE WARPING IS OFF, and then lower all of Live’s volume faders to -3dB.  It’s important in Live to actually type the exact value you want for the volume faders.  Many people don’t realize this, but Live’s faders only show a resolution of one decimal place, and the actual value can be slightly different if you drag them with the mouse or use a MIDI controller.  For instance, if you drag them with the mouse they might really be set to -3.045dB, even though they show -3dB.  For day-to-day use this is no issue at all, but I want to make sure the volumes are identical to what I set in Logic.

Then I exported the stems from Live into a single stereo 24-bit wav file, just like before.

Now the fun part.  The first thing I wanted to do was just listen to these two files and see if I could hear an obvious difference.  I dragged them both into Live (again making sure warping was off) and assigned a MIDI controller to mute one track while soloing the other.  This way I could instantly toggle between them with one button press.

Normally I’d get my wife to help me by toggling these while I wasn’t looking, so that the observations are done blind.  However, she was watching Amazing Race on TV, so I had to just turn off the computer screen and do it manually a bunch of times without keeping count of how often I pressed the button. Not the most scientific, but regardless, I could hear no difference between the two files.  This was done at multiple volume levels on my monitors as well.

You can listen to the files yourself here:

http://tarekith.com/assets/SoundQuality/LiveTestFile.wav

http://tarekith.com/assets/SoundQuality/LogicTestFile.wav

Right click and choose “Save As” to download these to your computer if you want, and remember the little click you hear at the start of both files was intentional.

After that, I opened both files up in Audiofile Engineering’s Wave Editor, and ran an audio analysis on them both.

The results were identical.  Note that even though the “Selection Only” option was checked in the Analysis Window, the entire audio file was selected in both cases when I ran the analysis.

The final test was the infamous phase-cancellation test I’m sure many of you have seen mentioned before.  To perform this, I dragged both rendered files into Logic, used a Logic Gain plug-in on one of them to invert its phase, and then compared the combined output when both were played at the same time.  I used Logic’s metering, as well as Sonalksis’s Free-G plug-in meter for greater resolution (and it’s free, so others can use it for their own testing).  When the phase of one of the files was inverted, the files COMPLETELY cancelled each other out.  I also repeated this test in Live using a Utility plug-in to invert the phase of one track, and got the same results.

This means they are bit for bit identical.
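If you want to run the same null test outside a DAW, here’s a minimal sketch of the idea in Python (numpy plus the soundfile library assumed), using the two rendered files linked above.  Subtracting in full precision also sidesteps any limits of a meter’s resolution:

```python
# Null test outside the DAW: subtract one render from the other, which is
# the same as flipping the polarity of one and summing the two.
import numpy as np
import soundfile as sf

live, sr_live = sf.read("LiveTestFile.wav", dtype="float64")
logic, sr_logic = sf.read("LogicTestFile.wav", dtype="float64")
assert sr_live == sr_logic and live.shape == logic.shape, "files must line up"

residual = live - logic
peak = np.max(np.abs(residual))

if peak == 0.0:
    print("perfect null: the two renders are identical")
else:
    print(f"residual peak: {20 * np.log10(peak):.1f} dBFS")
```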

So, the results of this test show that Ableton Live and Apple Logic Pro produce exactly the same thing when you export a mixdown.

Everything else being equal.

That is the key point that people need to take away from this test, everything else being equal.  This test only shows that at their core, these two applications combine multiple tracks into a stereo wav file in exactly the same way, nothing else.  There are dozens of other aspects of each program that can affect the final audio quality of your mixdowns, and since there is no way to do a fair comparison of those, I’m not even going to bother trying.

For instance, both programs offer different time-stretching algorithms (Warping versus Flex Time).  They both come with plug-ins (and presets) that are designed differently and sound vastly different from each other.  They both handle things like automation data differently.  These are just a few examples of places other than the supposed “summing engine” where what you do, and how you use each program, can impact the final sound of your productions.

I’m sure what I’ve done here won’t end the debate over which DAW sounds the best, but I do hope that in some small way it shifts the discussion to the aspects that truly make a measurable difference.  As I’ve shown here, if there is a difference in sound quality, it’s not in the way they combine multiple tracks into the final end result.

——————-

If anyone finds a flaw in my testing, or just wants to continue the debate, please feel free to discuss this in the comments section below.  Please don’t ask me to test other DAWs; I’ve provided the exact files I used if you’re really that curious about it.  By all means feel free to post the results of any testing you do though, as I’m sure other people are curious as well.

——————-

UPDATE 12-06-2011

Well, it took a while, but the floodgates have opened about my Live versus Logic sound quality test that I just posted.  Some people have raised some good points about ways I could have modified the test to include other parameters, so I’ve gone back and done a few things differently as sort of a round two.  I also wanted to clarify a few questions that seem to keep popping up on different forums again and again.

First and foremost though, I wish people had not just skimmed the article but actually read my conclusions.  I am NOT saying that Live and Logic ALWAYS sound the same.  The point of this test was to isolate one specific area for comparison, and show how at the core, the way these two programs combine multiple audio tracks into a stereo wav file is the same.  That’s it.

Like I stated in the original post, there’s a LOT of other areas where there will likely be differences in sound quality.  Instead of getting mad at me for not doing all the work for you, it would be great if people instead tested some of this on their own and said “hey look, here’s one area where I can reliably show a difference in signals”.

 

Anyway, here are some other things I looked at over the last day, and some clarifications on the test itself:

– A few people mentioned that they hear the differences most notably with recorded instruments.  The guitar in this test was recorded live; it’s a Parker Dragonfly using a combination of the piezo and mag pickups, through a Pod HD500 and then into an RME Fireface 400.

– Some people have questioned whether the soundcard I use (see above) could have any impact on the signals in the test.  Short answer is no, the soundcard doesn’t factor in at all until after the DAW has done its thing and that signal is trying to get out of your computer.  Or if your song sucks, maybe the signal is ashamed and is trying to stay in your computer, I don’t know.

– Other people wanted to know if perhaps the test would turn out differently if I used more than 9 audio tracks.  So I duplicated the tracks in each DAW many times, and added some other random loops from my collection as well (to rule out it just being these audio files that this was happening with).  In total, I used 80 stereo tracks, exported, and still got total cancellation when comparing the two.

– This last test was one of the most interesting.  Someone had suggested using the same third-party plug-ins in both DAWs, and seeing if that had any impact on whether or not they cancel (or how they sound).  So I used a combination of FabFilter Pro-L, DMG EQuality, Voxengo MSED and Polysquasher, and PSP Xenon, placed randomly across the different tracks (yet the same tracks in both DAWs).  Some were placed one after the other in series on the same track, others were solo by themselves on a single track.

Interestingly, when I compared these two results, they did NOT cancel, barely at all in fact.  As I dug into this some more, it seemed that the Voxengo and PSP plugs were the primary cause, as once I removed these the signals almost cancelled.  Summed, they were inaudible, but I was still seeing some very small signal around -96dB on the Free-G meter.  This gave me an AHA! moment though, when I realized this looked a lot like a dither signal.  Sure enough, I had forgotten that I had dithering enabled by default in Pro-L.  Once this was turned off, the two signals cancelled completely.

So I’m really not sure what kind of conclusions one can draw at this point, other than that some of the differences in this part of the test seemed to be down to the plug-ins themselves.  Perhaps they report their latency differently, or have some sort of random processing happening as part of the way they work internally, I really don’t know.
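The dither result, at least, is easy to reproduce.  Dither is low-level random noise added before word-length reduction, and it’s freshly randomized on every bounce, so two dithered renders of identical audio will never null.  A rough sketch of TPDF dither in Python (numpy assumed; this is the textbook technique, not necessarily what Pro-L does internally):

```python
# Two renders of the same audio won't null once dither is involved,
# because the dither noise is freshly randomized on every bounce.
import numpy as np

rng = np.random.default_rng()

def render_fixed_point(signal, bits=16):
    """Quantize to `bits` with TPDF dither (textbook version)."""
    lsb = 2.0 ** -(bits - 1)
    # TPDF dither: the sum of two uniform randoms, spanning +/-1 LSB.
    dither = rng.uniform(-0.5, 0.5, signal.shape) + \
             rng.uniform(-0.5, 0.5, signal.shape)
    return np.round(signal / lsb + dither) * lsb

t = np.arange(44100) / 44100
x = 0.5 * np.sin(2 * np.pi * 440 * t)        # one second of a 440 Hz tone

a = render_fixed_point(x)                    # "bounce" number one
b = render_fixed_point(x)                    # "bounce" number two
diff = a - b

rms_db = 20 * np.log10(np.sqrt(np.mean(diff ** 2)))
print(f"residual RMS between the two renders: {rms_db:.1f} dBFS")
# For 16-bit dither this lands in the mid -90s dBFS, right around
# the residual level described above.
```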

 

Anyway, the long and short of all this is that this testing was never meant to be a definitive statement about which DAW ultimately sounds better, or which people should prefer.  I’ve gotten a lot of surprisingly hateful emails from people calling me an “Ableton Fan Boy” (is that an insult?), among many other not-so-nice things.  At the end of the day, yes, I do like Ableton Live for many things, but it’s only one of many tools at my disposal.

For instance, when clients send me mixdowns to work on, I don’t use Live unless they ask me to; I always reach for Logic first.  It’s faster for this type of work, has better automation functions, and quite frankly I like the way its plug-ins sound better, and how quickly I can add an EQ to a channel if I need to. (Far from perfect though, Logic has been buggy as shit since OS X Lion came out.)

As always, I’m sure people will draw their own conclusions no matter what I say, but I do ask that instead of sending me nasty emails or messages, maybe try to offer something more constructive to the conversation than “You must have tomatoes in your ears!”.

——————-

UPDATE 12-13-2011

Well, it turns out a flaw has been found that invalidates my test.  I had tried to use measurement tools that others could also obtain for free, but it’s been pointed out to me that the low-level resolution of the Free-G metering plug-in was not sufficient to capture all of the audio signal.  An Ableton Forum user has brought to my attention that the last 3 bits of the null-test signal (the signal below -126dBFS) are in fact not bit for bit identical.

How much effect this has on the audible difference between the two signals is debatable (and I’m sure people will debate it), but I have to withdraw my conclusion that Live and Logic produce bit-for-bit identical audio files given the conditions above.  My apologies for not being more thorough in my testing; you can now go back to arguing about which DAW sounds better 🙂
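For anyone who wants to check the bottom bits directly instead of trusting a meter, here’s one last sketch: read each 24-bit render as integers and compare them sample by sample.  Assumes Python with numpy and soundfile, using the two test files linked earlier (soundfile hands 24-bit samples back left-justified in 32-bit integers, so one 24-bit LSB shows up as 256):

```python
# Compare two 24-bit renders bit for bit, no metering involved.
import numpy as np
import soundfile as sf

a, _ = sf.read("LiveTestFile.wav", dtype="int32")
b, _ = sf.read("LogicTestFile.wav", dtype="int32")

diff = np.abs(a.astype(np.int64) - b.astype(np.int64))
mismatched = np.count_nonzero(diff)
worst_lsb = int(diff.max()) // 256          # difference in 24-bit LSBs

print(f"{mismatched} samples differ, worst by {worst_lsb} LSB(s)")
# A perfect null would print "0 samples differ".  Differences confined to
# the last 3 bits stay within 7 LSBs, down around the -126 dBFS region
# mentioned above.
```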