Broken Past DJ Set

BrokenPast (<- Click to download or play)

I was going through some of my older tracks one day last week and decided it was time to put together a mix of some of my more uptempo tracks from days gone by.  Some of these songs are 6-7 years old (and sound like it!), but it was a lot of fun revisiting some of my earlier tunes.  Mainly breaks in the beginning, with some 4/4 stuff towards the end.

Start Time – Track (all tracks written by Tarekith)
00:00 – Devoid
03:52 – Playing With Forces
08:49 – The Thin Line
15:42 – Somewhere In Here
21:37 – Talus
26:32 – A World Away
33:10 – Shadows Falling
38:45 – Huck & Drop (2010 mix)
44:29 – In Your Place
50:06 – Deception
53:53 – Curver
59:25 – Base5
65:59 – Know No Limits

Very Sary – Downtempo DJ Set

Very Sary
Downtempo DJ Mix 06-19-2011

Smooth downtempo for the coming summer, perfect for the beach, poolside, or late-night lounge sessions.  This is probably one of the best mixes I’ve done in a long time, really happy with the tunes I found for this one, and how it all came together.

This set was recorded for the RK2 Podcast 5 year anniversary, thanks to Korruptor for asking me to be a part of this celebration!

http://www.rk2podcast.com/

Start Time – Artist – Track Title – Label
00:00 – Soulstice – Changes (Rocket Empire Rmx) – OM
05:58 – Solid State Drive – Lazy Hazy – Dusted Down
07:39 – TV Victor – Part 3 – Tresor
14:40 – Ruggero – Zaffiro – Wormland White
20:45 – Phutureprimitive – Burn – Native Harmonix
26:53 – Green Beats – The Beach – Synergetic
34:05 – Tarekith – Ridgeline – Tarekith.com
38:50 – Solid Sessions – Janeiro (Chiller Twist Rmx) – Cloud 9 Dance
45:59 – Green Beats – Tokyo Romance – Dubmission
51:32 – Hybrid – Every Word – Distinctive
56:17 – Zoubir Madani – Joy – Kinjo
61:31 – Bassus – Waterfall – Sambit
66:29 – Lomez, Ivaylo – Rila (Sound Solutions Rmx) – Houselective
73:02 – Fastus – Vol du Nuit – Master

 

(Still) Almost Live

Back in April I posted about the steps I was taking while preparing a new hardware based live set:

http://tarekith.com/almost-live/

Well, things are finally getting much closer to wrapping up, so I thought I’d update people on some of the other aspects of what’s going into this set.  It’s probably a bit of overkill, since it’s a relatively simple affair using only the Machinedrum and Monomachine, but people seemed generally interested the last time I talked about it.

So, when I last discussed this, the core patterns in the Monomachine (MnM) had been written, covering the bassline, lead, pads and fills, and other random synth sounds.  I had the basic beats sketched out in the Machinedrum (MD), mainly just some simple kick, snare and high hat patterns though.  I was planning on using 16 patterns as songs to fill up an hour live set, with transitions being handled by the real time sampling functions of the MD’s RAM machines.

The next step was to start adding in supporting percussion parts in the MD, and for this I wanted to do something a little different.  I decided that all of the percussion sounds would be made up of found sounds: basically me running around the house with a microphone, recording myself hitting and tapping random objects (note to self: the dog does not appreciate being a drum).  I didn’t need a lot of sounds, since the MD has quite a bit of sound-sculpting ability, so I narrowed it down to only 23 samples in the end.  You can download them here if you’re curious:

http://tarekith.com/assets/2011LiveSetMDSamples.zip

From there it was just a matter of fleshing out the Machinedrum patterns with these new sounds, plus some cymbals using both the built-in synth engines and my samples.  At this point I was also balancing the levels of the different drum sounds, adjusting the panning on the less important sounds (main sounds are always right up the center), and programming some parameter locks here and there to keep things interesting and evolving.  I’m also a big fan of using the LFOs on parts like the HHs to modulate volume in a semi-random fashion; it keeps things a little more organic sounding and less static.

Once done with that, I’d say 95% of the music on both the MD and MnM was written, so I was able to start working on the track order for the live set.  I like to start out a little slower but still catchy, build that up for a bit, then break up the set in the middle with some slightly weirder and perhaps even darker sounding songs.  Then I can come out of those and increase the complexity and energy to end on a strong note.  I’ve always found that Ableton Live is a really good tool for helping me to figure out the track order of live sets, since I can easily move clips around in session view to see how the set flows from start to finish.

The first thing I do is create 3 audio tracks in Live: one for the MD, one for the MnM, and one that I actually record to.  The MD and MnM tracks are routed to this third record channel, which lets me record both instruments into a single clip for each pattern.  I do this, naming each clip in Live after the corresponding pattern in the Elektrons, and then play around with the order of things until I like the way it flows as a set.

(Click to enlarge)

Once I’m happy with the order of things, I take a screenshot of the clip order in Live, and then it’s time to start playing with sysex.  The Elektron boxes don’t have a dedicated librarian for moving things around on the computer, so it all has to be done old-school style with sysex.  Luckily, Elektron has built some really clever sysex functionality into each box that makes this a lot easier to manage.  For starters, since a pattern will always call up a kit when loaded, it’s possible to export both patterns and kits in one go, and they will stay tied to each other.  So the first thing I do is export the pattern and kit sysex for every pattern in both machines, naming the files according to the track order I want.

Here’s the neat bit though: once the sysex is named and ordered on the computer, I can send it back to the Elektrons and specify the exact locations where I want both the kits and patterns to load.  When receiving sysex dumps, the MD and MnM can be set to load the sysex into the same locations it was originally in, or I can specify an exact starting point for both the kits and patterns.  This means I can send all of my sysex in one go, in the correct order, and know that the Elektrons will store the data in the correct order as well.  It sounds a little confusing, but it saves a TON of time compared to manually sending each kit and pattern and saving them individually to the right locations.
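As a side note, if you give your dumps a numeric prefix (“01 Intro.syx”, “02 …”), a tiny script can make sure they sort in true set order rather than alphabetically (where “10” would land before “2”).  This is just a hypothetical sketch, not part of my actual workflow, and the filename scheme is my own assumption rather than anything Elektron mandates:

```python
import re

def set_order(filenames):
    """Sort sysex dump filenames by their leading track-order number."""
    def key(name):
        m = re.match(r"(\d+)", name)
        # Files without a numeric prefix sort to the end.
        return int(m.group(1)) if m else float("inf")
    return sorted(filenames, key=key)

dumps = ["10 Closer.syx", "02 Opener.syx", "1 Intro.syx"]
print(set_order(dumps))  # numeric order, not alphabetical
```

From there you’d hand the sorted list to whatever sysex utility you use to actually send the files.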

I should also note that during this process I culled two songs that just weren’t really working in the set.  Rather than backtrack and try to write new material, in the interest of moving forward and getting this set prepped, I’m just going to go with 14 patterns.

So, once the track order is set and addressed the way I want it, the last step is to go back and do one final adjustment of all the volume levels.  I’m trying not only to make each pattern sound full and balanced, à la a mixdown, but also to make sure that the volumes are consistent from song to song.  I also make sure that the low end is nice and balanced (as much as I can), since the Elektron boxes can output gobs of sub-bass if you’re not careful.  Full-range monitoring definitely helps here!

I also tend to write my Elektron sets with the master volume knob on each machine all the way to max, so I’ll double-check that I’m not clipping my audio interface by sending too hot a signal to my Fireface400.  I don’t perform with the volume knobs at max; I tend to put them at about 3:00 to give myself more of a safety margin when I play out.  This way I know that even with things maxed out for some reason, I will not be clipping either a PA or anything I record into.

And of course I’m backing up all of this work daily too, sometimes more than once a day depending on how much work I’ve done.  Better safe than sorry!

And now we come to where I am at the moment with the live set.  All of the above has been done so far, and I’m pretty close to being able to perform and record the set.  As I mentioned earlier, the songs are all about 95% of the way done, so I’ll take my time over the next couple of weeks to go back and fine tune everything until I’m totally happy.  After the last couple of weeks of heavy writing and tweaking, it’s nice to have a couple of days away to give my ears a break and get some fresh perspective.

Then it’s just a matter of waiting for the right time to get inspired to play and record the set.  I’m hoping to have this done in the next couple of weeks, but it wouldn’t be the first time I’ve said that about this live set!  🙂  I also plan on trying to videotape the performance, so people can see how I ‘play’ a live set on hardware.  No promises, but I’d like to do a close-up of the MD and MnM and annotate what I’m doing throughout the set if I can.

—————–

So, there you go, an update on the live set.  It’s been a lot of fun working on it; I always enjoy the mental process of composing on hardware in a groovebox fashion.  But I’m glad it’s almost done too.  I’ve been working on this for a long time, so it will be nice to put it behind me and move on.  I’m tentatively planning on starting a new full-length album after this, focusing on Ableton Live and Max4Live, using the APC40 and iPad apps to control and write.  We’ll see though; after this project I might need a break, and who knows what new ideas I’ll get then.

Thanks for reading as always, hope some of you found this interesting.  Just a note that I’m now on Facebook as well, so stop by if you want to follow or say hi:

https://www.facebook.com/InnerPortal

 

Ableton Live & APC40 Live PA set up

A few people over the last couple of weeks have asked me how I use Ableton Live and the Akai APC40 to perform my live PAs.  I’ve covered it briefly on the Ableton forums over the years, but I figured it was time to go into a little more detail.

While I tend to write brand new material for my hardware live sets, my Ableton Live sets are my chance to perform the studio tunes I’ve written and released during the previous couple of years.  To make things coincide with the APC layout, and to keep the set from being too complex, I use 8 tracks of audio clips in my Live sets.  To make it easier to remember which sounds are on which track live, I use the following layout for all my tracks:

Track 1 – Kick and Snare
Track 2 – Percussion
Track 3 – Cymbals and Hi Hats
Track 4 – Bassline
Track 5 – Lead (synth or guitar)
Track 6 – Synth 1
Track 7 – Synth 2
Track 8 – Pads and Fills

Tracks 6 and 7 are basically for any sounds that don’t fit into the other categories, things like secondary synth lines, supporting guitar parts, weird effects or vocal samples.

So the first thing I do when prepping material for my sets is to open the original song project file and start combining everything down to these 8 stems.  One of the things I’ve learned over the years is to not try and include every single sound from the original song in my stems.  It makes the overall sound too busy in a live setting, and often it’s better to just focus on the strongest and most important parts of the song.  So a lot of fills, and sounds that were only used occasionally in the original song, will get deleted.

Once I decide on what sounds will be part of the 8 stems, the next thing I do is work on making these into 32-bar loops.  I grew up performing with grooveboxes, so I’m used to working with loopy material and creating the song structure, builds, and peaks on the fly.  I find that 32 bars is the best compromise between loops that are too short and repetitive, and loops so long that they don’t really give me a chance to interact with them and create something live.  Typically in a live set I’m only going to loop each clip 3-4 times before moving on to the next song, so it works out well.
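For anyone wondering how long a 32-bar loop actually runs, the math is simple; here’s a quick sketch (the tempo values are just examples, not anything specific to my sets):

```python
def loop_seconds(bars, bpm, beats_per_bar=4):
    """Length of a loop in seconds at a given tempo, assuming 4/4 time."""
    return bars * beats_per_bar * 60.0 / bpm

# A 32-bar clip at 128 BPM runs exactly one minute, so looping it
# 3-4 times works out to roughly 3-4 minutes per song.
print(loop_seconds(32, 128))
```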

In this phase I’m basically trying to condense the song into the strongest 32 bars I can, so that when all 8 stems/clips are playing at once, it’s more or less the peak of the song.  Mainly because I find it’s easier to play with the song structure on the fly this way.  I have a lot of tools to make complex parts simpler, loud parts quieter, and important sounds less in the forefront if I want.  More on that later though.

As part of this process of paring things down to 32 bars, I really try to re-use my programmed fills from the studio version to make things more exciting and less loopy sounding.  For instance, in the studio version I might have a kick and snare fill programmed every 16 bars.  When combining everything down to the live versions, I’ll pull the best of these fills and place them every 8 bars, maybe.  The strongest and most exciting fill will be placed at the end of the 32 bars as well, so that when the clip loops, it does so in a way that avoids being too monotonous or boring.

The last thing I do before rendering these stems is to make sure that they actually do loop and repeat smoothly: there are no clicks or pops, and no matter which combination of the stems is muted, it sounds natural and flows nicely.  I don’t want people to think “oh right, that’s where his song looped and repeats again” if I can help it.

Once that’s done, I render each stem as a 24-bit/44.1kHz wav file, and name it with the stem type and the song name, e.g. “Bassline – Disappear.wav”.  This just makes it easier to quickly find the audio file later on if I need to.
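Since the naming convention is completely regular, generating all 8 stem names for a song is trivial to automate.  A hypothetical sketch (the stem list comes from the track layout above; the plain hyphen separator is just an illustration of the pattern):

```python
# The eight stem types, in the track order listed earlier.
STEMS = ["Kick and Snare", "Percussion", "Cymbals and Hi Hats", "Bassline",
         "Lead", "Synth 1", "Synth 2", "Pads and Fills"]

def stem_filenames(song):
    """Build 'StemType - SongName.wav' filenames for every stem of a song."""
    return [f"{stem} - {song}.wav" for stem in STEMS]

for name in stem_filenames("Disappear"):
    print(name)
```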

(Click image above to see full sized image)

From here it’s time to organize the live set into one Ableton project in Session View.  As I mentioned, I use 8 tracks, and each scene in Live is a different song.  Sometimes if a song has a solo that doesn’t fit into my stems, or maybe I have a really long drop I like, I might create a second scene for just those parts.  In the screenshot above you can see I did this for the song “Tidal”.  It has a very strong solo I recorded in the studio, and I want to make sure I don’t accidentally trigger it until I’ve built up to it appropriately, so it’s on its own scene.  It’s a way for me to visually know that that clip is special in some way, and not to trigger it as if it were a normal stem.  When I say visually, I mean both by looking at the laptop screen and by looking at the APC40’s grid buttons.

So, the next step is to put all my stems on the appropriate tracks and scenes, name all the clips and scenes, and give each song its own color (both the clips and scenes).  I’m a visual person, so if I DO need to glance at the laptop to see where I am in the set, the colors help me break up the set in a way that lets me quickly see what I need to.  I also put the song tempo in the scene name.  Because my downtempo sets can cover a large range of tempos, this lets me know to change the set tempo to match the original song tempo.  I do this by assigning the Cue Volume knob on the APC to Live’s tempo field.  Generally, if I know the next song is at a faster tempo, I’ll slowly start increasing the tempo during the current song to make the change less noticeable.

The next thing I do is warp all the clips.  The drum clips usually get warped with Beats mode, basslines with Tones, and everything else typically Complex Pro, though admittedly it depends on the sounds too.  I’ll use whatever sounds best over a +/-10 BPM range.  I double check that each clip is set to loop properly, and that Live guessed the correct location for the start marker (sometimes it offsets this a tiny bit, which throws everything off).

The last step in prepping the clips is basically to do a mixdown of each scene to make sure everything plays back at the right volume and is consistent song to song.  I like to have the faders up all the way on the APC for this, so I can easily slam faders up and down on stage without worrying about boosting too much.  So I’ll set the volume fader for each track in Live to max, and use the clip volume controls to adjust the volume of the audio.  This is a great way to give the whole set a more cohesive feel as well, since I can redo the mixdowns to be similar song to song.  Typically I try to leave about 4-5dB of headroom on the master channel when prepping the set this way.  I do put Live’s Limiter on the master channel as well, but only for catching stray peaks that might happen when I perform, mainly from effects usage.  It’s rare that it happens, but better to be safe than sorry.
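If you’re curious what 4-5dB of headroom works out to in linear terms, it’s just the standard decibel-to-amplitude conversion; a quick sketch:

```python
def db_to_linear(db):
    """Convert a dBFS level to linear amplitude, where 0 dBFS = 1.0."""
    return 10 ** (db / 20.0)

# 4-5dB of headroom means master peaks sit around 0.56-0.63 of full scale,
# leaving room for live effects tweaks before the safety limiter engages.
print(round(db_to_linear(-4.0), 2), round(db_to_linear(-5.0), 2))
```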

When it comes to effects, I have 2 return tracks in the set: one for reverb and one for delay.  I also have a custom effects rack on each track, made up of 8 of my favorite DJ EFX from the packs I’ve released here:

http://tarekith.com/assets/TarekithDJEFXv8.5.zip

As you can see, the rack has high-pass and low-pass filters, some gating effects, some chorus and ambient-generating effects, and more delays (I love delays).  I use the same rack on every track, again, just to be consistent, so I know what I’m tweaking no matter which track I’m adjusting.  I can do the whole live set without looking at the laptop, so this type of consistency just helps me avoid any unexpected things happening as I jump around the set looking only at the APC40.

And that is how the Live Project is set up for my Live PA’s.

The APC40 I use to control the set is basically set up to use the default mapping it comes with right out of the box.  The grid buttons launch clips, the faders control track volumes, and the solo and mute buttons work as you’d expect.  I use the Track Control knobs to control what feeds Send 1 (Delay) and Send 2 (Reverb).  Because I only have one Effects Rack on each track, the Device Control knobs control my track effects depending on which Track is currently selected.  The only non-standard mapping is the tempo control I mentioned earlier, where the APC’s Cue Level knob is assigned to global tempo.

Also, as you can see above, I have colored the Clip Stop buttons red (with a Sharpie, nothing fancy).  This helps remind me which buttons are the Track Select buttons, and which will stop my audio at the wrong time.  Honestly, this is pretty much my only complaint with the APC40; I still don’t understand why Akai didn’t use red LEDs for the Clip Stop buttons.  Red means stop, green means go, duh.  🙂

From here it’s just a matter of performing the set.  I use track volumes and muting to define the song structure on the fly, create drops and build ups, and slowly morph from one song to the next.  Track effects let me alter my audio loops in different ways, and with my Delay and Reverb sends along with my Weird Wash effect (in the track effects rack), I can turn any sound into a texture or a pad.

Probably my favorite part of this setup is that I can do a whole set without looking at the laptop; it turns the APC40 into almost a groovebox.  I don’t feel like I’m using software and a MIDI controller at all.  In the future I’m thinking about using Kapture Pad on my iPhone or iPad as well.  That way I can use a lot of effects to mangle the set into weirdness, and with a press of a button (errr…. on the screen) bring it all back to normal instantly.  I haven’t had a chance to play with this yet, but it’s definitely something I’m keeping in mind for the next time I do a software-based live set.

Anyway, that’s how I do my Ableton Live sets using the Akai APC40.  I’m happy to answer any questions if I didn’t explain something clearly enough, just post it in the Comments section below.

Edit:

Oops, I forgot to post a link to one of my sets done with the setup described above:

Downtempo Live PA

 

Almost Live

Recently I’ve talked a little bit on my Twitter feeds about how I’m prepping a new hardware-based live PA, and I’ve had a few people ask me questions about it.  Namely, why hardware and not Ableton Live anymore, and how do I go about creating a strictly hardware-based live set?  So, I’m going to talk a little about that for this week’s blog entry.

To start with, no, I’m not ditching Live and the APC40 completely for my live sets.  I’ve been happily using that combination for a couple of years now; it’s just time to revisit my past a little bit.  If you were one of the bored people who made it all the way through my “History” blog post from a few weeks ago, then you’ll know that my very first introduction to writing electronic music was putting on live PAs in the late 1990s.

I’ve revisited the idea a few times over the years since then in a series I call “Morphing Mechanism”, but for the last couple of years I’ve really been itching to put together a brand new live set that doesn’t involve a laptop at all.  It’s both a challenge to me, and I think a way to sort of set myself apart a little bit from the plethora of laptop based performers in Seattle these days.  I’m sure one day I’ll revisit the laptop based live set (in fact all this hardware work has given me some new ideas on how to do so), but for now I’m focusing strictly on hardware grooveboxes and drum machines to perform with.

I started work on this project early last year with the intention of it being based around an Elektron Machinedrum-UW and an Access Virus TI2 Polar.  In this instance, the Machinedrum (MD) was going to be doing all of the drum sounds, as well as being the sequencer driving 4 tracks of synths in the TI.  Unfortunately, after 8+ months of work (and literally on the day I finally considered the set done and ready), I ran into a nasty bug in the Virus OS.  An hour before I was to record a demo of the set to pass out to promoters, I lost all of the sounds in the TI, and there was no way I could get them back.  Yes, I had been making daily sysex backups, but the bug was such that the backups the TI sent were corrupt, and I had no way of knowing this.  So after loading one of these corrupted sysex backups back into the TI, all of my sounds were overwritten with garbage noise.  To say I was upset would be a huge understatement.

A few days later Access confirmed the bug (and released an OS update correcting it shortly after), but by then I was pissed off and fed up, so I sold the TI.  Of course, this left me in sort of a quandary: with the TI and all of my synth sounds gone, what was I going to replace it with?  In the end I decided to finally take the plunge on an Elektron Monomachine (MnM).  The Machinedrum is my favorite bit of gear ever, and I figured it was time to see if the MnM was equally good when it comes to synth sounds.  Based on other user reviews, I was a bit fearful that it might not be a sound that I liked, or that it would be too simple for me, though luckily these fears proved to be completely unfounded.  The MnM is a very deep synth, and while not as performance-oriented as the MD, I knew it would work nicely for my new live set.

By this time I was beginning to think it would be best to just scrap everything from the last live set attempt and start over with a clean slate, so that’s what I did.  All the MD sounds and patterns got deleted, and I started with an empty palette on both the MD and MnM.  Because the MnM has its own built-in sequencer, there was no longer a need for me to use the MD to sequence my synth parts either.  So for this go-around I’d still be doing all of the drums on the MD, but its sequencer would run slaved to the MnM as the master clock.  There’s no real reason why the MnM is the master rather than the MD, other than the fact that I have the MD on the left and MnM on the right, and it just feels more natural to hit start and stop with my right hand.  I again decided to use only 4 synth sounds on the MnM, which leaves two of its six tracks free to assign as effects.

One of the things I find most helpful in preparing and performing live sets is sticking with a set layout on all my gear when it comes to instrumentation.  For instance, I know that no matter what song I’m playing, Track 1 on the MD is always my kick, Track 2 is snare, Tracks 9 and 10 are the HHs, etc.  Likewise on the MnM, Track 1 is the bassline, Track 2 is my lead, Track 3 is the effects for the lead, Track 4 is a random synth, Track 5 is my pad or fill sounds, and Track 6 is the effects for Track 5.  Setting things up this way right when you begin writing and prepping the live set makes it simple to know exactly what you are controlling at any time in the set, and much easier to troubleshoot in the heat of the moment when something doesn’t sound right.

The other thing I do when working with hardware live sets is to treat each pattern like its own song.  In most hardware grooveboxes and drum machines, your sounds and sequences are organized into short segments called patterns, typically 4-32 bars long.  So when I’m crafting the set, I basically treat each of these patterns as a distinct song in the live set, and I write between 10-16 patterns to last me an hour or so.  Of course, this means that all of the drops, build-ups, and variations in each song need to be done on the fly; they can’t be programmed in advance.  Normally this is accomplished by muting individual sounds and tweaking the parameters of different sounds as I play.  This is actually my favorite part about performing, as it means that each time I do a set it’s completely unique, and I get to orchestrate it on the fly depending on my mood.

In the case of the MD and MnM, though, they both have a maximum pattern length of only 4 bars.  This presents some interesting challenges when writing and preparing a live set.  Namely, how do I keep things interesting enough and not too loopy sounding?  With software this is less of an issue; it’s easy to add in complex pre-recorded fills, or use longer patterns.  So one of the things I’ve learned over the years is to just not worry about that too much.  I embrace the fact that this is inherently going to be a bit loopy sounding, and focus on making the strongest grooves I can so people don’t mind listening to them for 3-4 minutes apiece.  Again, this is one of the great things about playing live versus writing in the studio: in all likelihood your audience will only ever hear these songs this one time, so you can get away with a little more repetition.
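To put the repetition in perspective, it’s easy to work out how many times a 4-bar pattern actually loops during one song; a quick sketch (the tempo is just an example):

```python
def pattern_repeats(song_minutes, bpm, bars=4, beats_per_bar=4):
    """How many times a pattern of the given length loops over a song (4/4)."""
    pattern_seconds = bars * beats_per_bar * 60.0 / bpm
    return song_minutes * 60.0 / pattern_seconds

# At 130 BPM a 4-bar pattern lasts about 7.4 seconds, so playing it for
# 3 minutes means roughly 24 passes -- hence the focus on strong grooves
# and constant real-time variation.
print(round(pattern_repeats(3, 130)))
```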

That’s not to say I don’t still try to keep things evolving and interesting.  I try to keep each song pretty short, and add a lot of variation with real-time tweaking and mute changes.  You only have two hands, so there’s only so much you can do, but I’ve been doing this for almost 20 years now, so I have a good feel for how to pace things to keep them moving.  It helps that the MnM has 3 really slow LFOs for each sound, so it’s not too difficult to make things slowly morph over the 3-4 minutes I’m playing each song.

The other trick I’ve learned for keeping things interesting is to not worry about the drum parts until later in the process.  I try to really focus only on the synth parts initially, so that they are strong enough to stand on their own without relying on complex drum parts or familiar rhythms.  When I do this, the songs ultimately seem more interesting than when I start with drums, like I normally do when writing music.  It might seem odd at first, but when you have really strong instrumentation, it’s a lot easier to write drums to fit, versus the other way around.  Especially when each groove is only going to be 4 bars long.

Of course, the one thing most people ask me is how I transition smoothly from one song to another.  Let me start by saying that you don’t always have to worry about this.  I know a lot of really awesome live acts that only play one song, stop, load up the next song, and then perform it.  It’s a perfectly valid way of performing, and arguably has its own advantages (like not having to stress about transitions).  But, for whatever reason, playing electronic music live has always been about crafting a continuous piece of music for me.  Because of this, I’ve always gravitated towards gear that has some sort of facility that makes this easier.  Initially it was the Roland MC505 with its Megamix, then the E-mu Command Stations with their similar XMIX function.

The Machinedrum UW has a rather unique function in that you can sample both its internal output signal and/or anything coming into its inputs at the same time.  Samples are mono, can only be 2 bars long at most, and quite honestly sound rather digital, since they are played back at a bit depth of 12 bits.  Still, despite the limitations, it’s a lot of fun and offers me an easy way to move from one pattern to the next.  I merely sample the MD internally at the same time as the MnM coming in externally, loop that, mute all other parts, do the pattern switch while the sample continues to play, and then slowly unmute the new parts from the next pattern.  The whole time, you can freely tweak and re-sequence the audio you previously sampled too.  It’s a terribly difficult thing to describe succinctly, but trust me that it works great and is very simple to do once you get the hang of it.

Initially I was running the MnM directly into the MD’s inputs, but to be honest, anything coming through the MD directly like that ends up sounding rather flat and one-dimensional; all the depth and subtlety is gone.  So now I use my RME Fireface400 as a small but very high quality standalone mixer (it doesn’t need a computer connected to work like this).  Both the MD and the MnM go into the FF400’s inputs, where they are summed and sent to a master stereo output.  I also have a copy of the MnM’s audio signal going to a separate output which feeds the MD’s inputs, strictly for sampling during these pattern switches.  The best part about this setup is that both machines sound fantastic on their own, and I can still feed the MnM to the MD for sampling.  If the Fireface is out of your budget and you’re interested in this idea for your own sets, the MOTU Ultralite can do the same thing at less than half the price.

So there you have it, a somewhat brief rundown of how I’m prepping my new live set.  Currently I’m about halfway through writing material for the new set, though it’s coming together a lot faster than I thought it would.  If all goes well, I hope to have a demo recording ready in a couple of months or sooner, with some live gigs to follow shortly after that.  If you’re interested in hearing examples of material performed like I’ve described, here are links to two of my previous live sets using similar gear.

This set is done using only the Machinedrum and nothing else:

http://tarekith.com/mp3s/Tarekith-Machinedrum_Live_PA.mp3

This live set was done with the Machinedrum doing the drums, and a Korg EMX-1 providing all of the synth parts:

http://tarekith.com/mp3s/Tarekith-The_Flow_Of_All_Things.mp3

Both sets were done 100% live and on the fly, with no additional editing or processing aside from normalizing.  Enjoy, and stay tuned for the new live set in the near future.

———-

Also, if anyone is curious to see what yours truly looks like (shudder), I just recorded a new video introduction for my mastering business.  You can view it here:

http://innerportalstudio.com

Space Is The Place

Recently I’ve been seeing a lot of people asking how to create space and depth in their mixes, so I figured it was a good time to write down my thoughts on the subject.  When I talk about depth and space, I’m referring to that three-dimensional aspect of a song, where some instruments sound like they are farther away from you, the listener.  It can also refer to the times when it sounds like the music you’re hearing was performed or recorded in a very specific location, such as a performance hall.

Contrary to popular belief, it’s usually not as simple as just putting a reverb on certain sounds.  While that can be part of the solution, more often than not it ends up making things worse in the wrong hands.  Don’t get me wrong: for many people a single reverb might be all they need to add space to a mix, but there are certain things you need to keep in mind if you go this route.

For starters, not all reverbs are created equal, and the better the reverb you use, the easier your job will usually be.  With cheaper and less CPU-intensive reverbs, you often end up just washing the sound out rather than actually adding any depth to it.  That does the opposite of your intent, and makes sounds harder to place sonically.  I personally find convolution reverbs the most realistic, but they’re certainly not the only option.  So tip number one would be to use the best reverb at your disposal if you want depth and you’re not trying to create a special effect.

One of the most important aspects of reverb is the pre-delay, which (in simple terms) controls how long after a sound is heard that the reverb starts.  Think about clapping your hands in a room, and the reverb tail that sound makes.  You don’t instantly hear the reverb when you clap your hands, it takes time for the sound to reach the walls, then bounce off and interact with the other reflections to create the reverb.  A good rule of thumb is that sound travels 1 foot per millisecond in air (at room temperature).  So if you are 15 feet from the nearest wall, that means that it will take approximately 15ms for the reverb to start.
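The 1-foot-per-millisecond rule of thumb above makes for a trivial calculation when dialing in a starting pre-delay. This is just a rough sketch of that arithmetic (the true speed of sound at room temperature is closer to 1.13 feet per millisecond, so treat it as a starting point, not gospel):

```python
# Rough pre-delay estimate from a source's distance to the nearest wall.
# Rule of thumb from the text: sound travels ~1 foot per millisecond in air.
FEET_PER_MS = 1.0  # approximation; ~1.13 ft/ms is closer to reality

def predelay_ms(distance_to_wall_ft: float) -> float:
    """Approximate reverb pre-delay for a source this far from the nearest wall."""
    return distance_to_wall_ft * FEET_PER_MS

print(predelay_ms(15))  # 15 ft from the wall -> roughly 15 ms of pre-delay
```

Plugging in different distances gives you a feel for how pre-delay maps to perceived room size before you start tweaking by ear.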

You can use this to your advantage when setting up your reverb, since you can use the pre-delay to help determine not only how big a room is, but also where your instrument is in the room.  Keep in mind that this is only a general guideline though; with most reverbs the pre-delay parameter controls a much more complex set of interactions, so as always, use your ears and do what sounds best.  Generally I find 15-30ms is a good range to start with, and rarely do I use more than that.

The last reverb tip I’ll offer is to use less than you really think you need.  Often I hear people really soaking sounds in reverb to create depth, when in real life, our ears only need a tiny bit of this sound to accurately place the location of a sound.  Using too much ends up sounding more unnatural than not using any at all.  Same with the size of the reverb, you don’t need to use really long reverb decay settings to create the sense of a large space.

Having written all that, I have to admit that I pretty much never use reverb anymore to create depth.  I’ll usually use a delay instead, as I find having a slowly decaying delay can often create more space than a reverb will, without cluttering up the mix or washing things out too much.  Coming up with a delay setting that conveys depth usually requires more experimentation than reverb, so unfortunately I have less concrete advice to offer here.  After all, delay is only mimicking the sense of space true reverberation offers, in real life we rarely if ever hear things as only discrete delays.

Stereo and ping-pong delays work well as they let the effect fill the sides of your stereo image, while the core sounds remain centered.  Likewise, using a delay that low-pass filters each successive repeat can simulate the delays being absorbed by items in the “room” you’re creating, much like what happens in real life (i.e. furniture absorbs a room’s reflections over time).  Again, if you’re after a sense of realism, use less than you think you need; the point is to HINT at space with delays, not drop the listener into a huge pool of them.
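To make the idea concrete, here’s a bare-bones sketch of a ping-pong delay whose repeats alternate sides while getting progressively quieter and darker. The function names and parameters are purely illustrative (not from any particular plug-in), and a real implementation would work on audio buffers from a DAW rather than plain lists:

```python
# Illustrative ping-pong delay: each repeat alternates L/R, drops in level
# (feedback), and passes through a one-pole low-pass so it gets "darker",
# mimicking a room absorbing high frequencies over successive reflections.

def one_pole_lowpass(samples, coeff):
    """Simple one-pole low-pass: y[n] = y[n-1] + coeff * (x[n] - y[n-1])."""
    out, y = [], 0.0
    for x in samples:
        y += coeff * (x - y)
        out.append(y)
    return out

def ping_pong_delay(mono_in, delay_samples, repeats=4, feedback=0.5, lp_coeff=0.4):
    """Return (left, right) sample lists with alternating, decaying repeats."""
    length = len(mono_in) + delay_samples * repeats
    left = [0.0] * length
    right = [0.0] * length
    tap = list(mono_in)
    gain = 1.0
    for r in range(1, repeats + 1):
        tap = one_pole_lowpass(tap, lp_coeff)   # each repeat gets darker
        gain *= feedback                        # ...and quieter
        dest = left if r % 2 else right         # alternate sides per repeat
        offset = delay_samples * r
        for i, s in enumerate(tap):
            dest[offset + i] += s * gain
    # dry signal stays centered (equal in both channels)
    for i, s in enumerate(mono_in):
        left[i] += s
        right[i] += s
    return left, right
```

The `feedback` and `lp_coeff` knobs are where the experimentation mentioned above happens: lower values of either make the repeats die away faster and duller, hinting at a smaller, more absorbent space.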

Regardless of which method you prefer, reverb or delays, if you want to place multiple instruments in a space, you need to give each one a varying amount of the effect.  Or use one reverb for close instruments, and a different reverb for instruments further away.  The key to this technique is to make the reverbs or delays as close as possible to each other tone-wise, varying only the controls that convey the actual space, or the distance the listener is from the sound source.  And of course, you don’t want to add the effect to all your instruments, as that will make everything feel far away, and your song might lack any impact or contrast.

Space and depth are not created with effects alone though; there are two other aspects of your mixdown you need to pay attention to.

The first is panning.  Having every sound in your song panned dead center might ensure the greatest compatibility on a club sound system, but when it comes to realism, it doesn’t work very well.  It’s like going to a concert where every musician in the band is standing single-file in a line with you in front of them.  In addition to making it harder to mixdown and combine multiple instruments, it doesn’t add much stereo interest.  So move things around left to right in the sound stage.  This helps convey a lot of information about where instruments are in relation to each other, and honestly is just more exciting to listen to most of the time.

Two pitfalls I often see people fall into with panning though, are putting too many instruments to the sides, or putting them too far out to the sides.  You don’t want every single sound in your song coming from only the left or right speaker (usually, maybe that’s what you DO want, weirdo!).  Maybe it worked for the Beatles, but they didn’t have much choice and you do.  So be selective about what you pan, and how far to each side you pan it.  I get a lot of songs sent to me for mastering where the artist went crazy with their panning, and as a result, there’s nothing in the center of the sound stage.  EVERYTHING is panned somewhat left or right, and it creates a dead spot right where you want things to be front and center.

In general, I try and keep the most important instruments in the song closer to the center of my sound stage.  If not dead center, then at least not panned very far out to the sides.  Things like pads, effects, strings, etc are usually filling more of a supporting role, so they can afford to be out to the sides.  When panning instruments further to the sides, pay attention to the overall balance of the mix too.  You don’t want more instruments in the left side of the mix than in the right, or vice versa.  It makes things sound unbalanced, and gives the impression that one speaker is louder than the other. If you have something loud panned left, pan something equally loud to the right.  Simple.
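As an aside, the reason a sound stays equally loud as you sweep it across the stage comes down to the pan law your mixer applies. The post doesn’t prescribe one, but a common constant-power law looks like this (a sketch, with the `-1`-to-`+1` pan range being my own convention here):

```python
import math

# Constant-power pan law: pan in [-1, 1], -1 = hard left, +1 = hard right.
# The left/right gains trace a quarter circle, so left^2 + right^2 is always 1
# and the perceived loudness stays roughly constant across the sound stage.
def pan_gains(pan: float):
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)   # centered: both sides ~0.707 (-3 dB each)
```

Hard-left (`pan_gains(-1.0)`) gives gains of (1.0, 0.0), and anything in between keeps the combined power constant, which is why modest pan positions don’t make an instrument drop out of the mix.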

The last thing I want to bring up is the issue of dynamics.  No, not going to talk about the loudness wars (this time!), but dynamics do a lot to create depth in a song.  Think of it this way, when you compress something in your song, it’s often to make it more prominent and in your face, right?  Well if everything is loud and in your face, then what is further back in the mix creating depth?  Compress what you will, just keep in mind that some dynamics in the song will really help you create depth in your mixdown.  Balance the loud stuff with quieter, more dynamic sounds and you win on both fronts.

As sort of a subset of dynamics, is the issue of how busy a song is.  Depth and space are conveyed by the way sounds decay and fade away over time.  So if your mixdown is so busy that nothing ever really has time to decay (or it does so masked by other sounds), then it’s that much harder to put that feeling of space in your song.  You don’t need to be firing 1/16th notes all the time on all tracks, at least not if you want your song to have any sort of 3-dimensional aspects.  It’s often said that the notes you don’t play are as important as the notes you do play, and this is especially true when it comes to creating depth in a mixdown.  Keep that in mind when writing and arranging your track, and your job will be so much easier when you want to address this later on.

Hope you found this useful as always, please leave any questions or other ideas you might want to offer in the comments of the blog, and I’ll be happy to address them.

————

Finally, I want to take a quick second to talk about the Donate button off to the right side of the screen.  Some of you saw it added a few days ago, and emailed me with your… concerns.  🙂  I want to be upfront and state that I will ALWAYS offer my guides and production tips on the blog for free.  I’m a big fan of helping people out, and I don’t think people should always have to pay for advice and tips, as long as I have the time to offer them.  I truly enjoy doing it, and it seems enough people enjoy reading them, so nothing will change on that front.  In fact, it’s one reason I switched to the blog format, as so many people wanted a better way to stay informed of when I released new guides and tips.

In the past when I wrote my production guides, I’d include the Donate button at the bottom of the HTML pages, and I’ll be honest in saying it helped bring in a little money (very little) each month for my family.  Since I’ve switched to the blog format, I haven’t had that option anymore, and a few people have asked how they can continue to donate when they’ve read something useful that helped in their own productions.  In addition, traffic has increased dramatically since the site relaunched, so I’m hoping a couple donations each month will help offset those costs.  It also keeps me from having to put annoying banner ads on the site (and I’ve had plenty of offers), which is good as I hate those things as much as you do.

I don’t expect anyone to donate, but since more than a few of you asked, there you go.  Thanks for reading, and stay tuned for a lot more articles in the coming weeks.

Oh, and email notifications are working now too, so you can sign up to be notified of new posts via email if you want.

Peace and beats,
Tarekith

Know Your Limitations

One of my favorite ways of coming up with new ideas for songs is to limit the options or tools I use during the composition process. I’m sure a lot of this is born from earlier times when I first got into music making, as I just didn’t have the money to spend on a lot of gear (and back then gear was expensive!). So I’d have no choice but to plumb the depths of whatever I was using, doing my best to write complete songs and not get bummed out by my lack of gear.

I used to get so frustrated with that too, not being able to follow through with an idea because I was already using my one EQ, or didn’t have another free input on my tiny Mackie 1202 mixer, whatever. Of course the flip side of that lack of gear, was that I was unknowingly learning the gear I did have really, really well.

Fast forward a few years and the whole concept of limitations was foreign to me, as DAWs with the unlimited choices they offer will do that. As many effects as I wanted, tons of free synths, plenty of free tracks, you name it and it was largely possible. I’d even go so far as to try and write songs using as many tracks and effects as I possibly could, just because I had that option open to me.

Like any new idea though, eventually this concept of throwing as much as I could at a project slowly began to fade as a source of inspiration, and I once again found myself struggling to think of ideas for new songs. It was around this time that I started playing with the idea of imposed limitations as a source of inspiration. By limiting my tools, I was forced to use what I had at my disposal in new ways. More importantly, it made me re-look at my working methods, and come up with new ways to do things.

You see, I firmly believe that we do our best work when confronted with a challenge. When taken out of our comfort zone and the creative repetitiveness that tends to breed, we begin to come up with new ideas we would not have arrived at earlier. So I began to look at each song as a chance to solve a new problem, and these problems were always self-imposed. Sometimes the challenges I set myself were not too difficult and affected only part of the writing process, other times I made myself work to achieve a task I knew could be extremely hard to complete.

For instance, here are some of the things I would do to limit my options:

– Try and write a complete song using only a drum machine and nothing else. Double points for using only drum synthesis to create the sounds, and not samples.

– Use the song mode on a piece of hardware instead of my DAW, even though the DAW was much easier and faster to use.

– Try and mix a song using only one of each type of effect. I.e., pretend I still only had one EQ, one compressor, one delay, etc. Trying to figure out where to best use those effects can be very challenging.

– Create a song using nothing but a guitar, including the drum sounds.

– Create a song using only a short 4-5 second snippet of audio. Could be a field recording, or a sample of a record, whatever. The point was to deconstruct that one sample and use it to create a whole palette of sounds for the song.

– Record a solo for one of my tracks using a MIDI drum pad instead of a keyboard.

– Create the drum sounds in a song using only a single monophonic synth. The simpler the synth, the better.

– Use a pair of headphones to record all the sounds for a track. No going direct or using a real microphone.

– Let my roommate or girlfriend choose all the sounds for my song; no matter what, I had to make it work with whatever they picked. At the very least this can lead to some pretty funny results.

– Play all the piano parts in a song using only my toes. (Ok, that’s a bit extreme, never really did that).

You get the idea.

Like I said, almost all of my songs these days start as some form of limitation I’m trying to make myself overcome. It forces me to learn the gear I have in new ways, and really opens up possibilities I never would have thought of otherwise. Of course the key is to set yourself a challenge that you can likely actually achieve, and not set yourself up for failure and endless frustration. I recommend starting with limiting yourself during small tasks at first, during small parts of your writing process.

Try choosing just one synth for all your sounds, or work only with MIDI instead of audio like you usually do. Eventually you’ll get better at realizing what kinds of limitations will help spur new ideas and working methods, and which just lead to banging your head against the wall. Like everything, the more you do it, the better you get.

————–

Just a reminder that you can now sign up for email notifications of new blog posts if you’re not into RSS or Twitter. The subscribe buttons are to the right of the blog postings now.

The Quietman Album

As most of you know, one of the valued members of the Ableton forums (Martin Brown, aka Leedsquietman) passed away unexpectedly in December, leaving behind his wife and three kids.  Further adding to the tragedy, his mother passed away while preparing to attend his funeral.  To raise money and help out his family, a number of us from the different forums Martin participated in have gotten together to release a tribute album, the proceeds of which will be going directly to his wife Judy.
If you feel like contributing, the album (15 songs) can be purchased for $10 from Bandcamp:

If you want to help but just can’t afford to give money right now, please help us spread the word by joining and passing on the related Soundcloud and Facebook pages:

http://www.facebook.com/Quietman2011

Thanks everyone for your support and help; hopefully we can raise enough money to at least help Martin’s family in some small way.

Laptop, I love you, I hate you.

First up, if you haven’t seen the new teaser for the Elektron Octatrack, it’s definitely worth a look:

Obviously I’m a huge Elektron fan already (owning a Machinedrum and Monomachine, as well as moderating the Elektron-Users.com forums), so I’m very interested in the Octatrack.  I’m thinking it might let me play live with all hardware again, leaving behind Ableton and my laptop for playing samples of my studio work.

Which brings me neatly to my main topic, the simplicity of the laptop, and why I’ve never been able to completely embrace it no matter how hard I try.  Like a lot of musicians, I went through a phase early on of owning a lot of studio gear to make music.  Multiple racks, keyboard stands with multiple synths, grooveboxes galore, you name it.  Then of course the digital audio revolution happened, and slowly but surely I started selling things off and moving more and more to producing entirely in the box.

Of course, in many respects this was really not all that different from having lots of hardware initially.  Like so many others, I became obsessed with ‘collecting’ plug-ins.  Dozens of dynamics processors, too many softsynths, and more than a couple DAWs.  Slowly though, I realized I was turning to a select few plug-ins, and I began to whittle down my collection.

Then I made the jump from a desktop to a laptop, and suddenly things changed.  I realized that here was a really compact means of making and performing music.  This one tool reduced clutter and cable nests, and removed the need for external monitors, keyboards, and mice.  Paired with something like Logic or Live, I could basically create anything I wanted with such a simple, yet extremely powerful toolset.  It was a sort of revelation, and in the years since it prompted me to sell more and more gear, to the point where my studio looked more like that of a beginner just getting started than of someone with almost two decades of experience.

There was a problem though.  Despite achieving my dreams of a minimalist setup, I really wasn’t enjoying the music making process anymore.  At the time I thought it was the lack of physical controls that was throwing me off, and thus began the great MIDI controller experiment.  I must have tried dozens of MIDI controllers trying to find one that reminded me of using a groovebox.  Sadly, nothing ever really worked like that; at the end of the day a laptop is still a computer, and a generic MIDI controller still requires too much configuring to be useful in the heat of the moment.  I didn’t want to stop and remap every parameter I wanted to control the moment I thought of it.  Even things like Novation’s Automap just didn’t sit well with me, being very unpredictable in use.

So for now I’ve accepted the fact that I just can’t work with only a laptop; I need at least a few pieces of hardware to use when making music too.  Someday I hope a more elegant solution comes along, but in the meantime I’ll have to live with the love-hate relationship when it comes to the laptop.