My DAWless Production Philosophy (currently)

When I decided to become a music producer/performer, I knew there were few paths to making money in the music business, but the one I felt confident in was playing live, because I don’t see live music meeting the same fate as recorded music. So I wanted to focus all my efforts on making music I could perform live. I’ve also always been primarily a hardware producer and performer, and together those two factors steered me away from doing much in the DAW. They’ve shaped my production approach for years now, and over the last two or three I’ve honed the process to nearly the furthest extreme I can take it.

Playing Live

Let’s talk about playing live and the decisions it has driven. When I first started, I lugged around a huge rack, my full-size analog keyboard, nearly everything, and I needed another person just to move the gear to and from the venue. I realized that wasn’t sustainable, so my first task was to choose instruments and a setup I could move alone, and later, move in one trip: my “fits-on-a-bus” rule of thumb. From there I chose the combination of synths that would give me the most sonic possibilities while still letting me carry everything.

The second thing I needed was a physical setup that stays the same across production and performance. Keeping complexity to a minimum is a must in electronic music; you simply can’t be fumbling around in the dark with connections during a live performance. Everything also has to be recallable via MIDI so that I’m not distracted by housekeeping tasks mid-set. That eliminated outboard compressors/FX (not recallable) and ruled out complex routings like sending other sound sources through my synths (too complex). That left only a single place outside the synths to shape the sound: the mixing software in my recording device. So I split the drums across 8 channels and routed my sampler, bass synth, and polysynth to the remaining three stereo inputs. And that’s where they’ve remained to this day. Solid and stable, and after recalling the mix settings, I can play any song I’ve ever written with this setup with no physical changes necessary.

And that’s how we got here. Other than a single reverb/delay shared by all channels and a compressor/EQ on every channel, all sounds come directly out of the synths, into the RME UFX, and out the main stereo outputs. No DAW interaction whatsoever, except occasionally using the Lector VST to create robotic vocals. There is a mix for each song, but it lives entirely within the recording device and is instantly recallable with a MIDI note. There are no overdubs, and when I play live, I don’t play stems; I just send MIDI notes to the synths.
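
To make that concrete, here’s a minimal Python sketch (using the real mido MIDI library) of what “one note recalls one mix” looks like. The port name and the note-to-song mapping are made-up placeholders; in my rig the note actually comes from the MPC’s song data, not a laptop.

```python
import mido

# Hypothetical mapping of songs to the notes that recall their mix snapshots.
SNAPSHOT_NOTES = {"Titans Fall": 60, "Another Song": 61}

def recall_mix(song, port_name="RME UFX MIDI Port 1"):
    """Send the MIDI note assigned to a song's stored mix."""
    note = SNAPSHOT_NOTES[song]
    with mido.open_output(port_name) as port:
        port.send(mido.Message('note_on', note=note, velocity=100))
        port.send(mido.Message('note_off', note=note, velocity=0))

recall_mix("Titans Fall")
```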

Often, limiting your options as a musician is a good idea: it prevents decision paralysis, prevents overproducing, and keeps your ideas uncluttered. The flip side is that if you limit your options too much, you may not be able to accomplish what you’ve set out to do. But I work from the premise that all music can be broken down into volume, pitch, and width, and none of those requires a DAW to achieve. So every time I come up against a problem, I find a “DAWless” solution to it. Here’s an example of overcoming a production issue.

Doubling and Panning

Since my setup is generally “one sound, one input”, it’s actually a challenge to take a single sound and pan it hard left and right, especially sounds from the drum machine. My first solution was a delay with no feedback and a 1 ms time to create a panning effect. It works because, for a sound to be perceived as stereo, the left and right sides must differ slightly in time; a single unaffected sound panned both left and right, especially a bass, will still sound like it’s coming straight down the middle.
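
Here’s a rough numpy illustration of why that works, with purely illustrative numbers: delaying one side by about a millisecond creates the left/right time difference the ear reads as width, while two identical copies just collapse back to the center.

```python
import numpy as np

def haas_widen(mono, sr=44100, delay_ms=1.0):
    """Make a mono signal stereo: dry on the left, ~1 ms delayed copy on the right."""
    delay = int(sr * delay_ms / 1000)                     # ~44 samples at 44.1 kHz
    delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    return np.stack([mono, delayed], axis=1)              # (N, 2) left/right pair

# Two identical copies panned hard left/right still image dead center:
# np.stack([mono, mono], axis=1)
```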

But this required one of my few precious effects, so I kept searching for another solution. That’s when I figured out that I could send a sound through both the drum machine’s individual outputs AND its main outputs. To get stereo width on a sound, I just matched the levels coming out of the individual and stereo outputs, panned one hard left in my mixing software and the other hard right on the drum machine, and boom: a stereo effect from a single sound source without a DAW.

Highs & Lows

The difficult part of the process is getting studio-quality recordings from such a simple setup, but I’ve gotten better at layering, widening, and thickening the sounds I have so that I need fewer of them, and that has shaped my production. The upside, of course, is that I don’t need to do anything different to play the songs live: I just take my synths and audio interface to the party, set up, and I’m ready to go. But I’m already thinking about the next phase, and it will likely be some hybrid DAW setup. I don’t want to become dependent on the DAW for my sound, though, so I’m excited to see what the future might be!

Superluminal has been released

The second leg of the journey is complete, and the electro album, “Superluminal”, has been released. This one goes all the way back to 2020 and the composition of “Titans Fall”; the rest of the time went into getting good mixes while finishing the other projects simultaneously. It has 10 tracks, with a short intro and an ambient track to close the album. The sound spans from straight-ahead electro bangers that explore the outer world to more intimate tracks that deal with inner exploration.

And this entire album was mixed and produced without a DAW. Well, I take that back: I used one plugin (Lector) to prepare some of the vocals, which I then exported to the sampler for tracking. And I did use Ableton’s recording ability to capture the mixes, but all the mixes were created and recorded live; that is, there are no overdubs or multitracks. The recordings were then sent off for mastering.

“Attitudes” has been released

album artwork for “Attitudes”

The DnB album has been mastered and is available from most online music outlets. Here is a link to the Bandcamp page. The first tracks were recorded in January 2022 while I was also recording and mixing the electro and techno projects. Both the drum & bass and electro projects are now finished and mastered, while mixing on the techno albums is still ongoing. Super happy to have finished this crazy project, and I hope I never have to do it again!! But I feel it will give me many different directions to go in the future, while adding a number of solid tracks to my repertoire that I can play for as long as I’m a musician. And because they are different styles, I can create different moods for different needs and moments. I’ve already planned a “chill” drum & bass set for a gig I’m scheduled to perform this summer. With three or four additions from this album, I can now make a full set of this type of material, and can of course expand it in the future.

A Herculean Task: Producing 3 Albums DAWlessly — Part 5

Cost/Benefit Analysis

This is the final part in my series on producing DAWlessly. In this post, I talk about the advantages and drawbacks of this way of composing, as well as what I would do differently next time.

The Costs

Of course, the main drawback to DAWless production and performance is

  • The lack of access to all the possibilities that exist on the computer and in the DAW today.
current MIDI channel routing

But there are other drawbacks as well. For example,

  • I can’t really respond to trends by writing new tracks during this long production process, because my rule is not to start new tracks until I’ve finished the ones already underway.

And with this one, there were quite a few tracks. So no fun time. And

  • The songs don’t necessarily sound like what I might produce in a singles format because they are being designed for a long-player type listening session.

And because all the tracks need to be a bit longer to fill out a full album, they might not be as concise as they would be if I were writing 3-minute bangers.

  • The time required for a project of this size is vast.

Some of these bare drum tracks were written at the beginning of 2022, and now it’s 2024 and I’ve put in thousands of hours so far. How many thousands? There are about 2,000 working hours in a standard year, and I’ve been chugging for two full years now, which puts the baseline at 4,000 hours, and many of my days ran as long as 16 hours. This time commitment is so large that it also means that

  • I have a very limited live performance schedule because I can’t afford to split my time between the two pursuits.

Any time I’m spending preparing for live performances is time that is not being spent towards getting these albums over the finish line. And as every musician knows,

  • Your relationships suffer while you’re in the process.

You would love to go out and hang with friends, or spend time at home with your loved ones, but how can you enjoy it when you have so much work to finish?

  • All that hardware manipulation causes a lot of wear and tear on your devices, not to mention you.

The Virus and especially the MPC have endured an awful lot of button presses during this time. I don’t think the MPC could take another project like this; in fact, I will have to replace some of its buttons if I keep using the MPC as my hardware sequencing hub. The Virus and RYTM will also need a thorough cleaning to remove accumulated dust and dirt. You can see how the MIDI is configured in the image above.

The Benefits

It wasn’t all bad. Because the songs took so long to finish,

  • I was able to give them the time they needed to be fully finished, from a compositional and auditory perspective.

Sometimes I knew a section needed something, but it took a while to figure out exactly what. And multiple listens help give perspective on which parts are necessary and which aren’t.

current audio routing
  • Multiple rounds of mixes also allowed me to zoom in on problems that were obvious when viewing the waveform, like when frequency ranges were overlapping.

And to solve overlapping frequencies, I focused on the sound at the source rather than fixing it afterward, which, while a longer process,

  • Paid off in a lot of nice patches and an intimate familiarity with the synths and drum machines.

Perhaps paradoxically,

  • It took a project of this scale to get me to use a DAW more extensively.

I’m sure this will be welcome in future productions, especially those with high track counts, because managing this many tracks on the MPC, or probably on any hardware alone, is not sustainable. Because this project was so broad and touched so many different genres and emotions,

  • It will set the groundwork for many smaller releases in the future that cover many styles.

As a hardware producer, you need lots of good original tracks to be able to play many different types of events. You need to keep building your library so that you have tracks or sets to fit nearly any occasion. This project adds to that repertoire immensely, as painful as it may have been to give birth to it.

The Analysis

I’ve learned a lot during this process, as you can imagine. Here are some of my takeaways.

  • The way I did this was inefficient, and I will never attempt something of this magnitude again without some changes.

It’s probably a better idea to get the basic sketches out on the MPC but then transition to the DAW for MIDI sequence management. Making a change on the MPC is quite tedious because it has to be applied manually everywhere it exists. Perhaps other sequencers don’t suffer from that issue.

  • In the future I will produce in more of a 3 song cycle, in a single genre, in a short burst of activity.

No more of this 8, 16, or in the latest case, 40 tracks to work on simultaneously. If I get an idea for a track, I’ll write 3 similar ones and then choose the best of the bunch, or combine elements from each, to release one excellent track. The other two could be cannibalized, used as B-sides, added as live-only tracks, or used to create an EP if they all turn out well. Although at certain points in the process it was nice to be able to change genres and listen to something different, I think my best work came when I focused on one genre at a time.

  • From a live standpoint I will continue to do 8-song monthly cycles in a particular genre.

This is the process I’ve followed since the beginning: I write 8 tracks in a genre with the goal of having a live set at the end of the month. At the end of the month, I make a mix, regardless of how finished the songs are. As I continue to improve, perhaps some of those live tracks will become “finished” with less refinement and mixing time.

As long as it has taken me simply to describe what I’m doing, you can imagine how much time it took to come up with and implement these processes. This project involves literally millions of bits of information that have to be tracked, organized, backed up, and recalled.

But the joy of completing something that has never been done before will be magnificent, all while using sustainable techniques and setting a new standard for what’s possible without a DAW. I started as a professional in 2019, and it has taken until now, working full time++ (5 years), to get to the point where this was even possible, so it will be difficult for anyone to match the amount of work that went into it. But with new technologies, who knows, it might become easier and faster, and I would be happy to see that.

But Why Don’t You…

So let’s see if I can answer some of your potential questions.

  • Why don’t you use a hardware compressor?

I used to use hardware compressors in my setup. I had dual FMR RNCs for the first year or two, but I ditched them. The main problem was recalling settings. I’m not just a DAWless producer, I’m also a DAWless performer; compressor settings would be different for every track, and I don’t know of any compressors that can recall settings via MIDI (other than software ones). And if you do want that sidechain sound, there are other ways to get it, like placing a square LFO on the bass VCA that mirrors the kick pattern.
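
As a sketch of that LFO trick, here’s the gain shape in numpy. The 80 ms dip and 70% depth are invented for illustration; on the synth itself this is just a square LFO assigned to the VCA level, retriggered in time with the kick pattern.

```python
import numpy as np

def fake_sidechain(bass, kick_times, sr=44100, duck_ms=80.0, depth=0.7):
    """Drop the bass level in a square shape at every kick hit (LFO-style ducking)."""
    gain = np.ones(len(bass))
    duck_len = int(sr * duck_ms / 1000)
    for t in kick_times:                     # kick pattern, in seconds
        start = int(t * sr)
        gain[start:start + duck_len] = 1.0 - depth
    return bass * gain
```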

  • What about a software compressor?

Each output of the RME has a channel strip, so you can use that to apply master FX or compression if you like. I usually record only a stereo channel, but you can copy the same compressor settings to every RME output channel you use. So it’s possible; I just don’t do it, because not only does it add complexity, it also means my sound would be totally different without it. I always imagine a worst-case scenario where my audio interface breaks and I have to plug into a regular mixer and perform. With required software in the middle, if it breaks or is absent, the gig is most likely ruined. As a professional, you can’t allow that to happen.

As prevalent as compression is, especially in DnB, I don’t think it’s absolutely required, especially if you’re only using it for gain or sidechain. There are other ways to accomplish those tasks that are better suited to the application. And in a DAWless environment like mine, you’re always looking to lessen complexity, even if the answer takes a little longer to achieve. There needs to be a focus on sustainability, and I think this process does that well.

A Herculean Task: Producing 3 Albums DAWlessly — Part 4

current audio routing

Last time, we talked about some choices you have to make early on, which help shape where your process will go. This week, we’ll talk about the actual mixing process, where you take ideas and turn them into full tracks using the same DAWless approach used for live performances. To the right I’ve included a diagram of my current audio routing. I haven’t changed the routing at all since 2022, and very little since 2019, when I had to replace the JoMoX AirBase99 after it ran into some problems and limitations.

Sound Design, Composition & Mixing Philosophy

In most traditional music, an artist writes music and/or words and plays them on the instrument they know. But in electronic music there is often a sound design and mixing phase as well, where you combine new sounds together, and that requires a production process that is absent in traditional music. In addition to songwriting, you almost have to understand sound design and production topics like compression and EQ. I divide what happens in this electronic music-making process into three categories: sound design, composition, and mixing.

How Those Three Things Come Together in a Mixdown

When writing a new electronic song, you play notes into a synthesizer or DAW and use a sequencer to record and play them back, whether ITB or not. Then you arrange those parts together, add any other sounds or parts you want, add effects, whatever you want to do, until you’re satisfied. At some point you reach a stage where you say, “let’s record this”. That’s when you enter the mixdown phase, which means you should know sound design, composing, and mixing, and how to use those tools to make the best recording possible.

Let me give an example: I get to the mix stage, and a drum and a bass sound are interfering and won’t sit properly in the mix. There are essentially 3 options: change the sound or sounds, change the time at which they occur, or apply processing. By thinking of decisions this way, we can divide them into the categories mentioned: changing the sounds themselves is the “sound design” part; moving them around in time is the “composition” part; compression or EQ applied to the sounds after the fact is the “mixing” part. So let’s talk about how I use each of these techniques to solve problems in my DAWless setup.

Sound Design: Kick/Bass Interference

When a kick drum and a bass sound trigger at the same time, there is often a spike in the waveform that indicates overlapping frequencies both trying to be heard at once. Let’s assume I like how they sound and don’t want to change the location of the notes. There are still a few options that don’t require compression. There’s the “ducking” method, where the volume envelope of the bass sound is brought down just for the first part of the note. Or you can change the bass sound so that the attack of the VCA and VCF is slower. So what might you do if a kick and tom play at the same time and are interfering?
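
As a quick sketch of the slower-attack option (the 30 ms ramp is an illustrative value; on the synth this is just the VCA attack knob, no code involved):

```python
import numpy as np

def slow_attack(bass_note, sr=44100, attack_ms=30.0):
    """Ramp the bass note in over ~30 ms so the kick's transient speaks first."""
    env = np.ones(len(bass_note))
    n = int(sr * attack_ms / 1000)
    env[:n] = np.linspace(0.0, 1.0, n)       # linear fade-in over the attack
    return bass_note * env
```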

Composition: Kick/Tom Interference

Let’s say you’re building a kick and tom rhythm on the drum machine, but the kick and tom end up landing on the same step, causing a spike in the waveform. As you probably know, a kick may be “unvoiced”, but there is still a fundamental note to which it corresponds. In this case, rather than change the sounds themselves, I might move the tom an eighth or a sixteenth note in either direction. This usually solves the problem, with the added benefit of introducing a new rhythmic element to the song. That’s the composition solution. So what would I do to make my snare sound better?
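
A toy step-grid version of that fix, with invented patterns: rotating the tom one sixteenth clears the shared step and adds a new rhythmic wrinkle in the process.

```python
KICK = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]   # 16 steps per bar
TOM  = [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]   # clashes with the kick on step 0

def nudge(pattern, steps=1):
    """Rotate a step pattern later by a number of sixteenth-note steps."""
    return pattern[-steps:] + pattern[:-steps]

TOM = nudge(TOM, 1)   # tom now lands a sixteenth after the kick; clash gone
```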

Mixing: Snare Sounds Like Shit

So your snare isn’t really poppin’ and you want to copy it to a new track and pan the copies hard left and right? On the RYTM, you could sample the snare and play it from an unused instrument, panning the two left and right, or route the snare to both the main out and its individual out. But both of those solutions take time and have serious drawbacks. So what I normally do is a “mixing” solution: set up a very short panning delay with no feedback and make sure the wet and dry sounds are at equal levels. The effect side should not have any EQ on it, so that it sounds as much like the original as possible. Even though I can’t easily make copies of tracks like I would in a DAW, I’ve still more or less achieved what I set out to do.

So That’s How I Mix & Record DAWlessly

That’s my decision-making process when a song is ready to be recorded. This is where the majority of my time went during this project: deciding whether to mix, compose, or sound design. That effort was much more difficult in the electro and drum n bass genres because they required many more details and changes. I could have stopped earlier in the process and let mastering deal with the rough edges, but I decided to work through all the issues so they wouldn’t pop up when it came time to perform the songs live.

A Herculean Task: Producing 3 Albums DAWlessly — Part 3

In last week’s blog post, we talked about the details of my blended studio/live setup. This week, we’ll go into the ideas that underpin my DAWless philosophy and how they influence the decisions I make.

Overall Philosophy

It’s All About Live Performance

A lot of the decisions I make may seem strange to outsiders, but there are reasons why I do things the way I do. First of all, my first goal is always live performance. It’s one of the only sources of income for modern musicians, and it’s still somewhat rare in electronic music because of the technical knowledge required to pull it off. Being a live electronic music performer also means a focus on gear, and as such I’m always trying to make my live setups as lightweight as possible and to favor simple solutions over complex ones.

Small and Lightweight Footprint

my setup as of 2024

When I first started out, I would take my full-size synthesizer keyboard, a big rack with a mixer, and on and on, and it was so heavy I needed another person to help me move it. But as I’ve played more and more gigs, I’ve realized the importance of a small setup, and now I strive for as much power and flexibility as possible in the lightest, most compact package I can achieve. So I’ve retired a lot of gear to studio-only use and have a setup that lets me carry everything I need in one trip, or as I originally put it, “small enough to take on the bus by myself”. You can see the current setup on the left, essentially unchanged since April 2020, when I replaced my Jomox drum module with an Elektron RYTM MKII. Since then, I’ve composed only with this setup and without changing the I/O. That way, I can perform any song from any era without ever changing a physical connection. When I play a techno show, I don’t take the Virus, but everything else is the same and plugged in as normal. If I need the Virus for the other genres, it plugs right back into its slot.

Efficiency is King

I also strive for the simplest setups, so that I have to do a minimum of preparation once I reach a venue. Rather than take a power strip to plug everything in, I bought a rackmount power unit that has surge protection, simplifies the power cabling, and has a light to help illuminate dark stages. It sits in a 2U rack with the audio interface, and along with the light and power benefits, I can leave all the audio cables for the drum machine plugged in, as well as some of the power cables, eliminating another setup hassle. I also used to do extra audio cabling for shows; for example, I could route audio from any other instrument or mixdown channel into the Virus or RYTM and route it back out as treated audio. Essentially this gave me two more FX units that could be applied to various sounds, but I don’t do it anymore because the marginal benefit of a few more audio options is outweighed by the complexity it introduces. I could also run compressors or EQs on the master outputs, but I don’t like to rely on software tools for my sound, so I don’t. It’s really easy to throw a compressor on somewhere, forget about it, and realize later how much it’s affecting your sound. So even though I always strive to be as cutting edge as possible, I won’t sacrifice a simple setup/teardown and simple audio routing for it. I will, however, sacrifice some weight in the case of the PSU to simplify things and ensure the safety of a lot of important gear. In a sense, I’m trying to do as much as possible technically, but with the absolute minimum of gear and a minimum of setup and teardown fuss. A “maximalist-minimalist” approach, if you will.

Making “Songs”

Second, my style of music creation is mostly about making songs. In essence, I want my music to sound like a person wrote it, even though a machine may be playing it. In pattern-based music, the difference between good and great songs generally comes down to the details. And details take time: time for parts to be written that fit the other parts of the song, time to get the composition and arrangement just right. But this time isn’t wasted, because these songs don’t exist only inside a laptop somewhere, snapshots in time resigned to slow degradation. They exist in the real world and can be recreated nearly identically by me, or even by someone else in the future, no matter what version of the software you’re on or what type of Mac you’re using. And to me, there’s value in that. These songs aren’t tied to a computer; they’re tied to hardware that can exist more or less indefinitely, which is a great thing if you want to make a living playing your music live like I do. Writing songs this way doesn’t consign your creations to the past; it allows them to be living creatures that can grow and change just like their creator.

No One NEEDS a VST, Although They’re Super Nice to Have

My final thought is that there are no problems that can’t be worked around with my system. I don’t think of my setup as limiting; it just forces me to find solutions different from what would be done in a DAW. Don’t get me wrong, DAWs and VSTs are magical and wonderful, but they aren’t necessary to make great, contemporary music. Are there great sounds I don’t have access to because of my setup? To some degree, yes, although I can always sample. But a great sound is just a great sound, regardless of the tools used to make it. And yes, my palette isn’t as bountiful as that of someone with a computer-based production setup. But not only do those limitations sometimes help, they also force better decision-making during the mixing process. Instead of trying to compress two bass sounds together to get them to fit, maybe give them their own space instead. You know?

OK, that’s a lot of information. Let’s close today’s post and continue in Part 4.

2021 European Tour Site is Up

The European Tour website is up and running and more or less complete. Follow the tour as it starts in Budapest and passes through 12 countries on its way back home to Krakow. On the map, you can click the countries or the cities to see photos from that region. Wherever there were performance photos, they were placed first in the slideshow. The graphic was fun, if challenging, to make, and I learned a lot that is already helping me improve future maps.

It turned out that every country in this area has red in its flag, so I made each country its specific shade of red; when you roll over a country, it changes to another color from that country’s flag. Check out Luxembourg’s cool light blue! Some countries, like Belgium, and cities, like Stuttgart in Germany, don’t have any photos associated with them, so there’s no rollover for them. And of course, Czechia was completely left out of everything. Maybe next time, Czechs.

Anyway, here’s the link, or click on the 2021 Tour link at the top of my homepage at www.doperobot.com. The West Africa Tour page is starting to come together as well, and you can also find that link on the homepage. Check back often to see how it’s coming along!

A Herculean Task: Producing 3 Albums DAWlessly — Part 2

In the previous blog post, I talked about my “DAWless” production setup; in this one, I’ll go into more detail and discuss some issues and limitations I had to overcome to record these albums.

The Issues

#1: The MPC

The MPC, as great as it is, is limited to 64 tracks, which may seem like a lot, but it isn’t when you’re making different variations of a pattern or experimenting with different instrumentation. The only way to make room for new patterns when you run out of tracks is to delete and/or move them, a laborious, time-consuming physical process that wears out the hardware and the user. Yes, copy and paste is easy, but it becomes problematic when you have to transfer a change to fifty, sometimes sixty sequences. And as things inevitably get moved around and deleted, their position in the track order changes. This is extremely problematic because before any show, when it’s time to convert these longer songs into single MIDI sequences for live performance, all tracks must be in the same “lanes”, so to speak, and reorganizing a track for conversion is a very long process. Some of these tracks required 5 or more “reorganizations” before they reached a final state, which can take half a day or more. I made frequent new versions and many backups during this time, because it’s easy to blow past the original idea into something different, which I think is generally a bad idea; if it happens, I can go back and start again from a previous version. Backups also matter because during long composition sessions things can get accidentally lost or overwritten, and backups keep that from becoming an unrecoverable problem. Essentially, the MPC is a very inefficient way to record, archive, and organize many multiples of tracks, so next time I will surely use my DAW or other tools to help with some of this.
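
This lane problem is exactly what I’d script next time instead of doing by hand. Here’s a hedged sketch of what a DAW-side helper could look like, using the real mido library; the lane names and file names are hypothetical, and on the actual MPC this work is manual.

```python
import mido

LANES = ["Kick", "Snare", "Hats", "Bass", "Arp", "Keys", "Vox"]   # one canonical order

def reorder_lanes(path, out_path):
    """Rewrite a MIDI file so its named tracks follow the canonical lane order."""
    mid = mido.MidiFile(path)
    by_name = {t.name: t for t in mid.tracks}
    ordered = [by_name[n] for n in LANES if n in by_name]
    leftovers = [t for t in mid.tracks if t.name not in LANES]
    mid.tracks = ordered + leftovers
    mid.save(out_path)

reorder_lanes("song_v5.mid", "song_v5_lanes.mid")
```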

#2: The Size of the Projects

documents with track info

There’s an enormous amount of information that needs to be stored, backed up, and tracked for these albums: hundreds of different sequences; dozens of versions of patches spanning multiple hardware machines; dozens of audio recordings; hundreds of hours spent tweaking patches; dozens of documents (see right) recording things like where drum kit versions live, where Virus patches are stored, which effects are in use, etc. And of course, all this information needs to be backed up to a computer regularly so that data loss isn’t a death sentence. (I only use hardware machines that are fairly common, so that if a device fails it can be replaced and reloaded with all the relevant sounds with minimal delay.) On top of that, this project spanned three genres and over two dozen songs, and was interspersed with multiple live shows, studio recordings, my daily street performances, two house moves, a tour to Africa, and so much more. It was a case of information overload and organizational struggle multiplied by project size. The time commitment was unreal too: the very first of these beats were written in December 2020 and January 2021, putting these projects at the 3+ year mark, many multiples of the time it took to produce my previous 3-5 song EPs. And this was all happening while I was compiling the documentation and writing the software for the Roland TB-3!

#3: Limitations in Hardware/Software

UFX & TotalMix

RME UFX with TB-3 & BlackBox connected

The RME UFX is an audio interface with very flexible digital mixing software called TotalMix. Any input can be routed to any output via submixes, and each input and output has its own compressor/gate and EQ. Each output channel can also be recorded using the loopback feature. In my case, I send a +4 dB level to analog outputs 5/6 and a +10 dB level to outputs 7/8. Those outputs are also mirrored to headphone outputs 9/10 and 11/12 so that I can connect from either the front or back panel for live shows. The front panel (headphone) outputs are preferred, though, because they are easily accessible and require a single cable connection. (You can see this routing in the image on the left.) Even though I have compressor and EQ available on the main mix outputs, I try to keep the TotalMix modifications as small as possible, so that if my audio interface ever fails, I can still more or less play a show through a regular 16-channel house mixer without it sounding completely different. So far so good, right?

TotalMix setup for electro

Well, the main limitation of TotalMix is that it contains only one reverb/delay per snapshot, shared by all inputs and outputs. A single effect shared among 14 channels could be a dealbreaker, but all the other instruments have built-in effects, so the limitation is somewhat mitigated. Still, “adding a touch of reverb” to an element in a mix has to be done on the instrument, since I don’t have the RYTM or Virus wired up to process external signals. In addition, TotalMix currently allows only eight snapshots (mixes) to be recalled instantly without loading a new workspace. To use more than eight snapshots, a new workspace has to be loaded, which is why my live sets are almost always eight tracks long. I do occasionally make longer sets, but I either try to combine snapshots or, if that’s not possible, load a new workspace sometime during the set to make more available.

Virus

The Virus is an amazing machine with over 500 user RAM locations and 26 more banks of 128 sounds that can be burned to ROM. It also has dozens of VA voices available before the CPU starts to cut off notes. But even with that much space and polyphony, notes still cut off in complex combinations of patches, and patch space still runs out fairly quickly. The Virus is usually what I use for any melodic sound, from bass to arp to keys, but I only use its stereo digital output, so the sound usually can’t be altered further once it leaves the machine, since any master effect would apply to all sounds on the channel. There is also no compressor on the Virus, but there is EQ and saturation, which can accomplish some of the same things. So I now have banks of patches containing variations of sounds, backed up to the computer for each song; I do a full machine backup about once a month and keep a regularly updated written inventory of which patches the multis point to. This process helps prevent data loss, is absolutely essential for a live-only artist, and has saved me many times.
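
That written inventory is really just a table. Here’s a hypothetical version of it as data, with invented field names and entries; kept as a plain file, it’s also trivial to back up and diff between monthly backups.

```python
import csv

INVENTORY = [
    ("song",        "multi",    "part", "bank",  "patch", "notes"),
    ("Titans Fall", "Multi 12", 1,      "RAM A", 37,      "main bass, detuned saws"),
    ("Titans Fall", "Multi 12", 2,      "ROM C", 101,     "arp, bandpassed"),
]

with open("virus_inventory.csv", "w", newline="") as f:
    csv.writer(f).writerows(INVENTORY)
```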

RYTM

For the drum machine, I have the snare, hihats, rimshot/clap, mid/hi tom, and cymbal/cowbell routed to individual outputs. On the main RYTM stereo output, I put the kick, bass tom, low tom, and onboard effects. So I essentially send the low end and FX to the stereo output and the rest of the instruments to individual outputs to be shaped separately, as laid out below. The RYTM has one master compressor/overdrive section. I use the overdrive often, but I use the compressor very sparingly, if at all, even though it’s powerful and I’m sure I’ll use more of it someday. I do this for a few reasons: 1) I need a constant loudness across songs from all eras of production, since any of them could potentially be performed live; 2) if there are frequency overlaps or transient problems, I fix them with sound design instead; and 3) the compressor is applied to the entire main stereo output (including effects), which is almost never the intended outcome. Other than managing backups and kits, which is fairly easy on the RYTM, the other main limitation is that it has only one master reverb/delay shared among all instruments. It reminds me that in recording and composing, a few sounds done well usually beat a lot of sounds all trying to work together, and one effect done well usually works better than many effects competing for space in a mix.
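
For reference, that routing written out as plain data; this just restates the paragraph above, it isn’t an Elektron API.

```python
RYTM_ROUTING = {
    "individual_outs": ["snare", "hihats", "rimshot/clap", "mid/hi tom", "cymbal/cowbell"],
    "main_stereo":     ["kick", "bass tom", "low tom", "onboard effects"],
}
```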

TB-3 & BlackBox

BlackBox and TB-3

The TB-3 and BlackBox are each routed to a dedicated analog stereo input on the front of the audio interface. For the TB-3, I invented a way to back up and recall patches from the MPC, but patch backup and retrieval isn’t nearly the issue it is with the Virus and RYTM. At the end of a project, I back up all the TB-3 patches I’ve created to a computer using my TB-3 Editor software, so I can quickly run through them in the future when I want a new sound. As for the BlackBox, I use it just for vocal samples, which are loaded onto it by SD card, so it too isn’t complicated to back up and maintain, other than having to carefully design the directory tree so that projects can be recalled properly with MIDI. All in all, these two machines, though important, didn’t cause many headaches with this DAWless approach.
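
Patch backup over sysex is conceptually simple. Here’s a generic mido sketch that saves incoming dumps to a .syx file; the port name is a placeholder, and the TB-3’s actual request bytes are deliberately omitted since my TB-3 Editor handles those details.

```python
import mido

def capture_sysex(port_name, out_path, count=1):
    """Write the next `count` incoming sysex messages to a .syx file."""
    messages = []
    with mido.open_input(port_name) as port:
        for msg in port:                      # blocks until messages arrive
            if msg.type == 'sysex':
                messages.append(msg)
                if len(messages) >= count:
                    break
    with open(out_path, 'wb') as f:
        for msg in messages:
            f.write(bytes(msg.bin()))         # raw bytes, including F0/F7 framing

capture_sysex("TB-3 MIDI 1", "tb3_patch.syx")
```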


And Now, On To the Mixing, Composing, and Arrangement

Come back next week when we talk about what the mixing process is like for this live-oriented, DAWless production setup.


A Herculean Task: Producing 3 Albums DAWlessly — Part 1

The Goal

I want to talk about what I’ve been doing the last couple of years from a production standpoint, because I’ve published perhaps 2 tracks since 2021. Most people know I’m a hardware-only performer, but not that I’m also a hardware-only producer. What that means is that I use only hardware, and no DAW (Digital Audio Workstation, like Ableton or FL Studio), to make full productions. At the end of the recording and mixing process, I record the fully mixed tracks as a stereo recording into Ableton. But going from something that is “good enough for live” to something that is a full representation of my art is quite a task. All my former releases were EPs and didn’t take that long to finish, so this time I set an ambitious new goal for myself: 3 full albums in 3 different genres, released all at the same time, and all produced “DAWlessly”. But when the scale of what that meant was actually laid out and attempted, we arrived at the title of this blog post. So what happened?

The Tools

Sequencer

my 2024 hardware setup

Ever since the very beginning, I’ve sequenced with some form of MPC, and since 2018 or so I’ve been using an MPC2500 as my main sequencer. It handles everything I need from a MIDI output standpoint, including system exclusive (sysex). It has four separate MIDI outputs, which I use to send separate streams to all of my sound devices; this is very useful because it prevents streams from getting “crossed up” by sharing a channel, as sketched below. The timing on the MPC is solid, and it has a good song mode that lets me create full tracks for playing live and, from those songs, build long sets for live performance.
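
A software analogy for why the four outs matter: one dedicated port per device means channel numbers can overlap freely without streams colliding. The port names here are placeholders, and mido is a real Python MIDI library.

```python
import mido

DEVICE_PORTS = {            # hypothetical names; one physical MIDI out per device
    "RYTM":     "MIDI Out A",
    "Virus":    "MIDI Out B",
    "TB-3":     "MIDI Out C",
    "BlackBox": "MIDI Out D",
}

def send_note(device, note, channel=0):
    """Address a device by its port, not by juggling channel numbers."""
    with mido.open_output(DEVICE_PORTS[device]) as port:
        port.send(mido.Message('note_on', note=note, channel=channel))

send_note("RYTM", 36)    # kick on the drum machine
send_note("Virus", 48)   # bass on the synth; same channel, no collision
```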


Sound Generation

The drum machine, sampler, and two synths pictured at left handle all the sound generation, most of it subtractive or FM synthesis. My drum machine, the Elektron RYTM MKII, is set up to send a main stereo output and 5 more individual outputs to my audio interface, where each gets its own channel and track strip. This machine is used for every Dope Robot performance and recording and is the backbone of my hardware system. The other workhorse is the Access Virus TI2 synthesizer, which has dozens of virtual analog voices, can play up to 16 parts simultaneously, and has individual reverb, delay, and 3-band EQ per part. Its audio output is summed into a single stereo digital channel sent to the audio interface, and it’s used when complex sound design or multiple parts are needed for a track. Next is the Roland TB-3, a very versatile monosynth that appears throughout my productions in various forms. During recent techno performances, I’ve been using just the TB-3 and the drum machine without the Virus, and it has been great. Finally, whenever I need a vocal sample, I use the 1010Music Blackbox, which handles all of those on its own stereo channel and has its own effects and dynamics processing. And that’s it. I don’t introduce new sound sources or change the setup in any way, so from project to project and year to year, the setup doesn’t change…only the ideas do.

Software

Mixing: The RME UFX is both the audio interface and the digital mixing console in my setup. Its mixing software, TotalMix, provides a channel strip with an EQ and compressor/gate for every hardware input. In TotalMix, I set pan and level for all the sound sources, apply any per-channel EQ or compression, and then record the stereo mixdown and/or individual tracks into Ableton Live. TotalMix also provides a basic reverb and echo per snapshot, shared among all channels. TotalMix’s EQ/dynamics and effects are the only post-processing tools I have, so I try to do everything possible on the machines themselves.

DAW & VST: I’ll admit, I’m not completely “DAWless”, nor do I have any opposition to creating within the DAW. For example, sometimes I need a robotic vocal, especially for electro, and for that I use a vocoder VST called Lector inside Ableton. Once I’ve made a vocal, I bounce it down to audio and transfer it to the Blackbox for playback, which has to be done with a microSD card. Once the samples are loaded, I play them normally from a dedicated sequencer channel. And of course I also use Ableton to record audio, but mostly just stereo mixdown tracks played live. This is how I record songs for release and performance:

  • Up to 30 channels of simultaneous audio.
  • 1 basic delay and/or reverb shared among all channels.
  • 1 compressor per channel.
  • No VSTs. Completely outside the box. “DAWless”.

So those were the basic details of my composing and recording process, but that is only the beginning of the journey. Come back for Part 2, where we’ll discuss the pros and cons of this production workflow.