May 17, 2014

Sorry for being away a little longer than planned. Vacations tend to get in the way of these things.

So, now we come to Interfaces, Converters, and DAWs. These can be very contentious topics that don’t really need to be. We have more, higher-quality choices than ever, regardless of the platform you want to use.

What’s the problem with pro-sumer interfaces?

I think the trickiest thing to deal with here is that the difference between pro-sumer and professional interfaces can be fairly big, not only in price, but configuration. The actual quality of the converters is debatable. There are quality differences, to be sure, but how big those are versus how much you want to spend – or how much you’ll be able to hear – are up to you.

So, what’s the configuration difference to be concerned with?

Here’s the deal…with most pro-sumer interfaces that have more than a 2-in, 2-out configuration, there will likely be a pair of “main outs.” Okay, that seems normal, so what’s the problem? Well, there are multiple possible problems.

How does the controller software handle routing?

I’ve run into this on many different interfaces, but particularly ones that route “inside the box,” or inside the interface, without coming to the host computer. This capability is great for low-latency mixing. However, let’s say for a minute that you’re recording on a fairly fast rig that already has very low latency. Also, let’s say that you’re using your main outs as your cue mix for the vocalist (or guitarist, or drummer, …). Well, on a set of interfaces I was using in a studio recently, we kept having this doubled vocal going into the voice artist’s cans. We tried a ton of different things only to discover that not only was the DAW routing the vocals to the main outs, as expected, but the interface was also routing them to the main outs, causing an ever-so-slight phasing. So, now, in order to get around this, we have to mute the track in the DAW, then switch to the interface’s controller software and ensure the track is unmuted and routed correctly, and then it will appear in the cans properly. What a pain!

Unfortunately, this isn’t something that’s easily tested unless you can plug up in a studio and do your thing. However, this is a common (perhaps, the most common) scenario, and it doesn’t make any sense that the interface developer didn’t take this into account. That’s just poor design on some fairly high-end pro-sumer units. So, what am I going to do? I’ll take the extra outputs on the interfaces and run individual cue mixes. It’s still not perfect, as the cue mixes will all have to be set individually, but we do what we must.

Okay, what about the concept of “Main Outs”?

The concept of Main Outs is not a bad one for stereo mixing. That way you have a devoted pair of outputs that doesn’t interfere or take away from your I/O count. However, if you decide to mix in a multi-channel format, the interfaces may not work the way you expect. For instance, the interfaces I mention above work well when you chain them together. If you need more I/O, just add another interface. That’s great thinking…except only one set of Main Outs can be used.

You might say, “Well, just use some of the other outs for your surrounds or whatever.” Yeah, that’s not as easy as it seems. You see, many DAW manufacturers make certain assumptions about which outputs are going to be your main stereo pair. Switching off of those is possible, but it seems to bite you in the butt more often than not. They will tend to assume that the first pair of outs is your main pair. Also, individual apps don’t always check to see which set of outs are stereo and which set are surrounds. Thus, you’re always fighting your apps to get them to send audio to the right set. Often, they’ll just send to the first two. Again, bad design, especially since Mac OS X has had all of this built in and available for a looooong time.

What other flexibility in configuration are we talking about?

Well, while it may be a significant step up in price, the higher-end units give you some extreme amounts of hardware configuration flexibility. Take, for instance, the Apogee Symphony I/O hardware. Basically, you choose a chassis and a configuration of daughter cards. Each chassis holds 2 cards with a maximum of 32 channels of 192kHz/24-bit audio in either analog or digital format. Multiple chassis can be combined for up to 64 channels via Thunderbolt. If you’re using them with Pro Tools, the options depend on your Pro Tools rig.

For me, I initially chose the 2×6 configuration, then augmented it with another chassis to get 16 more analog I/O channels and a bunch of digital I/O. Each chassis is basically empty, and you choose which cards to put in there. Now I have one knob to control the volume of all of my surround output. Also, the Apogee Maestro 2 software, while being far from perfect, allows me to rearrange the I/O into any order I want. So, for instance, I have the 16 analog input channels of my second unit actually appearing first to the operating system, followed by the 2 on my first unit. This is just for simplicity. As for outputs, to alleviate the issues mentioned above with non-pro audio apps, I have arranged my outputs such that the first six are for stereo and surround, followed by the 16 effects sends. After all of those, the digital ins and outs are grouped across cards, for convenience. Now that’s flexibility. Granted, it came with a cost, but I love them.

In Summary…

So what’s it gonna be for you? Do you need extreme flexibility? Do you need to mix in multichannel? Do you need your interfaces to do double duty as mixers? Let’s tackle these questions next.

How do you need to mix live instruments?

Mixing with hardware doesn’t require a computer!

Do you have a mixer that’s handling that massive rack of effects and synths? If you do have a mixer or the space for one, it will likely be less expensive to use that to handle live audio than using an interface as a mixer. However, the mixer may put some constraints on you that you don’t expect.

  1. Are you working in multi-channel? If so, your mixer needs to handle it.
  2. Are you using your mixer to control volume and selection of your monitors?
    • If so, you’ll need to have extra inputs for feeds from your interface.
  3. Are you using your interface or other module to control your monitors?
    • If so, you’ll need extra inputs to take feeds from your mixer.

What about a summing box instead?

A quick side note is that if you don’t need a flexible, per-channel mixer, you could just use a summing bus to take all the channels of live audio and sum them to a stereo bus. They’re not always inexpensive, but they typically keep the audio quality high and may be less expensive than a mixer.

Will mixing “in the box” (in the interface) work for you?

While it requires that your computer be on to hear your instruments, the flexibility of mixing “in the box” is unsurpassed. You also don’t need to have the extra space for a mixer. The downside is that you need a lot of inputs.

It used to be that this option was not so great, as latency was a problem for recording in realtime (like recording guitars); however, with the newest generation of Thunderbolt interfaces like the Apogee Symphony, roundtrip latency has dropped below 2ms, well below temporal fusion, the point at which two sounds are perceived as one.
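That 2 ms figure is easy to sanity-check: round-trip latency is roughly one buffer of input plus one buffer of output, divided by the sample rate, plus a little fixed converter delay. Here's a back-of-the-envelope sketch; the 0.5 ms converter figure is illustrative, not a spec for any particular interface:

```python
# Back-of-the-envelope round-trip latency: one buffer of input, one buffer
# of output, plus a small fixed converter delay. The 0.5 ms converter
# figure is illustrative, not a measured spec for any interface.

def roundtrip_ms(buffer_samples, sample_rate, converter_ms=0.5):
    """Approximate round-trip latency in milliseconds."""
    return 2 * buffer_samples / sample_rate * 1000 + converter_ms

# A 32-sample buffer at 96 kHz stays comfortably under 2 ms:
print(roundtrip_ms(32, 96_000))
# A 256-sample buffer at 44.1 kHz is over 11 ms — very audible when tracking:
print(roundtrip_ms(256, 44_100))
```

The arithmetic makes clear why small buffers at high sample rates are the whole ballgame for tracking latency.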

Monitor control is your last big choice before the interface

It’s worth calling out here a piece of much-forgotten gear in the project studio…monitor control. I mentioned this above in the mixer section, but I didn’t go into detail. Monitor control is a bit of a luxury item, but it quickly becomes more necessary when you have multiple sets of monitors. For instance, if you’re fortunate enough to have the room, you should really consider having two or more sets: near-field monitors, mid-field monitors, headphones, and small “Radio Shack” computer speakers. You can use these to ensure your mixes stand up in the majority of playback environments. As you can tell, manually switching cables around would be a nightmare, thus…monitor controls to the rescue.

Keep in mind that if you’re mixing in multi-channel, this could get a bit expensive, so you may want to handle it in the box, like I’ve done. In time, though, when I have the space, I’ll definitely be looking for more-convenient solutions.

May 11, 2014

In the previous post, I talked a lot about how to think about building your studio and how gear makes its way into your workflow. Now we’ll start talking about the decisions and how they affect each other.

Stereo or Surround, ain’t that the eternal question?

Before getting into the next bits, we need to answer a critical question…Are you going to be recording in stereo or surround?

If you’re recording music whose ultimate destination is to be streamed, you’re likely not going to want or need surround. However, if your audience is likely to have surround, it’s awfully nice to be able to make a smile come across their face when they notice something come up from the rear speakers.

Subjectiveness aside, surround is a very different beast than stereo. Instead of just dealing with phase issues from left to right, you deal with them all over the place.

I learned how different mixing in surround is when I was in college. We were mixing in one of the surround studios at MTSU for an Audio for Video project. I think it may’ve been Aliens or Last of the Mohicans. I’m sitting there wondering where this monstrous hit for one of the effects was. It should’ve been fairly big, but it was totally pathetic and weak…WTF?!? Then, I moved to one side…WHAM! There it was dominating everything.

What had happened was that the recording (and processing) of the effect had been in stereo, but we moved it out of the strict L&R position back into the surround field. Just so happens, I had been sitting right where a big chunk of the low frequencies cancelled each other out in the room. So, you see, phase cancellation is a HUGE deal in surround.
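The math behind that dead spot is straightforward: two copies of the same signal arriving over paths of different lengths cancel wherever the path difference is an odd number of half-wavelengths. A rough sketch, using a room-temperature speed of sound and a made-up path difference:

```python
# Two copies of one signal, arriving over paths of different lengths,
# cancel at any frequency where the path difference is an odd number of
# half-wavelengths. The 1.5 m path difference below is made up.

SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature

def null_frequencies(path_diff_m, f_max=500.0):
    """Cancellation frequencies (Hz) up to f_max for a given path difference."""
    nulls = []
    n = 0
    while (f := (n + 0.5) * SPEED_OF_SOUND / path_diff_m) <= f_max:
        nulls.append(round(f, 1))
        n += 1
    return nulls

# Sitting 1.5 m closer to one speaker than the other puts the first
# deep notch right in the low end:
print(null_frequencies(1.5))
```

Move your head a metre and the path difference changes, so the notches move too, which is exactly what I heard walking across that room.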

This decision about stereo or surround will help determine nearly everything you need from here on, so take it pretty seriously.

Back to our regularly scheduled program

Now, let’s talk about the actual things you’re going to need.

Obviously, there are a few basic items that are required. At a minimum, you’re going to need these:

  • Monitors (speakers, not displays)
  • Digital Audio Workstation (software)
  • Audio Interface
  • Computer

So, let’s look back at the list to see how we should choose these…

Monitors

Setting aside the myriad choices out there, what do you need? If you haven’t ever bought monitors before, I suggest you don’t go for expensive units. I know, “What do you mean by expensive?” Well, for a small room, I probably wouldn’t go above $300-400 per monitor, maybe less.

Here are some things to consider when purchasing monitors:

  1. Are they active or passive?
  2. Do you require surround or stereo?
  3. How revealing are they?
  4. How flat is the response across the 20-20k frequency band?
  5. Do I need a sub(woofer)?

Notice that I didn’t say, “How great do they sound?” That’s because these are monitors for critical listening during mixing and mastering, not simply pleasant sound reproduction like the speakers in your living room.

Do you prefer Active or Passive monitors?

I say ‘prefer’ because that is exactly what this is, a preference, nothing more. I’ll lay out some of the differences, and I think you’ll be able to make the choice yourself. To cut a long story short, if this is your first set of monitors, I strongly suggest going for Active monitors so that you don’t have to worry about the extra expense and setup of Passives.

Active Monitors

  • Have built-in amplifiers in each monitor or subwoofer.
    • These amplifiers are often built specifically for the response of the combination of drivers in the unit.
  • Require power to each of the units
  • While each unit may cost a little more, the system may cost less due to not needing to purchase separate amplifiers
  • Generally easy to setup and calibrate
  • Often come in systems designed for both stereo and surround monitoring

Passive Monitors

  • Require a separate amplifier or amplifiers to power each unit
    • For instance, if you monitor in stereo without a sub, you’ll likely require only a single stereo amplifier for your monitors.
  • Allow for significant customization, due to the separation of amp and monitor.
  • Only require a speaker wire for connection, as they do not require further power.

As you can see, it’s really just a choice you have to make based on your own preferences. Don’t let anyone tell you that one way is the right way.

There isn’t a right way. You get to choose what works for you, and even that may change.

Stereo or Surround, what does it mean for monitors?

Honestly, given that most of us are in small rooms with less than stellar acoustics, surround is probably not in the cards. But if you need (or want) it (yes, I do 😀), here are some things to consider when it comes to monitors.

First off, you’ll likely need a sub. For all modern surround formats, you’ll have an LFE channel specifically for low frequencies. Theoretically, if you get monitors that can deal with frequencies close to 20Hz, and since bass becomes more omni-directional the lower the frequency, this can work without a sub. The side benefit of having a sub, though, is that your other monitors can typically be a bit smaller.

Next, you’re going to need five (5) main monitors: left, right, center, surround left, and surround right (plus, possibly, the sub). Instead of giving you all the technical details about placement, etc., I will leave it to the pros at Dolby to do that. Do a quick search for Dolby Surround Mixing Manual, and you should be able to find Dolby’s instructional guides that give you all the gritty details and reasoning.

Since you’re going to have many more monitors needing power in the room, you’ll either need active monitors or you’ll need amplification for each of your 5 passive monitors (or 6, counting the sub). So, the decision to go active or passive has some serious ramifications here.

Something to consider: manufacturers often have lines of monitors that are constructed specifically for surround. For instance, Genelec has several lines that use the sub as the central hub of the system so that its crossover can take out the low frequencies, passing everything else to the respective monitors elsewhere in the room. One of the benefits of a system like this is that you just run a snake from your interface to the sub, then all the individual cables go out from there. This can be a little cleaner in that small studio space.

How revealing are they?

So, here’s the deal with monitors…over time, you’re going to grow attuned to your monitors and how they reproduce sound relative to the environments where the sound will eventually be played, like a living room or out of a typical car stereo. No set of monitors can sound like the infinite possibilities of playback locations.

That’s not to say that you don’t care if the monitors sound nice to your ear. You’re going to be spending a lot of time with these, so you want to avoid ear fatigue, but you also need them to work for you. So how can they really work for you?

Monitors should reveal imperfections in the recording process.

Monitors should help you hear all the nasty bits of the process so that you can get rid of or mitigate those imperfections. Note that I said ‘imperfections in the process’ and not ‘imperfections in the performance.’ It’s up to you whether a flubbed lead or missed note should be rerecorded. Imperfections in the process refers to things like digital overs or pops and crackles in the line, things the artist is not responsible for.

How flat are they across the frequency spectrum?

Say…those are some flat monitors?

Are you hitting on me?

Okay, bad jokes aside, we next need to look at how well they represent the various pieces of the audio spectrum, hence the need for a reasonably flat response curve from 20-20k. If you have a distinct drop off either on the high end or low end, you’re not going to be able to hear any nastiness that happens there, and, subsequently, you’re not going to be able to fix it.

Also, be sure to listen for tight, controlled bass response. You don’t want monitors that have a tendency to fill the room with throbbing bass.

A note about bass-hyped monitors: Given today’s common listening preferences, manufacturers will often create a somewhat bass-heavy monitor or headphone. While this may sound pleasant to the ear, it will cause your mixes to be anemic in the bass region.

With all this flatness in mind, understand that the room you’re going to be working in probably has issues with resonant frequencies at particular locations, given its likely small size. However, using components that are known to be relatively neutral (flat, in this case) will make it easier to deal with the room issues later.

Do I need a sub?

The choice is really up to you. If you have full range main monitors, you can probably get away without it. Personally, I do use a sub because it allows me to have smaller monitors around the room and neater wiring with my active monitor setup. With a passive monitor setup, this wouldn’t necessarily be the case. With that said, I could possibly change the way I work in the future.

So, how do I choose?

First off, do your best to hear any monitors yourself before purchasing.

When choosing monitors, take your own source material with you, so that you have a reference that you know well. Preferably, it should be something that you’ve mixed yourself. In any case, your reference audio should represent the following:

  • Wide dynamic ranges: Pop is not typically good for this. Pick classical, jazz, etc.
  • Articulation: You should be able to hear delicate sounds at both loud and quiet levels.
  • Known Process Issues: Take some material that has slight artifacting or recording issues. These should jump out at you.

One of the coolest things for me was when I actually played an MP3 through some nice reference monitors. All of a sudden the artifacts from the codec started jumping out at me like never before. As an engineer, this was a wonderful indicator of how having revealing monitors can help the whole recording process from tracking to mastering.

Up next…The Basics (Interfaces, DAWs, and Computers)

May 2, 2014

This series of posts is a bunch of slightly cohesive thoughts I had while rebuilding my project studio/workspace in Stockholm. I wanted to capture this information, as I’ve been somewhat less than satisfied with the information on the web. These posts are more about how you should think when building your project studio and less about specific gear (though some gear will be mentioned).

Originally, this was one post, but it just got too big, so I’ll post it in multiple, more focused chunks.

Note: While any specific software I mention here is Mac-based, I’m sure there are Windows counterparts and, possibly, Linux counterparts. Just do a little Googling to find them. These days, which platform you choose is largely personal preference, so I’d suggest you go with what you know. If you haven’t chosen your platform, yet, feel free to contact me via comments for discussion.

Let’s get started. What do you want to do?

It’s really critical that you understand what type of recording or audio creation you want to do. Gear purchases and the overall structure of your space change, sometimes dramatically, depending on whether you’re doing voiceovers, recording guitars, creating ambient soundscapes, or doing audio for video.

Remember the objective things, the things you can’t change.

What constraints do you work under? I suggest you keep a list like this, as you need to refer to it when making purchasing decisions in the heat of the moment. I say this from personal (bad) buying experience.

Here are mine:

  • Surround – I mix in it.
  • Analog Synthesis – I have a modular, as well as several analog synths.
  • Outboard Gear – I have both rackmount and non-rackmount outboard gear.
  • Guitars – Guitar is my primary instrument. I have many, both electric and acoustic, and need to mic them.
  • Video – I need to sync to it, match sound, etc.
  • Small space – As much as I’d love to have a large studio area, I don’t…especially here in Stockholm where apartments are generally small.
  • Moderate Volume – Thanks to thick walls and interesting separation from our neighbors, I can make moderate amounts of racket without fear of retribution or complaints.

Now that we have that out of the way…

What are your personal workflow choices?

Just as you should keep a list of your constraints, you should also keep a list of your workflow preferences. I’m not talking about how you think you’d like to work in the future, but the way you currently work right now or have worked in the past, workflows that you’ve learned over time that have helped you. It’s very important that you don’t use wishful thinking here…trust me.

Again, here are (some of) mine:

  • Tracking and Mixing by touch – Not having to look at what I’m doing is very handy to me.
  • Mobile – Not the whole studio, but definitely the computer.
  • Outboard and in-the-box effects together – Combining these on a single, realtime track is critical to me.

Pretty short list, huh? Well, it doesn’t take much. I have other preferences, obviously, but this is focused on workflow, not just random preferences.

Learn and exercise patience (the value of working in a pull model)

Patience seems to be the hardest thing to learn for most folks, and doubly so for gearheads. However, if you do, you’ll save yourself a lot of headaches and a lot of money.

Remember those lists I had you create? One of the key purposes of those is to help you exercise patience and restraint. The other is to guide your decisions in an increasingly bewildering market of musical gear that magazines, catalogs, and salespeople are trying to sell you. Your whole studio should be based around those lists.

I can almost hear you saying, “WTF is this pull model that you’re talking about?”

The pull model, in this case, applies to the way you make a decision to acquire gear.

You have a need pulling gear into the studio, as opposed to an unguided purchase that pushes gear into your studio that you may rarely (or never) use.

There needs to be a reason for each piece of gear in your studio, and that reason shouldn’t be “building a collection.” It clutters the racks and the floor, making it hard to get anything done, making it hard to be creative or productive. While it’s contrary to our gearhead psyche, more gear isn’t necessarily better; having the right gear, and only the right gear, at hand is easily a better way to go. Keep in mind, we’re not talking about a commercial studio, where having a variety of microphones, preamps, and outboard gear is critical. We’re talking about a personal project studio.

With that in mind, there are many things that you often need that you don’t necessarily need to keep in your studio. For instance, of the many guitars I have, I only keep two, distinctly different-sounding guitars in the studio. However, for specific sounds, I go to my storage room and pull out particular guitars, amps, or pedals for temporary use in the studio. We each have our special cases, and guitars are mine.

Up next…The Basics (Monitors)

Apr 14, 2013

Sorry for not posting in a while. I’ve been acquiring more modules and making a little noise. Check it out…I was working on some incidental (background) music, and came up with this little ditty:

Not much to it, but I like it.

Anyway, on to today’s topic…

Filters (also known as VCFs)

If you read through my earlier post, The Basics of an Analog Synthesizer, you will be familiar with the fact that most analog synths have something known as a filter. You’ll also know that there are a few types of filters. For more details about that, please refer to the earlier post. For this post, I’m going to talk about a few different styles of filters and how to control them.

First, and perhaps most famous is the…

Moog Ladder Filter

Doepfer A-120 VCF1

This type of filter is detailed by Timothy E. Stinchcombe in much better detail than I can provide, so for those interested, head over there for a read of all the technical details. Technically, this is just a low pass filter. Well, okay, perhaps not just a low pass filter, but it doesn’t do all the fancy stuff of a multi-band or band pass filter. However, what it does do is sound friggin’ fantastic. Get one. Your rack will thank you, and you will be happy. In fact, instead of letting all the specs on the myriad filters out there get to you, I’d say just ignore them for right now and get one of these. They are, seriously, just that useful. Here’s a great one that I picked up from Doepfer. I was longing for that fantastic, fat filter that my Prodigy gives me in a Eurorack format. This is pretty much it!

As you can see, there’s a control for frequency cutoff, CV 2 & 3 attenuation, input level, and resonance. This is a very simple filter that takes the sum of the CVs and controls the frequency cutoff with the result.

What’s interesting about filters (this one being no exception) is that there is a CV input for 1 volt/octave control. “Hey, wait a minute,” you might say, “I thought that’s something you use to set the frequency of an oscillator.” Yes, it is. Why on earth would you control a filter like that? Well, think about it this way…a filter lowers the volume of a given set of frequencies, yes? Well, if the frequency cutoff is set at, say, 220 Hz, then it will have a distinctly different sound if you played at 110 Hz vs 440 Hz. At 110 Hz, assuming there are significant harmonics, you’d still hear the fundamental and its first set of harmonics. However, when you play at 440 Hz, you will only hear a muted tone, as the fundamental is an octave above the cutoff frequency of 220 Hz. So, by adjusting the cutoff frequency with the same CV that sets the VCO pitch, the filter will always pivot around the note being played. With this one, you also have a couple of other VC inputs that will modulate that, too, as you often use an envelope generator to create that great WOMP sound from a Moog. The second VC input allows you to use an LFO or other CV to further modulate the cutoff frequency. Very simple, and very useful.
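The 1 V/octave relationship is easy to sketch: each additional volt doubles the frequency, whether the CV is driving a VCO's pitch or a VCF's cutoff. A minimal illustration, using the 220 Hz cutoff from the example above as the 0 V base:

```python
# 1 V/octave: every additional volt doubles the frequency. The same law
# that sets a VCO's pitch here moves a VCF's cutoff, which is why the
# filter can pivot around the note being played. 220 Hz base as above.

def volts_to_freq(cv_volts, base_hz=220.0):
    """Frequency for a 1 V/oct control voltage, with base_hz at 0 V."""
    return base_hz * 2 ** cv_volts

print(volts_to_freq(1.0))   # 440.0 — one octave up
print(volts_to_freq(-1.0))  # 110.0 — one octave down
```

Feed the same keyboard CV to both the VCO and this cutoff calculation and the filter tracks the note exactly, octave for octave.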

Vactrol-Based Filters

Make Noise MMG Vactrol-based Filter

Vactrol-based filters aren’t exactly a different style from others, but the way they’re controlled is a bit different. Because of this control, they do, indeed, sound different. Take, for instance, the stellar Make Noise MMG (left), which is controlled via vactrol.

What’s a Vactrol?

Basically, it is a completely light-sealed device with a light-emitting portion and a photo-resistor (light-dependent resistor, LDR, for short) whose resistance goes down when more light hits it. Nowadays these are built with LEDs for their quick response. In earlier times, small incandescent lamps were used and before that lightning bugs in a jar. That last bit might only be a rumor, though.

But, a resistor is a resistor, right? Well, yes, and no. While the LDR does, indeed, look just like a resistor to the rest of the device, it has some interesting characteristics. First, since it has no electrical connection to the CV, it is completely decoupled from noise that may enter via CV. Second, while LEDs may be very fast to respond, LDRs are less so. The result is what some folks refer to as creamy, soft, or mellow sounding. It’s definitely audible, and I, personally, really like it.
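That sluggish LDR response can be modeled crudely as a one-pole smoother: the LED follows the CV instantly, but the resistance only relaxes toward its target. This is a toy model with a made-up smoothing coefficient, not a measured vactrol curve:

```python
# Toy vactrol model: the LED tracks the CV instantly, but the LDR's
# resistance relaxes toward its target, smoothing hard CV edges into the
# "creamy" response people describe. The 0.05 coefficient is made up.

def smooth_cv(cv_samples, coeff=0.05):
    """One-pole smoothing: each sample moves a fraction of the way to target."""
    state = 0.0
    out = []
    for target in cv_samples:
        state += coeff * (target - state)
        out.append(state)
    return out

# A hard 0-to-1 CV step comes out as a gradual rise that never quite
# snaps to the target:
step = [0.0] * 5 + [1.0] * 20
print([round(v, 2) for v in smooth_cv(step)])
```

A sharp gate edge comes out rounded off, which is a decent first approximation of why vactrol-controlled filters sound softer than their directly-controlled cousins.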

Other Filters

There are, obviously, other filter types out there, and, to be totally honest, you just have to play around with them to see what you like. Several, like the Intellijel Korgasmatron/Corgasmatron, are two filters in one unit that can be cross-faded or used separately. These are very handy and sound great.

Here’s the Deal With Filters

There are so many wonderful filters out there, you’re not going to just have one in your rack. So, play through as many as you can get your hands on. As I mentioned earlier, I highly recommend that you get at least one ladder filter, as they always sound wonderful, then broaden your horizons with the various other amazing units out there. You almost can’t go wrong.

Mar 17, 2013

I mentioned last time that I’d be covering filters and envelopes in this post, but after receiving the rest of my components and working with them this week, I began to feel like I should, perhaps, change that to amplification and envelopes, as amplification is an even more basic piece of the puzzle. Not to worry, I’ll get to filters in the next post or so.

Voltage Controlled Amplifiers (VCA)

WMD Multimode VCA

Simply put, amplifiers make quiet sounds louder. Strangely enough, the contrary is also true when it comes to VCAs; they can attenuate the signal, as well. Perhaps a better phrase for these would be Voltage Controlled Dynamics. In modulars (and pretty much every other kind of analog synth), we use voltage to control how loud the sound is. We can even vary the voltage to create tremolo, a repetitive raising and lowering of volume which can give various sensations, from stuttering to long, flowing waves of sound. One of the simplest effects, tremolo was also the first to be brought to market as a pedal, the DeArmond Model 60 Tremolo Control. And, yes, I have one.

Author’s Note: The DeArmond Model 60 (A & B) used fluid resistance to vary the volume, not voltage control. I was just pointing to how cool an effect something so simple could be.
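Tremolo, as a VCA would produce it, can be sketched in a few lines: a low-frequency sine CV repeatedly raises and lowers the gain applied to the signal. This is a generic illustration, not any particular module's behaviour; the rate and depth values are made up:

```python
# Tremolo as a VCA does it: a low-frequency sine CV repeatedly raises and
# lowers the gain applied to the signal. Rate and depth values here are
# illustrative, not any particular module's ranges.
import math

def tremolo(samples, rate_hz=5.0, depth=0.5, sample_rate=44_100):
    """Scale each sample by a gain oscillating between (1 - depth) and 1."""
    out = []
    for n, x in enumerate(samples):
        lfo = 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * n / sample_rate))
        out.append(x * (1.0 - depth * (1.0 - lfo)))
    return out
```

Push `rate_hz` up and you get stutter; slow it way down and you get those long, flowing waves of sound.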

It’s true, the VCA is a simple tool; however, keep in mind that the crazy and incredible minds out there have found some cool things to do with the VCA. For instance, the WMD Multimode VCA allows you to deal with not one, but two signals simultaneously. It also allows you to use one or two control voltages to control both or each signal, respectively. It even allows you to use a CV to crossfade between the two signals in a couple of ways. It’s a very versatile tool, especially in stereo and multi-channel environments. WMD is not the only manufacturer with great VCAs; check out tons of them at Analogue Haven, Control, or, better yet, at a local shop, if you’re lucky enough to have one.

Envelope Generators (EGs)

Since we’ve already gone over envelope generators in The Basics of an Analog Synthesizer, we know how they can generate CV to control something like the WMD Multimode VCA and others. So, I’ll devote this section to looking at what ingenious minds have added to make some of the more evolved EGs special.

Circuit Abbey ADSRJr + Expansion

Circuit Abbey ADSRjr + ADSRjrExp

First off, let’s take a look at the Circuit Abbey ADSRJr + Expansion. If you look at just the ADSRJr without the Expansion, you’ll see just the basics of the ADSR envelope generator: a gate input, four knobs to adjust attack, decay, sustain, and release, and a CV output.

There are two other buttons there, though. First is the time button that allows you to choose short, medium, or long times for the parameters. For instance, for percussive sounds, you may want a super-tight response, so the short setting might be best. However, if you’re doing ambient work, the long setting may be appropriate. It will give you those extremely long, luxurious attacks that swell and decay slowly. Then, there’s the in-between, medium setting for the rest.

Also, one of the handiest things on any EG is the button below the release knob. On the ADSRJr, it’s called Cycl (for Cycle, get it?). This allows you to have the envelope cycle over and over again. When you’re working just with the modular and no keyboard, I cannot overstate how handy this is for previewing and testing your envelopes. Also, it can be used to create a complex LFO pattern.

REMEMBER THIS!!! When you buy an EG, always try to get one with a trigger or cycle button on it. It is one of the handiest things you’ll use on a modular.

So, that’s the ADSRJr, a pretty straightforward ADSR EG. Now, let’s take a look at the Expansion and what it does. First, you’ll notice that we have a LOT of CV I/O. There are inputs for A, D, S, and R, which allow you to control any stage of the envelope via CV. AWESOME! Next, you have a set of buttons labelled “curve” with A, D, and R next to them. These buttons change the way the voltage moves between stages. Without them, the CV goes to the next level in a straight line (linear). That’s not always the most natural or musical sounding, as our ears do not hear in a linear fashion, even though most EGs operate this way. If you press the button next to one of the letters, that portion of the envelope will change from linear to exponential, which means it will approach the next value in an exponential curve, moving slowly at first, then accelerating rapidly to the final value. Notice that there isn’t a button for sustain, as sustain is just the level at the end of decay and is active for as long as the gate is “high.” Okay, that’s a pretty nice addition: more natural-sounding envelopes that you can control via CV. And just beneath the curve buttons is…wait for it…a trigger button for manually firing the envelope. Very nice.
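The difference between the two curve types can be sketched in code. This is just an illustration of the idea, not the actual circuit behavior of any of these modules, and the steepness constant `k` is an arbitrary choice:

```cpp
#include <cassert>
#include <cmath>

// Fraction of the way from one envelope stage's level to the next,
// at normalized time t in [0, 1].

// Linear: a straight line toward the target.
double linearCurve(double t) { return t; }

// Exponential (as described above): moves slowly at first,
// then accelerates rapidly toward the final value.
// k controls how extreme the curve is (arbitrary choice here).
double expCurve(double t, double k = 5.0) {
    return (std::exp(k * t) - 1.0) / (std::exp(k) - 1.0);
}
```

Both curves start at 0 and end at 1; the exponential one simply spends most of its time near the starting level before snapping up to the target.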

But the Expansion goes further still. There are three trigger outputs labeled EOA, EOD, and EOR. Get this…these fire at the end of their respective stages. So, if you want your sound to evolve by, say, mixing in other sounds at the end of the attack or decay, or if you’d like to fire a second envelope at the end of this one, these trigger outputs will do the job for you. Pretty badass, huh?

Finally, there are two more features: a trigger input to fire one full ADSR cycle and an inverted output. “Inverted output?” you say. “Yes, inverted output,” I say. What this does is flip the envelope upside down. So, if you’re using the envelope to drive the frequency of a VCO, a filter, or some other type of module, instead of the attack going up, it will go down, the decay will come back up, and the release will rise back to the pre-attack position.

WMD Multimode Envelope + Expansion

The WMD Multimode Envelope + Expansion is quite similar to the Circuit Abbey ADSRJr + Expansion, but has a few very nice additional features. First, let’s go over the similarities. You’ll find attack, decay, sustain, and release knobs on the front panel above the gate/trigger and 0-8V out (CV out). On the Expansion, you’ll find all the CV inputs for the various stages, along with EOA, EOD, and EOR triggers. You’ll also notice a Manual button, which is effectively a trigger button on the main panel.

Now, let’s look at the differences. If we look at the WMD MME main unit, you’ll also see a set of 4 knobs that control the shape of each stage. Remember the buttons that went from linear to exponential on the ADSRJr + Expansion? Well, the MME allows not only that, but also logarithmic curves, and anything in between. So, you can choose whether your A, D, and R go from fast to slow as they approach their targets, move at a constant rate, or go from slow to fast. Nice, huh? There’s also a trigger on end of sustain, which is a nice extra. There are also what are called full-swing outputs. Basically, these are CV outputs that swing from -8V to +8V over the cycle and vice-versa. This can be useful when the EG is acting as a complex LFO.

Oh, wait. There’s one more knob here. There are a ton of labels on it like ADSR, ADR, AR, ADAR, ADSAR, ADSRst, ADRcy… Well, let’s just call this the Amazing Knob. This knob allows you to choose what type of envelope you want. It supports the normal envelopes like ADSR, AR, and ADR (note that any envelope without the ‘S’ in it is a triggered envelope: it fires, but doesn’t hold the note since there’s no sustain portion). It also supports rearranging these a little with envelopes like ADAR and ADSAR.

The coolness doesn’t stop there, though. The envelopes you see with ‘st’ following them are step-based envelopes. Each time it receives a trigger signal, the EG performs the next step of the envelope and stays there, only proceeding when it receives another trigger. The envelopes with ‘cy’ following them are cycling envelopes. Once you start them, they run indefinitely, firing over and over, as if you took the EOR trigger and patched it to the Retrig input. This is some serious awesomeness.

Now, let’s slip over to the Expansion side of things. Over here, you’ll notice that there are three more inputs (Retrig, Hold, and ADR Time Scale) and a switch (Reset on Gate). Retrig simply listens for a trigger and restarts the envelope from the beginning. Hold, well that does just what you might think. When a gate is sent to Hold, then the EG stops right where it is until the gate is low again, then continues where it left off. ADR Time Scale is a little different, though. ADR Time Scale allows you to send a CV to the EG that will adjust how long the A, D, and R stages are simultaneously. I haven’t quite had use for this, yet, but it seems pretty neat, and I expect I’ll find it cool when I do.

Now, let’s talk about that switch, Reset on Gate. In order to understand this one, we need to discuss how analog synths play. Since keyboards are inherently mono-timbral, i.e. they can only communicate which pitch to play and when, not what sound to make, they send a voltage (which key was pressed) and a gate (that a key is pressed). A traditional analog keyboard was effectively a long chain of resistors running the length of the keyboard. Each key acted as a switch that changed how many resistors the control voltage had to pass through, and, thus, the keyboard was not only mono-timbral, but also monophonic (one note at a time). So, in its resting state with no keys pressed, no current would flow, the gate would remain low (off), and the envelopes would not fire. As soon as you pressed a key, the gate would rise, signaling the envelope to open, and the control voltage determined by which key was pressed would control the pitch of the oscillator.
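As a rough illustration of how that pitch control voltage maps to frequency, here’s a sketch assuming the common 1V-per-octave convention (the post doesn’t name a standard, but most Eurorack gear follows this one):

```cpp
#include <cassert>
#include <cmath>

// Assuming the common 1V-per-octave standard: each additional volt of
// pitch CV doubles the oscillator frequency. f0 is the frequency the
// oscillator is tuned to at 0V (440 Hz is an arbitrary reference here).
double cvToFrequency(double cv, double f0 = 440.0) {
    return f0 * std::pow(2.0, cv);
}
```

So a key one octave up sends 1V more, and the oscillator lands at exactly double the frequency; a semitone is 1/12 of a volt.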

So, what happens if you press another key before you release the previous one? The pitch control voltage changes, but there’s nothing to bring the gate down and start the release cycle of the envelope. The answer? In the old days, nothing would happen with the envelope. It would stay in its sustain phase since the gate remained high (on).

Skip to November 1981, when Dave Smith and Chet Wood presented what would become the MIDI standard to AES. All of a sudden, we had a standard way to tell synths from various manufacturers how to play multiple notes at the same time. There were some proprietary methods before MIDI, but none became the standard. In any case, each note effectively had its own start and end, not just a gate for the entirety of the keyboard. So, now when we use our handy-dandy MIDI to CV converters in order to play our modular synths with our MIDI keyboards, we can tell these envelopes to re-trigger by sending a new gate signal for each note played.

And that’s the long answer to what the Reset on Gate switch does: with it engaged, each new gate restarts the envelope from the beginning rather than leaving it stuck in sustain. It is very handy, indeed.

In Conclusion

I think it’s pretty obvious at this point how important VCAs and EGs are to your modular world. Honestly, you can’t have too many of them, and you can use them in any number of ways. I’ve been very happy with the two I’ve presented above over the last week, but other manufacturers have wonderful offerings, as well. I’m sure I’ll be adding more to the arsenal.

Now that I have the modular up and running, I’m hoping to start including some audio in these things, but the day has gotten away from me, and I want to play now. Bye bye.

Mar 09 2013
 

Now that I have all the basic, and some not-so-basic, pieces ordered, I’ll take a look at the modules I’ve bought and explain why I bought these, specifically.

The Challenge

The biggest challenge in putting one of these things together is that shops that sell modular synth bits are few and far between. There’s nowhere to really audition different pieces of gear. If you’re fortunate enough to have a friend or two that have some gear, that’s great, but unless you want to buy just what they have, you still have some work to do.

With that in mind, I decided that this whole process is going to be somewhat trial and error (that’s the fun part, right?). So this synth that I’m building should start out with the most flexible basic modules I can get, and go from there. I knew what pieces I needed (see The Basics of an Analog Synthesizer), so I headed to eBay to see what I could find. One of the most obvious pieces to look for is the oscillator, so after digging around and comparing, I came upon this guy…

Plan B Model 15 Complex VCO


This VCO has all your standard waves (sine, triangle, saw, square/PWM), and from all accounts it is completely top-shelf on each of those. What it also has is a really cool Morph feature, which can morph the sound from a sine to either a square or sawtooth wave. The morph is also voltage controlled…which brings me to another thing. Almost everything on this VCO can be controlled via voltage, meaning you can have the entire VCO change over time.

In reality, the morph is just a style of crossfade between a couple of the output signals. If you have an external crossfade module, you can do the same thing, but here it’s built in, so no need for another mixer.

The other thing I’d say is to keep your eye out for FM, which this has. FM allows another oscillator to affect the waves in this module. This type of synthesis is what made the DX-7 so awesome (even though it was digital).

Next up…

Malekko Heavy Industry / Wiard Oscillator

Malekko / Wiard Oscillator

Most folks just say Malekko, but that just doesn’t have the great ring of Malekko Heavy Industry.

Anyway, this oscillator is interesting because, in addition to allowing almost complete control via voltage, it adds the interesting notion of phase shifting right in the oscillator. On the main outputs you have the original signal. On the secondary outputs, you have the phase shifted signal. When you put these two signals together, you get a whole bunch of wonderfulness. When you start modulating the phase, it really gets swirly.

There’s a hidden gem in this one, too. It has a push/pull knob. That alone is cool, but what it does is even cooler. It turns the oscillator into an LFO…with phase shifting!!! So you can use any of the outputs to modulate other things altogether. And since you can control the phase shift via CV, you can effect all sorts of changes across your synth. SUHWEEET!!!

And finally, we have…

The Intellijel Rubicon

When I emailed Shawn Cleary at Analogue Haven, I was putting together my order for the stuff I couldn’t find on eBay. I mentioned that I was considering the Z3000 from TipTop (a fine oscillator, itself), but asked, since I was a noob, whether he’d recommend anything else. He replied with this. So, after a little looking around and hearing/reading everyone rave about this module, I added it to the order.

I don’t really want to walk down the feature list (you can do that here), but it’s made of all kinds of awesome. What’s really interesting about this guy is that in addition to all the standard stuff, it can be turned into an LFO, it has a -1 or -2 sub-octave, and some of the coolest FM circuitry around.

The downside: this thing isn’t cheap. It’s around $400 for just this module. By comparison, most other oscillators I was looking at were in the $210-$250 range.

In conclusion

So here’s what I was shooting for: I wanted to get an extremely flexible set of oscillators that could give me loads of different sounds to mix and match. That gives me a much more solid foundation on which to build the rest of this synth. I just wish everything would get here so I can plug it all together!

Up next, filters and envelope generators. Stay tuned….

Mar 03 2013
 

The process that I’m following to put my modular together is similar to the process I recommend to folks new to buying motorcycles. For your first one, get a good, simple synth that covers all the basic functionality. There’s a good chance that you’ll eventually grow out of it or want to change it. You should not and cannot purchase the perfect synth, so don’t try.

“My goodness! You say that with such authority. How do you know that I can’t purchase the perfect synth?” you might be thinking. Well, because it doesn’t exist. The perfect synth (or motorcycle) is a transient thing. You will change your mind, your likes, and your dislikes, and your taste will evolve. This isn’t something to worry about, it’s something to be enjoyed.

With all that out of the way, let’s look at what makes up a basic synthesizer.

Just the Facts, Ma’am

In a basic synthesizer you have a few simple pieces:

  • Tone generation
  • Frequency control
  • Mixing/Volume control
  • Controls (for controlling all of the above)

Each of these items is just as important as the others. Even if the controls don’t technically make sound, without them, you’re dead in the water. Also, as I go through each of these, I’ll point out typical, basic configurations and the types of control they require. Once we get to the control section, all of it will come together in a neat, little package…I hope. 😀

Tone Generation – Oscillators

Tone generation typically (but not always) starts with the oscillators. These little modules pump out a particular waveform at a particular frequency. Basic analog synthesizers typically have 2 or more of these. Usually, a single oscillator will pump out a sine, triangle, saw, and square wave all at the same frequency. By having two oscillators, you can mix them together to get more complex tones, particularly when they’re a little out of tune with each other.

The minimum typical set of controls for oscillators is:

  • Frequency – Sets the frequency of the oscillator
  • Fine tuning – Adjusts the frequency at a much more fine-grained level

Frequency control – Filters

Once the audio comes out of the oscillators, it’s pretty cool, but after a while, you’ll find that it gets a little boring since all you can do is put a couple of waves together. You don’t have any volume control, yet, so it’s quite literally two tones just smashed together with no movement other than the offset of the tuning which can cause subtle pulsing. One really cool thing that using different waveforms gets you, though, is harmonics. Harmonics are multiples of the fundamental frequency. So, if you are playing a square wave, for instance, it will have a ton of harmonics, in addition to the base frequency that you’ve set on the oscillator. So, you can play with those harmonics to further change the sound.
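Here’s a quick sketch of that idea: a square wave is the fundamental plus its odd harmonics (3x, 5x, 7x, …), each at 1/n the amplitude. This is standard Fourier math, not anything specific to a particular synth:

```cpp
#include <cassert>
#include <cmath>

// Approximate a square wave by summing its odd harmonics.
// The more harmonics you add, the squarer the result gets.
double squareApprox(double freq, double t, int numHarmonics) {
    const double PI = 3.14159265358979323846;
    double sum = 0.0;
    for (int k = 0; k < numHarmonics; ++k) {
        int n = 2 * k + 1;                        // odd harmonic number: 1, 3, 5, ...
        sum += std::sin(2.0 * PI * n * freq * t) / n;
    }
    return (4.0 / PI) * sum;                      // Fourier series scaling
}
```

Filtering a square wave, which we get to next, is just selectively removing some of those harmonics.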

Now, we know we want to change those frequencies around a little bit, and for that, we need filters. Filters are like an extreme version of the equalizers that you may be familiar with on your stereo (though, as I write this, the notion of a “stereo” seems a little dated). Think Bass and Treble. Except in this case it’s quite a bit more extreme. Go watch a blaxploitation film and listen for the wakkachikkawakkachikka of the guitar playing through a wah pedal, or listen to Bootsy Collins and Bernie Worrell playing through Mu-Tron IIIs on the older P/Funk records. Wah pedals are great examples of filters. They diminish certain frequencies while boosting others.

Filters are typically one or more of the following: Low-pass, high-pass, or band-pass. “What do these mean?” you may ask. And I may reply, “A low-pass filter passes low frequencies while filtering out high frequencies; a high-pass filter passes high frequencies while filtering out the low; and a band-pass passes through a specific band of frequencies while filtering out frequencies higher and lower than that band.” Simple as that. On basic analog synthesizers, like my Prodigy, there is only one filter and it is of the low-pass type.

The minimum typical set of controls for filters is:

  • Cutoff Frequency – Sets the cutoff frequency at which the filter “rolls off,” or roughly begins filtering
  • Depth – Sets the amount of filtering applied to the signal
  • Q – In the case of band-pass filters, this sets how many (or few) frequencies around the cutoff frequency to pass
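As a toy illustration of what “filtering out the highs” means in code (real synth filters are resonant, multi-pole designs, which this is not), here’s a one-pole low-pass filter:

```cpp
#include <cassert>
#include <cmath>

// Minimal one-pole low-pass filter: each output sample moves part of
// the way toward the input, so fast wiggles (high frequencies) get
// smoothed away while slow changes (low frequencies) pass through.
struct OnePoleLowPass {
    double z = 0.0;   // filter state (the previous output sample)
    double a = 0.0;   // smoothing coefficient derived from the cutoff

    void setCutoff(double cutoffHz, double sampleRate) {
        const double PI = 3.14159265358979323846;
        a = 1.0 - std::exp(-2.0 * PI * cutoffHz / sampleRate);
    }
    double process(double x) {
        z += a * (x - z);
        return z;
    }
};
```

Feed it a steady (low-frequency) signal and the output settles right on it; feed it a rapidly alternating (high-frequency) signal and the output stays small — which is exactly the low-pass behavior described above.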

Mixing and Volume Control

Mixing and volume control are just what you expect. These mix multiple signals, change the level of the audio, and may appear at multiple places in the signal chain. There will often be a mixer just after the oscillators and before the filter, and there will be an amplifier at the end of the chain controlling the outgoing volume.

The minimum typical set of controls is:

  • Level – Pretty self explanatory, huh? Sets the level of the signal

Controls

So, let’s take a look at what we have so far…

We have oscillators making frequency noises at as many frequencies as we have oscillators (two in the case above). These are getting mixed together and filtered and finally having their level set. Okay, that’s really cool. Buuuut, after playing this way for a little while, you start getting bored. “Hey, man…where’s the motion? Violins and guitars and drums don’t just come on and go off. They start loud and get soft. They play different notes, too,” you’d probably say. And you’d be correct. So, what can we do to put some motion in the ocean? How can we wiggle those waves?

Well, let me tell you how we’re gonna put a little pep in your step! We’re gonna control the hell out of all this stuff. This is where synths really get fun.

The Envelope Generator

Let’s start with volume and dynamics control. Everything you hear has a characteristic called a volume envelope. Think about when a gunshot goes off (Pew! Pew!). What you hear is the attack of the shot as it rises rather quickly to its maximum volume (the ‘P’ in Pew) and the decay as it dies out (the ‘ew’ in Pew). Now, think about a violin being aggressively bowed. You hear the attack of the bowed string as it climbs to maximum volume, then the decay from this initial bowing to the sustain of the continued bowing, and finally, when the bow is taken away, you hear the release of the volume back down to its initial silent state. This is what we refer to as a simple ADSR (Attack, Decay, Sustain, Release) envelope. And the controller we use for this is called an Envelope Generator.

An envelope generator, when triggered, generates a control voltage that changes over a period of time to match the envelope that you set. You feed this control voltage to the amplifier at the end of the audio chain to control loudness automatically. In this way, you can create that Pew! envelope if you want to create a percussive sound. It also lets you set a sustain level for the sound to decay to when you keep the generator triggered (think, the continued bowing of a string). As long as the envelope is sending a control voltage to the amplifier, you will hear something at its respective level.

Okay, about this trigger thing…Well, since many of our controls are temporal, they need to know when to start. Therefore, we need to send them a control signal of some sort. For our purposes, we send a gate signal. Let’s picture a keyboard (the piano, ebony-and-ivory sort) with one key. You press that one key, it turns on the sound. You release it, the sound stops. When you use this with an envelope generator (EG or ADSR for short), the EG will start the cycle of attack, decay, sustain, release. It gets slightly trickier at this point: if you release that key at any time, the EG will immediately skip to the release phase. If the EG gets through attack and decay and the key is still pressed, it will hold at the sustain level until you release the key, then it will enter the release cycle.
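That gate-driven cycle can be sketched as a little state machine. This uses linear segments and made-up per-sample rates purely for illustration; it’s not how any particular hardware EG is built:

```cpp
#include <cassert>

// Sketch of a gate-driven ADSR cycle. Call process() once per sample;
// `gate` is true while the key is held. Releasing the key at any point
// jumps straight to the release stage, just as described above.
struct ADSR {
    enum Stage { Idle, Attack, Decay, Sustain, Release };
    Stage stage = Idle;
    double level = 0.0;                     // current output, 0..1
    double attackInc, decayDec, sustainLevel, releaseDec;

    ADSR(double attackInc, double decayDec, double sustainLevel, double releaseDec)
        : attackInc(attackInc), decayDec(decayDec),
          sustainLevel(sustainLevel), releaseDec(releaseDec) {}

    double process(bool gate) {
        if (gate && (stage == Idle || stage == Release))
            stage = Attack;                 // gate went high: start the cycle
        if (!gate && stage != Idle && stage != Release)
            stage = Release;                // key released: skip to release
        switch (stage) {
        case Attack:
            level += attackInc;
            if (level >= 1.0) { level = 1.0; stage = Decay; }
            break;
        case Decay:
            level -= decayDec;
            if (level <= sustainLevel) { level = sustainLevel; stage = Sustain; }
            break;
        case Sustain:
            break;                          // hold while the gate stays high
        case Release:
            level -= releaseDec;
            if (level <= 0.0) { level = 0.0; stage = Idle; }
            break;
        case Idle:
            break;
        }
        return level;
    }
};
```

Patch that output to the amplifier and you get the Pew! for free: fast attack, no sustain, quick release.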

The EG is a very powerful tool for adding motion to your sound. Along with being used for volume control, it is also used to control filters. Let’s go back and pull out those P/Funk records again. You hear Bootsy’s bass? It’s kinda freaky how it sometimes makes this WOWMP! sort of sound. Well, that’s an EG controlling a filter. It just happens to be triggered by the bass reaching a certain threshold level.

The Keyboard

“Hey, wait a minute. My keyboard has, like, a bunch of keys on it. I want to play different pitches!” Well, good for you, because your keyboard takes care of at least two crucial things. Not only does it tell the EG when to start, by sending a gate signal, it also generates a control voltage that tells the oscillators the frequency they need to tune to. In this case, your keyboard is like the conductor of a very small, electronic orchestra. “You, oscillator, over there…play at 440Hz. And you, envelope generator over there, you turn up the volume.”

The LFO (Low Frequency Oscillator)

The LFO is simply another type of oscillator that runs at frequencies below the range of human hearing (<20Hz). It uses a waveform (triangle, saw, square, random…just like a regular oscillator) to generate a control voltage (CV, for short) that you can send to whatever you have that needs controlling. Technically, you could use an LFO to control your volume instead of an EG. Or, you could control your filter’s cutoff frequency. Or you could even control the sustain level of the EG, which is in turn controlling the volume of the amplifier. Which brings us to…

Controls Can Control Anything that Can Be Controlled

This is where modular synthesis breaks away from the typical, all-in-one synthesizer. In an all-in-one synth, you typically have a set of oscillators feeding a mixer, which then goes through an envelope controlled filter, which finally goes through an envelope controlled amplifier. There will often be an LFO controlling one or more presettable parameters.

With modular synths, you can have anything control anything else, assuming that the thing you want to control has a gate or CV input. You could have an EG controlling the relative level of an LFO, which is in turn controlling the fine tuning of one of the oscillators and the cutoff frequency of your filter. We even have modules that simply count off time and send gate signals or CV on the beat. Those are called sequencers, and even they can be controlled by CV from something else.
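As a small sketch of one link in that kind of chain, here’s an LFO sweeping a filter cutoff. The 2Hz rate and the 200-2000Hz sweep range are arbitrary choices for illustration:

```cpp
#include <cassert>
#include <cmath>

// A low-frequency oscillator is just an oscillator used as a control
// signal instead of an audio signal.
double lfoValue(double rateHz, double t) {
    const double PI = 3.14159265358979323846;
    return std::sin(2.0 * PI * rateHz * t);   // bipolar CV, -1..+1
}

// Patch the LFO's CV into a filter cutoff: here a 2 Hz sine sweeps
// the cutoff between 200 Hz and 2000 Hz (both values arbitrary).
double modulatedCutoff(double t) {
    double center = 1100.0, depth = 900.0;
    return center + depth * lfoValue(2.0, t);
}
```

Swap the sine for a square and the cutoff snaps between the two extremes instead of gliding; patch a second LFO into the first one’s rate and things get swirly fast.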

It’s totally badass!!!

Mar 03 2013
 

Sorry for the looooong break between posts. Since posting last, I’ve found a home in San Francisco, found a kick-ass job as a Technology Evangelist at ngmoco:) / DeNA, and decided that I need yet another time-intensive, expensive hobby.

So, I’ve had a love affair with analog synthesizers and analog processing ever since I bought my Moog Prodigy back around ’92. Since then, I’ve accumulated over 20 guitars, 8 amplifiers, 80-90 stomp boxes, and 3 synths. During this time, I’ve had more of each of these, but sold them for whatever reason. Yes, I’m a gearhead, and I LOVE creating neat sounds.

A few years back, a friend of mine loaned me his Dave Smith Evolver, as well as his MacBeth M5. While it was a little bit of a steep learning curve for me at the time, I knew that I wanted to work with this stuff.

Fast forward to last year where I met this fellow at work – where he typically doesn’t wear the funny hat. We began talking, and one thing led to another until he finally put together his modular this past February. Given that I’d been toying with building a modular for a while, I visited him, and we spent some time sipping whiskey and talking about modulars while I contented myself with twiddling knobs on the various modules. At that point, I had no other choice than to start building my own.

Voila!!! Look no more for that time-intensive, expensive hobby. And, let me tell you, modular synthesis is a gearhead’s DREAM!!!

So, with all that said, I’ve begun collecting modules to build my very own modular. Since posting my purchases on Facebook, a friend has been asking that I post more about how I go about purchasing and why I purchase particular modules, so I’ll be doing just that in the next few articles. (And they should come more briskly this time.)

For those of you who don’t know much about synths, take a look at The Basics of an Analog Synthesizer.

Aug 08 2011
 

So far, I’ve gotten more questions about getting Cocoa Views to work in Audio Units than about anything else, so here we go.

Since I’ve been on a lengthy vacation, I didn’t think too much about it until I tried using the built-in template for Audio Unit Effects in Xcode 4.x. It doesn’t make it at all clear how to get it all working. Fortunately, it’s not terribly difficult. It just takes a bunch of changes to the files and projects that are generated by the template.

Also, there have been some changes in Mac OS X Lion that require attention to make this (and any new Audio Units) work.

Create the Project

First off, go ahead and create a new project. In the project sheet, select Audio Unit Effect as your project type and click Next.

In the next sheet you will be naming your effect. Make note of your Company Identifier, as you’ll be needing that later. Ensure that With Cocoa View is checked, and click Next.

Go ahead and save your project as you normally would.

In my case, I’m using the simple project name of CV. So, wherever you see CV (in italics) in the following code, just insert your project name.

Tweak the Project

Once the project is created, we’ll start to adjust various project settings along with adding and removing some files to make things work on Lion.

First off, find the following files in your Xcode project’s AUPublic folder and delete them:

  • AUDebugDispatch.h
  • AUDebugDispatch.cpp

Next, and this is part of the Lion changes, Control-Click on your AUPublic folder and select “Add Files to CV…”

In the sheet that appears, navigate to /Developer/Extras/CoreAudio/AudioUnits/AUPublic/AUBase and select the following files for addition.

  • AUPluginDispatch.h
  • AUPluginDispatch.cpp

We now have one more file to add. Right-click on CV (or your project folder in Xcode) and select New File…

Create an empty file and name it CV-Prefix.pch.

Now, we could just turn the prefix setting in the build settings to not look for this file, but for consistency with the FilterDemo sample code, I just figured we’d put it in there in the event we want to use it later on for precompiled headers.

Last, we need to decide what architecture(s) we want to build this for. For simplicity’s sake, I build a universal component for 64-bit and 32-bit platforms. You’ll need to decide for yourself and make the changes accordingly. Here’s how.

Select your project in the left-most pane of Xcode. It will probably be right at the top of the pane and list how many Targets you have.

Selecting the Xcode Project

Once you’ve selected your target, you can make modifications to your build settings. Of particular interest are Architectures and Build Active Architecture Only. If you want to test both the 32 and 64-bit versions, you’ll need to set the following:

  • Architectures – Standard (32/64-bit Intel)
  • Build Active Architecture Only – No (for both Debug and Release – see below)

This is a point of confusion for many people. Basically, this is where you’re telling the compiler which versions to build. By default, the Debug setting is Yes, which means that on any machine sold today, it will build only the 64-bit version, since all the machines now are 64-bit. The problem with that is that AU Lab runs, by default, in 32-bit mode, so you won’t ever see your Audio Unit in the list.

Now, let’s modify some files…

Changing the Files

I’m just going to list the files to change along with the changes to make.

CV.h

Locate the following lines and remove them.

#if AU_DEBUG_DISPATCHER
	#include "AUDebugDispatcher.h"
#endif

CV.cpp

Locate the following lines and remove them.

#if AU_DEBUG_DISPATCHER
	mDebugDispatcher = new AUDebugDispatcher (this);
#endif

Locate the following line:

COMPONENT_ENTRY(CV)

and change it to:

AUDIOCOMPONENT_ENTRY(AUBaseFactory, CV)

This is one piece of a fairly significant change that Apple is making with respect to Audio Units and the move away from the Component Manager.

Locate this line in your GetProperty method:

CFBundleRef bundle = CFBundleGetBundleWithIdentifier( CFSTR("com.audiounit.CV") );

This needs to reflect the bundle identifier (typically your Company Identifier + ‘.audiounit.’ + project name) of your target. For me it is com.squishycat.audiounit.CV. This is typically set when you create the project and specify the name. So, change that line to something like…

CFBundleRef bundle = CFBundleGetBundleWithIdentifier( CFSTR("com.yourcompany.audiounit.CV") );

CV.exp

Add the line:

_CVFactory

Be sure to put a carriage return at the end of the line to avoid syntax warnings.

CV-Info.plist

You need to add a set of properties that used to belong in CV.r. Basically, you just add these just before the last </dict> tag. Obviously, put your own company and text in where you see SquishyCat, etc. Take a look at TechNote TN2276 for more information on this change.

	<key>AudioComponents</key>
	<array>
	<dict>
	<key>description</key>
	<string>SquishyCat Cocoa View</string>
	<key>factoryFunction</key>
	<string>CVFactory</string>
	<key>manufacturer</key>
	<string>SquishyCat</string>
	<key>name</key>
	<string>SquishyCat: Cocoa View Test</string>
	<key>subtype</key>
	<string>Pass</string>
	<key>type</key>
	<string>aufx</string>
	<key>version</key>
	<string>65536</string>
	</dict>
	</array>

CVVersion.h

Modify CV_COMP_MANF to your 4-digit company code (get that from Apple).
Modify CV_COMP_SUBTYPE to something more appropriate for your AU (see Apple’s documentation on Component Subtypes).

Finally, Build That Sucker!!!

HOLD ON!!! HOLD ON!!! Before slamming those fingers down on Command-B, make sure that your Scheme is set to compile your Component, not just the View bundle. For some reason, every time I set one of these up, Xcode defaults to wanting to build the ViewFactory rather than the actual component. Since the component is set to have a dependency on the View, it’ll force the view to be built automagically.

Once you do that, see if you can’t successfully build the project. Once it’s built, move the component (just the component, you don’t need to manually move the View bundle) to either /Library/Audio/Plug-ins/Components or ~/Library/Audio/Plug-ins/Components, then start up AULab and see if your new component with Cocoa View shows up.

Apr 14 2011
 

Setting expectations

Just to set expectations, I have never developed Audio Units before, so this is a first time thing for me. All I’m trying to do is share my experiences as an enthusiastic, though not formally trained, developer.

Also, while the information that Apple provides for Audio Unit development is excellent, it is a bit out of date with respect to the tools usage and getting started as a new developer on both the platform and Audio Units.

The tools you’ll be using are all included in Xcode 4.

  • The Xcode IDE
  • AU Lab, located in /Developer/Applications/Audio
  • AUVal, a command line tool

So grab your copy of Xcode 4 over at the Apple Mac OS X Developer Center, and let’s get going!

Xcode 4 and the Audio Unit project templates

Follow these steps to setup your initial project.

  1. Open Xcode 4
  2. Select File -> New -> New Project… from the menubar.
  3. In the “Choose a template for your new project” sheet, select System Plug-in in the category list on the left
  4. Choose Audio Unit Effect as a template type on the right and click Next
  5. In the next page, name your Audio Unit and add your Company Identifier in the form of com.mycompany. Do Not select With Cocoa View at this point. Click Next
  6. Select the folder into which you will save your project via the standard save sheet and you’re good to go.

At this point, you have a basic, compilable Audio Unit project. Feel free to select Product -> Build and make sure that you don’t have any errors. I’m assuming you won’t, because I’ve walked through this several times and haven’t run into errors yet.

This basic Audio Unit has the following parameters that will be important a little bit later.

TYPE
aufx – This is the type specifier for a basic effects unit, as opposed to one that takes MIDI input. These are defined by Apple in AUComponent.h in the AudioUnit.framework. This value is set in your MyPlugin.r resource file.
SUBTYPE
Pass – This is a further, more-specific specifier. This particular one is user defined. There are some already defined in the AUComponent.h header, and you should follow those whenever possible. However, you are free to define your own. This value is set in your MyPluginVersion.h header.
MANF
Demo – This is the default Manufacturer ID for the template. You should replace it with your own creator code, when you get it from Apple. http://developer.apple.com/support/mac/creator-code-registration.html This value is set in your MyPluginVersion.h header.
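Those three codes (aufx, Pass, Demo) are nothing magical: each is just four ASCII characters packed big-endian into a 32-bit integer, which is what the system compares when matching components. A quick sketch in plain shell, nothing AU-specific (the `"'c"` trick asks printf for a character’s numeric value):

```shell
# Pack a four-char code into the 32-bit integer the system compares.
# "'c" as a printf operand yields the character's numeric value (POSIX).
fourcc() {
    a=$(printf '%d' "'$(echo "$1" | cut -c1)")
    b=$(printf '%d' "'$(echo "$1" | cut -c2)")
    c=$(printf '%d' "'$(echo "$1" | cut -c3)")
    d=$(printf '%d' "'$(echo "$1" | cut -c4)")
    printf '0x%08X\n' $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
fourcc aufx   # 'a'=0x61 'u'=0x75 'f'=0x66 'x'=0x78 -> 0x61756678
```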

Validating that your Audio Unit works

Step One: Copy your MyPlugin.component to the appropriate location

In order for Mac OS X to use your component, you need to copy it to one of the following locations:

  1. ~/Library/Audio/Plug-Ins/Components
  2. /Library/Audio/Plug-Ins/Components

Either will work, but for now, just copy it to the first location. If you happen to be unfamiliar with the ~ syntax, it just means your home folder. So, in this case, you’d copy your MyPlugin.component to the /Users/yourusername/Library/Audio/Plug-Ins/Components folder.

But wait!!! Where is my Audio Unit to begin with???

Good question. It’s actually changed since the last version of Xcode. The easiest way to find it is to look in Xcode’s Project Navigator pane. Find the folder named Products. Inside it, you will see your plugin. Control-click it and select Show in Finder.
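If you’d rather do the copy from Terminal, here’s a sketch. Both the build path and the product name are assumptions from the default Xcode 4 Debug build and the template name — adjust both to match your project:

```shell
# Sketch only: SRC assumes Xcode 4's default Debug output folder and
# the template product name "MyPlugin" -- change both to suit.
SRC="build/Debug/MyPlugin.component"
DEST="$HOME/Library/Audio/Plug-Ins/Components"
mkdir -p "$DEST"
if [ -d "$SRC" ]; then
    cp -R "$SRC" "$DEST"/ && echo "Installed MyPlugin.component"
else
    echo "No component at $SRC - build it first (Product -> Build)"
fi
```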

Step Two: Run auval to validate your plug-in.

Open Terminal.app and type the following:

auval -v aufx Pass Demo

Remember those parameters (aufx, Pass, and Demo) we talked about earlier? Do they look familiar? They’re what we’re using to specify which Audio Unit to validate.
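If you ever forget a triple, auval can list every installed Audio Unit with its type/subtype/manufacturer codes via the -a flag. The guard below is just so the snippet doesn’t blow up on a machine without the CoreAudio tools:

```shell
# -a asks auval to list all installed Audio Units with their
# type/subtype/manufacturer triples (macOS only, hence the guard).
if command -v auval >/dev/null 2>&1; then
    auval -a
else
    echo "auval not found - run this on your Mac"
fi
```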

Why don’t AU Lab and AUVal see my plug-in?

If you’re doing this on one of the 64-bit Macs out there, as I am, you’re likely to run into a problem getting your just-compiled Audio Unit to show up in AU Lab.

In fact, when you run AUVal from the command line, you’re probably seeing something like this:

MachineName:~ username$ auval -v aufx Pass Demo

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
       AU Validation Tool
       Version: 1.6.1a1
        Copyright 2003-2007, Apple, Inc. All Rights Reserved.

       Specify -h (-help) for command options
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

--------------------------------------------------
VALIDATING AUDIO UNIT: 'aufx' - 'Pass' - 'Demo'
--------------------------------------------------
ERROR: Cannot get Component's Name strings
ERROR: Error from retrieving Component Version: -50

* * FAIL
--------------------------------------------------
TESTING OPEN TIMES:
FATAL ERROR: didn't find the component
MachineName:~ username$

So, what’s the deal?

Well, it’s really simple. By default, Xcode is compiling for your machine in 64-bit. Auval, by default, checks the list of 32-bit Audio Units by default. So, to check that your 64-bit version is good to go, use the following, instead:

auval -64 -v aufx Pass Demo

Now you should see something different. It should look like this:

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
       AU Validation Tool
       Version: 1.6.1a1
        Copyright 2003-2007, Apple, Inc. All Rights Reserved.

       Specify -h (-help) for command options
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

--------------------------------------------------
VALIDATING AUDIO UNIT: 'aufx' - 'Pass' - 'Demo'
--------------------------------------------------
Manufacturer String: __MyCompanyName__
AudioUnit name: MyPlugin
Component Version: 1.0.0 (0x10000)

* * PASS
--------------------------------------------------
TESTING OPEN TIMES:
COLD:
Time to open AudioUnit:         0.537 ms
WARM:
Time to open AudioUnit:         0.016  ms
ERROR: Component Version mismatch: Res Vers = 0x10000, Comp Vers = 0xFFFFFFFF
FIRST TIME:
Time for initialization:        0.007 ms

* * FAIL
--------------------------------------------------
VERIFYING DEFAULT SCOPE FORMATS:
Input Scope Bus Configuration:
 Default Bus Count:1
    Default Format: AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved

Output Scope Bus Configuration:
 Default Bus Count:1
    Default Format: AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved

* * PASS
--------------------------------------------------
VERIFYING REQUIRED PROPERTIES:
  VERIFYING PROPERTY: Sample Rate
    PASS
  VERIFYING PROPERTY: Stream Format
    PASS
  VERIFYING PROPERTY: Maximum Frames Per Slice
    PASS
  VERIFYING PROPERTY: Last Render Error
    PASS

* * PASS
--------------------------------------------------
VERIFYING RECOMMENDED PROPERTIES:
  VERIFYING PROPERTY: Latency
    PASS
  VERIFYING PROPERTY: Tail Time
WARNING: Recommended Property is not supported

  VERIFYING PROPERTY: Bypass Effect
    PASS

* * PASS
--------------------------------------------------
VERIFYING OPTIONAL PROPERTIES:
  VERIFYING PROPERTY Host Callbacks
    PASS

* * PASS
--------------------------------------------------
VERIFYING SPECIAL PROPERTIES:

VERIFYING CUSTOM UI
Cocoa Views Available: 0

VERIFYING CLASS INFO
    PASS

TESTING HOST CALLBACKS
    PASS

* * PASS
--------------------------------------------------
PUBLISHED PARAMETER INFO:

# # # 1 Global Scope Parameters:
Parameter ID:0
Name: Parameter One
Parameter Type: Linear Gain
Values: Minimum = 0.000000, Default = 0.500000, Maximum = 1.000000
Flags: Readable, Writable
  -parameter PASS

Testing that parameters retain value across reset and initialization
  PASS

* * PASS
--------------------------------------------------
FORMAT TESTS:

Reported Channel Capabilities (implicit):
      [-1, -1]

Input/Output Channel Handling:
1-1   1-2   1-4   1-5   1-6   1-7   1-8   2-2   2-4   2-5   2-6   2-7   2-8   4-4   4-5   5-5   6-6   7-7   8-8
X                                         X                                   X           X     X     X     X     

* * PASS
--------------------------------------------------
RENDER TESTS:
Input Format: AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
Output Format: AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
Render Test at 512 frames
Slicing Render Test at 64 frames
  PASS

Render Test at 64 frames, sample rate: 22050 Hz
Render Test at 137 frames, sample rate: 96000 Hz
Render Test at 4096 frames, sample rate: 48000 Hz
Render Test at 4096 frames, sample rate: 192000 Hz
Render Test at 4096 frames, sample rate: 11025 Hz
Render Test at 512 frames, sample rate: 44100 Hz
  PASS

1 Channel Test:
In and Out Format: AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
Render Test at 512 frames
  PASS

Checking connection semantics:
Connection format:
AudioStreamBasicDescription:  2 ch,  44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
  PASS

Bad Max Frames - Render should fail
  PASS

Checking parameter setting
Using AudioUnitSetParameter
Using AudioUnitScheduleParameter
  PASS

* * PASS
--------------------------------------------------
AU VALIDATION SUCCEEDED.
--------------------------------------------------

You may notice that one section of the validation failed. Don’t worry, that’s just a versioning mismatch due to compiling for debugging rather than release. Also, we’ll address the need for building for both 32-bit and 64-bit later.
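Incidentally, the packed hex version that auval reports is easy to decode yourself. This sketch assumes the usual CoreAudio packing — major version in the high 16 bits, minor in the next byte, bug-fix in the low byte — which is consistent with the “1.0.0 (0x10000)” line in the output above:

```shell
# Decode a packed Audio Unit version number, e.g. 0x10000 -> 1.0.0.
# Assumes major in the high 16 bits, then minor, then bug-fix.
decode_version() {
    v=$(( $1 ))
    printf '%d.%d.%d\n' $(( (v >> 16) & 0xFFFF )) \
                        $(( (v >> 8) & 0xFF )) \
                        $(( v & 0xFF ))
}
decode_version 0x10000   # -> 1.0.0
```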

Setting up the tools

Now comes the fun part. You can actually see your Audio Unit working inside a host! Okay, it doesn’t do anything, but this way you’ll know that when you come up with that amazing algorithm, you’ll be able to get your DAW to use it.

So, there’s a little utility app that Apple ships with Xcode called AU Lab. It’s located in /Developer/Applications/Audio. It allows you to play existing audio or process incoming audio from your audio interface. It’s pretty straightforward to use, so go ahead and get a feel for it before the next step,

There’s just one catch with AU Lab. Just like with auval, AU Lab runs in 32-bit by default. This is because most existing Audio Unit development is in 32-bit, so it makes sense, I guess.

Anyway, if you’re running on a 64-bit Mac and have compiled your Audio Unit to be 64-bit, you’ll need to tweak AU Lab a little in order to see your Audio Unit.

The first method is to find the AU Lab app in the Finder, highlight it, select File -> Get Info, and deselect Open in 32-bit mode.

While this method is okay, it’s a bit of a pain when you need to check two different versions regularly. So, I cooked up a different way to do this. Initially, I did this with a shell script, but the extra terminal window annoyed me, so I moved it to an AppleScript. Here’s what you do…

  1. Open AppleScript Editor in /Applications/Utilities
  2. Enter the following into the script window:
    do shell script "arch -x86_64 \"/Developer/Applications/Audio/AU Lab.app/Contents/MacOS/AU Lab\" > /dev/null 2>&1 &"
  3. Select File -> Save As…
    1. Name it whatever you like. I used AU Lab 64-bit
    2. In the File Format: dropdown, select Application
    3. In the Options: checkboxes, only select Run Only

Now, when you double click on this file, it will run AU Lab in 64-bit mode. I also create a regular link to AU Lab so I can easily start up AU Lab in 32-bit mode from the same place. Easy peasy.
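For reference, the standalone shell-script variant I started with looks something like this. The path assumes the Xcode 4 layout; the guard just produces a clear message instead of a cryptic failure if AU Lab lives somewhere else on your system:

```shell
#!/bin/sh
# Launch the universal AU Lab binary forced into x86_64 mode.
# Path assumes the Xcode 4 install location.
AULAB="/Developer/Applications/Audio/AU Lab.app/Contents/MacOS/AU Lab"
if [ -x "$AULAB" ]; then
    arch -x86_64 "$AULAB" > /dev/null 2>&1 &
else
    echo "AU Lab not found at: $AULAB"
fi
```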

Finally, now that you’re running AU Lab in the mode that you compiled with, you should be able to see your Audio Unit plug-in in the effects dropdown. It will probably still say something like __MyCompanyName__, but you’ll change that later.

Wrapping up this session

So, you’ve built your first Audio Unit in Xcode and installed it. I’ll be adding some more articles in the coming weeks detailing more discoveries and adjustments that we can make to our Audio Unit, so please stay tuned…