Friday, May 30, 2008

Musical Programming Languages

Musical programming languages are a strange breed, balancing flexibility against usability in the computer music domain. They actually have a long history (in the context of programming in general), and recently they've been seeing a surge of popularity in the arts community. C++, we'd all agree, carries very little bias toward any particular use, while CSound is obviously not a good language for checking your e-mail. C++ may be perfectly capable of producing the same results as a CSound program, but the knowledge required to get there is vastly different.

The major (read: most popular) open-source audio programming languages are PD (Pure Data), CSound, SuperCollider, and ChucK. Each of these languages has its own strengths and weaknesses, and some are easier to learn than others. For most musicians PD will be the most appealing, with the smoothest learning curve, as it is the only graphical language of the four. ChucK and SuperCollider are much closer to "real" programming languages in their syntax. CSound is the oldest of the four, with roots back in the days when you'd write a computer score, set it to compile, and come back three days later to hear your audio file. Because of this, CSound is a very powerful synthesis engine that was written with computational resources in mind.

Many musicians will never need to learn a music programming language, but more and more artists are turning to programming languages for the extra flexibility their projects require. A few possible applications include: using a Wii controller to adjust a synthesizer's settings; building your own ultimate modular synthesizer; altering a live signal with a complex chain of adjustments, each of which is manipulated on the fly; triggering effects on/off based on the frequency of the input signal; building audio effects not otherwise available; generating algorithmic melodies, rhythms, etc.; extending the functionality of your favourite program; creating your own custom-designed software; or working with homemade hardware controllers (Linux hackers get excited about this stuff).

These are all good reasons for using audio programming languages, but there are some common pitfalls in creating these custom audio apps/plugins/tools. The first and most obvious is that the time spent coding a tool quite often overshadows the usefulness of the result. Many gear-heads fall into this trap when they first learn to program in PD: they realize that nearly anything is possible, so they try to do everything. You could argue this is comparable to the amount of time/money gear-heads spend buying hardware relative to their actual musical output. The second is that most musicians aren't computer programmers. This may sound like an obvious statement, but it's also a serious problem in music programming languages. The software that musicians develop is more often than not VERY buggy, inefficient, half-finished, unportable, undocumented (not even comments in the code), and essentially broken in design. Because of this, nearly all of the software musicians write becomes a one-off, non-distributed tool that even the creator soon refuses to use (often after the first performance) - in the software world these are called "dead projects" and are seen as massive failures. Even the software some of my university professors create - professors who get grants to write it - is prone to these pitfalls. It comes down to musicians thinking like musicians rather than programmers, though some musicians will claim they make more musical tools that way. There are exceptions to these faults, but they're unfortunately exceptions and not the norm.

Now that you're aware of these pitfalls, you're one step ahead of the average musical programmer, and you can avoid them in your own code; right? Okay, then I guess I can teach you a bit of programming now.

PD is the easiest for a beginner to stomach because of both its visual element and its patch-cord interface that resembles musical hardware routing. I wrote a quick beginner's tutorial for PD a while back that has been incorporated into the Ubuntu Community Documentation here: https://help.ubuntu.com/community/HowToPureDataIntroduction and I don't want to repeat myself.

#####********* IF YOU KNOW NOTHING ABOUT PD, STOP NOW AND GO READ THAT LINK ********######
At the end of that tutorial I promised some further examples, so I guess I should deliver. The first thing I'll touch on is the ability to create an abstraction. This is thoroughly explained in the PD documentation, so if you're lost after this, go take a read there. Essentially, an abstraction allows you to package PD code into a custom-named PD object for use in other patches. This feature is invaluable for PD programmers. Without abstractions, code would become a rat's nest of patch cords quite fast, and each patch would be a one-off code snippet - preventing any real software development. Abstractions let the programmer clean up the code, easily re-use code snippets, and speed up the coding process.

By encasing a mundane (yet laborious) set of instructions in a single abstraction, any time those instructions are needed you can simply and elegantly create a new object with the abstraction's name. To create an abstraction, save your PD patch as examplepatch.pd, then open 'Path...' from the File menu and add the folder your abstraction was saved to (you may want to save the settings). Then open a new patch and create an object named examplepatch. To see the original code, right-click on the new object and select Open.

You can send data to and from these abstractions in two different ways. The first is to use "inlet", "inlet~", "outlet", and "outlet~" objects in the abstraction's code. These let you connect patch cords to and from your abstraction; this method is simple and best for beginners. The second method involves the "send", "receive", "throw~", and "catch~" objects. These each take a name as an argument that points to the matching send/receive/throw~/catch~ object giving or receiving the data. The upside of the send/receive method is its flexibility, whereas the downside is that it lets you lose track of signal flow quite easily - particularly when multiple signals are going to the same place or vice versa.
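
To make this concrete, here's roughly what such an abstraction looks like as plain text on disk - a hypothetical examplepatch.pd implementing a simple gain stage (signal in the left inlet, a gain value in the right inlet, scaled signal out). PD patches are just text files, though you'd normally build this by placing objects in the editor rather than typing it:

    #N canvas 100 100 450 300 10;
    #X obj 60 40 inlet~;
    #X obj 170 40 inlet;
    #X obj 60 110 *~;
    #X obj 60 180 outlet~;
    #X text 130 113 right inlet sets the gain;
    #X connect 0 0 2 0;
    #X connect 1 0 2 1;
    #X connect 2 0 3 0;

The connect lines wire the inlet~ and inlet into the left and right inlets of *~, and the *~ output into the outlet~. Save that somewhere in your Path settings and an object box named examplepatch will load it.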

This encapsulation of a code snippet is an essential element of the object-oriented programming style, a very prominent approach in modern computer programming. I'm no computer scientist, so I won't attempt a definition for you here, but I will tell you that PD is designed around the OOP style. By coding a snippet such as a soundfile looper, then saving it as an abstraction, you'll be able to reuse that code chunk to make life easier when you go to build your ultimate sampler, and again when you want to incorporate sampling into a custom-built guitar processor. Abstractions can also be nested inside one another, allowing very complex patches to be organized very efficiently.
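
To show the reuse in action, here's a hypothetical parent patch that instantiates examplepatch twice - one gain stage per oscillator - without duplicating any of the internal wiring (again, you'd normally patch this graphically):

    #N canvas 100 100 450 350 10;
    #X obj 60 40 osc~ 440;
    #X obj 200 40 osc~ 660;
    #X msg 330 40 0.2;
    #X obj 60 110 examplepatch;
    #X obj 200 110 examplepatch;
    #X obj 60 190 dac~;
    #X text 330 70 click to set both gains;
    #X connect 0 0 3 0;
    #X connect 1 0 4 0;
    #X connect 2 0 3 1;
    #X connect 2 0 4 1;
    #X connect 3 0 5 0;
    #X connect 4 0 5 1;

Clicking the 0.2 message box sets the gain on both copies at once, and if you ever improve examplepatch.pd, every patch that uses it gets the improvement for free.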

I'll end by saying that PD is a wonderful beginner's programming language that can accomplish quite a bit, but it does have its limitations. For a VERY detailed read on PD, I'd recommend the new book by Miller Puckette (the man who wrote PD), The Theory and Technique of Electronic Music, available as a free download online. The other three music programming languages I mentioned at the top of this post are also worth exploring for those with a penchant for flexibility and power (my personal favourite is ChucK).

Wednesday, May 14, 2008

Compression for beginners

Dynamics processing is a dangerous but essential beast in the audio world. It is also one of the most important effects in the mastering process of a song. But before we begin, I must refer you to the problem with over-compression:

Ear-fatigue and excessive loudness as described by turnmeup.org

Dynamics processing (compression/limiting/expansion/gating, etc.) is an essential tool in getting the right mix of sound, but if abused it can totally ruin the listenability of a track or album. Don't think that louder is always better; monitor your songs at different volumes (via your amplifier's volume control) before deciding on your final settings.

Ok, so now that that's out of the way, I can introduce the wonderful things dynamics processing can do for you. The most common is to increase the presence of the dynamic middle of a performance: by taming the loudest peaks, the quieter material can be brought forward.

I once heard a compressor explained as a man sitting at a volume knob, automatically adjusting the signal's gain based on how loud the incoming signal is. This is a neat little visual that helps beginners understand some of the settings.

Compression is the most commonly used form of dynamics processing. In the LADSPA set, there are a number of compression plugins: SC4 (as well as the other SC numbers), SE4, TAP Dynamics, Dyson Compressor, C* Compress, and Simple Compressor. There are also the compressors inside JAMin, which operate on specific frequency bands, along with its many other mastering tools. For now we'll use SC4, by Steve Harris, for the example, as JAMin's tools deserve a full post of their own. Here's a look at SC4 hosted in Ardour:

Compressors track the amplitude (either peak sample level or RMS level) of the incoming signal and adjust the output volume according to the plugin's settings. Quite often a compressor's settings are pictured as a dynamics graph. Unfortunately, no LADSPA plugin does this (I don't think it's even possible within the LADSPA spec), which is a shame, as the graph is quite helpful for beginners trying to understand what's happening, and for novices checking their settings. However, JAMin does have some nice graphs for its multiband compressor (see below):

The graphs can be read with the incoming level on the x-axis and the corresponding output level on the y-axis (the thick black horizontal line is 0 dB). The red graph shows no change to the sound, while the other two have similar characteristics (though the green has more makeup gain and a sharper knee).

The most important setting on a compressor is the threshold. This sets the level (in dB) at which the compressor begins lowering the gain. The red graph has a threshold of 0 dB and therefore is never triggered to begin turning the volume down (though other settings also need to be set properly for this to truly occur). The threshold can be seen most clearly on the green graph, though the same threshold setting was used on the blue one. It is essentially the corner point where the two line segments of the gain curve meet.

The knee setting on compressors can be seen best in the blue graph. It softens the corner between the two segments. SC4 labels its knee as a radius in dB, which you can visualize better if you imagine a full circle nestled into the threshold corner as tightly as it can go. A softer knee (i.e. a larger knee radius) will effectively give you a decrease in volume (more compression) around the threshold area and an overall smoother curve to your compression (but less precision at the threshold point).

The ratio could also be argued to be the most important setting on a compressor; it determines how strongly signals above the threshold are compressed. A ratio of 1:1 would nullify the compressor - what comes in is what goes out (the red graph could be achieved this way, though I did it with a threshold of 0 dB). A ratio of 2:1 means that for every 2 dB increase above the threshold in the incoming signal, the output increases by only 1 dB. The ratio on the green slope is about 4:1. Graphically, the ratio is the inverse of the slope (2:1 has a run of 2 and a rise of 1, and slope is rise over run, so 1/2).
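
If you like formulas, the basic hard-knee compression curve is easy to write down (this is the textbook static curve, not necessarily SC4's exact internals). For an input level $x$ in dB, threshold $T$, and ratio $R$:

    $y = x$ if $x \le T$, and $y = T + \frac{x - T}{R}$ if $x > T$

So with $T = -20$ dB and $R = 4{:}1$, an input peak at $-8$ dB comes out at $-20 + 12/4 = -17$ dB (9 dB of gain reduction), while anything below $-20$ dB passes through untouched. A soft knee simply rounds off the corner of this curve around $T$.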

So far all the compressor has done is turn down the volume during the loud bits, which makes the overall mix quieter. The makeup gain is there to boost the signal back up to a healthy level. A good rule of thumb is to aim for a similar volume level coming out as you had going in (though not exactly); this helps the ear judge what the compressor has actually done to the sound, as music is perceived differently at different volumes.

Attack and release times are the speed at which the little man inside the compressor reacts to the incoming signal when changing the gain knob (attack being how fast he turns it down once a loud signal is sensed, release being how fast he turns it back up afterwards). These settings allow fine-tuned control over the sound, such as letting attack transients through. Attack settings will limit the amount of makeup gain you can apply without your attack transients clipping, so watch out. Longer release settings suit instruments that have a slow decay of their own.
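
In digital compressors, these times usually boil down to a smoothing filter on the detected level. A common textbook implementation (I'm not claiming this is what SC4 does internally) is a one-pole follower:

    $env[n] = \alpha \cdot env[n-1] + (1 - \alpha) \cdot level[n]$, where $\alpha = e^{-1/(f_s \tau)}$

Here $f_s$ is the sample rate and $\tau$ is the attack time (used while the level is rising) or the release time (used while it's falling), in seconds. A longer time pushes $\alpha$ closer to 1, which is exactly the little man turning his knob more slowly.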

The SC4 compressor has a nice RMS/peak mix feature, which lets you blend between RMS and peak envelope following. RMS generally gives a more natural-sounding compression, as its values are closer to what the human ear perceives as loudness. Peak values are more useful if you want to tame sharp spikes in the sound source.
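
For the curious, over a short window of $N$ samples the two detectors are just:

    $L_{peak} = \max_n |x[n]|$ and $L_{RMS} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} x[n]^2}$

A brief click barely moves the RMS average but spikes the peak value immediately - which is why peak detection is better at catching transients, while RMS tracks perceived loudness.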

Last but not least is the sidechain input found on some compressors (such as SC2 and SC3). This lets you control the gain of a particular track (just as you would in any other compressor) but with a separate input source driving the gain reduction (i.e. the little man is listening to an entirely different track - the sidechain input - and adjusting the gain accordingly, but he's affecting the track he's not paying attention to). This effect is most commonly used in electronica where the kick drum needs to be very prominent: the bass line is compressed with the kick drum as the sidechain input, allowing the kick to stand out as the most prominent low-end instrument while the bass line momentarily ducks out of its way.

This covers the basics of what compressors do, but by no means explains how to use them effectively (I shudder to think of the over-compressed songs that will now be created). The only parting words I can offer are that less is more... LESS IS MORE. Don't kill your dynamic range just to make things sound louder - your listener will just turn the track down, and all you've done is squash the waveform to a pulp. Play around, and when you think you've got it right, back the settings off a bit; less is more.

Saturday, May 10, 2008

Netlabels

So my blog's subtitle is vague enough that I can safely stretch the subject matter of my posts every now and then. Today, the open-source phenomenon of a netlabel is my subject.

The netlabel is an online record label. Quite often netlabels give their music away for free, and they could be seen more as promotional tools than money-making businesses. The most common type is the Creative Commons-licensed netlabel, as the freedoms of the Creative Commons licenses allow for easy distribution of artists' music.

The Internet Archive is a wonderful resource for netlabels; it catalogs and backs up their offerings on a regular basis (some even use it as a hosting service). You can browse the netlabel section of the IA at the main netlabel page: http://www.archive.org/details/netlabels though its services are only available to Creative Commons-licensed labels (for obvious legal reasons). The listing also gives a nice means of estimating the growth of the netlabel trend: the number of "sub-collections" translates to the number of labels, and the number of "items" is the number of albums (of any length) released. As of this writing there are 896 sub-collections and 12,788 items; I read an article written just over three years ago that claimed the numbers were 170 and 3,000-ish - which gives a bit of perspective on the growth rate of the netlabel phenomenon.

The benefit of Creative Commons music freely available online is huge. Radio producers can use it for backing tracks in commercials, filmmakers can use it in soundtracks, etc., without worrying about the royalties and legal issues normally involved. These songs are also available for other artists to mine as a vast array of sampling options in their own music. Of course, there are various versions of the Creative Commons licenses, and not all allow commercial use of a song without the artist's consent. But most importantly, music is becoming a legally shared commodity, and artists are realizing that money isn't why they do this - many would be happier reaching thousands of people than earning $20 from selling thousands of iTunes downloads (just an exaggeration, not actual earnings numbers). In my personal opinion, art should be open for people to hear/see/experience, though I also realize that financial restrictions are a part of life.

A few labels that I personally enjoy and listen to regularly (your tastes may vary) are:

Sutemos - a somewhat avant garde, but high-quality, electronica label from Lithuania.

20kbps - a genre-defying label whose only acceptance requirement is that every song be encoded at 20kbps or less. I don't enjoy all of their stuff, but some is brilliant (this may just be the computer-audio nerd in me appreciating the worship of lo-fi).

CCMixter - probably the most famous netlabel (Soulseek Records might also take that title); it's full of various genres and quality artists. Its major interest is not in releasing an end product, but rather in the continuing evolution of its projects and songs.

Clinical Archives - an avant garde, experimental label, open to all forms of sound art. They have a fairly extensive collection from many talented experimenters.

EVIL Records - an electronica label based in Spain. They don't have many releases, but they're fairly good.

Sunday, May 4, 2008

Rhythms with Hydrogen

So today I'd like to run through the features of my favourite drum machine. Okay, so it's not actually a machine, but rather a piece of software that slaughters any drum machine I've ever seen.

Hydrogen is a well-developed software drum kit for Linux (I'm running version 0.9.3). It features a fully configurable pattern editor, a song editor, a mixer, an instrument editor, and a drumkit manager. The pattern editor and song editor are both fairly straightforward and fully configurable. Beginners should take note of the song editor's select mode, which can be toggled with the buttons above the pattern names; it allows for much quicker editing. The pattern editor lets you either enter a pattern manually or record one with a MIDI device or Hydrogen's built-in keyboard bindings.

The mixer has a couple of nice features: humanize controls for both velocity and timing, as well as four LADSPA effect inserts with sends from each instrument channel. The instrument editor is where I think Hydrogen shines. It allows up to 16 layers of samples per drum, and contains an ADSR envelope control, a lowpass filter with adjustable resonance, a random-pitch percentage, and a manual pitch control - FOR EACH INSTRUMENT. Finally, once you've tweaked all these settings to perfection, you can save them as a drumkit. The drumkit manager allows easy switching between saved or downloaded drumkits (see Hydrogen's website for free downloadable kits).

One of the few options I wish were available is relative tempo changes; currently the only solution is to manually program the slower/faster sections of a song to fit onto a pattern that is internally still running at the global tempo. This is a bit of a hack that I'd rather not have to bother with.

All this is nice, but it takes more than software to make good rhythms - you need to know a bit about anticipation.

Musically, any event that's repeated three (or more) times will be expected by the listener to keep repeating. This even applies to non-audible pulses. One of the key features of any good beat is how it plays with this anticipation. Some beats emphasize the anticipation and build up toward the expected repetition; others use the anticipation to throw the listener off (usually considered a break-beat if the disruption itself is repeated as part of the rhythm).

I should note that three is not a common pattern size in most music; the average song uses four (duh). This is called duple meter (things break down easily into twos), and usually the overall structure of the bar phrases, song sections, and micro-rhythms will relate to the duple. I've heard it explained that three is the minimum number of repetitions to set up a pattern, and five starts to get too long, so four is just right.

When writing your rhythms, consider where your listener's micro, macro, and normal anticipations are being drawn. If you've been focusing on downbeats and suddenly switch to an upbeat-focused pattern, you'll be messing with their heads a bit (it's kinda fun to mess with people's heads sometimes, but they'll get pissed off if it happens too much or too unpredictably). Overall, try to find a balance between the predictable and the unexpected so that everything remains interesting.

Friday, May 2, 2008

UbuntuStudio Sound Servers

Sound servers in Ubuntu have changed for the Hardy release, and it's confusing some people. This is a post I made on the Ubuntu Forums to explain the current situation a little; I thought it deserved to be posted to this blog, as it may help people who are just getting started, or who are freshly confused by this new PulseAudio thing:

Well the simple answer to most of your questions is "variety is the spice of life".

When entering the field of audio recording, there are those who want to talk to their grandson over VoIP/Skype, and those who want to master a CD of their choir/orchestra/rock band recording; oh, and EVERYONE in between. There CAN'T be a single-button solution.

Unix/Linux kinda takes pride in the fact that there are so many versions of the systems available at nearly every rung of the software ladder (or attempts at such). The fact that all of these various systems exist gives programmers a choice of design and execution method. Some are more robust, some are faster, some are simpler, and some are dead/dying.

- ALSA, for example, is Ubuntu's default sound driver. It makes things ding when you log in, plays movies, etc. For most people, this will suffice as their sound system.
- OSS is not used by default by many (if any) audio apps in Ubuntu - I'm not really sure about the exact accuracy of this, but I know it's generally true - but it's there as a legacy system that ALSA is nearly entirely compatible with (I'm no developer, just a user).
- Jack is the server best suited for rich and hearty audio WORK with your computer. Think of it as a really great... (insert excellent hot rod analogy car here), where the one who works with that car(d) can tune it just how they need to make it purr.
- PulseAudio is the new guy on the block for Ubuntu, which allows communication with the same soundcard from various other sound servers at the same time (i.e. Ardour can still be running on Jack while you pause to listen to a Firefox YouTube video; this wasn't possible before, as Jack's complexity {/* developers, please read as 'robust' */} kept general app developers from adopting it as an output choice in their apps). Think of it as a really crazy multi-adapter for all your sound plugs in the virtual world.


I also did a bit of digging for the original thread starter and learned what a mux was from Wikipedia: http://en.wikipedia.org/wiki/Multiplexer and now I know where that most excellent local electronic music producer gets his ever-so-appropriate moniker.

Thursday, May 1, 2008

My working methods

To start things off, I'd like to give a quick rundown of my usual working method with audio. For the most part I use sound samples and recordings as my means of composition (call it musique concrète-style work), though I'm getting further and further into synthesis techniques every day now. I also regularly write acoustic music using LilyPond (no GUI, just hard-coding straight into LilyPond), but even then I'm inclined to sample/quote other music in my creation process. I personally think my inclination for twisting samples stems from my DJ roots.

To record things I generally use either Ardour or Rezound, though there are tons of different recording apps loaded on my computer (not to mention that the audio programming languages can all be used to record). If I just want an audio file as a sample, Rezound is much faster and simpler to use; but if I know the recording is going to end up in an Ardour session anyway (such as when I'm recording soundscapes, etc.), then I'll record straight into Ardour.

Rezound's editing capabilities are quite nice. It gives remarkable feedback for clipped samples (a bright pink bar against the blue/green background); even if just one sample is clipped, it will be visible at any zoom ratio. During recording, its dialog gives a count of the number of clipped samples as well as a very responsive volume meter. It also comes with many essential wave-editor functions such as mix paste, resample, curved gain, LADSPA plugin support, various looping playback modes, and a bunch of built-in effects, such as a convolution filter and a morphing arbitrary FIR filter (essentially a parametric EQ that can change over time). These don't quite offer the flexibility that Ardour's LADSPA features do, but for a wave editor I'm VERY happy with Rezound. Furthermore, it functions perfectly fine without Jack running, which is nice if I ever run into trouble with Jack (or FreeBoB, for that matter).

Ardour should need no introduction. It's the behemoth of the linux audio world - better than pro tools in my opinion. It's octophonic mixing capabilities are one of my current favorite features as SFU regularly allows me access to octophonic performace spaces. With it's ability to automate LADSPA plugin settings, not to mention it's ability to host them in nearly any combination imaginable, nearly any sound needed can be achieved. Generally I use Ardour as my music concrete composition tool, arranging and tweaking samples as I see fit. I then use it's export feature to bounce everything down to a wav output file or perform it live through my soundcard (a Presonus Firepod) depending on the situation.