I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.
It’s been interesting trying to set something up that has the flexibility I want, while still being portable enough not to take up too much space (or weight) in my luggage. Of course, as it’s often said, limitations can lead to greater creativity.
A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.
I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take things to the next step and make a cybernetic system that demonstrates machine learning. It went well, not great.
This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and adjusts its weights accordingly.
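As a rough illustration of the same idea outside Pd, here is a minimal Python sketch of a single neuron learning to match a target waveform by gradient descent. The input waveforms, learning rate, and tanh activation are illustrative choices on my part, not the exact mechanism of the Pd patch (which runs in real time):

```python
import numpy as np

sr = 4410
t = np.arange(sr) / sr
f = 110.0

# Input waveforms the neuron can mix: a fundamental plus two harmonics
inputs = np.stack([np.sin(2 * np.pi * f * t),
                   np.sin(2 * np.pi * 2 * f * t),
                   np.sin(2 * np.pi * 3 * f * t)])

# "Target" waveform: a fixed combination the neuron should learn to reproduce
target = np.tanh(0.9 * inputs[0] - 0.5 * inputs[1] + 0.3 * inputs[2])

weights = np.zeros(3)
rate = 0.1
for step in range(2000):
    out = np.tanh(weights @ inputs)       # the neuron's current output
    err = out - target                    # compare to the target
    # Gradient of the mean squared error through the tanh nonlinearity
    grad = ((err * (1 - out**2)) @ inputs.T) / len(t)
    weights -= rate * grad                # adjust the weights accordingly

print(weights)  # should approach [0.9, -0.5, 0.3]
```

Because the target here is exactly reproducible from the inputs, this toy version converges; the expressive failures in the video come from targets the neuron can’t actually reach.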
While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.
0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps
More music and sound with neurons and neural networks here:
How to create an envelope follower in Reaktor 6 Primary and set your synth to automatically follow the amplitude of drums.
An envelope follower is a device that converts the amplitude envelope of an audio input into a control signal. Once we have that control signal, we can use it to control whatever we want. We can make the amplitude of an oscillator follow the amplitude of the input, or we could move the cutoff frequency of a filter, panning, etc.
Building the envelope follower is rather straightforward, just two steps: rectifying and then low-pass filtering. In this video I walk through the process, and then show a few different applications.
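Those same two steps can be sketched offline in Python. The one-pole low-pass and the 10 Hz cutoff here are my own illustrative choices; Reaktor’s filter modules differ in the details:

```python
import numpy as np

def envelope_follower(signal, sr=44100, cutoff=10.0):
    """Step 1: full-wave rectify. Step 2: one-pole low-pass filter.
    The result is a slowly-varying control signal tracking amplitude."""
    rectified = np.abs(signal)                   # rectification
    a = np.exp(-2.0 * np.pi * cutoff / sr)       # one-pole coefficient
    env = np.zeros_like(rectified)
    prev = 0.0
    for i, x in enumerate(rectified):
        prev = (1.0 - a) * x + a * prev          # smooth out the ripple
        env[i] = prev
    return env
```

A steady full-scale sine settles near 2/π (about 0.64), the average of a rectified sine; lowering the cutoff gives a smoother but slower-reacting envelope.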
Talking through bidirectional OSC (Open Sound Control) in my 2018 piece Baion (倍音), that I perform on a custom-built interface, “the catalyst”.
I’m performing my 2017 piece “Baion” this week, and I thought it was a good chance to revisit some of the mechanics of the piece, specifically the communication between its different elements: the custom interface, the Kyma timeline, and the game built in Unity3D. In this video, I go through how the musical work emerges from the bidirectional OSC communication between these pieces of software.
Building a comb filter in Pure Data Vanilla from scratch.
A comb filter is a filter created by adding a delayed signal to itself, creating constructive and destructive interference of frequencies based on the length of the delay. All we have to do is delay the signal a little bit, feed it back into itself (pre-delay), and we get that pleasing, high-tech robotic resonance effect.
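For reference, a feedback comb filter fits in a few lines of Python. The delay length and feedback amount here are illustrative, not the values from the patch:

```python
import numpy as np

def comb_filter(signal, sr=44100, freq=441.0, feedback=0.8):
    """Feedback comb filter: the output is delayed and mixed back into
    the input, so resonant peaks land at multiples of sr/delay = freq."""
    delay = int(round(sr / freq))      # delay length sets the resonance
    buf = np.zeros(delay)              # circular delay line
    out = np.zeros_like(signal)
    idx = 0
    for i, x in enumerate(signal):
        y = x + feedback * buf[idx]    # add the delayed output back in
        buf[idx] = y                   # write into the delay line
        out[i] = y
        idx = (idx + 1) % delay
    return out
```

Feeding in a single impulse makes the structure obvious: echoes at every multiple of the delay, each scaled by another factor of the feedback amount.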
There’s no talking on this one, just building the patch, and listening to it go.
0:00 Playing back a recorded file
0:35 Looping the file
1:00 Setting up the delay
2:08 Frequency controls for the filter
2:52 Setting the range
3:48 Automatic random frequency
4:25 Commenting the code
5:39 Playing with settings
Talking through the concept of an artificial neuron, the fundamental component of artificial intelligence and machine learning, from an audio perspective.
I’ve made a few videos recently with “artificial neurons” including in Pure Data and in Eurorack, and, in this video, I discuss the ideas here in more detail, specifically how an artificial neuron is just a nonlinear mixer.
An artificial neuron takes in multiple inputs, weights them, and then transforms the sum of them using an “activation function”, which is just a nonlinear transformation (of some variety).
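In code, the whole thing fits in a line or two. This Python sketch uses tanh as the activation, which is one common choice (and conveniently keeps audio in the range -1 to 1):

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, pushed through a nonlinearity.
    With audio-rate inputs, this really is just a nonlinear mixer."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Mix three input values with different weights
y = neuron([1.0, 0.0, 0.0], [0.5, 0.3, 0.2])  # = tanh(0.5)
```

Passing arrays of samples instead of single values mixes whole waveforms the same way, one weighted sum per sample.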
Of course just making a single neuron does not mean you’ve made an artificial intelligence or a program capable of “deep learning”, but understanding these fundamental building blocks can be a great first step in demystifying the growing number of machine learning programs in the 21st Century.
More music and sound design with artificial neurons:
I set out to make a tutorial about making a simple sequencer in Reaktor 6 Primary, and got way too long-winded, so this first part is just about making a low-pass gate (LPG).
A low-pass gate is a low-pass filter functioning as a VCA. When it isn’t triggered, the filter’s cutoff frequency is subaudio, not letting any audio pass. When triggered, though, the cutoff frequency goes up, letting all frequencies through. In analog LPGs, this motion of the cutoff frequency is typically performed by a vactrol, adding a quick attack and release that some compare to the sound of a bongo.
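As a sketch of the digital version, here is a Python low-pass gate built from a one-pole low-pass whose cutoff is swept by an AR envelope. The cutoff range and envelope times are illustrative guesses, not the numbers from the video:

```python
import numpy as np

def lpg(signal, trigger_times, sr=44100, attack=0.005, release=0.15,
        base_cutoff=5.0, peak_cutoff=8000.0):
    """Low-pass gate: at rest the cutoff sits at a subaudio base_cutoff
    (nothing passes); a trigger sweeps it up toward peak_cutoff and back
    down, vactrol-style, via an AR envelope."""
    trigs = {int(tt * sr) for tt in trigger_times}
    atk = np.exp(-1.0 / (attack * sr))    # attack smoothing coefficient
    rel = np.exp(-1.0 / (release * sr))   # release decay coefficient
    level = 0.0                            # AR envelope, 0..1
    rising = False
    out = np.zeros(len(signal))
    prev = 0.0
    for i in range(len(signal)):
        if i in trigs:
            rising = True
        if rising:
            level = 1.0 + (level - 1.0) * atk   # approach 1.0
            if level > 0.99:
                rising = False                  # switch to release
        else:
            level *= rel                        # decay toward 0
        # Sweep the cutoff exponentially between base and peak
        cutoff = base_cutoff * (peak_cutoff / base_cutoff) ** level
        a = np.exp(-2.0 * np.pi * cutoff / sr)
        prev = (1.0 - a) * signal[i] + a * prev  # one-pole low-pass
        out[i] = prev
    return out
```

The exponential cutoff sweep matters: sweeping linearly in Hz spends almost all of the envelope in the “fully open” region, while an exponential sweep gives the characteristic darkening tail as the gate closes.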
In this video, I make a digital LPG, talking through the best numbers for an effective result.
0:00 Let’s make a sequencer!
0:54 Making our usual sawtooth synth
1:24 Explaining a low-pass gate (LPG)
2:06 Starting with a low-pass filter
2:35 AR envelope
3:01 Modifying the envelope
4:25 Multiplication
5:14 Subtraction
6:12 Switching to 1-pole filter
7:26 Resonance (not usual for an LPG)
8:21 Talking through the macro
Part 2, where I actually make the sequencer, here: