New Music! “Hanamaki Sessions 2023”

I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.

It’s been interesting trying to set up something that has the flexibility I want, while still being portable enough not to take up too much space (or weight) in my luggage. Of course, as is often said, limitations can lead to greater creativity.

In this setup I have my 54HP Eurorack (which can be battery powered if I want to play on top of a lookout tower somewhere), and my Arturia DrumBrute Impact. I do mixing with a little Mackie mixer, and recording with a Zoom H4N (which lets me record sound from the microphones at the same time as the line inputs).

Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it, a Benjolin in particular) and recorded my three-track “Ihatov MU” album. This year’s sessions were a fun extension of those ideas.

Perhaps I should do some performing out in New England in the next few months.

Pd Machine Learning Fail

A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.

I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take things a step further and make a cybernetic system that demonstrates machine learning. It went well, but not great.

This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and adjusts its weights accordingly.

While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.
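For anyone curious what that learning loop looks like outside of Pd, here is a minimal Python sketch of the same idea, not the actual patch: a single neuron learns weights for a few input waveforms so that their nonlinear mix approximates a target. The choice of sine inputs, tanh activation, and learning rate are all illustrative assumptions.

```python
# Minimal sketch (NOT the Pd patch): one neuron learns to mix several
# input waveforms, through a tanh nonlinearity, to match a target waveform.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)

# Hypothetical inputs: three sines at different harmonics.
inputs = np.stack([np.sin(2 * np.pi * f * t) for f in (1, 2, 3)])

# A target the neuron can actually reach: a weighted, saturated mix.
target = np.tanh(0.8 * inputs[0] - 0.5 * inputs[2])

w = rng.normal(scale=0.1, size=3)   # weights to learn
lr = 0.05                           # learning rate (illustrative value)

for _ in range(2000):
    y = np.tanh(w @ inputs)         # nonlinear mix (activation)
    err = y - target                # compare against the target waveform
    # gradient of the squared error w.r.t. the weights
    grad = ((err * (1 - y**2)) @ inputs.T) / len(t)
    w -= lr * grad                  # adjust accordingly

print("final mean squared error:", np.mean((np.tanh(w @ inputs) - target) ** 2))
```

When the target is reachable (as here), the error shrinks toward zero; when it isn’t, you get exactly the kind of expressive near-miss the video ends up with.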

0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps

More music and sound with neurons and neural networks here:

Reaktor 6 Envelope Follower

How to create an envelope follower in Reaktor 6 Primary and set your synth to automatically follow the amplitude of drums.


An envelope follower is a device that converts the amplitude envelope of an audio input into a control signal. Once we have that control signal, we can use it to control whatever we want. We can make the amplitude of an oscillator follow the amplitude of the input, or we could move the cutoff frequency of a filter, panning, etc.

Building the envelope follower is rather straightforward, just two steps: rectification, then low-pass filtering. In this video I walk through the process, and then show a few different applications.
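Those two steps can be sketched in a few lines of Python (a stand-in for the Reaktor structure, not a transcription of it); the smoothing coefficient is an arbitrary example value:

```python
# Envelope follower sketch: full-wave rectify, then one-pole low-pass.
import math

def envelope_follower(samples, smoothing=0.995):
    """Convert a signal's amplitude envelope into a control signal."""
    env = 0.0
    out = []
    for x in samples:
        rectified = abs(x)                                   # step 1: rectify
        env = smoothing * env + (1 - smoothing) * rectified  # step 2: low-pass
        out.append(env)
    return out

# A 440 Hz sine that fades out over one second: the follower tracks the decay.
sr = 44100
signal = [math.sin(2 * math.pi * 440 * n / sr) * (1 - n / sr) for n in range(sr)]
env = envelope_follower(signal)
```

The resulting `env` list is the control signal: scale and offset it onto whatever parameter you like (oscillator amplitude, filter cutoff, panning).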

More Reaktor 6 intermediate tutorials here.

Bidirectional OSC in “Baion 倍音” with Kyma & Unity3D

Talking through bidirectional OSC (Open Sound Control) in my 2018 piece Baion (倍音), that I perform on a custom-built interface, “the catalyst”.

I’m performing “Baion” this week, and I thought it was a good chance to revisit some of the mechanics of the piece, specifically the communication between the different elements: the custom interface, the Kyma timeline, and the game built in Unity3D. In this video, I go through how the musical work emerges from the bidirectional OSC communication between these pieces of software.

More videos on Symbolic Sound Kyma here:

Pd Comb Filter Patch from Scratch

Building a comb filter in Pure Data Vanilla from scratch.

A comb filter is created by adding a delayed copy of a signal to itself, producing constructive and destructive interference at frequencies determined by the delay time. All we have to do is delay the signal a little bit and feed it back into the delay’s input, and we get that pleasing, high-tech robotic resonance effect.
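As a rough sketch of the structure (in Python rather than Pd), a feedback comb is just a circular delay buffer with its output scaled and summed back into its input; the delay length and feedback amount below are arbitrary example values:

```python
# Feedback comb filter sketch: delay the signal, feed the sum back in.
from collections import deque

def comb_filter(samples, delay_samples=100, feedback=0.9):
    # deque with maxlen acts as a circular delay line (oldest sample at [0]).
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for x in samples:
        delayed = buf[0]               # output of the delay line
        y = x + feedback * delayed     # add the delayed signal to the input
        buf.append(y)                  # feed the sum back into the delay
        out.append(y)
    return out

# An impulse rings the comb: an echo every delay_samples, each one
# scaled by the feedback amount -- the resonance you hear in the patch.
impulse = [1.0] + [0.0] * 499
rung = comb_filter(impulse)
```

At a 44.1 kHz sample rate, a 100-sample delay puts the resonant peaks at multiples of 441 Hz; shortening the delay raises the perceived pitch.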

There’s no talking on this one, just building the patch, and listening to it go.

0:00 Playing back a recorded file
0:35 Looping the file
1:00 Setting up the delay
2:08 Frequency controls for the filter
2:52 Setting the range
3:48 Automatic random frequency
4:25 Commenting the code
5:39 Playing with settings

More no-talking Pd patch-from-scratch videos:

Artificial Neurons and Nonlinear Mixing

Talking through the concept of an artificial neuron, the fundamental component of artificial intelligence and machine learning, from an audio perspective.

I’ve made a few videos recently with “artificial neurons”, including in Pure Data and in Eurorack; in this video, I discuss those ideas in more detail, specifically how an artificial neuron is just a nonlinear mixer.

An artificial neuron takes in multiple inputs, weights them, and then transforms the sum of them using an “activation function”, which is just a nonlinear transformation (of some variety).
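That description translates almost directly into code. Here is a minimal sketch (tanh as the activation function, with arbitrary example inputs and weights):

```python
# An artificial neuron: weight each input, sum, apply a nonlinearity.
import math

def neuron(inputs, weights, bias=0.0):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(total)   # the "activation function" (a nonlinearity)

# Heard as audio, this is a mixer whose channel gains are the weights,
# followed by a waveshaper (tanh saturation) on the summed signal.
mixed = neuron([0.5, -0.2, 0.8], [1.0, 0.5, 0.25])
```

Run per-sample over audio-rate inputs, this is exactly the “nonlinear mixer” framing: the weights are mix levels, and the activation is a saturating waveshaper on the sum.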

Of course just making a single neuron does not mean you’ve made an artificial intelligence or a program capable of “deep learning”, but understanding these fundamental building blocks can be a great first step in demystifying the growing number of machine learning programs in the 21st Century.

More music and sound design with artificial neurons:

Reaktor 6 Primary Sequencer

I set out to make a tutorial about making a simple sequencer in Reaktor 6 Primary, and got way too long-winded, so this first part is just about making a low-pass gate (LPG).

A low-pass gate is a low-pass filter that functions as a VCA. When it isn’t triggered, the filter’s cutoff frequency is sub-audio, letting no audio pass. When triggered, the cutoff frequency rises, letting all frequencies through. In analog hardware, this motion of the cutoff frequency is typically performed by a vactrol, adding a quick attack and release that some compare to the sound of bongos.
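The core idea can be sketched in Python (this is not the Reaktor structure, and every constant here is illustrative): a one-pole low-pass filter whose cutoff is swept by an envelope between sub-audio (closed) and wide open, so one control signal both gates the sound and shapes its brightness.

```python
# Low-pass-gate sketch: an envelope drives the cutoff of a one-pole
# low-pass filter from wide open down to sub-audio (closed).
import math

def one_pole_lp(samples, cutoffs, sr=44100):
    """One-pole low-pass whose cutoff (Hz) can change every sample."""
    y = 0.0
    out = []
    for x, fc in zip(samples, cutoffs):
        a = 1 - math.exp(-2 * math.pi * fc / sr)  # per-sample coefficient
        y += a * (x - y)
        out.append(y)
    return out

sr = 44100
n = sr // 10                                                # 100 ms of audio
saw = [((i * 220 / sr) % 1.0) * 2 - 1 for i in range(n)]    # 220 Hz sawtooth

# A decaying envelope sweeps the cutoff from ~8 kHz (open) down to
# ~5 Hz (closed) -- a rough stand-in for the vactrol's motion.
env = [math.exp(-i / (0.01 * sr)) for i in range(n)]
cutoffs = [5 + e * 8000 for e in env]
gated = one_pole_lp(saw, cutoffs, sr)
```

Because closing the filter removes essentially all audible energy, the cutoff sweep acts as the amplitude envelope too; that coupling of loudness and brightness is what gives the LPG its “plucked” character.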

In this video, I make a digital LPG, talking through the best numbers for an effective result.

0:00 Let’s make a sequencer!
0:54 Making our usual sawtooth synth
1:24 Explaining a low-pass gate (LPG)
2:06 Starting with a low-pass filter
2:35 AR envelope
3:01 Modifying the envelope
4:25 Multiplication
5:14 Subtraction
6:12 Switching to 1-pole filter
7:26 Resonance (not usual for an LPG)
8:21 Talking through the macro

Part 2, where I actually make the sequencer, here:

Part 3 here:

Zoomscapes Updates

For the last few years, I’ve been messing around with internet-based, no-input feedback loops in collaboration with Will Klingenmeier.

What does that mean? Why would I do that? What does it sound like? All those questions are answered in the brief PechaKucha below:

Zoomscape Pecha Kucha – Understand it all in less than 8 minutes!

While I’m sure we’ll continue to mess with these ideas in the future, we’ve come to at least a short-term culmination of this project in a tape release of these experiments on Bandcamp.

You can also retroactively join our “Tape Release Party” here:

Zoomscapes Tape Release Party from 2/5/23

To catch up on all of the previous experiments, check out this playlist: