Artificial Neurons for Music and Sound Design (5-minute lecture video)

A video presentation I made for the 2024 “Explainable AI for the Arts” (XAIxArts) Workshop, part of the ACM Creativity and Cognition Conference.

I’ve discussed a lot of these points elsewhere (see the playlist below), but this quickie presentation brings the ideas together, focusing on the aesthetic potential of the approach.

Check out the complete playlist for more hands-on creation of neurons and neural networks:

Mixing Synths: Early Reflections for Added Dimension

Adding some subtle dimension to your synthesized instruments with early reflections.


Synthesized instruments, unlike recorded sounds, have never existed in the acoustic world, which means they are 100% direct signal. To add some subtle dimension to these sounds, then, we can craft some “early reflections” for their tracks.

Here, I demonstrate this concept using ChromaVerb in Logic Pro X.
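For anyone curious what “early reflections” amount to in signal terms, here is a minimal Python/NumPy sketch of the idea: a handful of short, quiet delayed copies mixed back into the dry signal. The tap times and gains below are arbitrary illustrative values, not what ChromaVerb actually does internally.

```python
import numpy as np

def add_early_reflections(dry, sr, taps=((0.011, 0.35), (0.019, 0.28),
                                         (0.027, 0.22), (0.041, 0.15))):
    """Mix a few short, quiet delayed copies ("early reflections") into a dry signal.

    taps: (delay in seconds, gain) pairs; arbitrary example values.
    """
    out = np.copy(dry)
    for delay_s, gain in taps:
        d = int(delay_s * sr)                  # delay length in samples
        out[d:] += gain * dry[:len(dry) - d]   # add the delayed, attenuated copy
    return out

# Example: a one-second synthetic "pluck" at 44.1 kHz
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
pluck = np.sin(2 * np.pi * 220 * t) * np.exp(-6 * t)
wet = add_early_reflections(pluck, sr)
```

In the video, of course, this all happens on an aux track with ChromaVerb rather than in code.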

0:00 Introduction
0:55 Understanding Reverb and Early Reflections
2:18 Creating Early Reflections
3:18 Project Setup
4:26 Early Reflections Aux Track
5:15 ABing on the Synth Track
6:19 Why Separate these from the Main Reverb?
6:40 ABing on the Whole Arrangement

More Logic Pro X videos here.

Compression-Controlled Feedback Loops in Your DAW

Creating DAW-based feedback loops, then using side-chain compression to regulate them.


Here, working on a project with @SpectralEvolver, I show in Logic Pro X how we can use a compressor side-chained to a beat to control a feedback loop, producing noisy, industrial music evocative of the artist Emptyset. I found this was a great way to create a chaotic sound while keeping it under control (and out of the way of the drums).
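As a rough illustration of the signal flow (not the actual Logic routing), here is a Python/NumPy sketch of a delay-line feedback loop whose feedback gain is ducked by the envelope of a sidechain “key” signal, much as the compressor keyed to the beat tames the feedback aux in the video. All parameter values here are made-up examples.

```python
import numpy as np

def sidechain_ducked_feedback(source, key, sr, delay_s=0.25,
                              fb_gain=0.9, duck_depth=0.85):
    """Feedback loop whose feedback amount is reduced ("ducked") whenever the
    sidechain key signal (e.g. the drum bus) gets loud."""
    d = int(delay_s * sr)
    out = np.zeros(len(source))

    # Crude envelope follower on the key signal: peak detect with ~50 ms release
    env = np.zeros(len(key))
    release = np.exp(-1.0 / (0.05 * sr))
    for n in range(1, len(key)):
        env[n] = max(abs(key[n]), release * env[n - 1])
    env /= env.max() + 1e-9

    for n in range(len(source)):
        fb = out[n - d] if n >= d else 0.0
        gain = fb_gain * (1.0 - duck_depth * env[n])   # louder beat -> less feedback
        out[n] = source[n] + gain * fb

    return np.tanh(out)   # soft clip as a safety net so the loop can't blow up
```

The tanh at the end is just a seatbelt for the sketch; in the DAW it is the compressor (and your ears on the faders) doing that job.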

0:00 Intro
0:29 The audio tracks
1:15 Side-chain compression
2:03 The feedback loop
3:13 Controlling the loop with compression
5:00 Emptyset
5:13 Two aux tracks sending to each other
6:23 A note about time-based effects
6:50 Will it blow up?!
8:06 Closing, next steps

Check out Emptyset’s bandcamp here. Here’s Emptyset talking about their ionospheric propagation work, “Signal”:


More Logic Pro Tutorials from me here:

Spotting Subaudio

Finding and removing subaudio from sample files with a waveform editor.

Subaudio is sound at frequencies below the range of human hearing (below 20 Hz). These frequencies can sneak into our recordings and work against us in a number of ways. If we can address subaudio in our samples, we do ourselves a favor in the later stages of the mixing process.
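The video does all of this by eye (and ear) in a waveform editor with a parametric EQ, but if you wanted to batch-clean a folder of samples, a steep high-pass at 20 Hz accomplishes essentially the same thing. Here is a minimal sketch using SciPy and the soundfile library; the filenames are placeholders.

```python
from scipy.signal import butter, sosfiltfilt
import soundfile as sf

def remove_subaudio(path_in, path_out, cutoff_hz=20.0, order=4):
    """High-pass a sample file to strip content below ~20 Hz (subaudio)."""
    audio, sr = sf.read(path_in)
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    cleaned = sosfiltfilt(sos, audio, axis=0)   # zero-phase, so transients don't smear
    sf.write(path_out, cleaned, sr)

remove_subaudio("kick_sample.wav", "kick_sample_hp.wav")   # placeholder filenames
```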

0:00 Defining Subaudio
0:59 Example 1: Spotting Subaudio
2:04 Example 1: Doing the Math
2:50 Why Did This Happen?
3:11 Removing Subaudio with Parametric EQ
5:53 Example 2: Not Really Subaudio
7:27 Harmonics of Subaudio
8:31 Example 3: Trimming
9:15 Example 4: Bringing It All Together
10:16 Closing, Next Steps

Nonlinear Data-Driven Instruments with Simple Artificial Neural Networks (Max/MSP)

Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.


I’ve talked before about data-driven instruments, and about artificial neurons and artificial neural networks, so here I combine the ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.

In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).
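To give a sense of the arithmetic happening inside the Max objects, here is the same idea as a tiny Python sketch: a single neuron is just a weighted sum pushed through a sigmoid, and chaining a few of them (with hand-picked weights rather than any training) already gives a response that is deterministic but nicely non-linear. The weights below are arbitrary examples, not the values from my patch.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum squashed by a sigmoid (0..1)."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def mouse_to_cc(x, y):
    """Map normalized mouse x/y (0..1) through a tiny two-layer network
    to a MIDI CC value (0..127)."""
    h1 = neuron([x, y], [4.0, -3.0], 0.5)
    h2 = neuron([x, y], [-2.0, 5.0], -1.0)
    out = neuron([h1, h2], [3.0, 3.0], -2.5)
    return int(out * 127)

print(mouse_to_cc(0.2, 0.8), mouse_to_cc(0.8, 0.2))   # try a few points; the response is smooth but not linear
```

Feeding outputs back in as extra inputs, as I do toward the end of the video, is where most of the chaotic character comes from.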

0:00 Intro
1:43 [mousestate] for Data Input
2:58 Mapping a linear data-driven instrument
7:19 Making our Artificial Neuron
15:27 Simple ANN
20:06 Adding Feedback
22:23 Closing Thoughts, Next Steps

More Max/MSP Videos:

More Artificial Neurons and Neural Networks:


Bass + AI: Improvisation (Python, Unity3D, and Kyma)

An improvised duet(?) with an AI agent trained on the “Embodied Musicking Dataset.”

In this performance, Python listens to live audio input from the bass, and, based on models trained with the dataset, sends out data to Unity3D and Kyma. Unity3D creates the visuals (the firework), and Kyma processes the audio from the bass.

First, though, the dataset used for training was collected from several pianists in the US and UK. As the pianists played, we recorded multiple aspects of their performance: audio, video of their hands, EEG, skeletal data, and galvanic skin response. After playing, the pianists listened back to their own performances and were asked to record their state of “flow” over the course of the performance. All of these dimensions of data are aligned in time, so neural networks can be trained to make associations between them.

This demonstration uses the trained models from Craig Vear’s Jess+ project to generate X/Y data (from the skeletal data) and “flow” from the amplitude of the input. These XY coordinates, “flow”, and amplitude are sent out from Python as OSC data, which is received by both Unity3D (for visuals) and Kyma (for audio).
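The OSC plumbing itself is simple. Here is a minimal sketch of how the Python end might broadcast those values using the python-osc library; the addresses and ports are placeholders, not the ones from the actual performance setup.

```python
from pythonosc.udp_client import SimpleUDPClient

# Placeholder hosts/ports: Unity3D and Kyma would each listen on their own port.
unity = SimpleUDPClient("127.0.0.1", 9000)
kyma = SimpleUDPClient("127.0.0.1", 8000)

def broadcast(x, y, flow, amplitude):
    """Send the model outputs to both receivers as separate OSC messages."""
    for client in (unity, kyma):
        client.send_message("/xy", [x, y])
        client.send_message("/flow", flow)
        client.send_message("/amplitude", amplitude)

broadcast(0.42, 0.77, 0.63, 0.18)   # example values
```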

In Unity, the XY data moves the “firework” around the screen. Flow data affects its color, and amplitude affects its size. Audio in Kyma is a bit more sophisticated, but X position is left/right pan, and the flow data affects the delay, reverb, and live granulation.

As you can see, the amplitude-to-XY mapping is limited, with the firework moving along a kind of diagonal. Possible next steps would be to extract more features of the audio (e.g. pitch, spectral complexity, or delta values) and train with those.

Applying these models, trained on pianists, to a bass performance (in a different genre) does not have the same goals as music-generation AI such as MusicGen or MusicLM. Instead of automatically generating music, the AI becomes a partner in performance: sometimes unpredictable, but not random, since its behavior is based on rules.

New Music! “Hanamaki Sessions 2023”

I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.

It’s been interesting trying to set something up that has the flexibility I want while still being portable enough not to take up too much space (and weight) in my luggage. Of course, as is often said, limitations can lead to greater creativity.

In this setup I have my 54HP Eurorack (which can be battery powered if I want to play on top of a lookout tower somewhere), and my Arturia DrumBrute Impact. I do mixing with a little Mackie mixer, and recording with a Zoom H4N (which lets me record sound from the microphones at the same time as the line inputs).

Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it, a Benjolin in particular) and recorded my three-track “Ihatov MU” album. This year’s sessions were a fun extension of those ideas.

Perhaps I should do some performing out in New England in the next few months.

Pd Comb Filter Patch from Scratch

Building a comb filter in Pure Data Vanilla from scratch.

A comb filter is created by adding a delayed copy of a signal to itself, producing constructive and destructive interference at frequencies determined by the length of the delay. All we have to do is delay the signal a little bit, feed the delayed signal back into the delay’s input, and we get that pleasing, high-tech robotic resonance effect.
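For reference, here is the same structure as a few lines of Python rather than Pd objects: a short delay line whose output is fed back into its input, with the delay time set to one period of the frequency you want to hear ring. (In Pd this would typically be a [delwrite~]/[delread~] pair with a feedback connection; the code below is only a conceptual sketch.)

```python
import numpy as np

def feedback_comb(x, sr, freq_hz=220.0, feedback=0.85):
    """Feedback comb filter: delay the signal by 1/freq_hz seconds and feed it
    back into itself, producing resonant peaks at freq_hz and its multiples."""
    d = max(1, int(round(sr / freq_hz)))   # delay length in samples
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0
        y[n] = x[n] + feedback * delayed
    return y

# Example: comb-filter one second of white noise at 44.1 kHz
sr = 44100
noise = np.random.uniform(-1, 1, sr)
robotic = feedback_comb(noise, sr)
```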

There’s no talking on this one, just building the patch, and listening to it go.

0:00 Playing back a recorded file
0:35 Looping the file
1:00 Setting up the delay
2:08 Frequency controls for the filter
2:52 Setting the range
3:48 Automatic random frequency
4:25 Commenting the code
5:39 Playing with settings

More no-talking Pd patches from scratch:

Zoomscapes Updates

For the last few years, I’ve been messing around with internet-based, no-input feedback loops in collaboration with Will Klingenmeier.

What does that mean? Why would I do that? What does it sound like? All those questions are answered in the brief PechaKucha below:

Zoomscape Pecha Kucha – Understand it all in less than 8 minutes!

While I’m sure we’ll continue to mess with these ideas in the future, this project has reached at least a short-term culmination with a tape release of these experiments on bandcamp.

You can also retroactively join our “Tape Release Party” here:

Zoomscapes Tape Release Party from 2/5/23

To catch up on all of the previous experiments, check out this playlist: