Artificial Neurons for Music and Sound Design (5-minute lecture video)

Video presentation I made for the 2024 “Explainable AI for the Arts” (XAIxArts) Workshop, part of the ACM Creativity and Cognition Conference 2024.

I’ve discussed a lot of these points elsewhere (see the playlist below), but this quickie presentation brings the ideas together, focusing on the aesthetic potential of this approach.

Check out the complete playlist for more hands-on creation of neurons and neural networks:

Interactive Neural Net in Eurorack (Joystick & Artificial Neuron)

Combining human input from a joystick with a two-neuron artificial neural network for chaotic interactive music.

This Eurorack joystick is going into a simple neural network to control multiple dimensions of the timbre of this synth voice. Joystick dimensions X, Y, and Z go into different inputs of the Nonlinear Circuits Dual Neuron, where they are mixed together and transformed by a nonlinearity (more here). In addition to the outputs controlling the waveform and filter cutoff of the synth, the output of each neuron is fed back into the other, creating a chaotic artificial organism with which to improvise.
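
If you’d rather see that structure as code than as patch cables, here’s a rough Python sketch of the idea (my own simplification with made-up weights, not the actual Nonlinear Circuits schematic):

```python
# Two neurons, each mixing the joystick axes with the OTHER neuron's
# previous output, then squashing the sum with tanh. The weights are
# arbitrary; in the rack they're just knob positions.
import numpy as np

def step(x, y, z, state, w1, w2):
    """One control-rate tick of the two-neuron feedback loop."""
    out1, out2 = state
    new1 = np.tanh(np.dot(w1, [x, y, z, out2]))  # neuron 1 hears neuron 2
    new2 = np.tanh(np.dot(w2, [x, y, z, out1]))  # neuron 2 hears neuron 1
    return new1, new2

w1 = np.array([0.9, -0.6, 0.4, 1.2])
w2 = np.array([-0.5, 0.8, 0.7, -1.1])

state = (0.0, 0.0)
for _ in range(5):                    # joystick held at one position
    state = step(0.3, -0.2, 0.5, state, w1, w2)
    print(state)                      # outputs -> waveform & cutoff CVs
```

Because each neuron’s input includes the other’s previous output, small joystick moves can tip the pair into wobbly, chaotic trajectories rather than a fixed mapping.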

Affiliate links for modules in this patch (though you really don’t need them; you can probably work this out with the gear or software that you currently have):
Doepfer A-174-4 3D Joystick (Perfect Circuit)
NLC Dual Neuron (Reverb)
Noise Engineering Ataraxic Translatron (Reverb)
Hikari Ping Filter (Perfect Circuit)
Noise Engineering Sinclastic Empulatrix (Reverb)
Arturia DrumBrute Impact (Perfect Circuit)
Korg SQ-1 (Perfect Circuit)

More Music with Artificial Neurons:

Nonlinear Data-Driven Instruments with Simple Artificial Neural Networks (Max/MSP)

Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.


I’ve talked before about data-driven instruments, and I’ve talked before about artificial neurons and artificial neural networks, so here I combine the ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.

In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).
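
As a concrete (if hypothetical) example of that kind of mapping, here’s a tiny Python sketch: normalized mouse coordinates go through one tanh neuron, and the result is rescaled to a MIDI CC value. The weights and bias are arbitrary stand-ins for whatever you’d dial in inside Max:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum plus bias, squashed by tanh into -1..1."""
    total = sum(w * i for w, i in zip(weights, inputs)) + bias
    return math.tanh(total)

def to_midi_cc(v):
    """Map a tanh output (-1..1) onto a 0-127 controller value."""
    return round((v + 1) / 2 * 127)

# Mouse x/y normalized to 0..1, as [mousestate] data can be scaled.
for mouse_x, mouse_y in [(0.1, 0.9), (0.5, 0.5), (0.95, 0.2)]:
    cc = to_midi_cc(neuron([mouse_x, mouse_y], [2.0, -1.5], 0.3))
    print(cc)   # nonlinear but repeatable: same gesture, same value
```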

0:00 Intro
1:43 [mousestate] for Data Input
2:58 Mapping a linear data-driven instrument
7:19 Making our Artificial Neuron
15:27 Simple ANN
20:06 Adding Feedback
22:23 Closing Thoughts, Next Steps

More Max/MSP Videos:

More Artificial Neurons and Neural Networks:


Pd Machine Learning Fail

A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.

I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take the next step and make a cybernetic system that demonstrates machine learning. It went well, not great.

This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and then attempts to adjust its weights accordingly.

While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.
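
For the curious, the “learning” loop boils down to something like the following Python sketch: a textbook-style gradient nudge with my own numbers, not a line-for-line port of the Pd patch. Each input weight is pushed in the direction that shrinks the error between the neuron’s output and the target waveform:

```python
import numpy as np

rate = 0.1                                        # learning rate
n = 512                                           # samples per wavetable
t = np.linspace(0, 1, n, endpoint=False)

target = np.sign(np.sin(2 * np.pi * t))           # square-wave target
inputs = np.stack([np.sin(2 * np.pi * k * t) for k in (1, 3, 5)])
weights = np.zeros(3)                             # start knowing nothing

for _ in range(500):
    out = np.tanh(weights @ inputs)               # the neuron's attempt
    err = target - out                            # per-sample error
    grad = inputs @ (err * (1 - out ** 2)) / n    # gradient through tanh
    weights += rate * grad                        # nudge the mix levels

print(weights)  # trends toward the odd-harmonic recipe of a square wave
```

When the inputs happen to contain the ingredients the target needs, this converges; when they don’t (as in the multi-input attempts later in the video), it thrashes around, and that thrashing is where the expressive failure lives.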

0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps

More music and sound with neurons and neural networks here:

Artificial Neurons and Nonlinear Mixing

Talking through the concept of an artificial neuron, the fundamental component of artificial intelligence and machine learning, from an audio perspective.

I’ve made a few videos recently with “artificial neurons”, including in Pure Data and in Eurorack, and in this video I discuss these ideas in more detail, specifically how an artificial neuron is just a nonlinear mixer.

An artificial neuron takes in multiple inputs, weights them, and then transforms the sum of them using an “activation function”, which is just a nonlinear transformation (of some variety).
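
In code, from the audio perspective, the whole thing fits in a few lines. Here’s a generic numpy sketch (not any particular library’s API): three input signals, three weights, one sum, one squash.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                     # one second of samples

# Three "dendrite" inputs: just sines at different pitches.
ins = [np.sin(2 * np.pi * f * t) for f in (110, 220, 330)]

weights = [0.7, -0.4, 0.9]                 # per-input levels (made up)
mixed = sum(w * sig for w, sig in zip(weights, ins))
out = np.tanh(3.0 * mixed)                 # activation function = waveshaper
```

Swap the tanh for any other nonlinearity and you have a different neuron (and a different distortion).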

Of course, just making a single neuron does not mean you’ve made an artificial intelligence or a program capable of “deep learning”, but understanding these fundamental building blocks can be a great first step in demystifying the growing number of machine learning programs in the 21st century.

More music and sound design with artificial neurons:

Pure Data Artificial Neuron Patch from Scratch

Patching up an artificial neuron in Pure Data Vanilla for some nonlinear mixing. There’s no talking on this one, just building the patch, and listening to it go.

An artificial neuron is basically just a mixer: inputs come in, and are weighted differently, modelling the dendrites of a biological neuron; then the mixed signal is transformed by an “activation function”, usually nonlinear, and output, modelling the axon.

Now, we can say that “learning” occurs when we adjust the weights (levels) of the inputs based on the output, but let’s not do that here; let’s just revel in our nonlinear mix.
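
That said, the activation function does get swapped later in the video (see 11:25 and 12:04 in the chapters below), and the choice changes the sound a lot. Here’s a quick Python comparison of the three flavors, treated as plain waveshapers (a sketch, not the actual Pd objects):

```python
import numpy as np

def tanh_act(x):  return np.tanh(x)             # soft clipping
def relu_act(x):  return np.maximum(0.0, x)     # negatives are silenced
def hard_clip(x): return np.clip(x, -1.0, 1.0)  # linear until it slams

x = np.linspace(-2.0, 2.0, 9)                   # a ramp of input levels
for act in (tanh_act, relu_act, hard_clip):
    print(act.__name__, np.round(act(x), 2))
```

On audio, ReLU acts like a half-wave rectifier (the negative half of the wave is silenced), which is part of why it sounds so buzzy.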

More details in my blog post here

0:00 Nonlinear Mixing and Artificial Neurons
1:17 Adding “Bias”
2:28 Neuron Complete
3:27 Automating the Weights
7:09 Adding Feedback
8:42 Adding Noise
9:58 Commenting our Code
11:25 Trying the ReLU Activation Function
12:04 Linear Mixing (with Hard Clipping)

Pure Data introductory tutorials here
More no-talking Pure Data jams and patch-from-scratch videos

Music and Synthesis with a Single Neuron

Recently, I’ve been hooked on the idea of neurons and electronic and digital models of them. As always, this interest is focused on how these models can help us make interesting music and sound design.

It all started with my explorations into modular synths, especially focusing on the weirdest modules that I could find. I’d already spent decades doing digital synthesis, so I wanted to know what the furthest reaches of analog synthesis had to offer, and one of the modules that I came across was the nonlinearcircuits “neuron” (which had the additional benefit that it was simple enough for me to solder together on my own for cheap).

Nonlinear Circuits “Dual Neuron” (Magpie Modular Panel)

Anyway, today, I don’t want to talk about this module in particular, but rather more generally about what an artificial neuron is and what it can do with audio.

I wouldn’t want to learn biology from a composer, so I’ll keep this in the simplest terms possible (so I don’t mess up). The concept here is that a neuron receives a bunch of signals into its dendrites and, based on these signals, sends out its own signal through its axon.

Are you with me so far?

In the case of biological neurons, these “signals” are chemical or electrical; in these sonic explorations, the signals are the continuously changing voltages of an analog audio signal.

So, in audio, the way we combine multiple audio sources is with a mixer:

Three signals in, one out

Now, the interesting thing here is that a neuron doesn’t just sum the signals from its dendrites and send them to the output. It gives these inputs different weights (levels), and combines them in a nonlinear way.

In our sonic models of neurons, this “nonlinearity” could be a number of things: waveshapers, rectifiers, etc.

Hyperbolic Tangent Function (tanh)

In the case of our sonic explorations, different nonlinear transformations will lead to different sonic results, but there are no real “better” or “worse” choices (except as driven by your aesthetic goals). Now, if I wanted to train an artificial neural net to identify pictures or compose algorithmic music, I’d think harder about it (and there’s lots of literature about these activation function choices).

But, OK! A mixer with the ability to control the input levels and a nonlinear transformation! That’s our neuron! That’s it!

Just one neuron

In this patch, our mixer receives three inputs: a sequenced sine wave, a chaotically-modulated triangle wave, and one more thing I’ll get back to in a sec. The mixer’s output is put through a hyperbolic tangent function (soft clipping, basically), then run into a comparator (if the input is high enough, fire the synapse!); the comparator’s output is filtered, run into a spring reverb, and the reverb is fed back into that third input of the mixer.
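
If it helps to see that flow as code, here’s a bird’s-eye Python sketch of the same chain, with a plain delay line standing in for the spring reverb and the chaotic modulation left out, so this is the shape of the patch rather than the patch itself:

```python
import numpy as np

sr = 44100
n = sr * 2                                      # two seconds
t = np.arange(n) / sr
sine = np.sin(2 * np.pi * 110 * t)              # "sequenced" sine (one note)
tri = 2 * np.abs(2 * ((55 * t) % 1) - 1) - 1    # triangle (chaos mod omitted)

delay = int(0.05 * sr)                          # crude stand-in for the reverb
fb = np.zeros(n)                                # the mixer's third input
out = np.zeros(n)

for i in range(n):
    mixed = 0.6 * sine[i] + 0.5 * tri[i] + 0.8 * fb[i]  # weighted mix
    shaped = np.tanh(2.0 * mixed)                        # soft clip
    out[i] = 1.0 if shaped > 0.2 else -1.0               # comparator "fires"
    if i + delay < n:
        fb[i + delay] = 0.4 * out[i]            # delayed output back to input 3

# In the real patch the comparator output is also filtered and sent
# through an actual spring reverb before re-entering the mixer.
```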

Now, as it stands, this neuron doesn’t learn anything. That would require the neuron to get some feedback about its output (it feeds back from the spring reverb, but that’s a little different). Is the neuron delivering the result we want based on the inputs? If not, how can it change the weights of these inputs so that it does?

We’ll save that for another day, though.

EDIT 05.18.22 – Taking it on the road!