Music and Synthesis with a Single Neuron

Recently, I’ve been hooked on the idea of neurons and electronic and digital models of them. As always, this interest is focused on how these models can help us make interesting music and sound design.

It all started with my explorations into modular synths, especially focusing on the weirdest modules that I could find. I’d already spent decades doing digital synthesis, so I wanted to know what the furthest reaches of analog synthesis had to offer, and one of the modules that I came across was the nonlinearcircuits “neuron” (which had the additional benefit that it was simple enough for me to solder together on my own for cheap).

Nonlinear Circuits “Dual Neuron” (Magpie Modular Panel)

Anyway, today, I don’t want to talk about this module in particular, but rather more generally about what an artificial neuron is and what it can do with audio.

I wouldn’t want to learn biology from a composer, so I’ll keep this in the simplest terms possible (so I don’t mess up). The concept here is that a neuron receives a bunch of signals through its dendrites and, based on these signals, sends out its own signal through its axon.

Are you with me so far?

In the case of biological neurons, these “signals” are chemical or electrical; in these sonic explorations, the signals are the continuously changing voltages of an analog audio signal.

So, in audio, the way we combine multiple audio sources is with a mixer:

Three signals in, One out

Now, the interesting thing here is that a neuron doesn’t just sum the signals from its dendrites and send them to the output. It gives these inputs different weights (levels), and combines them in a nonlinear way.

In our sonic models of neurons, this “nonlinearity” could be a number of things: waveshapers, rectifiers, etc.

Hyperbolic Tangent Function (tanh)

In the case of our sonic explorations, different nonlinear transformations will lead to different sonic results, but there are no real “better” or “worse” choices (except as driven by your aesthetic goals). Now, if I wanted to train an artificial neural net to identify pictures or compose algorithmic music, I’d think more about it (and there’s lots of literature about these activation function choices).

But, OK! A mixer with the ability to control the input levels and a nonlinear transformation! That’s our neuron! That’s it!

Just one neuron

In this patch, our mixer receives three inputs: a sequenced sine wave, a chaotically-modulated triangle wave, and one more thing I’ll get back to in a sec. The mixer’s output is put through a hyperbolic tangent function (soft-clipping, basically), then run into a comparator (if the input is high enough, fire the synapse!). The comparator’s output is filtered, run to a spring reverb, and the reverb is fed back into that third input of the mixer.
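The signal flow above (weighted mix → tanh → comparator) can be sketched in a few lines of Python. This is a toy sketch of the idea, not the analog circuit — the weights and the 0.5 threshold here are arbitrary choices of mine:

```python
import math

def neuron(inputs, weights, threshold=0.5):
    """One sample through a toy neuron: weighted mix -> tanh -> comparator."""
    # the mixer stage: each input gets its own level (weight)
    mixed = sum(w * x for w, x in zip(weights, inputs))
    # the nonlinearity: tanh soft-clips the mixed signal
    shaped = math.tanh(mixed)
    # the comparator: "fire" only when the shaped signal is high enough
    return 1.0 if shaped > threshold else 0.0

# three inputs, as in the patch: two oscillators plus a feedback signal
print(neuron([0.9, 0.8, 0.0], [0.5, 0.5, 0.3]))  # fires: tanh(0.85) > 0.5
print(neuron([0.1, 0.0, 0.0], [0.5, 0.5, 0.3]))  # too quiet: doesn't fire
```

Run per-sample over audio-rate signals, that comparator output is exactly the on/off “spike” that then hits the filter and reverb.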

Now, as it stands, this neuron doesn’t learn anything. That would require the neuron getting some feedback about its output (it feeds back from the spring reverb, but that’s a little different). Is the neuron delivering the result we want based on the inputs? If not, how can it change the weights of these inputs so that it does?

We’ll save that for another day, though.

EDIT 05.18.22 – Taking it on the road!

Open Sound Control (OSC) in Pure Data Vanilla

How to receive and parse OSC (Open Sound Control) messages in Pure Data Vanilla for real-time musical control.


Open Sound Control, like MIDI, is a protocol for transmitting data for musical performance. Unlike MIDI, though, OSC data is transmitted over a network, so we can easily transmit wirelessly from our iPhones or other devices. Another difference is that OSC messages don’t have standard designations (like MIDI “Note On” or “Note Off”), so we need to set up ways to parse that data and map it to controls ourselves.

Here, I go over the basics of receiving and parsing OSC data in Pure Data Vanilla, setting us up to make our own data-driven instruments.
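For a sense of what an object like [oscparse] is doing under the hood, here’s a minimal Python sketch of the OSC wire format: a null-padded address string, a type-tag string, then big-endian arguments. (It only handles int, float, and string tags; a real implementation covers more, and the “/fader” address is just a made-up example.)

```python
import struct

def read_padded_string(data, offset):
    # OSC strings are null-terminated and padded out to a 4-byte boundary
    end = data.index(b"\x00", offset)
    text = data[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4  # skip the padding
    return text, offset

def parse_osc_message(data):
    address, offset = read_padded_string(data, 0)
    typetags, offset = read_padded_string(data, offset)
    args = []
    for tag in typetags.lstrip(","):
        if tag == "f":                       # 32-bit big-endian float
            (value,) = struct.unpack_from(">f", data, offset)
            offset += 4
        elif tag == "i":                     # 32-bit big-endian int
            (value,) = struct.unpack_from(">i", data, offset)
            offset += 4
        elif tag == "s":                     # another padded string
            value, offset = read_padded_string(data, offset)
        else:
            raise ValueError("unhandled OSC type tag: " + tag)
        args.append(value)
    return address, args

# the kind of message a phone app might send: an address plus one float
packet = b"/fader\x00\x00" + b",f\x00\x00" + struct.pack(">f", 0.5)
print(parse_osc_message(packet))  # ('/fader', [0.5])
```

Once the address and arguments are separated out like this, routing them to controls is the same job [route] and [unpack] do in the patch.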

0:00 Intro
2:46 [netreceive]
4:07 Sending OSC Messages
5:28 [oscparse]
6:02 Data!
7:11 [list trim]
8:09 [route]
9:03 [unpack]
9:46 Using the Data for Musical Control
13:52 Recap (Simplified Patch)
14:55 Explanation of Opening Patch

More Pure Data tutorials here.

Control, Communication, and Performance in Electronic Music (MaxMSP & Eurorack)

Talking about ideas of live electronic performance of electronic music using USB Controllers, Max/MSP, and Eurorack.

Here, I walk through how you can use a USB joystick to play MIDI synthesizers (like my Eurorack modular) using Max/MSP as a “translator.” Information from the joystick and its buttons comes in through the [hi] (“human interface”) object, and we can shape that data and pass it out as MIDI to whatever we want.

In this way, we can give ourselves nuanced control of our musical performance, enhancing our electronic music instruments.

0:00 Introduction
0:35 Generative Music and Feedback
1:31 Human Agency in Musical Systems
2:18 Devices for Human Interface
3:05 Today’s Goals
3:36 The [hi] Object
5:36 Looking at the Data
6:25 Isolating the Data with [route]
7:34 Converting the Numbers to MIDI
10:10 2D Piano
11:18 Sending MIDI to the NiftyCase
15:45 Controlling Effects (Wavefolder and Filter)
17:54 A Note about Resolution
18:49 Adding an Amplitude Envelope
19:58 Quick Recap
20:46 More Sophisticated Interactions of Data
23:04 Conclusion, Next Steps

More Max/MSP videos
More Talking Eurorack

Eurorack Neural Network Jam: “An Explanation of the Universe”

A mess of Eurorack CV feedback that’s not random. It’s chaotic!

This instrument creates chaotic synthesized music that I interact with using four knobs. The music that this synthesizer creates is not random. It is determined by a set of “rules” created by the different components interacting with each other. However, because each of these modules influences and is influenced by several others, the interconnected network of interactions obfuscates the rules of the system. This leads to the instrument’s chaotic, incomprehensible behavior.

As with all chaotic systems, though, if it were possible to understand all of the different components and their relationships, and do complex enough calculations, we would be able to predict the outcome of all of our interactions.
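The classic toy illustration of this is the logistic map: one fixed, deterministic rule with no randomness anywhere, yet a tiny change to the starting value sends the sequence off somewhere else entirely. (A Python sketch of the general idea — nothing to do with the actual modules in the patch.)

```python
def logistic_map(x0, r=3.9, steps=20):
    # x_next = r * x * (1 - x): a fixed rule, no randomness anywhere
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# same seed, same sequence, every time: fully deterministic
a = logistic_map(0.5)
b = logistic_map(0.5)
print(a == b)  # True

# but nudge the seed by one ten-millionth and watch the sequences diverge
c = logistic_map(0.5000001)
```

Knowing the rule and the exact starting state, you could predict everything; without that exact state, the output looks random even though it isn’t.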

Patch notes: ….Uh…. I just kept patching things back into each other, and this is where I ended up.

Reaktor 6 Primary Resonant EQ (Faking the Serge Resonant EQ in Reaktor)

Building a resonant EQ in Reaktor Primary, taking inspiration from the Serge Resonant EQ’s unevenly-spaced frequencies and nonlinear controls.

In my regular journeys across the internet, I came across the Random*Source Serge Resonant EQ, a reissue of the resonant EQ from the Serge Synthesizer, and became a bit taken with its implementation and ideas. $400 is a bit too much for an impulse buy, so let’s see what we can do in Reaktor.

Random*Source Serge Resonant EQ

Even if we don’t end up with something that sounds perfect, we can use this as an opportunity to think more about subtractive synthesis, and talk about “parametric support” in our control schemes.
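As a sketch of the basic building block we need — one band of a resonant EQ is a peaking filter — here are the well-known Audio EQ Cookbook (RBJ) peaking-EQ coefficients in Python rather than Reaktor. The frequency and Q values in the example are placeholders, not the Serge’s actual band tunings:

```python
import math

def peaking_eq_coeffs(f0, gain_db, q, sr=48000):
    # RBJ "Audio EQ Cookbook" peaking EQ coefficients
    a_gain = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_gain, -2.0 * math.cos(w0), 1.0 - alpha * a_gain]
    a = [1.0 + alpha / a_gain, -2.0 * math.cos(w0), 1.0 - alpha / a_gain]
    # normalize so a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(signal, b, a):
    # direct form I: one band of the EQ
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

At 0 dB of gain the band collapses to a pass-through (a handy sanity check), and a bank of these at unevenly spaced fixed frequencies is the rough idea behind the resonant EQ.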

0:00 Purchase Your Way to Music Proficiency!
0:43 Random*Source Serge Resonant EQ
1:14 What’s interesting about this?
2:59 Disclaimer
3:22 Reaktor Primary Peak EQ
5:00 “Boost” vs. “Resonance”
5:53 Making Selectable Sound Sources
8:18 Throwing in an Oscilloscope
8:49 Starting the Resonant EQ Macro
9:28 Creating a Single Band
11:24 Level Controls to Avoid Clipping
13:13 One Knob for Resonance and Boost
14:28 “Funny Math”
21:13 Recapping the Flow / Fine Tuning
22:49 Duplicate! (for each frequency)
23:23 Setting the Frequencies
25:09 Adding a Bypass Switch
25:53 Sound Test
27:14 Saturator
28:04 Waveform Variance Across Instrument Range
29:38 Feedback
35:30 Next Steps

Internet-Based Feedback Loops: Eurorack vs. Zoom (with Spectral Evolver)

Using the latency from videoconferencing software as a delay for Eurorack feedback loops, creating (noisy) evolving sonic textures.

I’m in Connecticut, Spectral Evolver is in Colorado, but that doesn’t mean we can’t connect our Eurorack systems.

Through the “magic” of Zoom, we create a feedback loop: I’m ring modulating the signal coming in from Zoom, he’s filtering the signal coming in on his end. This creates a “no-input” system across the world wide web, allowing us to create evolving textures inspired by Dutch composer Jaap Vink.

More on feedback loops and cybernetic systems here:

Understanding Mid/Side Stereo in Synthesis (Pure Data, Reaktor, and Eurorack)

Mid/Side is a different way of working with stereo: rather than one channel for the left and one for the right, you have one channel for the “mid” information and one for the “side.” This format allows for different approaches to stereo processing, playing with the stereo image in new and interesting ways.

I’ve seen a lot of videos about mid/side for mixing or mastering, but I thought I’d talk a bit about the potential of this approach in sound design, and how it can help us think about 3D audio and ambisonics too.
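The math itself is tiny, and works the same in any environment. A Python sketch of the encode/decode (the 0.8/0.2 samples and the 1.5 widening factor are just example values):

```python
def ms_encode(left, right):
    # mid = what the two channels share; side = how they differ
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    # perfect reconstruction: L = M + S, R = M - S
    return mid + side, mid - side

# a sample that's mostly on the left...
mid, side = ms_encode(0.8, 0.2)
# ...widen the stereo image by boosting only the side channel
wide_left, wide_right = ms_decode(mid, side * 1.5)
```

Because the side channel is isolated, processing it alone (boosting it, filtering it, reverberating it) changes the stereo image without touching what’s common to both speakers.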

Modules in the Eurorack modular demonstration:
-Winterbloom Castor&Pollux dual oscillator
-Shakmat SumDif precision adder
-Hikari Instruments Ping Filter
-Instruo Tanh saturator

Analog Audio Multiplication, Ring Modulation, & AM Synthesis (Snazzy FX Dual Multiplier)

I got a Snazzy FX “Dual Multiplier” the other day, and thought it might be a good opportunity to talk about audio multiplication and the difference between AM synthesis and ring modulation.

Both AM synthesis and ring modulation can be accomplished by multiplying a waveform (the “carrier”) by another waveform in the audible range. You don’t need an analog multiplier to do this! You can do this in whatever synthesis environment you’re working in: Pd, Max/MSP, Kyma, Reaktor. All you have to do is multiply your signals, being mindful of whether the signals are unipolar (0 to 1) or bipolar (-1 to 1).
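A quick Python sketch of that distinction (the frequencies and the toy sample rate are arbitrary): ring mod is a straight multiply of two bipolar signals, while AM shifts the modulator into unipolar territory first, so the carrier survives in the output alongside the sidebands.

```python
import math

def sine(freq, n, sr=48000):
    # n samples of a bipolar (-1 to 1) sine wave
    return [math.sin(2.0 * math.pi * freq * i / sr) for i in range(n)]

def ring_mod(carrier, modulator):
    # bipolar * bipolar: the carrier itself cancels out, leaving only sidebands
    return [c * m for c, m in zip(carrier, modulator)]

def am(carrier, modulator, depth=1.0):
    # shift the bipolar modulator into the unipolar range (0 to 1) first,
    # so the carrier remains present in the output
    return [c * 0.5 * (1.0 + depth * m) for c, m in zip(carrier, modulator)]

carrier = sine(440.0, 4800)    # the signal we hear
modulator = sine(110.0, 4800)  # an audible-rate modulator
rm_out = ring_mod(carrier, modulator)
am_out = am(carrier, modulator)
```

With a 440 Hz carrier and a 110 Hz modulator, ring mod leaves you the 330 Hz and 550 Hz sidebands only, while AM keeps the 440 Hz carrier in the mix as well.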

More on modulation synthesis: