A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.
I’ve previously shown artificial neurons and neural networks in Pd, but here I attempt to take things a step further and make a cybernetic system that demonstrates machine learning. It went okay, not great.
This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and adjusts its weights accordingly.
While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.
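For the curious, the learning idea can be sketched outside of Pd. Here is a minimal Python toy (not the actual patch, and with made-up values throughout): mix a few sine harmonics with adjustable weights, compare the mix to a target waveform, and nudge the weights with a least-mean-squares update. Note that this sketch leaves out the nonlinearity, which makes the problem much easier than the one attempted in the video.

```python
import math

# Toy version of the system: mix several input waveforms (here, three
# sine harmonics) with adjustable weights, compare against a target
# waveform, and nudge the weights to reduce the error (LMS rule).
N = 256  # samples per waveform cycle (arbitrary choice)
inputs = [[math.sin(2 * math.pi * h * n / N) for n in range(N)]
          for h in (1, 2, 3)]                        # harmonics 1, 2, 3
target = [0.8 * math.sin(2 * math.pi * n / N)        # target: fundamental
          + 0.3 * math.sin(2 * math.pi * 3 * n / N)  # plus some 3rd harmonic
          for n in range(N)]

weights = [0.0, 0.0, 0.0]
rate = 0.01  # learning rate (arbitrary choice)
for _ in range(200):  # repeat over the waveform many times
    for n in range(N):
        y = sum(w * x[n] for w, x in zip(weights, inputs))
        err = target[n] - y
        weights = [w + rate * err * x[n] for w, x in zip(weights, inputs)]
```

Because the target here really is a weighted sum of the inputs, the weights converge toward the ones used to build it (0.8, 0.0, 0.3). Add a nonlinearity, or a target the inputs can’t express, and it starts failing in the more interesting ways heard in the video.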
0:00 Intro / Concept 1:35 Single-Neuron Patch Explanation 3:23 The “Learning” Part 5:46 A (Moderate) Success! 7:00 Trying with Multiple Inputs 10:07 Neural Network Failure 12:20 Closing Thoughts, Next Steps
More music and sound with neurons and neural networks here:
Talking through the concept of an artificial neuron, the fundamental component of artificial intelligence and machine learning, from an audio perspective.
I’ve made a few videos recently with “artificial neurons” including in Pure Data and in Eurorack, and, in this video, I discuss the ideas here in more detail, specifically how an artificial neuron is just a nonlinear mixer.
An artificial neuron takes in multiple inputs, weights them, and then transforms the sum of them using an “activation function”, which is just a nonlinear transformation (of some variety).
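That sentence is short enough to be code. A minimal Python sketch (tanh is just one common choice of activation function; the values are arbitrary):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, passed through a nonlinearity (tanh)."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(s)  # the "activation function"

# Three input samples, three weights: a nonlinear mixer in one line.
out = neuron([0.5, -0.2, 0.8], [1.0, 0.5, -0.3])
```

From an audio perspective: the weights are channel gains on a mixer, and the activation function is a waveshaper on the mix bus.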
Of course just making a single neuron does not mean you’ve made an artificial intelligence or a program capable of “deep learning”, but understanding these fundamental building blocks can be a great first step in demystifying the growing number of machine learning programs in the 21st Century.
More music and sound design with artificial neurons:
Patching up an analog feedback loop in Eurorack with some generic modules.
I don’t do a lot of videos talking about Eurorack for two main reasons:
(1) I’ve actually only been doing Eurorack for a couple years now, even though I’ve been doing digital synthesis and sound design for decades, and
(2) I don’t want my videos to be about any particular piece of hardware that you need to get (as always, I’m not sponsored by anyone).
But the patch I put together in this video could be built with any number of modules; all I use is a sine wave, a ring modulator (multiplier), a reverb, a filter, and a limiter/compressor/saturator (anything to prevent hard clipping). Put them together, feed them back, and you have some dynamic, analog generative soundscapes.
Building some feedback loops in the digital domain using Symbolic Sound’s Kyma 7.
In audio feedback loops, the output of the system is fed back into an input. We’re probably most familiar with this when we put a microphone in front of a speaker and we get the “howling” sound. Here, though, I’m intentionally building digital feedback loops in order to explore the sonic possibilities of these rather unpredictable systems.
In order to keep my feedback loop interesting, though, I need to keep it from dying away to silence, or blowing up into white noise. By considering the different processes we apply to the audio in the loop (are they adding spectral complexity or removing it?), we can try to make feedback patches that are dynamic and interesting over time.
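The gain-versus-saturation balancing act can be sketched in a few lines of Python (this is a stand-in for the Kyma patch, not a recreation of it; the delay length and gain are arbitrary): a delay line feeds back through a gain stage greater than 1, and a soft saturator (tanh) keeps the loop from blowing up.

```python
import math

# Minimal feedback loop: a delay line whose output is fed back through
# a gain stage and a soft saturator. Gain > 1 keeps the loop from dying
# away; tanh keeps it bounded to +/-1 instead of hard-clipping.
DELAY = 100  # delay length in samples (arbitrary choice)
buf = [0.0] * DELAY
pos = 0

def tick(excite=0.0, gain=1.2):
    """One sample of the loop: read the delay, saturate, write back."""
    global pos
    fed_back = buf[pos]
    out = math.tanh(gain * fed_back + excite)
    buf[pos] = out
    pos = (pos + 1) % DELAY
    return out

# Kick the loop once, then let it sustain itself.
samples = [tick(excite=1.0 if n == 0 else 0.0) for n in range(1000)]
```

With the gain above 1 the single impulse never decays to silence, and the tanh keeps it from exploding; it settles into a repeating pulse. Swapping the tanh and gain for ring modulation, filters, and reverb is where the spectral complexity (and unpredictability) comes from.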
0:00 The Continuum of Spectral Complexity 3:13 Starting with a Sine Wave in Kyma 4:45 Delay with Feedback 5:49 Building Feedback Loops Manually 8:40 Ring-Modulating the Feedback 11:20 Gain and Saturation 14:22 Exploring the Sound 16:16 Filter Bank 19:05 Jamming with the Patch 22:18 Thinking about Control 23:25 Performing the Sound 26:34 Feedback Loop with Reverb 28:10 Making it into IDM with the Chopper 29:22 So What? Next Steps
Inspired by the cybernetic and feedback works of Roland Kayn, Éliane Radigue, Bebe Barron, and Jaap Vink, and embracing an anything-goes noise music aesthetic, this collection of works from early 2022 explores analog feedback loops and self-regulating patches in Eurorack modular.
In these pieces, audio signals are routed back into themselves, and used to control processes and trigger events. While these are performed improvisations, “performance” in this case does not mean strict control, since these systems influence themselves as much as the performer does.
A quick acknowledgement that these noisy soundscapes might not be for everyone. Don’t worry. I won’t be offended.
A mess of Eurorack CV feedback that’s not random. It’s chaotic!
This instrument creates chaotic synthesized music that I interact with using four knobs. The music that this synthesizer creates is not random. It is determined by a set of “rules” created by the different components interacting with each other. However, because each of these modules influences and is influenced by several others, the interconnected network of interactions obfuscates the rules of the system. This leads to the instrument’s chaotic, incomprehensible behavior.
As with all chaotic systems, though, if it were possible to understand all of the different components and their relationships, and do complex enough calculations, we would be able to predict the outcome of all of our interactions.
Patch notes: ….Uh…. I just kept patching things back into each other, and this is where I ended up.
Better explained on the La Synthèse Humaine channel, “cybernetics” here follows Norbert Wiener’s definition: “control and communication in the animal and the machine,” since these audio feedback systems are “self-regulating” (self-controlling?) sonic ecosystems.
Anyway, I’ve put together a playlist of my improvisations, collaborations, and ramblings about feedback and cybernetic systems. Enjoy!