Video presentation I made for the 2024 “Explainable AI for the Arts” (XAIxArts) Workshop, part of the ACM Creativity and Cognition Conference 2024.
A lot of these points I’ve discussed elsewhere (see playlist below), but this quickie presentation brings together these ideas, focusing on the aesthetic potential of this approach.
Check out the complete playlist for more hands-on creation of neurons and neural networks:
Combining human input from a joystick with a two-neuron artificial neural network for chaotic interactive music.
This Eurorack joystick is going into a simple neural network to control multiple dimensions of the timbre of this synth voice. The joystick's X, Y, and Z dimensions go into different inputs of the Nonlinear Circuits Dual Neuron, where they are mixed together and transformed by a nonlinearity (more here). In addition to the outputs controlling the waveform and filter cutoff of the synth, the output of each neuron is fed back into the other, creating a chaotic artificial organism with which to improvise.
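The cross-feedback idea can be sketched in code. This is a minimal Python sketch, not the actual analog circuit: the tanh nonlinearity, the specific weights, and the input routing are all assumptions chosen to illustrate how two cross-coupled neurons can behave chaotically.

```python
import math

def step(state, inputs, w_in, w_fb):
    """One update of a two-neuron network with cross-feedback.

    state:  current outputs (n1, n2) of the two neurons
    inputs: external control values, e.g. joystick (x, y, z)
    w_in:   input weights (two per neuron, illustrative)
    w_fb:   feedback weights (neuron 2 -> 1, neuron 1 -> 2)
    """
    n1, n2 = state
    x, y, z = inputs
    # Each neuron sums its external inputs plus the *other* neuron's
    # output, then squashes the sum with a tanh nonlinearity.
    new_n1 = math.tanh(w_in[0] * x + w_in[1] * y + w_fb[0] * n2)
    new_n2 = math.tanh(w_in[2] * y + w_in[3] * z + w_fb[1] * n1)
    return (new_n1, new_n2)

# Hold the joystick still and iterate: with strong feedback weights the
# trajectory can keep wandering instead of settling to a fixed point.
state = (0.1, -0.1)
for _ in range(50):
    state = step(state, (0.3, -0.5, 0.8), (1.2, 0.8, -1.1, 0.9), (2.5, -2.7))
```

Each neuron's output stays in (-1, 1) because of the tanh, so the pair can be mapped directly onto waveform and cutoff parameters without clipping.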
Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.
I’ve talked before about data-driven instruments, and I’ve talked before about artificial neurons and artificial neural networks, so here I combine those ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.
In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).
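The basic mapping can be sketched outside of Max/MSP too. Here's a minimal Python version, assuming mouse coordinates already normalized to [-1, 1]; the weights, the tanh nonlinearity, and the note range are illustrative choices, not the patch's actual values.

```python
import math

def neuron(xs, weights, bias=0.0):
    """Weighted sum of inputs squashed by tanh -> output in (-1, 1)."""
    s = sum(w * x for w, x in zip(weights, xs)) + bias
    return math.tanh(s)

def to_midi(y, lo=36, hi=84):
    """Scale a (-1, 1) neuron output onto a MIDI note range."""
    return round(lo + (y + 1) / 2 * (hi - lo))

# Mouse x/y normalized to [-1, 1]; weights are arbitrary for the demo.
mouse = (0.25, -0.6)
note = to_midi(neuron(mouse, (1.5, -2.0)))
```

Because tanh compresses large sums, motion near the edges of the input range changes the note less than motion near the center, which is exactly the nonlinear character the linear mapping lacks. The same output could just as easily drive OSC messages or a filter cutoff.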
0:00 Intro
1:43 [mousestate] for Data Input
2:58 Mapping a linear data-driven instrument
7:19 Making our Artificial Neuron
15:27 Simple ANN
20:06 Adding Feedback
22:23 Closing Thoughts, Next Steps
A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.
I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take things to the next step and make a cybernetic system that demonstrates machine learning. It went well, not great.
This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and attempts to adjust its weights accordingly.
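The compare-and-adjust loop is essentially the delta (LMS) rule. Here's a minimal Python sketch of that idea, using sine tables as the input waveforms and a linear neuron for simplicity (the patch uses a nonlinearity, which would add a derivative term to the update); the table size, learning rate, and target mix are all assumptions for the demo.

```python
import math

N = 256
# Input "oscillators": three sine partials, one wavetable each.
inputs = [[math.sin(2 * math.pi * k * i / N) for i in range(N)] for k in (1, 2, 3)]
# Target waveform: a mix the neuron should be able to learn.
target = [0.5 * inputs[0][i] - 0.3 * inputs[2][i] for i in range(N)]

weights = [0.0, 0.0, 0.0]
rate = 0.05  # learning rate

for epoch in range(200):
    for i in range(N):
        xs = [tab[i] for tab in inputs]
        out = sum(w * x for w, x in zip(weights, xs))  # linear neuron
        err = target[i] - out
        # Delta rule: nudge each weight in proportion to its input
        # and the error, shrinking the mismatch sample by sample.
        weights = [w + rate * err * x for w, x in zip(weights, xs)]
```

With orthogonal sine inputs this converges cleanly; the failures in the video likely come from richer, correlated inputs and the nonlinearity, where this simple rule struggles, and that struggle is itself the sound.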
While it fails to reproduce the waveform in most cases, the resulting audio of a poorly designed AI failing might still hold expressive possibilities.
0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps
More music and sound with neurons and neural networks here: