Exploring some ideas for using mono effects for stereo processing with the Eurorack Random*Source Serge Resonant EQ.
The Serge Resonant EQ is mono, so stereo processing would usually require two of these (expensive) modules. With just a single module, though, if we mess with the two “comb” outputs and bring mid/side processing into the picture, we can create some compelling stereo outputs.
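To make the trick concrete, here is a minimal numpy sketch of both moves, assuming the two comb outputs have already been recorded into arrays (the buffers below are just noise stand-ins):

```python
import numpy as np

# Noise stand-ins for recordings of the module's two "comb" outputs.
comb_a = np.random.randn(48000)
comb_b = np.random.randn(48000)

# Simplest stereoization: one comb output per channel.
left, right = comb_a, comb_b

# The mid/side reading: treat one comb out as mid, the other as
# side, and decode back to left/right.
mid, side = comb_a, comb_b
left_ms = mid + side
right_ms = mid - side
```

Since left plus right collapses back toward the mid signal, this kind of stereoization stays reasonably mono-compatible.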
0:00 Introduction, I/O overview
0:47 Mono signal in, mono signal out
1:10 Using the “comb” outputs to stereoize the signal
2:01 Mid/side processing the “comb” outs of a mono signal
3:14 Processing a stereo signal, side only
5:10 Processing a stereo signal, mid only
6:20 Changing the source material, side-only processing
9:16 Closing, next steps, mid-only processing
Combining human input from a joystick with a two-neuron artificial neural network for chaotic interactive music.
This Eurorack joystick feeds a simple neural network that controls multiple dimensions of the timbre of this synth voice. The joystick’s X, Y, and Z dimensions go into different inputs of the Nonlinear Circuits Dual Neuron, where they are mixed together and transformed by a nonlinearity. In addition to the outputs controlling the waveform and filter cutoff of the synth, the output of each neuron is fed back into the other, creating a chaotic artificial organism with which to improvise.
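As a rough Python sketch of that topology (the weights and joystick values here are invented, and tanh stands in for the Dual Neuron’s nonlinearity):

```python
import math

def neuron(inputs, weights):
    # Weighted sum of the inputs, squashed by a nonlinearity.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

x, y, z = 0.4, -0.2, 0.7   # stand-ins for the joystick dimensions
a, b = 0.0, 0.1            # the two neuron outputs

for step in range(100):
    # Each neuron sees the joystick plus the *other* neuron's output.
    a = neuron([x, y, b], [1.2, 0.8, 1.5])
    b = neuron([y, z, a], [0.9, 1.1, -1.4])
    print(a, b)
```

Depending on the weights, the cross-coupled pair can settle, oscillate, or wander unpredictably; in hardware, those weights are the knobs you ride while improvising.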
I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.
It’s been interesting trying to set something up that has the flexibility I want, while still being portable enough not to take up too much space (and weight) in my luggage. Of course, as is often said, limitations can lead to greater creativity.
Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it, a Benjolin in particular) and recorded my three-track album “Ihatov MU.” This year’s sessions were a fun extension of those ideas.
Perhaps I should do some performing out in New England in the next few months.
Patching up an analog feedback loop in Eurorack with some generic modules.
I don’t do a lot of videos talking about Eurorack for two main reasons:
(1) I’ve actually only been doing Eurorack for a couple years now, even though I’ve been doing digital synthesis and sound design for decades, and
(2) I don’t want my videos to be about any particular piece of hardware that you need to get (as always, I’m not sponsored by anyone).
But the patch I put together in this video could be built with any number of modules. All I have is a sine wave, a ring modulator (multiplier), a reverb, a filter, and a limiter/compressor/saturator (anything to stop hard clipping). Put them together, feed them back, and you have some dynamic, analog generative soundscapes.
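For anyone who would rather see the topology as code, here is a toy Python version of the same loop. Everything here is a stand-in: a short delay line plays the role of the reverb, a one-pole lowpass is the filter, tanh does the limiting, and the coefficients are invented.

```python
import numpy as np

sr = 48000
out = np.zeros(sr * 5)            # five seconds of output

delay = np.zeros(sr // 4)         # short delay line standing in for the reverb
write = 0
lp = 0.0                          # one-pole lowpass state (the "filter")
fb = 0.0                          # the signal coming back around the loop

for i in range(len(out)):
    sine = np.sin(2 * np.pi * 110 * i / sr)

    # Ring modulator: multiply the sine by the fed-back signal.
    # A touch of dry sine seeds the loop (analog noise does this for free).
    rung = sine * fb + 0.1 * sine

    # "Reverb": read the oldest sample, write the new one back in.
    read = delay[write]
    delay[write] = rung + 0.7 * read
    write = (write + 1) % len(delay)

    # Filter: one-pole lowpass to tame the top end of the loop.
    lp += 0.05 * (read - lp)

    # Limiter/saturator: tanh soft clipping stops hard clipping.
    fb = np.tanh(1.5 * lp)
    out[i] = fb
```

The interesting behavior lives in the feedback gains: push the loop toward unity gain and the soft clipper becomes the only thing keeping it from running away, which is exactly the region where the patch gets lively.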
Recently, I’ve been hooked on the idea of neurons and electronic and digital models of them. As always, this interest is focused on how these models can help us make interesting music and sound design.
It all started with my explorations into modular synths, especially focusing on the weirdest modules that I could find. I’d already spent decades doing digital synthesis, so I wanted to know what the furthest reaches of analog synthesis had to offer, and one of the modules that I came across was the nonlinearcircuits “neuron” (which had the additional benefit that it was simple enough for me to solder together on my own for cheap).
Anyway, today, I don’t want to talk about this module in particular, but rather more generally about what an artificial neuron is and what it can do with audio.
I wouldn’t want to learn biology from a composer, so I’ll keep this in the simplest terms possible (so I don’t mess up). The concept here is that a neuron receives a bunch of signals through its dendrites and, based on these signals, sends out its own signal through its axon.
Are you with me so far?
In the case of biological neurons, these “signals” are chemical or electrical; in these sonic explorations, the signals are the continuously changing voltages of an analog audio signal.
So, in audio, the way we combine multiple audio sources is a mixer.
Now, the interesting thing here is that a neuron doesn’t just sum the signals from its dendrites and send them to the output. It gives these inputs different weights (levels), and combines them in a nonlinear way.
In our sonic models of neurons, this “nonlinearity” could be a number of things: waveshapers, rectifiers, etc.
In the case of our sonic explorations, different nonlinear transformations will lead to different sonic results, but there are no real “better” or “worse” choices (except as driven by your aesthetic goals). Now, if I wanted to train an artificial neural net to identify pictures or compose algorithmic music, I’d think more about it (and there’s lots of literature about these activation function choices).
But, OK! A mixer with the ability to control the input levels and a nonlinear transformation! That’s our neuron! That’s it!
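In code, the whole thing fits in a few lines (a minimal sketch; the input values and weights are arbitrary):

```python
import math

def neuron(inputs, weights):
    # A mixer with a level control on each input...
    mixed = sum(w * x for w, x in zip(weights, inputs))
    # ...followed by a nonlinearity (tanh here; a rectifier or
    # wavefolder would give a different flavor).
    return math.tanh(mixed)

print(neuron([0.5, -0.3, 0.8], [1.0, 0.7, 0.2]))
```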
In this patch, our mixer receives three inputs: a sequenced sine wave, a chaotically-modulated triangle wave, and one more thing I’ll get back to in a sec. That output is put through a hyperbolic tangent function (soft clipping, basically), then run into a comparator (if the input is high enough, fire the synapse!). The comparator’s output is filtered, run to a spring reverb, and then the reverb is fed back into that third input of the mixer.
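Sketching just the neuron’s core from that chain (the weights and threshold are made up, and the filter/spring-reverb path is collapsed into whatever comes back as reverb_return):

```python
import math

def neuron_fires(sine, triangle, reverb_return,
                 weights=(0.8, 0.6, 0.5), threshold=0.3):
    # Mix the three inputs at their respective levels.
    mixed = sum(w * x for w, x in zip(weights, (sine, triangle, reverb_return)))
    # Soft-clip with tanh, then compare: fire the synapse or not.
    return math.tanh(mixed) > threshold
```

The comparator’s gate is what then goes through the filter and spring reverb, and that wet signal is what arrives back at reverb_return on the next pass.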
Now, as it stands, this neuron doesn’t learn anything. That would require the neuron getting some feedback from its output (it feeds back from the spring reverb, but that’s a little different). Is the neuron delivering the result we want based on the inputs? If not, how can it change the weights of these inputs so that it does?
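For the curious, that missing piece looks something like the classic delta rule: compare the output to a target, and nudge each weight in proportion to its input and the error. A toy version (targets, rates, and inputs all invented):

```python
import math

def neuron(inputs, weights):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

def learn(inputs, weights, target, rate=0.3):
    # Delta rule: nudge each weight toward reducing the output error.
    error = target - neuron(inputs, weights)
    return [w + rate * error * x for w, x in zip(weights, inputs)]

inputs = [0.5, -0.3, 0.8]
weights = [0.1, 0.1, 0.1]
for _ in range(200):
    weights = learn(inputs, weights, target=0.7)
print(neuron(inputs, weights))   # converges toward 0.7
```

Nobody is claiming a musical patch needs this, but it’s the difference between a neuron-shaped circuit and one that actually adapts.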
How to receive and parse OSC (Open Sound Control) messages in Pure Data Vanilla for real-time musical control.
Open Sound Control, like MIDI, is a protocol for transmitting data for musical performance. Unlike MIDI, though, OSC data is transmitted over a network, so we can easily transmit wirelessly from our iPhones or other devices. Another difference is that OSC messages don’t have standard designations (like MIDI “Note On” or “Note Off”), so we need to set up ways to parse that data and map it to controls ourselves.
Here, I go over the basics of receiving and parsing OSC data in Pure Data Vanilla, setting us up to make our own data-driven instruments.
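In the video, [netreceive] and [oscparse] do the heavy lifting; for a sense of what is actually inside a raw OSC packet, here is a stdlib-only Python sketch that decodes the common cases (int, float, and string arguments; bundles are ignored, and port 9000 is arbitrary):

```python
import socket
import struct

def read_string(data, pos):
    # OSC strings are null-terminated and padded to 4-byte boundaries.
    end = data.index(b"\x00", pos)
    value = data[pos:end].decode()
    return value, (end + 4) & ~3   # skip past the padding

def parse_osc(data):
    address, pos = read_string(data, 0)
    tags, pos = read_string(data, pos)   # e.g. ",fi"
    args = []
    for tag in tags[1:]:                 # skip the leading comma
        if tag == "i":
            args.append(struct.unpack(">i", data[pos:pos + 4])[0])
            pos += 4
        elif tag == "f":
            args.append(struct.unpack(">f", data[pos:pos + 4])[0])
            pos += 4
        elif tag == "s":
            s, pos = read_string(data, pos)
            args.append(s)
    return address, args

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))   # match whatever port your sender uses
while True:
    packet, _ = sock.recvfrom(4096)
    print(parse_osc(packet))   # e.g. ("/1/fader1", [0.42])
```

An address plus a list of arguments is all there is, and the [route]/[unpack] steps in the patch are the Pd equivalent of picking that tuple apart.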
0:00 Intro
2:46 [netreceive]
4:07 Sending OSC Messages
5:28 [oscparse]
6:02 Data!
7:11 [list trim]
8:09 [route]
9:03 [unpack]
9:46 Using the Data for Musical Control
13:52 Recap (Simplified Patch)
14:55 Explanation of Opening Patch
Talking about ideas for live performance of electronic music using USB controllers, Max/MSP, and Eurorack.
Here, I walk through how you can use a USB joystick to control MIDI synthesizers (like my Eurorack modular) using Max/MSP as a “translator.” Information from the joystick and its buttons comes in on the [hi] (“human interface”) object, and we can shape that data and pass it out as MIDI data to whatever we want.
In this way, we can give ourselves nuanced control of our musical performance, enhancing our electronic music instruments.
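The essential move, in whatever language, is rescaling the controller’s raw range onto MIDI’s 0-127. A small sketch (the 0-255 input range is an assumption; your device may report something different):

```python
def axis_to_cc(value, in_min=0, in_max=255):
    # Map a raw joystick axis reading onto the 0-127 range
    # a MIDI continuous controller expects, clamped at the ends.
    scaled = (value - in_min) * 127 / (in_max - in_min)
    return max(0, min(127, round(scaled)))

print(axis_to_cc(128))   # mid-stick -> 64
```

This rescaling is also where MIDI’s 7-bit resolution shows up: 128 steps is fairly coarse for something as sensitive as a filter cutoff.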
0:00 Introduction
0:35 Generative Music and Feedback
1:31 Human Agency in Musical Systems
2:18 Devices for Human Interface
3:05 Today’s Goals
3:36 The [hi] Object
5:36 Looking at the Data
6:25 Isolating the Data with [route]
7:34 Converting the Numbers to MIDI
10:10 2D Piano
11:18 Sending MIDI to the NiftyCase
15:45 Controlling Effects (Wavefolder and Filter)
17:54 A Note about Resolution
18:49 Adding an Amplitude Envelope
19:58 Quick Recap
20:46 More Sophisticated Interactions of Data
23:04 Conclusion, Next Steps