Building a “clamping VCA” in Reaktor for subtle distortion, imitating the envelopes in the Roland TR-808.
Normally, an amplitude envelope for your synth is just that: a control envelope on the amplitude of the signal. With a “clamping VCA”, though, instead of controlling the amplitude of the waveform, we clip it at the envelope’s current level. This means that when the VCA is all the way up, it sounds the same, but during the attack and release we get subtle (or perhaps not-so-subtle) distortion added to our waveform.
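The difference is easy to see per-sample. Here’s a rough Python sketch (not the actual Reaktor patch; the function names are mine) comparing a standard VCA, which scales the signal by the envelope, with the clamping version, which clips it at the envelope level:

```python
def standard_vca(sample, env):
    # Classic VCA: scale the waveform by the envelope level.
    return sample * env

def clamping_vca(sample, env):
    # "Clamping" VCA: clip the waveform at the envelope level instead.
    # Samples below the envelope pass through untouched; samples above
    # it get their tops squared off, which is where the distortion comes from.
    return max(-env, min(env, sample))

# With the envelope halfway up (0.5):
standard_vca(0.3, 0.5)  # 0.15 -- everything gets quieter, same shape
clamping_vca(0.3, 0.5)  # 0.3  -- quiet samples pass unchanged
clamping_vca(1.0, 0.5)  # 0.5  -- loud samples are flattened at the clamp
```

Note how the clamping VCA leaves anything under the envelope level alone, so a full-level envelope is transparent, while the attack and release stages clip.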
I use the “Mod. Clipper” in Reaktor 6 to achieve this effect, stealing the idea from Noise Engineering’s “Sinclastic Empulatrix” module, which, in turn, stole the idea from the Roland TR-808 drum machine’s cymbal envelopes.
0:00 The “Mod. Clipper” 0:33 Clamping VCA 1:25 Simple Sine Oscillator 2:03 Mod-Clipping the Sine Wave 3:51 Standard VCA for comparison 4:58 Pulse Wave 5:41 Sawtooth Wave 6:34 Adding a Filter 7:35 Next Steps
Adding envelopes to our synthesizer that aren’t an ADSR.
ADSRs might be the envelope generators that we encounter most often, but they’re not the only way to shape our sound. There are a number of other musical ways to craft change in our synthesizer over time with these non-periodic TVCs.
Let’s check out what other options there are in Reaktor 6 primary.
Coding (well, “patching”) an artificial neural network in Pure Data Vanilla to create some generative ambient filter pings.
From zero to neural network in about ten minutes!
In audio terms, an artificial neuron is just a nonlinear mixer, and, to create a network of these neurons, all we need to do is run them into each other. So, in this video, I do just that: we make our neuron, duplicate it out until we have 20 of them, and then send some LFOs through that neural network. In the end, we use the output to trigger filter “pings” of five different notes.
There’s not really any kind of true artificial intelligence (or “deep learning”) in this neural network, because the output of the network, while it is fed back, doesn’t go back and affect the weights of the inputs in the individual neurons. For that kind of machine learning, we would need some desired goal (e.g. playing a Beethoven symphony or a major scale). Here, we just let the neural network provide us with some outputs for some Pure Data generative ambient pings. Add some delay, and you’re all set.
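To make the structure concrete, here’s a rough Python sketch of the same idea (this is my own translation, not the Pd patch itself): each neuron is a weighted mixer with a nonlinearity, twenty of them are cross-wired into each other, and two LFOs drive the whole network.

```python
import math
import random

def neuron(inputs, weights):
    # An artificial neuron, audio-style: a weighted mix of its inputs,
    # pushed through a nonlinearity (tanh here, i.e. soft clipping).
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)))

random.seed(1)
N = 20  # twenty neurons, as in the video

# Every neuron listens to all twenty neuron outputs plus two LFOs.
weights = [[random.uniform(-1.0, 1.0) for _ in range(N + 2)] for _ in range(N)]
state = [0.0] * N

for step in range(200):
    t = step / 100.0
    lfo1 = math.sin(2 * math.pi * 0.30 * t)  # slow LFOs feeding the net
    lfo2 = math.sin(2 * math.pi * 0.13 * t)
    state = [neuron(state + [lfo1, lfo2], weights[i]) for i in range(N)]

# Any of the twenty outputs can now be thresholded to "ping" a filter.
```

In the Pd patch the thresholding is what [threshold~] does: when a network output crosses a level, it bangs a filter tuned to one of the five notes.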
There’s no talking on this one, just building the patch, and listening to it go.
0:00 Demo 0:12 Building an artificial neuron 2:00 Networking our neurons 3:47 Feeding LFOs into the network 4:20 Checking the output of the network 5:00 Pinging filters with [threshold~] 8:55 Adding some feedback 10:18 Commenting our code 12:47 Playing with the network
Creating an artificial neuron in Pd:
Pinging Filters in Pd:
More no-talking Pure Data jams and patch-from-scratch videos:
Making some chiptune French house using the Commodore 64 and Alesis 3630.
Here, I’m using Paul Slocum’s CynthCart to turn my old C64 into a SID synthesizer. We run those licks into an Alesis 3630 compressor, side-chained to a kick drum (from an Alesis D-4), and then we have some pumping French house. Finally, we add some finishing touches with delay, reverb, and EQ in Logic Pro, as well as a cameo by an Electrix Warp Factory hardware vocoder. Download the track here for free:
Patching up an analog feedback loop in Eurorack with some generic modules.
I don’t do a lot of videos talking about Eurorack for two main reasons:
(1) I’ve actually only been doing Eurorack for a couple years now, even though I’ve been doing digital synthesis and sound design for decades, and
(2) I don’t want my videos to be about any particular piece of hardware that you need to get (as always, I’m not sponsored by anyone).
But the patch I put together in this video could be built with any number of modules: all it takes is a sine wave, a ring modulator (multiplier), a reverb, a filter, and a limiter/compressor/saturator (anything to stop hard clipping). Put them together, feed them back, and you have some dynamic, analog generative soundscapes.
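If it helps to see the signal flow written out, here’s a loose digital sketch of that loop in Python (my own stand-ins: a one-pole lowpass for the filter, tanh for the limiter stage, and the feedback sample standing in for the reverb return):

```python
import math

SR = 48000
FREQ = 110.0          # the sine oscillator

phase = 0.0
fb = 0.0              # feedback sample (stands in for the reverb return)
lp = 0.0              # one-pole lowpass state (the "filter")
COEF = 0.05           # lowpass coefficient
out = []

for n in range(SR // 10):                # 100 ms of audio
    phase += 2 * math.pi * FREQ / SR
    carrier = math.sin(phase)
    ringmod = carrier * (0.5 + fb)       # ring-modulate sine by the feedback
    lp += COEF * (ringmod - lp)          # filter tames the loop
    fb = math.tanh(lp * 1.5)            # saturation stops hard clipping...
    out.append(fb)                       # ...and feeds back around
```

The tanh stage is the important part: without something soft at the end of the loop, the feedback either dies out or slams into hard clipping.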
Creating retro sounds with hard-synced oscillators in Reaktor 6 Primary.
“Hard sync” is a synthesis technique that uses two oscillators: when one oscillator (the “leader”) finishes a cycle, it resets the period of the other oscillator (the “follower”), creating a period at the frequency of the leader, but a timbre from the incomplete cycles of the follower.
This is a really easy way to create original, complex sounds, using just two oscillators.
0:00 Defining “Hard Sync” 0:38 Building a Single Oscillator 1:35 Adding the “Follower” 3:03 Changing the Pitch Relationship 4:40 That Hard Sync Sound 4:57 How it Works 6:30 Follower Lower than Leader 7:25 Adding an Amplitude Envelope 8:10 Adding a Filter (for a bit) 9:28 Closing, Next Steps
A simple digital feedback patch in Pure Data built from just delay, ring modulation, and saturation.
Building on my digital feedback video from a few weeks ago, here’s a quick patch for setting up a dynamic controllable feedback loop in Pd Vanilla. I’ve set up a way to get things going with a little sine-wave beep, and you can hear that the feedback loop makes things pretty complex pretty quickly. WATCH THOSE LEVELS! It gets loud in the middle.
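Here’s a rough Python equivalent of that loop (not the Pd patch itself; the delay length, LFO rate, and gains are placeholders): a short sine beep seeds a delay line, the delayed signal is ring-modulated by an LFO, saturated, and written back into the delay.

```python
import math

SR = 48000
delay = [0.0] * 4800      # 100 ms delay line
idx = 0
lfo_phase = 0.0
out = []

for n in range(SR):                       # one second of audio
    # A short sine beep gets things going; after that the loop
    # runs on feedback alone.
    seed = math.sin(2 * math.pi * 440 * n / SR) if n < 2400 else 0.0
    delayed = delay[idx]
    lfo_phase += 3.0 / SR                 # 3 Hz ring-mod LFO
    ringmod = delayed * math.sin(2 * math.pi * lfo_phase)
    wet = math.tanh(seed + ringmod * 1.2) # saturation keeps levels sane
    delay[idx] = wet                      # write back into the loop
    idx = (idx + 1) % len(delay)
    out.append(wet)
```

As in the video, the saturation is what lets the loop gain sit above 1.0 without running away, which is what keeps the feedback "alive" and evolving.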
Recently, I’ve been hooked on the idea of neurons and electronic and digital models of them. As always, this interest is focused on how these models can help us make interesting music and sound design.
It all started with my explorations into modular synths, especially focusing on the weirdest modules that I could find. I’d already spent decades doing digital synthesis, so I wanted to know what the furthest reaches of analog synthesis had to offer, and one of the modules that I came across was the nonlinearcircuits “neuron” (which had the additional benefit that it was simple enough for me to solder together on my own for cheap).
Anyway, today, I don’t want to talk about this module in particular, but rather more generally about what an artificial neuron is and what it can do with audio.
I wouldn’t want to learn biology from a composer, so I’ll keep this in the simplest terms possible (so I don’t mess up). The concept is that a neuron receives a bunch of signals through its dendrites and, based on those signals, sends out its own signal through its axon.
Are you with me so far?
In the case of biological neurons, these “signals” are chemical or electrical; in these sonic explorations, the signals are the continuously changing voltages of an analog audio signal.
So, in audio, the way we combine multiple audio sources is a mixer:
Now, the interesting thing here is that a neuron doesn’t just sum the signals from its dendrites and send them to the output. It gives these inputs different weights (levels), and combines them in a nonlinear way.
In our sonic models of neurons, this “nonlinearity” could be a number of things: waveshapers, rectifiers, etc.
In the case of our sonic explorations, different nonlinear transformations will lead to different sonic results, but there’s no real “better” or “worse” choices (except driven by your aesthetic goals). Now, if I wanted to train an artificial neural net to identify pictures or compose algorithmic music, I’d think more about it (and there’s lots of literature about these activation function choices).
But, OK! A mixer with the ability to control the input levels and a nonlinear transformation! That’s our neuron! That’s it!
In this patch, our mixer receives three inputs: a sequenced sine wave, a chaotically-modulated triangle wave, and one more thing I’ll get back to in a sec. That output is put through a hyperbolic tangent function (soft clipping, basically), then run into a comparator (if the input is high enough, fire the synapse!); the comparator’s output is filtered, run to a spring reverb, and then the reverb is fed back into that third input of the mixer.
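That whole chain fits in a few lines of Python, if you’ll forgive some stand-ins (a static sine instead of a sequenced one, a plain feedback delay instead of a spring reverb, and weights I made up):

```python
import math

SR = 48000
fb = 0.0               # "reverb" return, feeding input three of the mixer
lp = 0.0               # one-pole filter on the comparator output
tank = [0.0] * 2400    # a plain delay line stands in for the spring reverb
idx = 0
fired = 0              # how many samples the "synapse" fired

for n in range(SR):
    t = n / SR
    sine = math.sin(2 * math.pi * 220 * t)          # the sine input
    tri = 4 * abs((7 * t) % 1 - 0.5) - 1            # a triangle LFO input
    mixed = 0.6 * sine + 0.3 * tri + 0.8 * fb       # weighted mixer (dendrites)
    shaped = math.tanh(2.0 * mixed)                 # the nonlinearity
    spike = 1.0 if shaped > 0.5 else 0.0            # comparator: fire the synapse!
    fired += int(spike)
    lp += 0.01 * (spike - lp)                       # filter the spikes
    reverb_out = tank[idx]                          # "reverb" return...
    tank[idx] = lp + 0.5 * reverb_out
    idx = (idx + 1) % len(tank)
    fb = reverb_out                                 # ...back into the mixer
```

The point isn’t the specific numbers; it’s that the whole “neuron” is just that weighted sum plus the tanh, and everything after it is sound design.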
Now, as it stands, this neuron doesn’t learn anything. That would require the neuron getting some feedback from its output (it does feed back from the spring reverb, but that’s a little different). Is the neuron delivering the result we want based on the inputs? If not, how can it change the weights of these inputs so that it does?