Having some fun using mid/side stereo for sound design in Pure Data vanilla.
Here, we encode our stereo signal into mid/side with some simple math, then ring-modulate and delay the side material before decoding back into left-right stereo.
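If you'd rather see the math outside of Pd, here's a minimal NumPy sketch of the same chain; the specific frequencies, the 8 Hz ring-mod carrier, and the 250 ms delay are my placeholder choices, not values from the patch:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second of sample times

# Stand-in stereo material (in the video, this is the bleep-bloop patch).
left = np.sin(2 * np.pi * 220 * t)
right = np.sin(2 * np.pi * 330 * t)

# Encode: mid is the sum of the channels, side is the difference
# (scaled by 0.5 so decoding restores the original levels).
mid = 0.5 * (left + right)
side = 0.5 * (left - right)

# Ring-modulate only the side material (8 Hz carrier, an arbitrary choice).
side = side * np.sin(2 * np.pi * 8 * t)

# Delay the side material by 250 ms (zero-padded, then trimmed to length).
d = int(0.25 * sr)
side = np.concatenate([np.zeros(d), side])[: len(t)]

# Decode back into left-right stereo.
out_left = mid + side
out_right = mid - side
```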
0:00 Bleep bloops in left-right stereo
1:12 Creating a mid/side encoder & decoder
2:19 Adding ring modulation to only the sides
3:23 Adding a delay to the sides
4:49 Expanding the range of ring modulation
Click here for a deeper explanation of mid/side stereo and synthesis:
Tutorial on “no-input mixing” in a DAW (Logic Pro X, in this case) for wild feedback-based sound design.
With a little knowledge of digital signal flow, we can easily set up an aux track in our DAW as a feedback loop, sending the track back into itself. Once we start adding effects, we can achieve new and unexpected sounds. This technique can be a way to generate new sonic material, add some interest to a drum loop, or even build vast, evolving soundscapes.
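As a rough digital analogy (a sketch, not the actual Logic routing), here's a feedback loop in NumPy with a one-pole lowpass standing in for the insert effect; keeping the send gain below 1.0 is what keeps the loop from running away:

```python
import numpy as np

sr = 44100
feedback_gain = 0.7          # bus send level; keep below 1.0 or the loop blows up
delay_samps = int(0.3 * sr)  # the loop's round-trip time (arbitrary choice)

# "Input" material: a short noise burst standing in for the Casio beat.
dry = np.zeros(sr * 3)
dry[:2000] = np.random.uniform(-1, 1, 2000)

out = np.zeros_like(dry)
lp = 0.0  # one-pole lowpass state (the "effect" inserted in the loop)
for n in range(len(dry)):
    fb = out[n - delay_samps] if n >= delay_samps else 0.0
    lp = lp + 0.2 * (fb - lp)           # darken the feedback on each pass
    out[n] = dry[n] + feedback_gain * lp
```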
0:00 Intro / Casio Beat
0:39 Output to Aux Track
1:06 Feeding Back with a Bus Send
2:20 Adding Effects to the Loop
4:14 More Subtle Effects
4:58 More Extreme (Pitch Shifter)
5:17 Removing the “Input”
6:47 Talking through the No-Input Mixer
8:18 Closing Thoughts
All sound can be broken down into individual frequency components, and the lowest frequency component of a sound is called the “fundamental” (all the frequencies above that fundamental are the “partials”). By cleverly setting the amplitude and frequency relationships of the harmonic spectrum, though, you can trick your ear into hearing the pitch of a sound as an octave below its lowest frequency component.
Here, I’ve built a quick demo in Reaktor 6. Listen and see what you think.
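If you want to try the trick outside of Reaktor, here's a minimal NumPy sketch of the idea (my own illustration, not the demo patch): stack harmonics 2 through 6 of a 110 Hz fundamental, leave the fundamental itself out, and most ears will still report the pitch as 110 Hz, an octave below the lowest component actually present.

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr
f0 = 110.0  # the "phantom" fundamental we want the ear to infer

# Sum harmonics 2-6 of f0 (220, 330, 440, 550, 660 Hz).
# The lowest actual component is 220 Hz, yet the common 110 Hz spacing
# leads most listeners to hear the pitch an octave below it.
sig = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(2, 7))
sig /= np.max(np.abs(sig))  # normalize to full scale
```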
Recently, I’ve been hooked on the idea of neurons and electronic and digital models of them. As always, this interest is focused on how these models can help us make interesting music and sound design.
It all started with my explorations into modular synths, especially focusing on the weirdest modules that I could find. I’d already spent decades doing digital synthesis, so I wanted to know what the furthest reaches of analog synthesis had to offer, and one of the modules that I came across was the nonlinearcircuits “neuron” (which had the additional benefit that it was simple enough for me to solder together on my own for cheap).
Anyway, today, I don’t want to talk about this module in particular, but rather more generally about what an artificial neuron is and what it can do with audio.
I wouldn’t want to learn biology from a composer, so I’ll keep this in the simplest terms possible (so I don’t mess up). The concept here is that a neuron receives a bunch of signals into its dendrites and, based on these signals, sends out its own signal through its axon.
Are you with me so far?
In the case of biological neurons, these “signals” are chemical or electrical; in these sonic explorations, the signals are the continuously changing voltages of an analog audio signal.
So, in audio, the way we combine multiple audio sources is a mixer:
Now, the interesting thing here is that a neuron doesn’t just sum the signals from its dendrites and send them to the output. It gives these inputs different weights (levels), and combines them in a nonlinear way.
In our sonic models of neurons, this “nonlinearity” could be a number of things: waveshapers, rectifiers, etc.
In the case of our sonic explorations, different nonlinear transformations will lead to different sonic results, but there are no real “better” or “worse” choices (beyond your own aesthetic goals). Now, if I wanted to train an artificial neural net to identify pictures or compose algorithmic music, I’d think harder about it (and there’s plenty of literature on these activation-function choices).
But, OK! A mixer with the ability to control the input levels and a nonlinear transformation! That’s our neuron! That’s it!
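In code, that whole description fits in a few lines. Here's a minimal sketch, assuming tanh as the nonlinearity (just one of the many valid choices mentioned above):

```python
import numpy as np

def neuron(inputs, weights):
    """Weighted mixer followed by a nonlinearity: that's the whole neuron.

    inputs:  list of audio signals (equal-length NumPy arrays)
    weights: one gain per input (the mixer's level knobs)
    """
    mixed = sum(w * x for w, x in zip(weights, inputs))
    return np.tanh(mixed)  # soft-clipping nonlinearity; swap in any waveshaper

# Example: two detuned sine waves with different input weights.
sr = 44100
t = np.arange(sr) / sr
out = neuron([np.sin(2 * np.pi * 110 * t),
              np.sin(2 * np.pi * 163 * t)], [1.5, 0.8])
```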
In this patch, our mixer receives three inputs: a sequenced sine wave, a chaotically modulated triangle wave, and one more thing I’ll get back to in a sec. That output is put through a hyperbolic tangent function (soft-clipping, basically), then run into a comparator (if the input is high enough, fire the synapse!). The comparator’s output is filtered, run to a spring reverb, and then the reverb is fed back into that third input of the mixer.
Now, as it stands, this neuron doesn’t learn anything. That would require the neuron getting some feedback from its output (it feeds back from the spring reverb, but that’s a little different): Is the neuron delivering the result we want based on the inputs? If not, how can it change the weights of these inputs so that it does?
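For the curious, one hypothetical version of that “learning” (my illustration, nothing from the patch) is a perceptron-style update: compare the output to a target signal and nudge each weight in proportion to how much its input contributed to the error.

```python
import numpy as np

def learn_step(inputs, weights, output, target, rate=0.01):
    """One hypothetical weight update for the neuron sketched above:
    nudge each input's level so the output moves toward a target signal.
    (Illustration only; the hardware patch does nothing like this.)"""
    error = target - output  # "is it delivering the result we want?"
    return [w + rate * np.mean(error * x) for w, x in zip(weights, inputs)]
```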
Asymmetrical clipping is clipping (truncation of a waveform) where the positive and negative amplitude peaks of a waveform are clipped to different values. This means we could clip the negative peaks at -1 and the positive peaks at 0.8, for example, and create some interesting harmonics.
This asymmetrical clipping is common in guitar effect pedals, since it’s relatively cheap to build in electronics (with a few diodes). Unsurprisingly, it’s pretty easy to accomplish in Pd too, using just the [clip~] object. The fun part comes in deciding how we can use it musically.
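Here's that example as a NumPy sketch; in Pd, [clip~ -1 0.8] does the same thing:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 110 * t)

# Asymmetrical clipping: negative peaks at -1.0, positive peaks at 0.8.
# The asymmetry adds even harmonics (and a small DC offset) that
# symmetric clipping wouldn't produce.
clipped = np.clip(sig, -1.0, 0.8)
```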
Subaudio is sound below the range of human hearing (below about 20 Hz). While we can’t hear these sounds, they can make their way into our audio files in various ways and cause some issues for us. Understanding these issues can help us make decisions in tracking, mixing, and mastering to ensure clean bass sounds and the highest possible fidelity in our recordings.
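As one practical fix, a gentle high-pass below the audible range strips that content out. Here's a quick sketch with SciPy; the 20 Hz cutoff and 4th-order Butterworth are my choices for illustration, not a universal rule:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100

# A signal with an inaudible 5 Hz component riding under a 110 Hz tone.
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

# 4th-order Butterworth high-pass at 20 Hz removes the subaudio
# content while leaving the audible bass essentially untouched.
sos = butter(4, 20, btype="highpass", fs=sr, output="sos")
clean = sosfilt(sos, sig)
```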
Talking about binaural beats, claims about their ability to entrain brainwaves, and walking through how easy they are to make yourself in Pure Data.
In binaural beats, two pitches with slightly different frequencies are played, one in each ear, supposedly creating a beating at the difference frequency inside your head, which can be used to entrain your brainwaves to help you relax, get you high, or even affect your behavior. The science isn’t there, but that doesn’t mean we can’t embrace binaural beats as a musical aesthetic, using Pd to make a fun, free “healing music generator.”
…just as long as we use our critical thinking and our ability to find credible resources.
Please TRUST YOUR DOCTOR (not the internet, including my videos) when making your medical decisions.
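The patch itself is about as simple as synthesis gets: one sine per ear, slightly detuned. Here's the same idea sketched in NumPy (the 200/210 Hz pair is my choice, giving a claimed 10 Hz “beat”):

```python
import numpy as np

sr = 44100
t = np.arange(sr * 5) / sr  # five seconds

# One sine per ear; the 10 Hz difference is the claimed "beat" frequency.
left = np.sin(2 * np.pi * 200 * t)
right = np.sin(2 * np.pi * 210 * t)

# Interleave into a stereo buffer; the effect requires headphones,
# since each ear must receive only its own tone.
stereo = np.stack([left, right], axis=1)
```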
The new PS5 audio engine, Tempest 3D AudioTech, creates 3-dimensional sound on any headset by using HRTFs, head-related transfer functions. So what are HRTFs? How does this work? Will it work for everyone? What does this mean for surround-sound setups? What are the five “Types” in the 3D Audio Profile Settings?
This video is a quick overview of what Tempest 3D AudioTech is reportedly doing now at launch (November 2020), and what possibilities and questions there will be in the future.
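Under the hood, HRTF rendering in the standard approach boils down to convolution: one measured impulse response per ear for each source direction. Here's a bare-bones sketch; the hrir arrays are hypothetical stand-ins for a real measured dataset:

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical head-related impulse responses for one source direction.
# Real HRIRs come from measurements (e.g., microphones in a dummy head's ears).
hrir_left = np.random.randn(256) * np.exp(-np.arange(256) / 32)
hrir_right = np.random.randn(256) * np.exp(-np.arange(256) / 48)

def spatialize(mono, hl, hr):
    """Place a mono source by convolving it with per-ear impulse responses.

    The differences in timing and coloration between the two ears are
    what the brain reads as direction.
    """
    return np.stack([fftconvolve(mono, hl), fftconvolve(mono, hr)], axis=1)
```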
Since COVID-19 has pushed a great deal of teaching and learning online, I’ve been converting a lot of my synthesis lessons into “micro-lectures”: 5- to 10-minute videos that can be integrated into online learning.
These videos are all software-agnostic, focusing on principles and fundamental ideas of sound synthesis over any particular synthesis environment.
More instructional playlists are available on my “Teaching” page.