Finding and removing subaudio from sample files with a waveform editor.
Subaudio consists of frequencies below the range of human hearing (below 20 Hz). These frequencies can sneak into our recordings and work against us in a number of ways. If we address subaudio in our samples, we do ourselves a favor in the later stages of the mixing process.
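If you'd rather script this step than do it by hand in a waveform editor, here's a minimal sketch of the same idea in Python with scipy. The file names, 20 Hz cutoff, and filter order are my own illustrative choices, not from the video:

```python
# Minimal sketch: removing subaudio from a (hypothetical) 16-bit WAV file
# with a high-pass filter, approximating the parametric-EQ cleanup in code.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("sample.wav")      # hypothetical input file
audio = audio.astype(np.float64)

# 4th-order Butterworth high-pass at 20 Hz: keeps the audible range,
# attenuates the subaudio below it.
sos = butter(4, 20.0, btype="highpass", fs=rate, output="sos")
cleaned = sosfiltfilt(sos, audio, axis=0)     # zero-phase: no added time smear

# Write back as 16-bit, clipping just in case the filter overshoots.
wavfile.write("sample_cleaned.wav", rate,
              np.clip(cleaned, -32768, 32767).astype(np.int16))
```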
0:00 Defining Subaudio 0:59 Example 1: Spotting Subaudio 2:04 Example 1: Doing the Math 2:50 Why Did This Happen? 3:11 Removing Subaudio with Parametric EQ 5:53 Example 2: Not Really Subaudio 7:27 Harmonics of Subaudio 8:31 Example 3: Trimming 9:15 Example 4: Bringing It All Together 10:16 Closing, Next Steps
Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.
I’ve talked before about data-driven instruments, and I’ve talked before about artificial neurons and artificial neural networks, so here I combine the ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.
In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).
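For a rough sense of what's going on inside the patch, here's the same idea sketched in Python. The weights, feedback amount, and MIDI scaling are invented for illustration, not taken from the Max/MSP patch:

```python
# A tiny neural network with fixed, hand-picked weights and a feedback term
# reinterprets mouse X/Y nonlinearly before it becomes a MIDI note number.
import math

# Arbitrary weights chosen for illustration; in the patch you'd tune by ear.
W_HIDDEN = [(0.9, -1.3), (-1.7, 0.8), (1.1, 1.2)]
W_OUT = (1.4, -0.9, 0.7)
FEEDBACK = 0.5          # how much the previous output bends the next one

state = 0.0             # feedback memory

def neuron(inputs, weights):
    """One artificial neuron: weighted sum through a tanh squashing function."""
    return math.tanh(sum(i * w for i, w in zip(inputs, weights)))

def process(x, y):
    """Map mouse position (0..1, 0..1) to a MIDI note: nonlinear, not random."""
    global state
    hidden = [neuron((x + FEEDBACK * state, y), w) for w in W_HIDDEN]
    out = neuron(hidden, W_OUT)          # squashed to -1..1
    state = out
    return int((out + 1) / 2 * 48) + 36  # scale to MIDI notes 36..84

# Because of the feedback, the same input twice gives different notes:
print(process(0.5, 0.5), process(0.5, 0.5))
```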
0:00 Intro 1:43 [mousestate] for Data Input 2:58 Mapping a linear data-driven instrument 7:19 Making our Artificial Neuron 15:27 Simple ANN 20:06 Adding Feedback 22:23 Closing Thoughts, Next Steps
In this performance, Python listens to live audio input from the bass and, based on models trained on the dataset described below, sends out data to Unity3D and Kyma. Unity3D creates the visuals (the firework), and Kyma processes the audio from the bass.
First, though, the dataset used for training was collected from several pianists in the US and UK. As the pianists played, we recorded multiple aspects of their performance: audio, video of their hands, EEG, skeletal data, and galvanic skin response. After playing, the pianists listened to their own performances and were asked to record their state of “flow” over the course of each one. All of these dimensions of data were aligned over time, so neural networks can be trained across them to make associations.
This demonstration uses the trained models from Craig Vear’s Jess+ project to generate X/Y data (from the skeletal data) and “flow” from the amplitude of the input. These XY coordinates, “flow,” and amplitude are sent out from Python as OSC data, which is received by both Unity3D (for visuals) and Kyma (for audio).
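For the curious, the OSC fan-out looks roughly like the sketch below, using the python-osc library. The addresses, ports, and value names are my own placeholders, not the actual ones from the project:

```python
# Hypothetical sketch of sending one frame of model output to both engines.
from pythonosc.udp_client import SimpleUDPClient

unity = SimpleUDPClient("127.0.0.1", 9000)   # Unity3D listener (assumed port)
kyma = SimpleUDPClient("127.0.0.1", 8000)    # Kyma listener (assumed port)

def send_frame(x, y, flow, amplitude):
    """Send one frame of XY, flow, and amplitude to visuals and audio."""
    for client in (unity, kyma):
        client.send_message("/performer/xy", [x, y])
        client.send_message("/performer/flow", flow)
        client.send_message("/performer/amp", amplitude)
```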
In Unity, the XY data moves the “firework” around the screen, flow data affects its color, and amplitude affects its size. The audio processing in Kyma is a bit more sophisticated, but X position controls left/right pan, and the flow data affects the delay, reverb, and live granulation.
As you can see, the amplitude-to-XY mapping is limited, with the firework moving along a kind of diagonal. A possible next step would be to extract more features from the audio (e.g. pitch, spectral complexity, or delta values) and train with those.
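A sketch of that feature extraction with librosa might look like this; the feature choices mirror the suggestions above, but the specific functions and parameters are my own assumptions:

```python
# Hypothetical feature extraction for a richer training input.
import librosa

y, sr = librosa.load("bass_take.wav", sr=None)        # hypothetical recording

f0 = librosa.yin(y, fmin=30, fmax=400, sr=sr)         # pitch track
flatness = librosa.feature.spectral_flatness(y=y)     # a stand-in for "spectral complexity"
rms = librosa.feature.rms(y=y)                        # amplitude, as before
rms_delta = librosa.feature.delta(rms)                # how fast amplitude is changing
```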
Applying these models, trained on pianists, to a bass performance (in a different genre) does not have the same goals as music-generation AI such as MusicGen or MusicLM. Instead of automatically generating music, the AI becomes a partner in performance: sometimes unpredictable, but not random, since its behavior is based on rules.
I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.
It’s been interesting trying to set something up that has the flexibility I want while still being portable enough not to take up too much space (and weight) in my luggage. Of course, as is often said, limitations can lead to greater creativity.
Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it, a Benjolin in particular) and recorded my three-track “Ihatov MU” album. This year’s sessions were a fun extension of those ideas.
Perhaps I should do some performing out in New England in the next few months.
Building a comb filter in Pure Data Vanilla from scratch.
A comb filter is created by adding a delayed copy of a signal to itself, producing constructive and destructive interference at frequencies determined by the length of the delay. All we have to do is delay the signal a little bit, feed it back into itself (pre-delay), and we get that pleasing, high-tech robotic resonance effect.
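For reference, the same structure sketched in Python rather than Pure Data looks like this; the delay time and feedback amount are illustrative values:

```python
# Feedback comb filter: a short delay line with the output fed back in.
import numpy as np

def comb_filter(signal, sr, freq=220.0, feedback=0.9):
    """Resonates at freq and its harmonics: y[n] = x[n] + g * y[n - delay]."""
    delay = int(sr / freq)                    # delay length sets the resonant pitch
    out = signal.astype(np.float64).copy()
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]   # feed the delayed output back in
    return out / np.max(np.abs(out))          # normalize to avoid clipping

# Running white noise through it makes the "robotic" pitch at ~220 Hz audible:
sr = 44100
noise = np.random.uniform(-1, 1, sr)
filtered = comb_filter(noise, sr)
```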
There’s no talking on this one, just building the patch, and listening to it go.
0:00 Playing back a recorded file 0:35 Looping the file 1:00 Setting up the delay 2:08 Frequency controls for the filter 2:52 Setting the range 3:48 Automatic random frequency 4:25 Commenting the code 5:39 Playing with settings
For the last few years, I’ve been messing around with internet-based, no-input feedback loops in collaboration with Will Klingenmeier.
What does that mean? Why would I do that? What does it sound like? All those questions are answered in the brief PechaKucha below:
Zoomscape Pecha Kucha – Understand it all in less than 8 minutes!
While I’m sure we’ll continue to mess with these ideas in the future, we’ve come to at least a short-term culmination of this project with a tape release of these experiments on Bandcamp.
You can also retroactively join our “Tape Release Party” here:
Zoomscapes Tape Release Party from 2/5/23
To catch up on all of the previous experiments, check out this playlist:
Doing some live processing of sleigh bells in Pure Data to create an “Interactive Holiday Noise Music System.”
Since it’s mid-December, let’s make some holiday music. If you’re sick of the standard cloying Muzak fare, though, you can make your own feedback delay sample-crushing interactive music system in Pure Data in an afternoon.
The main point here is getting a “trigger” from audio input crossing a loudness threshold. Once we have that trigger, we can use it to make changes in live-processing of a sound and trigger other sounds too. This is a simple idea, but its effectiveness is going to depend on what these changes are and how we play with the system.
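The video builds this trigger with [sigmund~] in Pure Data; sketched in Python, the core logic is just a rising-edge detector on block loudness. The threshold and block size below are illustrative:

```python
# Fire a trigger each time the input loudness rises above a threshold.
import numpy as np

THRESHOLD = 0.2       # loudness level that counts as a "hit"
BLOCK = 512           # samples per analysis block

def triggers(audio):
    """Yield True once per rising edge: when block RMS first exceeds the threshold."""
    above = False
    for start in range(0, len(audio) - BLOCK, BLOCK):
        rms = np.sqrt(np.mean(audio[start:start + BLOCK] ** 2))
        yield rms > THRESHOLD and not above   # only fire on the rise
        above = rms > THRESHOLD               # remember state for the next block
```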
0:00 Demo 0:26 Introduction / Goals 1:23 Input Monitoring 2:41 Direct (“Dry”) Output 4:08 Feature Extraction with [sigmund~] 6:55 Amplitude as Trigger 8:43 Triggering Changes in Delay 12:44 Sample-Crushing 17:03 Triggering an Oscillator 19:37 Oscillators into Harmony 23:35 Putting it all together 25:33 Closing Thoughts
Listening to electromagnetic radiation around the house using a homemade elektrosluch.
I was cleaning up and found an “elektrosluch” that I made a few years back, so I figured I’d dust it off and make sure it still works. This is a device designed by LOM-Instruments that converts vibrations of electromagnetic fields into sound (specifically, fluctuations of voltage that we can listen to through headphones; more info here).
Adding envelopes to our synthesizer that aren’t an ADSR.
ADSRs might be the envelope generators that we encounter most often, but they’re not the only way to shape our sound. There are a number of other musical ways to craft change in our synthesizer over time with these non-periodic TVCs.
Let’s check out what other options there are in Reaktor 6 Primary.
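As a taste of the territory, here's one non-ADSR shape sketched in Python rather than Reaktor: a simple attack/release envelope with an exponential release. All the times and curve values are illustrative:

```python
# A two-stage AR envelope: linear ramp up, exponential decay down.
import numpy as np

def ar_envelope(sr, attack=0.01, release=0.5, curve=5.0):
    """Linear attack followed by an exponential release; returns values 0..1."""
    a = np.linspace(0.0, 1.0, int(sr * attack))   # straight ramp up
    t = np.linspace(0.0, 1.0, int(sr * release))
    r = np.exp(-curve * t)                        # natural-sounding decay
    return np.concatenate([a, r])

# Apply it to a sine to get a pluck-like note:
sr = 44100
env = ar_envelope(sr)
tone = np.sin(2 * np.pi * 220 * np.arange(len(env)) / sr) * env
```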