Vocal Sample to Oscillator in Symbolic Sound Kyma

Turning a single cycle of a recorded sample into a wavetable for Kyma oscillators.

When composing music with samples, it’s worthwhile to explore all of the musical opportunities in that sample: reversing it, timestretching it, granulating it, and so on. Along those same lines, you can take a wavetable from a sample and use it in your oscillators, so that, instead of using the usual sawtooth, square, or sine waves, you create an oscillator that has a timbral connection to the sampled material.

Here, I show how to take two vowel sounds from a vocal sample (an “ah” and an “oh”) and cycle them in a Kyma oscillator, creating unique timbres that blend with the original sample and its processing.
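For anyone who wants to prototype the same idea outside of Kyma, here’s a minimal Python sketch of the process covered in the video: grab one cycle of the sample between zero crossings, then resample it to 4096 samples. It assumes numpy and scipy; the file names and the choice of cycle are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample

# Load the vocal sample (file name is a placeholder) and mix to mono.
rate, audio = wavfile.read("vocal_ah.wav")
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)

# Find upward zero crossings. A complex vocal waveform can cross zero
# more than once per period, so pick the pair by eye/ear, as in the video.
rising = np.where((audio[:-1] < 0) & (audio[1:] >= 0))[0]
start, end = rising[10], rising[11]   # arbitrary choice for illustration
cycle = audio[start:end]

# Stretch the single cycle to 4096 samples and save it as a wavetable.
wavetable = resample(cycle, 4096)
wavetable /= np.max(np.abs(wavetable))   # normalize for a float WAV
wavfile.write("wavetable_4096.wav", rate, wavetable.astype(np.float32))
```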

0:00 Intro / Why?
0:41 Finding a Single Cycle
3:14 Changing Duration to 4096 Samples
4:16 Cycling the Wavetable in an Oscillator
6:33 Making a Different Oscillator Wavetable
9:21 Implementation Example: Chords
11:49 Adding Vibrato
14:08 SampleCloud Plus Chords

More Symbolic Sound Kyma videos:

Bass + AI: Improvisation (Python, Unity3D, and Kyma)

An improvised duet(?) with an AI agent trained on the “Embodied Musicking Dataset.”

In this performance, Python listens to live audio input from the bass, and, based on models trained with the dataset, sends out data to Unity3D and Kyma. Unity3D creates the visuals (the firework), and Kyma processes the audio from the bass.

First, though, the dataset used for training was collected from several pianists in the US and UK. As pianists played, we recorded multiple aspects of their performance: audio, video of their hands, EEG, skeletal data, and galvanic skin response. After playing, pianists listened to their own performance and were asked to record their state of “flow” over the course of the performance. All of these dimensions of data were aligned in time, so neural networks can be trained to make associations between them.

This demonstration uses the trained models from Craig Vear’s Jess+ project to generate X/Y coordinates (trained from the skeletal data) and “flow”, both from the amplitude of the input. These XY coordinates, “flow”, and amplitude are sent out from Python as OSC data, which is received by both Unity3D (for visuals) and Kyma (for audio).
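The transport is plain OSC over UDP. Here’s a minimal sketch of the sending side in Python, assuming the python-osc package; the ports and OSC address names are placeholders rather than the ones used in the performance.

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical ports and addresses; match them to your Unity3D/Kyma setup.
unity = SimpleUDPClient("127.0.0.1", 9000)   # Unity3D listener
kyma = SimpleUDPClient("127.0.0.1", 8000)    # Kyma listener

def send_frame(x, y, flow, amplitude):
    """Broadcast one frame of model output to both receivers."""
    for client in (unity, kyma):
        client.send_message("/performer/xy", [x, y])
        client.send_message("/performer/flow", flow)
        client.send_message("/performer/amp", amplitude)

send_frame(0.4, 0.7, 0.8, 0.25)
```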

In Unity, the XY data moves the “firework” around the screen. Flow data affects its color, and amplitude affects its size. The audio mapping in Kyma is a bit more sophisticated, but X position sets the left/right pan, and the flow data affects the delay, reverb, and live granulation.

As you can see, the amplitude-to-XY mapping is limited, with the firework moving along a kind of diagonal. A possible next step would be to extract more features from the audio (e.g. pitch, spectral complexity, or delta values) and train with those.
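If I try that, the feature extraction might look something like this hedged sketch using librosa; the specific features are illustrative, not something the project currently uses.

```python
import librosa

# Load a mono excerpt of the bass input (file name is a placeholder).
y, sr = librosa.load("bass_input.wav", sr=None, mono=True)

# Per-frame pitch estimate (YIN) over a plausible bass range.
f0 = librosa.yin(y, fmin=30, fmax=500, sr=sr)

# A rough stand-in for "spectral complexity": spectral flatness per frame.
flatness = librosa.feature.spectral_flatness(y=y)[0]

# Delta values: frame-to-frame change of an RMS amplitude envelope.
rms = librosa.feature.rms(y=y)[0]
rms_delta = librosa.feature.delta(rms)
```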

Applying this data trained on pianists to a bass performance (in a different genre) does not have the same goals as music-generation AI such as MusicGen or MusicLM. Instead of automatically generating music, the AI becomes a partner in performance: sometimes unpredictable, but not random, since its behavior is based on rules.

New Music! “Hanamaki Sessions 2023”

I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.

It’s been interesting trying to set something up that has the flexibility I want while still being portable enough not to take up too much space (and weight) in my luggage. Of course, as is often said, limitations can lead to greater creativity.

In this setup I have my 54HP Eurorack (which can be battery powered if I want to play on top of a lookout tower somewhere) and my Arturia DrumBrute Impact. I do the mixing with a little Mackie mixer and the recording with a Zoom H4N (which lets me record sound from the microphones at the same time as the line inputs).

Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it, a Benjolin in particular) and recorded my three-track “Ihatov MU” album. This year’s sessions were a fun extension of those ideas.

Perhaps I should do some performing out in New England in the next few months.

Pd Machine Learning Fail

A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.

I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take things to the next step and make a cybernetic system that demonstrates machine learning. It went... good, not great.

This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), compares the result to the target waveform, and attempts to adjust its weights accordingly.
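In code terms, the patch is a one-neuron gradient loop. Here’s a rough Python analogue of what the Pd patch attempts, not a literal port: the tanh activation and the learning rate are my assumptions.

```python
import numpy as np

N = 512                                   # samples per waveform cycle
t = np.linspace(0, 1, N, endpoint=False)

# Input waveforms the neuron can mix, and the target it should match.
inputs = np.stack([np.sin(2 * np.pi * t),           # sine
                   np.sign(np.sin(2 * np.pi * t)),  # square
                   2 * t - 1])                      # ramp
target = np.sin(2 * np.pi * t) ** 3                 # arbitrary target shape

weights = np.zeros(3)
rate = 0.01                               # learning rate (assumed)

for step in range(2000):
    mix = weights @ inputs                # weighted sum of the inputs
    out = np.tanh(mix)                    # nonlinearity (activation)
    error = target - out                  # distance from the target
    grad = inputs @ (error * (1 - out ** 2)) / N
    weights += rate * grad                # adjust the weights and repeat
```

As in the video, if the target can’t be built from the available inputs, the loop settles on a best effort rather than a match, and that failure has its own sound.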

While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.

0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps

More music and sound with neurons and neural networks here:

Pd Comb Filter Patch from Scratch

Building a comb filter in Pure Data Vanilla from scratch.

A comb filter is a filter created by adding a delayed signal to itself, creating constructive and destructive interference of frequencies based on the length of the delay. All we have to do is delay the signal a little bit, feed it back into itself (pre-delay), and we get that pleasing, high-tech robotic resonance effect.
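Written out sample-by-sample in Python, the whole effect is a few lines; this is a minimal sketch, with the frequency and feedback amount as placeholder values.

```python
import numpy as np

def comb_filter(x, sr, freq=220.0, feedback=0.85):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay].

    The resonant peaks fall at multiples of sr / delay, so the delay
    length is derived from the desired fundamental frequency.
    """
    delay = max(1, int(round(sr / freq)))
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - delay] if n >= delay else 0.0
        y[n] = x[n] + feedback * delayed
    return y
```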

There’s no talking on this one, just building the patch, and listening to it go.

0:00 Playing back a recorded file
0:35 Looping the file
1:00 Setting up the delay
2:08 Frequency controls for the filter
2:52 Setting the range
3:48 Automatic random frequency
4:25 Commenting the code
5:39 Playing with settings

More no-talking Pd patches from scratch:

Artificial Neurons and Nonlinear Mixing

Talking through the concept of an artificial neuron, the fundamental component of artificial intelligence and machine learning, from an audio perspective.

I’ve made a few videos recently with “artificial neurons”, including in Pure Data and in Eurorack, and, in this video, I discuss the idea in more detail: specifically, how an artificial neuron is just a nonlinear mixer.

An artificial neuron takes in multiple inputs, weights them, and then transforms the sum of them using an “activation function”, which is just a nonlinear transformation (of some variety).
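That whole definition fits in a few lines of code. Here’s a sketch, assuming numpy; tanh is just one common choice of activation function.

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Artificial neuron: weighted sum of the inputs, then a nonlinearity."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Heard as audio, a neuron fed three waveforms is a nonlinear mixer.
t = np.linspace(0, 1, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 110 * t)
square = np.sign(np.sin(2 * np.pi * 220 * t))
ramp = 2 * ((330 * t) % 1.0) - 1
mixed = neuron(np.stack([sine, square, ramp]), np.array([0.5, 0.3, 0.2]))
```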

Of course, just making a single neuron does not mean you’ve made an artificial intelligence or a program capable of “deep learning”, but understanding these fundamental building blocks can be a great first step in demystifying the growing number of machine learning programs in the 21st century.

More music and sound design with artificial neurons:

Pure Data Clamping VCA with [clip~]

Creating an ambient music machine in Pure Data Vanilla with a “clamping VCA” that adds subtle distortion, imitating the envelopes in the Roland TR-808.

I made a clamping VCA in Reaktor a few weeks back, and now here’s another example in Pd. Normally, an amplitude envelope in a synth is a control envelope on the amplitude of the signal. With a “clamping VCA”, though, instead of controlling the amplitude of the waveform, we clip it at the envelope’s current value. This means that, when the VCA is all the way up, it sounds the same, but during the attack and release we get the addition of subtle (or perhaps not-so-subtle) distortion to our waveform.
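The difference from a standard VCA is a single operation: multiply versus clip. Here’s a quick numpy sketch of both, with a simple linear decay standing in for the envelope.

```python
import numpy as np

sr = 48000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.sin(2 * np.pi * 220 * t)    # full-level oscillator
env = np.linspace(1.0, 0.0, sr)         # decaying envelope

standard = signal * env                  # standard VCA: scale the amplitude
clamping = np.clip(signal, -env, env)    # clamping VCA: clip at the envelope
```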

I use [clip~] in Pd to achieve this effect, stealing the idea from Noise Engineering’s “Sinclastic Empulatrix” module, which, in turn, stole the idea from the Roland TR-808 drum machine’s cymbal envelopes.

More Pure Data Tutorials:

Clamping VCA in Reaktor 6 Primary

Building a “clamping VCA” in Reaktor for subtle distortion, imitating the envelopes in the Roland TR-808.

Normally, an amplitude envelope for your synth is just that: a control envelope on the amplitude of the signal. When we use a “clamping VCA”, though, instead of controlling the amplitude of the waveform, we clip it at the envelope’s current value. This means that, when the VCA is all the way up, it sounds the same, but during the attack and release we get the addition of subtle (or perhaps not-so-subtle) distortion to our waveform.

I use the “Mod. Clipper” in Reaktor 6 to achieve this effect, stealing the idea from Noise Engineering’s “Sinclastic Empulatrix” module, which, in turn, stole the idea from the Roland TR-808 drum machine’s cymbal envelopes.

0:00 The “Mod. Clipper”
0:33 Clamping VCA
1:25 Simple Sine Oscillator
2:03 Mod-Clipping the Sine Wave
3:51 Standard VCA for comparison
4:58 Pulse Wave
5:41 Sawtooth Wave
6:34 Adding a Filter
7:35 Next Steps

More Reaktor 6 Intermediate Tutorials:

Subscribe for more videos