The MIDI Protocol: System Messages

An overview of MIDI System messages and how they can support MIDI programming and synchronization in your studio.

I ran away from an explanation of system messages in my previous video on MIDI Messages, instead focusing entirely on channel messages. In this video, though, I’m back to talk about System Exclusive Messages, System Common Messages, and System Realtime Messages, and how you can implement them for additional musical control.

0:00 Introduction
0:22 Quick Review of bits and bytes
0:57 Channel vs. System Messages
1:59 Categories of System Messages
2:36 System Exclusive (SysEx) Messages
4:50 System Common Messages
5:08 Song Select, Song Position Pointer
6:38 MIDI Time Code
7:31 Time Code Quarter Frame Message
9:10 Tune Request Message
9:58 System Real Time Messages
10:41 Active Sensing
11:25 Reset Message
11:56 MIDI Clock, Start, Continue, & Stop
12:39 MIDI Sync Demo in Max
13:06 MIDI Sync Demo in Logic Pro X
13:26 Wrap-up
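The dividing lines discussed in the video all come down to the status byte. As a rough sketch (not from the video, just the MIDI 1.0 byte ranges), here is how a status byte sorts into channel vs. system categories, plus the 14-bit encoding behind the Song Position Pointer message:

```python
def classify_status(status: int) -> str:
    """Classify a MIDI status byte per the MIDI 1.0 specification."""
    if status < 0x80:
        return "data byte (not a status byte)"
    if status < 0xF0:
        return "channel message"            # 0x80-0xEF carry a channel number
    if status == 0xF0:
        return "system exclusive (SysEx)"   # terminated by EOX, 0xF7
    if status <= 0xF7:
        return "system common"              # MTC quarter frame, song position, etc.
    return "system real time"               # 0xF8-0xFF: clock, start, stop, reset...

def song_position_bytes(midi_beats: int) -> tuple:
    """Song Position Pointer: 0xF2 followed by a 14-bit value, LSB first."""
    return (0xF2, midi_beats & 0x7F, (midi_beats >> 7) & 0x7F)
```

So MIDI Clock (0xF8) lands in the real-time category, while Song Position Pointer (0xF2) is a common message that carries data bytes.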

MIDI Protocol 1: Bits, Bytes, and Binary

MIDI Protocol 2: MIDI Messages

Nonlinear Data-Driven Instruments with Simple Artificial Neural Networks (Max/MSP)

Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.

I’ve talked before about data-driven instruments, and I’ve talked before about artificial neurons and artificial neural networks, so here I combine the ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.

In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).
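The video builds this in Max, but the core idea translates to a few lines of any language. As a minimal Python sketch (the input values, weights, and note range here are hypothetical, not taken from the patch): a weighted sum pushed through tanh gives a deterministic but nonlinear response, which can then be scaled onto a MIDI note range.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum pushed through tanh: deterministic, but nonlinear."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return math.tanh(s)

def to_midi_note(activation, low=36, high=84):
    """Map the neuron's -1..1 output onto a MIDI note range."""
    return round(low + (activation + 1) / 2 * (high - low))

# e.g. mouse X/Y scaled to 0..1 before feeding the neuron:
note = to_midi_note(neuron([0.2, 0.8], [1.5, -0.7], 0.1))
```

Because tanh compresses extreme sums, equal-sized moves of the mouse produce unequal-sized pitch changes, which is exactly the "not random, but not linear" behavior described above.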

0:00 Intro
1:43 [mousestate] for Data Input
2:58 Mapping a linear data-driven instrument
7:19 Making our Artificial Neuron
15:27 Simple ANN
20:06 Adding Feedback
22:23 Closing Thoughts, Next Steps

More Max/MSP Videos:

More Artificial Neurons and Neural Networks:

Vocal Sample to Oscillator in Symbolic Sound Kyma

Turning a single cycle of a recorded sample into a wavetable for Kyma oscillators.

When composing music with samples, it’s worthwhile to explore all of the musical opportunities in that sample–reversing it, timestretching it, granulating it, etc. Along those same lines, you can take a wavetable from a sample and use it in your oscillators, so, instead of using the usual sawtooth, square, or sine waves, you create an oscillator that has a timbral connection to the sampled material.

Here, I show how to take two vowel sounds from a vocal sample–an “ah” and an “oh”–and cycle them in a Kyma oscillator, creating unique timbres that blend with the original sample and its processing.
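The video does this inside Kyma, but the "change the duration to 4096 samples" step (see the chapter list below) is just a resampling problem. A rough sketch of the idea in Python, using simple linear interpolation (Kyma's own resampling method may differ):

```python
def resample_cycle(cycle, length=4096):
    """Stretch or squeeze one extracted cycle to a fixed wavetable length."""
    n = len(cycle)
    out = []
    for i in range(length):
        pos = i * (n - 1) / (length - 1)   # map output index into the cycle
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, n - 1)
        out.append(cycle[lo] * (1 - frac) + cycle[hi] * frac)
    return out
```

Once every extracted cycle is the same power-of-two length, an oscillator can loop it (or crossfade between several) at any pitch.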

0:00 Intro / Why?
0:41 Finding a Single Cycle
3:14 Changing Duration to 4096 Samples
4:16 Cycling the Wavetable in an Oscillator
6:33 Making a Different Oscillator Wavetable
9:21 Implementation Example: Chords
11:49 Adding Vibrato
14:08 SampleCloud Plus Chords

More Symbolic Sound Kyma videos:

Bass + AI: Improvisation (Python, Unity3D, and Kyma)

An improvised duet(?) with an AI agent trained on the “Embodied Musicking Dataset.”

In this performance, Python listens to live audio input from the bass, and, based on models trained with the dataset, sends out data to Unity3D and Kyma. Unity3D creates the visuals (the firework), and Kyma processes the audio from the bass.

First, though, the dataset used for training was collected from several pianists in the US and UK. As pianists played, we recorded multiple aspects of their performance: audio, video of their hands, EEG, skeletal data, and galvanic skin response. After playing, pianists listened to their own performance and were asked to record their state of “flow” over the course of the performance. All of these different dimensions of data were aligned over time, so neural networks can be trained to make associations across them.

This demonstration uses the trained models from Craig Vear’s Jess+ project to generate X/Y coordinates (from the skeletal data) and “flow” from the amplitude of the input. These XY coordinates, “flow”, and amplitude are sent out from Python as OSC data, which is received by both Unity3D (for visuals) and Kyma (for audio).

In Unity, the XY data moves the “firework” around the screen. Flow data affects its color, and amplitude affects its size. Audio in Kyma is a bit more sophisticated, but X position is left/right pan, and the flow data affects the delay, reverb, and live granulation.
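For readers unfamiliar with how data like this travels between Python, Unity3D, and Kyma: an OSC message is just a UDP packet with a padded address string, a type-tag string, and big-endian arguments. The sketch below encodes one by hand using only the standard library (the address `/performer/xy` and port are hypothetical, not the ones used in the piece):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *floats) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Sending it is one UDP call, e.g.:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_message("/performer/xy", 0.3, 0.7), ("127.0.0.1", 9000))
```

In practice a library like python-osc handles this encoding, but seeing the byte layout makes it clear why the same packet can feed both Unity3D and Kyma.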

As you can see, amplitude to XY mapping is limited, with the firework moving along a kind of diagonal. Possible next steps would be to extract more features of the audio (e.g. pitch, spectral complexity, or delta values), and train with those.

Applying models trained on pianists to a bass performance (in a different genre) does not have the same goals as music-generation AI such as MusicGen or MusicLM. Instead of automatically generating music, the AI becomes a partner in performance: sometimes unpredictable, but not random, since its behavior is based on rules.

New Music! “Hanamaki Sessions 2023”

I’ve collected and edited some recordings I made with my “DAWless” mobile rig in Japan this summer.

It’s been interesting trying to set something up that has the flexibility that I want, while still being portable enough not to take up too much space (and weight) in my luggage. Of course, as it’s often said, limitations can lead to greater creativity.

In this setup I have my 54HP Eurorack (which can be battery powered if I want to play on top of a lookout tower somewhere), and my Arturia DrumBrute Impact. I do mixing with a little Mackie mixer, and recording with a Zoom H4N (which lets me record sound from the microphones at the same time as the line inputs).

Last year, some might remember, I went around with just the Eurorack synth (with some different modules in it–a benjolin in particular) and recorded my three-track “Ihatov MU” album. This year’s sessions were a fun extension of those ideas.

Perhaps I should do some performing out in New England in the next few months.

Pd Machine Learning Fail

A failed attempt at machine learning for real-time sound design in Pure Data Vanilla.

I’ve previously shown artificial neurons and neural networks in Pd, but here I attempted to take things to the next step and make a cybernetic system that demonstrates machine learning. It went okay, not great.

This system has a “target” waveform (what we’re trying to produce). The neuron takes in several different waveforms, combines them (with a nonlinearity), and then compares the result to the target waveform, and attempts to adjust accordingly.

While it fails to reproduce the waveform in most cases, the resulting audio of a poorly-designed AI failing might still hold expressive possibilities.
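The Pd patch's exact adjustment rule isn't spelled out here, but the "compare to the target and adjust" loop described above is essentially the delta rule. As an illustrative Python sketch (an assumption about the approach, not a transcription of the patch), one update step for a single tanh neuron might look like:

```python
import math

def train_step(weights, inputs, target, rate=0.05):
    """One delta-rule update: nudge the weights to reduce the output error."""
    out = math.tanh(sum(w * x for w, x in zip(weights, inputs)))
    err = target - out
    grad = 1.0 - out * out  # derivative of tanh at the current output
    return [w + rate * err * grad * x for w, x in zip(weights, inputs)]
```

Run at audio rate against a moving target waveform, a rule like this can chase the target without ever settling, which may be where the "expressive failure" comes from.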

0:00 Intro / Concept
1:35 Single-Neuron Patch Explanation
3:23 The “Learning” Part
5:46 A (Moderate) Success!
7:00 Trying with Multiple Inputs
10:07 Neural Network Failure
12:20 Closing Thoughts, Next Steps

More music and sound with neurons and neural networks here:

Reaktor 6 Envelope Follower

How to create an envelope follower in Reaktor 6 Primary and set your synth to automatically follow the amplitude of drums.

An envelope follower is a device that converts the amplitude envelope of an audio input into a control signal. Once we have that control signal, we can use it to control whatever we want. We can make the amplitude of an oscillator follow the amplitude of the input, or we could move the cutoff frequency of a filter, panning, etc.

Building the envelope follower is rather straightforward, with just two steps: rectifying and then low-pass filtering. In this video I walk through the process, and then show a few different applications.
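The same two steps translate directly out of Reaktor. A minimal Python sketch (the smoothing coefficient is an arbitrary choice, not a value from the video): take the absolute value of each sample, then run it through a one-pole low-pass.

```python
def envelope_follower(samples, coeff=0.995):
    """Rectify, then one-pole low-pass:
    env[n] = coeff * env[n-1] + (1 - coeff) * |x[n]|"""
    env = 0.0
    out = []
    for x in samples:
        env = coeff * env + (1.0 - coeff) * abs(x)
        out.append(env)
    return out
```

A higher coefficient gives a smoother, slower-responding control signal; a lower one tracks transients (like drum hits) more tightly but lets more ripple through.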

More Reaktor 6 intermediate tutorials here.

Bidirectional OSC in “Baion 倍音” with Kyma & Unity3D

Talking through bidirectional OSC (Open Sound Control) in my 2018 piece Baion (倍音), which I perform on a custom-built interface, “the catalyst”.

I’m performing my 2017 piece “Baion” this week, and I thought it was a good chance to revisit some of the mechanics of the piece, specifically the communication between the different elements–the custom interface, the Kyma timeline, and the game built in Unity3D. In this video, I go through how the musical work emerges from the bidirectional OSC communication between software.

More videos on Symbolic Sound Kyma here: