Heart Beat to DrumBrute Impact Tempo with Apple Watch, Holon.ist, & Pure Data

Using Holon.ist, OSC, and Pure Data to send my heart rate from my Apple Watch to the Arturia DrumBrute Impact.

I got an Apple Watch a couple of weeks ago, and, of course, on the very first day I started looking into apps that would allow me to send the data coming from my watch out as OSC messages. After some poking around I found Holon.ist by Holonic Systems, and decided to give it a try.

There’s way more in this app than I needed (I just wanted to get the physiological data out to my laptop), but it suits the purpose. So in this video, I show a quick demo of getting my heart rate as an OSC message into my laptop running Pd, and then converting it into a MIDI clock for the DrumBrute Impact.
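The core of the Pd patch is just tempo math: MIDI clock runs at 24 pulses per quarter note, so each heart-rate value in BPM maps directly to a tick interval. A minimal sketch of that conversion in Python (the OSC receive and the MIDI output are left out; the function name is mine):

```python
# Sketch of the timing math behind the Pd patch: treat each heartbeat as a
# quarter note and emit MIDI clock (0xF8) at 24 pulses per quarter note.

def clock_interval_ms(heart_rate_bpm: float) -> float:
    """Milliseconds between MIDI clock ticks for a given tempo in BPM."""
    # One quarter note lasts 60000 / bpm ms; MIDI clock runs at 24 PPQN.
    return 60000.0 / (heart_rate_bpm * 24)

# A resting heart rate of 60 BPM is one beat per second,
# so clock ticks go out every 1000 / 24 ms.
print(round(clock_interval_ms(60), 2))   # 41.67
print(round(clock_interval_ms(120), 2))  # 20.83
```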

More on OSC in Pd Vanilla:


More Pure Data tutorials here.

Check prices on the Arturia DrumBrute Impact (affiliate links):
Perfect Circuit
Reverb
Amazon

Sending Raw MIDI Data in Max (and Pure Data)

Sending out raw MIDI data in Max/MSP with [midiout] for system messages and other live control.

Here, I use the [midiout] object in Max to send individual “note on” and “note off” messages, using our knowledge of the MIDI protocol. We can then expand that to algorithmic MIDI control of sequences in the Arturia DrumBrute Impact, including adjusting the clock and the song position pointer for funky, chaotic beats.

0:00 Intro
0:30 [midiout]
0:59 Basic concept – Note On
3:16 Note Off
4:32 Pitch Bend Change
5:32 Exploring Algorithmic Control
6:47 Controlling Sequencers (DrumBrute Impact)
7:09 MIDI Clock Message
9:01 Algorithmic Clock Control
10:03 Start, Stop, and Continue MIDI Messages
11:29 Playing with the Song Position Pointer
13:30 Bringing back the Drunk [metro]
15:00 Closing / Next Steps
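The byte-level messages [midiout] sends can be sketched outside Max as well. The helper names below are mine, but the status bytes (0x90/0x80 for note on/off, 0xF2 for the song position pointer) come straight from the MIDI spec:

```python
def note_on(channel: int, pitch: int, velocity: int) -> list[int]:
    # Status byte 0x90 OR'd with the channel (0-15), then two 7-bit data bytes.
    return [0x90 | channel, pitch & 0x7F, velocity & 0x7F]

def note_off(channel: int, pitch: int) -> list[int]:
    # 0x80 is the dedicated note-off status; release velocity 0 is typical.
    return [0x80 | channel, pitch & 0x7F, 0]

def song_position(sixteenths: int) -> list[int]:
    # 0xF2 carries a 14-bit position in "MIDI beats" (one beat = a 16th note,
    # i.e. six MIDI clocks), split into LSB-first 7-bit halves.
    return [0xF2, sixteenths & 0x7F, (sixteenths >> 7) & 0x7F]

print(note_on(0, 60, 100))  # [144, 60, 100]
print(song_position(200))   # [242, 72, 1]
```

Sending these byte lists one value at a time into [midiout] reproduces the messages built by hand in the video.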

Click here for more Max/MSP videos:

Pd Samplecrush Patch from Scratch

Doing some “samplecrushing” (downsampling) in Pure Data Vanilla to create dynamic aliasing artifacts.

0:00 Setting up [samphold~]
0:28 Simple downsampling and aliasing
0:55 Building a sequencer
2:33 Making the samplecrush dynamic
3:13 Making it stereo
3:50 Trying different timings and ranges
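The patch’s core trick, [samphold~] holding each value for a number of samples, can be sketched numerically. In this toy Python version the `hold` parameter stands in for the [phasor~] rate, and the stair-stepped output is what produces the aliasing artifacts:

```python
import math

def samplecrush(signal, hold):
    """Naive downsample-by-hold: keep each value for `hold` samples,
    mirroring what [samphold~] driven by a slow [phasor~] does in Pd."""
    out = []
    held = 0.0
    for i, x in enumerate(signal):
        if i % hold == 0:   # "sample" on the phasor's reset...
            held = x
        out.append(held)    # ...and "hold" in between
    return out

# A 440 Hz sine held every 4 samples becomes a stair-stepped waveform.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(16)]
print([round(v, 3) for v in samplecrush(sine, 4)[:8]])
```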

More Pd patching from scratch here:

Interactive Neural Net in Eurorack (Joystick & Artificial Neuron)

Combining human input from a joystick with a two-neuron artificial neural network for chaotic interactive music.

This Eurorack joystick is going into a simple neural network to control multiple dimensions of the timbre of this synth voice. Joystick dimensions X, Y, and Z go into different inputs of the Nonlinear Circuits Dual Neuron, and are mixed together and transformed by a nonlinearity (more here). In addition to the output controlling the waveform and filter cutoff of the synth, the output of each neuron is fed back into the other, creating a chaotic artificial organism with which to improvise.
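For anyone who wants the math behind the modules, here is a toy digital sketch of the same topology: two neurons with cross-feedback. The weights are arbitrary stand-ins, and the Dual Neuron’s analog nonlinearity isn’t literally tanh, but the feedback behavior is analogous:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum pushed through tanh, a classic squashing nonlinearity
    # (a stand-in for the NLC Dual Neuron's analog circuit).
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def step(x, y, z, a_prev, b_prev):
    """One tick: joystick X/Y/Z plus each neuron's previous output
    feeding the *other* neuron -- the cross-feedback loop in the patch.
    Weights and biases here are invented for illustration."""
    a = neuron([x, y, b_prev], [0.9, -0.6, 1.4], 0.1)
    b = neuron([y, z, a_prev], [0.7, 0.8, -1.3], -0.2)
    return a, b

# Hold the "joystick" still: the feedback alone keeps the outputs evolving.
a, b = 0.0, 0.0
for _ in range(5):
    a, b = step(0.3, -0.1, 0.5, a, b)
    print(round(a, 3), round(b, 3))
```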

Affiliate links for modules in this patch (though you really don’t need them; you can probably work this out with the gear or software that you currently have):
Doepfer A-174-4 3D Joystick (Perfect Circuit)
NLC Dual Neuron (Reverb)
Noise Engineering Ataraxic Translatron (Reverb)
Hikari Ping Filter (Perfect Circuit)
Noise Engineering Sinclastic Empulatrix (Reverb)
Arturia DrumBrute Impact (Perfect Circuit)
Korg SQ-1 (Perfect Circuit)

More Music with Artificial Neurons:

Spotting Subaudio

Finding and removing subaudio from sample files with a waveform editor.

Subaudio consists of frequencies below the range of human hearing (below 20 Hz). These frequencies can sneak into our recordings and work against us in a number of ways. If we can address subaudio in our samples, we can do ourselves a favor in the later stages of our mixing process.

0:00 Defining Subaudio
0:59 Example 1: Spotting Subaudio
2:04 Example 1: Doing the Math
2:50 Why Did This Happen?
3:11 Removing Subaudio with Parametric EQ
5:53 Example 2: Not Really Subaudio
7:27 Harmonics of Subaudio
8:31 Example 3: Trimming
9:15 Example 4: Bringing It All Together
10:16 Closing / Next Steps
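In the video I remove subaudio with a parametric EQ; as a numerical illustration of the same idea, here is a simple one-pole high-pass filter in Python attenuating a 5 Hz component. This is a much cruder tool than the EQ moves shown, just a sketch of the principle:

```python
import math

def highpass(signal, cutoff_hz, sr=44100):
    """One-pole high-pass: content well below cutoff_hz is strongly
    attenuated, content above passes nearly unchanged."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / sr)
    out, prev_x, prev_y = [], signal[0], 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)  # standard one-pole HPF recurrence
        prev_x, prev_y = x, y
        out.append(y)
    return out

# One second of a 5 Hz "subaudio" sine at full scale...
sub = [math.sin(2 * math.pi * 5 * n / 44100) for n in range(44100)]
filtered = highpass(sub, 20)
# ...comes out well below its original peak of 1.0.
print(max(abs(v) for v in filtered[22050:]))
```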

The MIDI Protocol: System Messages

An overview of MIDI System messages and how they can support MIDI programming and synchronization in your studio.


I ran away from an explanation of system messages in my previous video on MIDI Messages, instead focusing entirely on channel messages. In this video, though, I’m back to talk about System Exclusive Messages, System Common Messages, and System Realtime Messages, and how you can implement them for additional musical control.

0:00 Introduction
0:22 Quick Review of bits and bytes
0:57 Channel vs. System Messages
1:59 Categories of System Messages
2:36 System Exclusive (SysEx) Messages
4:50 System Common Messages
5:08 Song Select, Song Position Pointer
6:38 MIDI Time Code
7:31 Time Code Quarter Frame Message
9:10 Tune Request Message
9:58 System Real Time Messages
10:41 Active Sensing
11:25 Reset Message
11:56 MIDI Clock, Start, Continue, & Stop
12:39 MIDI Sync Demo in Max
13:06 MIDI Sync Demo in Logic Pro X
13:26 Wrap-up
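The categories covered in the video fall directly out of the status-byte ranges. A small Python decoder as a cheat sheet (the function name and label strings are mine; the byte ranges are from the MIDI spec):

```python
def classify_status(status: int) -> str:
    """Rough decoder for a MIDI byte, by status range."""
    if status < 0x80:
        return "data byte"          # data bytes keep the top bit clear
    if status < 0xF0:
        return "channel message"    # 0x80-0xEF: note on/off, CC, pitch bend...
    if status == 0xF0:
        return "system exclusive"   # SysEx start (terminated by 0xF7)
    if status < 0xF8:
        return "system common"      # e.g. 0xF2 song position, 0xF6 tune request
    return "system real time"       # 0xF8 clock, 0xFA start, 0xFC stop, 0xFE active sensing

print(classify_status(0x90))  # channel message
print(classify_status(0xF2))  # system common
print(classify_status(0xF8))  # system real time
```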

MIDI Protocol 1: Bits, Bytes, and Binary


MIDI Protocol 2: MIDI Messages

Nonlinear Data-Driven Instruments with Simple Artificial Neural Networks (Max/MSP)

Building a simple artificial neural network in Max/MSP for nonlinear, chaotic control of data-driven instruments.


I’ve talked before about data-driven instruments, and I’ve talked before about artificial neurons and artificial neural networks, so here I combine the ideas, using a simple neural network to give some chaotic character to incoming data from a mouse and joystick before converting it into MIDI music. The ANN (Artificial Neural Network) reinterprets the data in a way that isn’t random, but also isn’t linear, perhaps giving some interesting “organic” sophistication to our data-driven instrument.

In this video, I work entirely with musical control in MIDI, but these ideas could also apply to OSC or directly to any musical characteristics (like cutoff frequency of a filter, granular density, etc.).

0:00 Intro
1:43 [mousestate] for Data Input
2:58 Mapping a linear data-driven instrument
7:19 Making our Artificial Neuron
15:27 Simple ANN
20:06 Adding Feedback
22:23 Closing Thoughts, Next Steps
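As a numeric illustration of “not random, but not linear”: a single hypothetical neuron mapping a normalized mouse position to a MIDI note, next to a plain linear map. The weights and ranges are invented for the example; the point is that equal steps at the input no longer give equal steps at the output:

```python
import math

def linear_map(x):
    """Plain linear mapping: x in 0..1 -> MIDI note 36..96."""
    return int(36 + x * 60)

def neuron_map(x, y):
    """One tanh neuron with made-up weights; output centered on note 66."""
    activation = math.tanh(2.5 * x - 1.8 * y + 0.2)
    return int(66 + activation * 30)

# Sweep the "mouse" left to right: linear vs. neuron-shaped response.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(linear_map(x), neuron_map(x, 0.5))
```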

More Max/MSP Videos:

More Artificial Neurons and Neural Networks:


Vocal Sample to Oscillator in Symbolic Sound Kyma

Turning a single cycle of a recorded sample into a wavetable for Kyma oscillators.

When composing music with samples, it’s worthwhile to explore all of the musical opportunities in that sample–reversing it, timestretching it, granulating it, etc. Along those same lines, you can take a wavetable from a sample and use it in your oscillators, so, instead of using the usual sawtooth, square, or sine waves, you create an oscillator that has a timbral connection to the sampled material.

Here, I show how to take two vowel sounds from a vocal sample–an “ah” and an “oh”–and cycle them in a Kyma oscillator, creating unique timbres that blend with the original sample and its processing.

0:00 Intro / Why?
0:41 Finding a Single Cycle
3:14 Changing Duration to 4096 Samples
4:16 Cycling the Wavetable in an Oscillator
6:33 Making a Different Oscillator Wavetable
9:21 Implementation Example: Chords
11:49 Adding Vibrato
14:08 SampleCloud Plus Chords
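Outside Kyma, the “change duration to 4096 samples” step is just resampling one extracted cycle to a fixed wavetable length. A minimal linear-interpolation sketch in Python (Kyma’s own resampling is presumably higher quality; a tiny 5-point “cycle” stands in for real audio):

```python
def resample_cycle(cycle, target_len=4096):
    """Stretch one extracted cycle to a fixed wavetable length
    using linear interpolation between neighboring samples."""
    out = []
    step = (len(cycle) - 1) / (target_len - 1)
    for i in range(target_len):
        pos = i * step
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, len(cycle) - 1)
        out.append(cycle[lo] * (1 - frac) + cycle[hi] * frac)
    return out

# A toy 5-sample cycle stretched to 9 samples keeps its endpoints
# and fills the gaps by interpolation.
table = resample_cycle([0.0, 1.0, 0.0, -1.0, 0.0], target_len=9)
print([round(v, 2) for v in table])  # [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
```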

More Symbolic Sound Kyma videos: