Just got my video from the Mountain Computer Music Festival last September 6th.
Video by MCAT.
The theme of the conference was interfaces (more precisely: “INTER faces”), and KISS2013 used for its symbol Belgian surrealist painter René Magritte’s “Les Amants” (“The Lovers”), an image of two people kissing with cloth covering their faces:
This painting illustrates the role of interfaces as borders, emphasizing the separation between the two lovers (a separation that exists between all people) even in this most intimate moment.
In the realm of electronic music, we most often use the term “interface” to talk about the point of human interaction with a machine, whether through typing on a keyboard, using a mouse, or even the Graphical User Interface (GUI) of a piece of software. The KISS conference’s choice of Magritte’s painting for its symbol, though, re-examines the interface as a border, a concept that Kyma creator Carla Scaletti was also quick to point out in her keynote speech (poorly paraphrased here): without these borders, we would just all be one mass of cells flowing everywhere.
Dr. Scaletti’s image here immediately reminded me of Katsuhiro Otomo’s cyberpunk manga and animated film Akira, specifically the scene where the character Tetsuo merges with the mechanical devices around him, and becomes an uncontrollable expanding mass of organic and inorganic matter.
While Akira’s level of human-machine borderlessness is, hopefully, metaphorical (at least for the time being), it seems that we are moving toward more and more transparent interfaces in our human-computer interactions.
Several workshops and pieces involved the Microsoft Kinect (including performances by fellow UO alums Jon Bellona and Chi Wang), an interface that captures an impressive amount of data about a person’s body position without requiring any physical contact.
A step further, though, were two pieces presented in which the performers did not interface with the computer physically, but instead through EEG neural headsets. The performers took the stage, then thought in front of an audience in order to create music. The EEGs gathered data about the performers’ neural impulses, which the computer then sonified.
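I have no idea what mappings those pieces actually used, but the general idea of sonification is easy to sketch. Here is a minimal Python illustration that maps a stream of hypothetical, normalized EEG band-power readings onto pitches (the mapping, function names, and parameters are all my own invention, not anything from the performances):

```python
import math

def sonify(band_powers, base_freq=220.0, sr=8000, note_dur=0.25):
    """Map hypothetical EEG band-power readings (normalized 0..1) onto
    pitches spanning one octave above base_freq, rendering each reading
    as a short, decaying sine tone. Returns a list of audio samples."""
    samples = []
    for p in band_powers:
        # Power 0 -> base_freq; power 1 -> one octave up (2 * base_freq)
        freq = base_freq * 2 ** max(0.0, min(1.0, p))
        n = int(sr * note_dur)
        for i in range(n):
            env = 1.0 - i / n  # simple linear decay envelope
            samples.append(env * math.sin(2 * math.pi * freq * i / sr))
    return samples

# Three made-up readings become three rising tones
audio = sonify([0.1, 0.5, 0.9])
```

Any real system would of course stream live headset data and use a far more musical mapping; this just shows the data-to-sound skeleton.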
Of course I couldn’t help but wonder exactly what they were thinking about…
Rather than seeking to erase the human-machine border, though, it seems that these new devices are designed to allow us to interact with machines on more human terms. Typing or using a joystick, for example, are actions we have learned for the sake of interacting with computers, whereas the Kinect offers a way of interfacing with a computer using actions that might hold referential meaning beyond human-machine interaction, as evidenced in Bellona’s “spell-casting” actions in Casting, and Wang’s conducting motions in SoundMotion.
Of course, in musical performance, we should remember that performers have for centuries practiced and learned how to physically interface with their instruments in ways that are not necessarily referential to motions outside music, so the transparency of an interface doesn’t necessarily reflect on its effectiveness (or all musicians would just play the timpani, where one can see from across the room how the performer is playing the instrument).
An interesting question might be, though: does a novel interface, one that has never been seen before and whose performance we have not yet been acculturated to, benefit from a degree of clarity between the performer’s actions and the sonic results?
Finally, here is one more image from Magritte; this time, the artist transgresses rather than emphasizes the interface. The title of the piece, “Sixteen September,” seemed rather serendipitous.
Taking my first afternoon off in a while, I sat down to see what had been lurking unwatched on my Netflix queue, and I came across a documentary that I added a while ago, We Don’t Care About Music Anyway, a 2009 film about avant-garde musicians and sound artists in Tokyo:
For a better idea of what this film is about, I think this review from the Seattle Times is pretty apt.
While not all of the performances in the documentary are to my taste (a statement that I don’t think would concern the artists in the least), I really enjoyed the film, especially in how it set “noise music” in the context of issues of modernity in urban life. Speaking purely from my anecdotal experience, I’m always impressed at how clear and confident Japanese artists are about communicating their creative impetus, and it was great to hear some of the musicians speak directly about how they feel their work fits in modern Japanese society.
If you’re interested in any of the above, consider taking an hour and nineteen minutes to enjoy the film (especially if you have unlimited Netflix streaming).
For me, revisiting some Japanese, electronic-musical, cultural anthropology was a worthwhile break before returning to grading some Classical, German/Austrian, tonal analysis assignments.
I’ve been making a few contact microphones using cheap ingredients for a lo-fi live performance piece I’m working on. I’m no expert in electrical circuits, so I’m just following the directions Nicolas Collins lays out in his book Handmade Electronic Music.
I’ve used these instructions to make little piezo triggers before for Arduino projects and the like, so I’ve gotten pretty efficient at it.
I got a piezo element ($2.95 on Sparkfun).
Then, I took a guitar patch cable, cut it in the middle and soldered the element onto the cable (actually, I bought a 20ft cable for $10, cut it down the middle and soldered a piezo onto each half-cable, giving me two mics with 10ft lines).
Finally, I dipped the piezo in “Plasti-Dip” ($8.99 for a can of way more than I needed) to coat it, ending up with this:
Although I made these mics for an “avant-garde” piece involving a block of wood, a saw, and some nails, I had the idea to test one of them out on my shamisen, just to see how well they might work as pickups.
Using painter’s tape (easily removable without damaging a surface), I tried attaching the mic to various parts of the instrument (including under the bridge and to the wood), until I settled on an unobtrusive point on the front skin.
Now, before I get all of you shamisen players’ hopes up that I’ve found a less-than-ten-dollar solution to shamisen amplification, give it a listen:
It’s not a terrible sound (putting my performance aside for the moment), but there’s not a whole lot of sustain. Of course, it’s a contact mic, so it makes sense that we’re not getting any resonance from the space.
Here’s the same audio file with some DSP reverberation:
Better, but you only need to watch a couple of shamisen videos on YouTube to hear that the sound is still not quite right.
Again, the instrument’s sustain seems too short (although the reverb covers this a bit), perhaps because the mic is on the skin rather than the wood.
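As an aside, the basic building block of many digital reverbs is simple to sketch. Here is a single feedback comb filter in Python, the core delay-with-feedback unit found in classic Schroeder-style reverb designs (a minimal illustration, not the actual reverb used on the recording above):

```python
def comb_reverb(x, sr=44100, delay_ms=50.0, feedback=0.5, tail_s=1.0):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - d].
    Each pass through the delay line produces a quieter echo, giving
    a crude reverberant tail. Returns the input plus tail_s of decay."""
    d = int(sr * delay_ms / 1000)          # delay in samples
    n_out = len(x) + int(sr * tail_s)
    y = [0.0] * n_out
    for n in range(n_out):
        dry = x[n] if n < len(x) else 0.0  # input, then silence
        wet = y[n - d] if n >= d else 0.0  # delayed, fed-back output
        y[n] = dry + feedback * wet
    return y

# An impulse through the filter yields echoes every 50 ms, halving each time
echoes = comb_reverb([1.0])
```

Real reverb plug-ins run several of these in parallel (with allpass filters after them) so the echoes blur into a wash rather than a discrete slapback.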
Additionally, looking at a spectral analysis of the sound, we can see we’re not really getting a lot of the higher partials of the shamisen sound (including the “noise” of the bachi striking the string):
(You can also see a little bump around 60 Hz, which is some 60-cycle hum.)
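If you want to check for that kind of hum numerically rather than by eyeballing a spectrogram, you can measure individual DFT bins. Here is a minimal pure-Python sketch run on a synthetic signal (a made-up stand-in, not my actual shamisen recording): a 220 Hz “string” tone with a quiet 60 Hz hum mixed in.

```python
import math

def dft_magnitude(x, sr, freq):
    """Magnitude of the single DFT bin nearest `freq` Hz, scaled by 1/N."""
    n = len(x)
    k = round(freq * n / sr)  # nearest bin index
    re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = -sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return math.hypot(re, im) / n

# Synthetic test signal: 220 Hz fundamental plus low-level 60 Hz hum
sr = 8000
sig = [math.sin(2 * math.pi * 220 * i / sr)
       + 0.05 * math.sin(2 * math.pi * 60 * i / sr)
       for i in range(sr)]  # one second of audio

hum = dft_magnitude(sig, sr, 60)           # the small bump at 60 Hz
fundamental = dft_magnitude(sig, sr, 220)  # the much larger fundamental
```

With one second of signal at this sample rate, both frequencies land exactly on bins, so each sinusoid measures half its amplitude (0.025 for the hum, 0.5 for the fundamental); on real recordings you’d want a windowed FFT instead of this one-bin approach.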
So, in conclusion, this little mic may not be the solution for shamisen amplification in a pro-audio context, but it might still have some uses.
For example, hey, check out what I can do if I put the audio through my Rammstein plug-in in Guitar Rig:
I’m currently enjoying a fascinating week in Daejeon, South Korea, at the international New Interfaces for Musical Expression (NIME) Conference at KAIST (the Korea Advanced Institute of Science and Technology). I’m here, in part, to present my piece Shin no Shin for iPad, but after just a couple of days of attending workshops, performances, and presentations, my brain is filled to almost bursting with new ideas.
Being around all of these great creative minds has been a wonderful inspiration for me to get off my butt (in my post-dissertation defense complacency) and get back to work in keeping up with all of my international colleagues.
Bravo to all for bringing their A-game, and for reminding me to bring mine.
I’m just now getting back to updating my homepage after a busy few months editing, recording, and defending my dissertation. But, with that all successfully behind me, I’d like to share some of the recordings from our reading of Act I of “A Lawn in the Sky.”
I had a wonderful group of volunteer performers (listed below) for this reading project, all of whom put in several LONG days of rehearsals to get everything sounding as good as it does.
And, of course, many thanks again to Katherine Hollander for producing an “enviable” libretto.
Please enjoy some of the tracks on SoundCloud:
And, as a bonus, here’s my favorite moment in the electronics from Act II:
Bobby Chastain, conductor;
Rebecca Sacks, Kozuka/the Teacher;
Daniel Cruse, Onoda/Search Party 1;
Addison Wong, Toshio;
Alex Johnson, The Bookseller/Search Party 2;
Julianne Graper, Keiko;
Kate Kilops, Mayu;
Sarah Benton, Piccolo (Nohkan);
Rianna Cohen, Flute;
Yinchi Chang, Oboe;
Bradley Frizzell, Clarinet;
Aaron Shatzer, Bassoon;
Colin Hurowitz, Percussion;
Dustin Shilling, Percussion;
Marty Kovach, Taiko;
Simon Hutchinson, Shamisen;
Evan C. Paul, Piano;
Corey Adkins, Bass.