Watt? collaborators use Ableton to make electronic music more expressive, less rigid

Categories: Interviews

Little Fyodor (from left), Mark Mosher, Victoria Lundy and Darren Kramer

Using a broader palette of digital and physical controllers, modern electronic musicians are making music that is quite different from the current wave of EDM. Watt? Amplified and Supercharged Music Expression is a collaborative performance by Darren Kramer, Victoria Lundy and Mark Mosher that will demonstrate some of the breadth of sounds and controls worth exploring.

See also: Friday: Watt? Amplified and Supercharged Music Expression at Dairy Center for the Arts, 8/23/13

The participants are established veterans. Mosher is a well-known composer in the world of electronic music who plays festivals nationwide with his darkly ambient works and uses something called "visual synthesis" as a way to engage the crowd. He is also the founder of the Boulder Synthesizer Meet-Up.

Darren Kramer, meanwhile, is a renowned jazz trombonist who is pushing the envelope of possibilities for that instrument, and he has toured nationally and internationally with the likes of Tom Jones, Matchbox Twenty and the Tommy Dorsey Orchestra. In seeking to fully integrate technology in a practical fashion, he developed the GigEasy, a stand for the tablet devices now often used by electronic musicians as controllers but not necessarily designed, at least initially, as pieces of musical gear.

Finally, Victoria Lundy has been involved in the experimental music world of Denver for decades as a talented Theremin player and member of the Inactivists and Jackson Induced Mutant Laboratory, as well as the semi-legendary Carbon Dioxide Orchestra, which used dry ice rubbed on a large copper heart, miked to transmit its frequencies, alongside Lundy's Theremin.

We recently sat down with the trio to discuss the nature of their performance this Friday, August 23, at the Dairy Center for the Arts, and how it will be decidedly different from other electronic music shows. We also talked about how the visual component of Mosher's performance is, as it were, the visual analog of what he does with sound, bringing the show full circle in unifying analog and digital electronic music.

Westword: Tell us about what you're doing for this Watt? show.

Mark Mosher: Victoria Lundy is going first playing Theremin. Then Victoria and Darren Kramer are going to overlap a song.

Darren Kramer: Then I am playing acoustic and electric trombone, using Ableton Live with an iPad to control it. I use a MIDI keyboard and foot pedal. I've been working on it off and on for about five years.

For the electronic trombone, is it something with a mike, or is it something completely different?

DK: I'm using a normal, acoustic mike at the end of the bell, and I'm also using a mute with a mike in it, so you don't hear any acoustic trombone. I also have a vocal mike to do some vocoder stuff. In Ableton, I have five different loopers going on, so I can do keyboards separate, vocals separate, acoustic 'bone and electric 'bone separate, so you can mix and match things on the fly.

Everything is software-based: Lemur running on iOS. I have a hardware Lemur, but the company that made it went out of business because of the iPad, and another company took over the product and released it as software. It's a WiFi connection, so I can go out into the audience and use it to control things. You can use your phone, too.
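Lemur and apps like it typically talk to the host computer over WiFi using OSC (Open Sound Control) or MIDI. As a rough illustration only -- the IP address, port and parameter paths below are hypothetical placeholders, not Kramer's actual setup -- sending control messages from a script with the python-osc library looks something like this:

    # Illustrative sketch: OSC control messages over WiFi, the way a
    # touch controller like Lemur talks to a host running Ableton Live.
    # The IP, port and address paths are hypothetical placeholders.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("192.168.1.20", 8000)  # laptop's IP and listening port
    client.send_message("/looper1/volume", 0.75)    # move a fader to 75 percent
    client.send_message("/scene/launch", 1)         # trigger a scene

Because the transport is just network packets, any device on the same WiFi network -- an iPad on stage or a phone out in the audience -- can send the same messages.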

Victoria, you're playing the Theremin, of course.

Victoria Lundy: Actually, I'm doing something more melody-based and less abstract than I usually do. In fact, I'm doing some folk music -- not like Dylan, but a Gaelic folk song -- and a little Japanese exercise, a couple of things I've put together myself and a classical piece, "When I am laid in earth," from Dido and Aeneas, by Henry Purcell. It's an opera aria. So I'm doing more than showcasing an instrument.

You got your Theremin directly from the factory, is that right?

VL: Yes, I got it in 2005. A Theremin Etherwave Pro. I am mostly using the Moogerfoogers, but for a lot of this, I won't be doing a lot of processing. I'm mostly going to be adding a bit of 'verb and playing it sort of like a violin. I am going to do one piece that is sort of wacky and spooky, but mostly it's going to be pretty straightforward, which is going to be pretty different [from what I usually do]. This is going to be the first time that I've really built anything using loops or keyboards, so I'm the novice.

MM: It's going to start off pure analog with Victoria, and then it's going to progress to a mixture of analog and digital. Darren's inputs are all analog, and he mangles them up. Then I go last, and I am all digital. Even if something makes noise, I immediately convert it to digital. Everything I've got going is controllers.

How do you convert it into digital?

MM: I'm using Ableton Live; everything runs into Live, and I have a plug-in that converts pitch to MIDI. From there, I split it into instrument racks, so there are, like, tons of synthesizers through these layers I'm playing. So basically, nothing I have in front of me, even the Theremin, makes any noise that comes out of the speakers on its own. I'm playing a grid, and I have these really sensitive pads, and I have a keyboard. This is a Native Instruments Maschine that I have taped off. It's a controller that talks to Live. I can see the status of what's happening, and the pads are pressure-sensitive as well, so it's super expressive.
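The internals of a pitch-to-MIDI plug-in aren't spelled out here, but the core math any such converter rests on is just mapping a detected frequency onto the MIDI note scale. A minimal sketch, assuming a pitch detector has already handed you a frequency in hertz:

    import math

    def freq_to_midi(freq_hz):
        # Standard tuning: MIDI note 69 = A4 = 440 Hz; each semitone is a
        # factor of 2^(1/12), so 12 * log2(f/440) counts semitones from A4.
        return round(69 + 12 * math.log2(freq_hz / 440.0))

    print(freq_to_midi(440.0))   # 69 (A4)
    print(freq_to_midi(261.63))  # 60 (middle C)

Once the audio has become note numbers, Live can route them to any instrument, which is how a Theremin ends up driving stacks of digital synthesizers.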

I also use a Tenori-on. It's cut by robots out of a block of magnesium. The whole premise of my show is a controllerism show: when I'm doing something, the audience gets visual feedback on what I'm doing. So if I hold a shape, the audience can relate to what I'm doing. If I press on one side, you can see it on the other. My idea was that if I press something, you see light, and it will help you connect with what is coming out of the speakers. I do live sequencing and live playing from this, and I can play leads and notes into Ableton Live. I can get weird polyrhythms going without having to tweak a lot of stuff.

I also use stands with Percussa AudioCubes. They have a computer inside, an LED and infrared sensors, so they can detect proximity, as well as their orientation relative to other cubes in the network. Then I can send light commands to them, and you can do things like trigger scenes, turn on a bit crusher, turn the bit crusher to fifty percent -- you can use it like a Theremin. A bit crusher takes your high-fidelity digital audio and reduces the bits, so it sounds super nasty. You know, we spend all this money to get the highest fidelity we can, and then we destroy it.
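The effect is easy to sketch: a bit crusher re-quantizes each sample onto a coarser grid, so the rounding error becomes audible distortion. A minimal illustration of the idea, not Ableton's implementation:

    import numpy as np

    def bitcrush(samples, bits):
        # samples: float array in [-1.0, 1.0]. At 16 bits the rounding is
        # inaudible; at 4 bits only a handful of levels remain, so it
        # sounds "super nasty," exactly as described above.
        scale = 2 ** (bits - 1)
        return np.round(samples * scale) / scale

    t = np.linspace(0, 1, 44100)
    sine = np.sin(2 * np.pi * 440 * t)   # a clean 440 Hz tone
    crushed = bitcrush(sine, 4)          # the same tone at 4-bit resolution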

Let's say you had a delay on here: you can change the delay's wet/dry mix, or I can map seven things to one control. The last step in my rig is a camera that sits out there and looks up at me, and a second laptop running software for which the only source for the visuals you'll see is the camera input, running through a series of filters that I'm triggering through Ableton Live and the controllers; the audio in the room affects it, too. It's a super organic way of doing visuals without spending a hundred hours editing source video and then syncing to it the same way every night.
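The visual rig he describes -- live camera input pushed through filters that are toggled in real time -- can be sketched in a few lines. This is only an analogy in code, assuming OpenCV and a default webcam, not the actual software used in the show:

    import cv2

    PIXEL_SIZE = 16  # filter amount; imagine this mapped to a controller knob

    cap = cv2.VideoCapture(0)  # the camera looking up at the performer
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # "Pixelate" filter: shrink the frame, then blow it back up
        h, w = frame.shape[:2]
        small = cv2.resize(frame, (w // PIXEL_SIZE, h // PIXEL_SIZE))
        frame = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
        cv2.imshow("projection", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

In a live rig, a parameter like PIXEL_SIZE would be driven by a controller or by the audio itself, rather than hard-coded.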

I think all of us share the idea of arranging on the fly with what we do. Obviously, if we're doing a composed piece, not so much. For me, what's happened in the past four or five years with Ableton Live and laptops is that you can become a musician again. You're not locked into playback; you're controlling playback, and if the audience is reacting, you can stay in a section longer, or if you change your mind, you can switch synths. So it becomes very expressive and very organic now, whereas it used to be rigid.

What had you done before that?

MM: I played keys in rock bands in Colorado Springs. I was in a band called Head Full of Zombies for thirteen years. The band called it quits after 22 years a few years back, but we were a mainstay down there. I played synth and sang and used hardware. When I moved to a laptop-based setup, I started moving more toward virtual instruments. I used to use Sonar and FL Studio.

Those are linear, so the problem is that you kind of have to know what you're writing beforehand, sort of lay it out, and think about structure first. Ableton Live's Session View allows you to think about the moment, and you can experiment with arrangements before you commit to a linear timeline. For live performance it's great, because you break things into sections: you might have an intro, then a middle section where you riff out and solo.

Pink Floyd played out Dark Side of the Moon before they recorded it. In this situation, you can work on songs in your basement and play them live without having committed them to a linear timeline. Then, when you've got it, you go back home to the studio after rehearsing for weeks and months, hit the global record and lay down the track. It's weirdly like going back to the day when you were multi-tracking, but you're doing it all yourself.

DK: It's a nice combination of both. You can have it completely structured, even in Session View. You can set it up to automatically go to the next scene. So you can have it structured, and if something feels good, you can expand the bridge section or something like that.

MM: You can map everything to controllers, so Live is like a palette in that mode. You can launch a clip, you can launch a scene, you can turn on an effects processor, you can turn the distortion halfway up.

DK: I can control the volume of everything and use pedals to advance the scene, and I can play over it. I can also go back to where I started. The beauty of it is that you're doing your original stuff, and I can grab a Madonna loop I've created and mix that in with the tune; you can mix and match other people's music with your own.

MM: Or take live samples from somebody in the room with you right now and drop it in. I used to think the keyboard is my instrument and now my instrument is the rig.

DK: I'm a jazz trombonist, but I play a lot of keyboards and now it's all kind of lumped in together. I do a lot of loops with covers and I call myself an electronic trombone DJ.

MM: I think what's interesting about this Dairy show is that Victoria is going, then Darren's going, and then I'm going, and we're going to play a song together. So we will have figured out something interesting for the crowd. Darren and I play festivals around the country, and oddly, I've never really played around here.

About a year ago, I started the Boulder Synthesizer Meet-Up. At first, I thought no one was going to show up, but now we have about a hundred and ten members. "Eat, drink, synth" is sort of my tagline. It's a drinking club with a synthesizer problem. We meet at the Old Louisville Inn the second Tuesday of every month.

There's a lot of electronic music going on in Denver in the clubs, but our sets are a little different. When I was thinking up what I do, it was more theatrical, meant for comfy seats, a glass of wine and an awesome P.A. So we thought about putting a show together and approached the Dairy Center. I think it will be a unique opportunity to sit in comfy seats and go through this progression from analog to an analog-digital hybrid to completely digital with interactive digital visuals.

We're also working with Gannon Kashiwa. He's an amazing sound engineer who does sound for Jazz in the Park, and he used to work at the company that makes Pro Tools, working on their sound boards, so he has an amazing system and an amazing ear. We consider him part of the show as well.

You touched on the visual element earlier. What can you tell us about how that will work for this performance?

MM: I'm story-based, so all my albums are an epic alien-invasion tale told through electronic music. Part of it is beat-driven; part of it is dark ambient. I gigged it out for about three years just with the visual controllers, so people could see it. So I thought it would be good to amplify this idea.

I have two albums, and the first was told from the human point of view; the second is the same story from the alien point of view -- the inverted emotional curves, if you want to look at the geeky aspect of it. The third album is about this hybrid that realizes it's been infected and what it's going to do about it.

So I thought, wow, wouldn't it be awesome to have an imaginary camera that, if anyone walks in front of it, would reveal the alien within? So my show is about that: when you see the projection, it's me playing live, and I'm moving my hand, but it might be a ghosted movement or pixelated; or if I pick something up and begin moving it around, it might emit shards of light as I'm moving it.

So it's filtered live input, but going through filters I'm controlling, just like I'm controlling the synthesizer. So think of it as visual synthesis, I guess. What's interesting is that, just as with Ableton in music, the visual software I'm using lets you work on the fly, so it's different every show. It's this ephemeral thing where, every time I play it, it's different.

Watt? Amplified and Supercharged Music Expression, 8 p.m. Friday, August 23, Dairy Center for the Arts, 2590 Walnut St., Boulder, $10 d.o.s., 303-444-7328, all ages



