It is apped to exclaim, “Location, Location, Location!”


Probably the buzziest music project I’ve come across in the last few days is the new, interactive “location-aware” album just released as an app by the band Bluebrain. It’s called The National Mall, and the only way you can hear the album is to download its app and listen to the piece as you walk around the National Mall in Washington DC. Based on the GPS information on your phone, the music will loop or change as you stand still or move around the area’s monuments and attractions. The Washington Post explains it thus:

“The app contains nearly three hours of meticulously composed music that transforms as you navigate 264 zones across the Mall. If you stay put, the song remains the same — music will loop in intervals that last two to eight minutes, depending on your position.

The point is to keep moving. Approach the Capitol dome, and you’ll hear an eerie drone. Climb the steps of the Lincoln Memorial, and it’s twinkling harps and chiming bells. As you wander from zone to zone, ambient washes dovetail into trip-hop beats and back again. The music follows you without interruption, the way a soundtrack follows a protagonist through a movie or a video game. When you leave the Mall, the sound evaporates into silence.”
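The zone mechanic the Post describes can be sketched in a few lines: map the listener’s GPS fix to a zone and return that zone’s loop. Everything here (zone names, coordinates, radii, file names) is invented for illustration; this is only the shape of the logic, not Bluebrain’s actual code.

```python
import math

# Hypothetical zones: each has a center (lat/lon in degrees), a radius in
# meters, and an audio loop to play while the listener stands inside it.
ZONES = [
    {"name": "capitol", "lat": 38.8899, "lon": -77.0091, "radius": 250, "loop": "drone.wav"},
    {"name": "lincoln", "lat": 38.8893, "lon": -77.0502, "radius": 250, "loop": "harps.wav"},
]

def meters_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (haversine formula)."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def current_loop(lat, lon):
    """Return the loop for the zone the listener is standing in, or None."""
    for zone in ZONES:
        if meters_between(lat, lon, zone["lat"], zone["lon"]) <= zone["radius"]:
            return zone["loop"]
    return None  # off the Mall, the sound "evaporates into silence"

print(current_loop(38.8899, -77.0091))  # standing by the Capitol
print(current_loop(40.0, -74.0))        # far away: None
```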

The National Mall is the brainchild of sibling programmers and musicians Hays and Ryan Holladay, and they intend to build a series of site-specific albums for other locations, the next one being Flushing Meadows Park in Queens, site of the 1939 World’s Fair.

There’s been some discussion over at Create Digital Music (CDM’s Peter Kirn is interviewed in the Washington Post article about Bluebrain) about a couple of things. One is whether this is the first locative musical work, as claimed on Bluebrain’s blog; Kirn and other commenters on CDM link to other app-based sound works created for specific places, such as the Urban Remix Project in Times Square (which, to me, didn’t really yield anything too interesting to listen to).

A similar app mentioned in the CDM comments is the Inception App, based on the movie directed by Christopher Nolan and developed in collaboration with RjDj, the company that makes augmented or reactive music apps, combining live acoustic information from our activities and environments with music-making technology. (I myself enjoy some of RjDj’s apps, having used them to transform the chatter and door slams of city bus rides into chorused, beat-driven soundscapes.)

The Inception app promises to deliver, through the headset and mic of your iPhone or iPod and your device’s GPS locator, an aural dreamworld combining the sounds of your location with new music by Inception film composer Hans Zimmer. Like the film, the app contains levels of “dreams,” which you can unlock by doing a variety of things. “For example, new dreams are unlocked by walking, being in a quiet room, traveling faster than 30 mph, when the sun shines or it is full moon.” But one of the dreams will only play if you’re in Africa, which takes us back to the DC-only Bluebrain album.

What is this album if you can only get it in one place? Well, that’s familiar. Haven’t we all searched the stacks of local record shops in cities other than our own, looking for musical gems unique to that particular town? But you can’t take the Bluebrain album home, and even if you listen to it in DC, it will be different every time you hear it. In that sense, isn’t it more of a site-specific live performance, except you are the only performer, deciding by your movements exactly how this thing will be performed?

Another issue seems to be whether to call Bluebrain’s The National Mall an actual “album,” that being a collection of fixed songs usually arranged in a specific order. Not being in DC, I have no opportunity right now to actually play the project on my device, but if you are there in DC to listen to it…are these actual “songs” that change as you move from zone to zone, or snatches of musical elements? If the latter, maybe, as others have pointed out, it’s more appropriate to call this a location-aware “composition”? But again, that irritates my usual notion of composition, where the artist presents me with layers of notes and sounds in a particular musical order so they have meaning as a whole. Should we just call it “a piece”? (Let’s.)

Because the piece focuses on a particular attraction in a particular city, it could seem like a PR project to attract visitors, just like the Movement app I downloaded for the Detroit Electronic Music Festival. Does it make you want to hop on the Acela and check it out? Also, because The National Mall is delivering location-based content/media in much the same way as Yelp or Google can tell me where the closest cupcake shop is or whether a battle was fought 100 years ago on the exact spot where I’m standing, the Bluebrain piece could seem more like a coding assignment and less like art. I mean, beyond describing the experience of receiving all these musical sounds in such a high-tech way, I can’t see that folks have said much so far about the actual music being any good. I haven’t seen any album reviews in the music press yet.

I am so used to my attachment to conventional albums: the personal timestamps we place on our favorite records, the way our feelings about them change with time while the musical recordings themselves remain fixed. I’m also very attached to live musical performances, listening to those songs I know well infused with new energy each time the performer/composer steps up on stage. I wonder if experiencing a new technical gimmick can compare with that. But looking at all the press surrounding Bluebrain, I guess that is the point of The National Mall: to shake up ideas about what an album or performance or audience participation is in today’s wired world. We’ll see how “app albums” develop in the hands of more and more artists quite soon. Word came in March that Bjork is working on a project called Biophilia, her 7th studio album, partially recorded on an iPad, which will be distributed as a series of apps. But I think she’s going to want this to be available in more than one place, and I still think I’ll want to buy a ticket to a show.

Jocelyn

Love Kinection?

So, since the holiday season, have you been flailing around in your living room, throwing imaginary footballs, jumping over invisible obstacles or throwing punches in the air? If you have, you probably got Microsoft’s Kinect for Xbox under your tree: the gesture-based gaming system that requires no controller to play.

The Kinect’s sensor uses a camera, a depth sensor and a microphone array to track your movements, scan your environment and listen to spoken commands. The special software employs face and voice recognition and 3D motion capture, transforming you into an onscreen avatar that can fully interact with the characters and action of the game.

While the Kinect was still in development under the name Project Natal, I knew it wouldn’t take too long for programmers and/or artists to figure out a way to make use of its open USB port and create other kinds of drivers that would read input from the sensors. Microsoft says it basically welcomes the experiments that started cropping up so quickly after the system’s release last fall. Beyond gaming and other media industries, the possibilities for Kinect’s use in the creative process and in multimedia performance seem pretty clear. Let’s take a look at what some of the Kinect hackers are up to besides virtual bowling:

Last month at a meetup of the Boston Ableton users group, the crowd got a demo of the Kinect being used to send MIDI information to Ableton Live. Here, the user waves his arm up and down to control a “wobble” effect in the music:
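That arm-wave control reduces to a simple mapping: normalize the hand’s tracked height and scale it to a 7-bit MIDI controller value. A minimal sketch, assuming a Kinect skeleton tracker that already reports the hand’s vertical position as 0.0 (bottom of frame) to 1.0 (top); the tracker itself, and sending the value out over MIDI, are out of scope here.

```python
# Hypothetical mapping layer between a Kinect hand tracker and a MIDI
# control change. Only the math is shown; a real patch would pass the
# result to a MIDI library and on to Ableton Live.

def hand_height_to_cc(y_normalized: float) -> int:
    """Map normalized hand height (0.0-1.0) to a 7-bit MIDI CC value."""
    y = min(max(y_normalized, 0.0), 1.0)  # clamp jittery tracker output
    return round(y * 127)

# Raising the hand sweeps the "wobble" controller from 0 up to 127:
for y in (0.0, 0.5, 1.0):
    print(hand_height_to_cc(y))  # 0, 64, 127
```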

Hirofumi Kaneko made this line drawing program using OpenFrameworks and the Kinect, creating a pencil sketched avatar of himself that moves as he does:

Ryan Monk developed some painting software for use with the Kinect. Here you can watch him moving his “brush” in the air as the artwork takes shape onscreen:

Here’s the next step in VJing with the Kinect: controlling images or graphics by dancing along with the music. This example uses an open framework called TUIO, which was originally created for interactive or multi-touch surfaces. It can now track specific hand gestures, and it helps the Kinect “speak” to the visual software via OpenSoundControl (OSC):
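OSC itself is just a compact binary format: an address pattern, a type-tag string, then the arguments, each padded to a 4-byte boundary. Here is a sketch of encoding one such message by hand; the address “/hand/position” is invented for illustration, and a real TUIO setup would of course use a library rather than rolling its own encoder.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message carrying 32-bit float arguments."""
    msg = osc_pad(address.encode())                       # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode())    # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32, no padding needed
    return msg

# A tracked hand position, as a tracker might send it to visual software:
packet = osc_message("/hand/position", 0.25, 0.75)
print(len(packet))  # the whole packet stays 4-byte aligned
```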

Here is some digital puppetry created with the Kinect connected to a MacBook and real-time animation software called Animata. It’s part of the Virtual Marionette research project from grifu.com:

Looking at all these Kinect hacks sprouting up all over the place, it seems that creating or editing digital media won’t just mean butt spread and carpal tunnel syndrome. We’ll rise up from our workstations and learn to control the arts with the rest of our bodies in some new, though sometimes silly looking, ways.

Jocelyn

Encontre a Bossa Velha, mesma que a Bossa Nova

“Antônio Carlos Brasileiro de Almeida Jobim (January 25, 1927 in Rio de Janeiro – December 8, 1994 in New York), also known as Tom Jobim, was a Brazilian songwriter, composer, arranger, singer, and pianist/guitarist. A primary force behind the creation of the bossa nova style, his songs have been performed by many singers and instrumentalists within Brazil and internationally.”

Tom Jobim: b. 1927 Rio de Janeiro – d. 1994 New York

So goes the WIKI. Happy Birthday, Tom.

I can’t say that I’ve ever been a Bossa Nova fan per se, but the more I hear it, the deeper my respect grows. Not surprisingly, it grew by leaps and bounds in recent years while I was working in Brazil. Something about actually being there and taking in that vibe. I’m sure that’s a quite common effect when we travel to a place where any art originates.

This year, to celebrate Jobim’s anniversary on January 25, the radio station WKCR 89.9 FM in New York City is having an all-day on-air festival of his life and music. As part of that series of events, NYC-based composer Arthur Kampela, himself a native of São Paulo, Brazil, put together a group of composers to create recorded arrangements of Jobim’s music. That airs at 8 PM (01:00 GMT) and can be streamed through the station’s web site.

Amongst the composers that Arthur put together to create arrangements were: himself, Clarice Assad, Gene Pritsker, Dan Cooper, myself, and most likely a few more that I won’t know about until I hear the broadcast. When the email went out, Arthur had a list of the composers and included suggestions of pieces we might want to work with. Like I said, not being the biggest Bossa Nova fan, and the more famous pieces having already been doled out (The Girl from Ipanema et al.), I had to YouTube the suggestions I was given. I listened to them. Hmm…I wasn’t particularly inspired, despite the fine pieces that they are. Scrolling through a list of Jobim compositions, One Note Samba (Samba De Uma Nota Só) caught my eye. With nothing more than the title (one note? I can handle that!) I listened to the original once (OK, I had heard it before, I realized) and set to work.

Having no interest in trying to beat the Brazilians at their own game, I had to choose an approach that would be respectful yet unique enough to be worth the effort. I started thinking: how many things do we hear every day that push one note “melodies” at us? I made a short list to get started and then began collecting samples, tuning them all to the same pitch and beat mapping them into the same tempo.
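The prep work described above, tuning every sample to one pitch and beat-mapping it to one tempo, boils down to two numbers per sample: a pitch shift in semitones and a time-stretch ratio. A sketch of that arithmetic follows; the example frequencies and tempos are hypothetical, and a library such as librosa could then apply the actual shifting and stretching.

```python
import math

A4 = 440.0  # reference tuning, Hz

def semitones_between(freq_hz: float, target_hz: float) -> float:
    """Pitch-shift amount, in semitones, to move freq_hz onto target_hz."""
    return 12 * math.log2(target_hz / freq_hz)

def stretch_ratio(bpm: float, target_bpm: float) -> float:
    """Time-stretch rate to move a sample from bpm to target_bpm."""
    return target_bpm / bpm

# e.g. a car-horn sample near F4 (349.23 Hz), retuned to A4, and a loop
# detected at 96 BPM mapped into a 120 BPM grid:
print(round(semitones_between(349.23, A4), 2))  # ~4.0 semitones up
print(stretch_ratio(96.0, 120.0))               # play back 1.25x faster
```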

The result was a new piece I call One Note Sampla for Tom Jobim. You can listen to the original here.

Busy Signals in D#m7

“One Note Sampla for Tom Jobim” by Patrick Grant

If you tune in and want to follow along, here’s a list of sounds that one can hear when listening to the piece:

1. A chorus of touch-tone phones, from dial tone to keypad to busy signals. The busy signals build up into the first chord of the song (D#m7) in patterns of 2s, 3s, and 4s. A chromatic electric guitar duet is accompanied by strings and timpani as a drum loop of junkyard metal establishes the downbeat.

2. A garbage truck alarm sounds as it backs up, left to right, with strings playing the harmonies of a slowed down chorus.

3. Submarine SONAR pings with added dripping water FX. Dripping water in a submarine? Not good.

4. Cell phones ringing and the door chimes of a NYC subway car. Going somewhere.

5. Bells and anvils clang during a double-time jazz version of the chorus.

6. More cell phones ringing with different model car horns playing the one note melody in the distance. Brazilian traffic jam?

7. A heart monitor and respirator. After a gasp, the monitor goes flatline. An international vehicle siren is heard following the descending chromatic harmony of the piece, mimicking a Doppler effect.

8. A rock band kicks in. Under the jangly guitars, an orchestral crescendo from Alban Berg’s expressionist opera Wozzeck is heard. This comes from the end of Act Three, Scene Two of the opera, the invention on a single note. At the end you can hear the timpani play the dominant rhythmic motive from Berg’s piece:

The dominant rhythmic motive from Wozzeck

9. One last chorus. The guitars are now in canon, one beat behind the other.

10. Two guitars battle out the last instance of the one-note melody. The orchestra swells on that one note again until…

Patrick Grant


21st Sensory Music: In Conversation with Composer Randall Woolf

As 2010 draws to a close, it should be noted that this year has marked the centennial of Alexander Scriabin’s Prometheus: The Poem of Fire, arguably the first contemporary composition to use “multimedia” as we (mis)understand it today. That is, as defined here, accompanying visuals that are produced by electric/electronic means.

With this as a point of entry, a discussion of the previous 10 decades of new music with visuals, and their ever-evolving technology, seemed a good way to lead into a mini-profile of the work of composer Randall Woolf. His catalog contains many compositions where video and staging are prominent features, a unique combination of current technology and contemporary culture in what is 21st-century classical music.

Randall Woolf

This blog post is made up of three interdependent parts: this hyper-linked text as an outline, embedded video examples, and an audio interview/conversation (24 min.) between Randy and me, recorded and edited by Jocelyn Gonzales. Feel free to hop, skip, and jump around all three as you see fit.

You can listen to the audio here:
All told, it simply wasn’t possible to cover everything that the topic deserved, but we did touch upon a number of milestones, in rather broad strokes, in this order:

01. Prometheus: Poem of Fire (1910)
02. Synesthetes & Synesthesia
03. Wagner’s stage directions
04. If C=blue, then F#=?
05. Berg’s Lulu and its filmmusik
06. Schoenberg & Satie
07. Walt Disney, Russian animation, and Marcel Duchamp
08. Bernard Herrmann’s score for Psycho
09. The composer as “the last rigger on the ship” in film scoring
10. ELP, Kiss, Pink Floyd
11. Late 70s/early 80s and the advent of MIDI
12. The newer generations’ use of video
13. Fancy screen savers vs. narrative content

Around 9:15 in the audio, our conversation turns to Randy’s work itself. He says it best when he says that his goal is to incorporate aspects of real life into his compositions. We discuss four of his pieces which use video in a number of ways. Excerpts of these works are found below:

REVENGE!

music: Randall Woolf, video: “The Cameraman’s Revenge,” by Ladislaw Starewicz, produced by the Khanzhonkov Company, Moscow 1912

WOMEN AT AN EXHIBITION

music: Randall Woolf, video: Mary Harron & John C. Walsh

HOLDING FAST

music: Randall Woolf, video: Mary Harron & John C. Walsh, Jennifer Choi, violin

BYOD

music: Randall Woolf, video: Margaret Busch, text: Valerie Vasilevski, dance: Heidi Latsky

* * * * * * *

As we conclude, we speak of Randy’s upcoming work, including a new commission from Newspeak based on the Detroit Riots of the 1960s, and what the future may hold for the continued marriage of media in modern music.

Speaking of the future, we wish you all a very Happy 2011 and look forward to all the new work to come from us and from all of you.

Patrick Grant

Synch Before You Speak

Ever have one of those days when nothing coming out of your mouth makes any sense?

THIS is sort of what it’s like when that happens to me, although it doesn’t sound half as funny or cool:

This project is called Speakatron, created by interactive designer Marek Bereza for one of this year’s Music Hack Day events. In general, Music Hack Days happen over a weekend in a number of different cities. Musicians, coders and programmers get together to try and build the next generation of music applications, whether it’s software, apps for mobile devices or new ways of creating art for the web.

On the project Wiki, Bereza describes Speakatron as, “A program that looks at you through your web cam and plays a sound when you open your mouth. It can tell what shape you’re making and how high your mouth is on the screen as synthesis parameters.”
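Bereza’s description suggests a small mapping layer between the face tracker and the synth. A hedged sketch, assuming the tracker (OpenCV, say) already reports mouth openness and vertical screen position as values from 0.0 to 1.0; the parameter names and ranges here are invented, not Speakatron’s actual internals.

```python
# Hypothetical mouth-to-synth mapping: two tracker measurements become
# three synthesis parameters. Ranges are made up for illustration.

def mouth_to_synth(openness: float, screen_y: float) -> dict:
    """Map mouth measurements (0.0-1.0 each) to synth parameters."""
    openness = min(max(openness, 0.0), 1.0)
    screen_y = min(max(screen_y, 0.0), 1.0)
    return {
        "gate": openness > 0.1,                   # sound only when the mouth opens
        "amplitude": openness,                     # wider mouth = louder
        "pitch_hz": 110.0 * 2 ** (screen_y * 2),   # two octaves, 110-440 Hz
    }

print(mouth_to_synth(0.8, 0.5))  # mouth open at mid-screen: ~220 Hz
```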

I haven’t downloaded it yet, but if you’d like to play around with the program or the source code, you can pull it down from Bereza’s project page. At the moment, the program offers the sounds of a cat, a synth, birdsong and Buddhist monks. I would love to add the wah-wah trombone sound of the teachers on the old Peanuts cartoons.

Jocelyn

H2Opus: Behind the Music (w/video)

Make Music New York 2010

And so, after months of planning and promotion, our Make Music New York 2010 performance of H2Opus: Fluid Soundscapes by Multiple Composers at Waterside Plaza in Manhattan came to pass on June 21st. Funny. Considering everything that could go wrong (musician schedules, illness, broken guitar strings), it all came down to the elemental. It seemed that, after all of that, our only concerns were the possibility of rain and the reality of wind.

The weather forecast for that day was great. Nothing but pure sunshine all day. Yay! Sort of. Every piece of electronics we consumers buy comes with that notice in bold in the instruction manual: “WARNING! Do not store or operate this equipment in direct sunlight.” Man, they are not kidding.

With temperatures in the mid 90s and with no cover of any kind, we were sitting ducks for El Sol. The result was having to prolong the setup process as much as possible and, even then, do so with Manhasset music stands serving as umbrellas for the sound board and such at my station. I couldn’t even read the LCDs on most of the stuff until the sun went down a little further. Everything felt hot to the touch and, as usual, I kept my quiet veneer on the outside while I was privately freaking out on the inside. This I do for my team. I’m long beyond the days of counter-productive displays of dismay when there are problems to be solved.

The upshot of this was that, after waiting 90 minutes longer than planned to set up, we were going to have to start the show without a proper, if any, soundcheck. Electronics and computers really do strange things when overheated. My computer wouldn’t boot up and read the MBox correctly. Our sound board, with each of the 11 pieces pre-programmed for levels, decided to give me random settings. One can always pre-program these levels in rehearsal and know that at the gig some minor adjustments will be necessary to accommodate the different space/venue. Well, this was like Bizarro World.

For smaller, more convivial shows like this, I’ve been able to run the sound from my station, no problem, exactly because of this programmability. In this situation, we sure could have used a dedicated soundman. My attention was all over the place. I told the group and people afterwards: “You have just witnessed the last time I run sound while performing, no matter how small the gig, EVER!” I mean it.

Now, the wind. It’s a good thing I saw this one coming days ahead of the show. Living at Waterside Plaza, I felt like an ancient mariner, going down to the performance site every night leading up to the show and taking wind readings. “No good,” I thought. “This wind off of the East River is going to blow our music and our stands all over the place.” We had to find a solution.

My trick was to go to an art supply store next to the School of Visual Arts on 23rd St. and buy a half dozen sheets of black foam core. From these I made “wings” (bad choice of words) that could fit four pages of music, so page turns would not be necessary within a piece, and gaffer-taped a set to each stand. In turn, these music stands would be heavily gaffer-taped to the stage so that they wouldn’t blow over. That was half the problem solved. Keeping the music on the stands was the other half.

I called around and found a place in Queens that would cut (and deliver) 9 sheets of 1/4″ clear Plexiglass that would fit on top of the music to hold it down and yet enable us to read it. That seemed to do the trick.

One of the reasons I had approached Waterside Plaza about us doing a Make Music New York performance was, I thought, “How hard could it be? It’ll be EASY! I live there. Just bring everything down on luggage carts and such.” It was harder than that, but certainly not as hard as dragging all those 88-key fully weighted keyboards off the premises. Plus, I wanted it to be our “Big NYC Moment.” You know, playing an outdoor gig on the East River on the first night of summer in Manhattan…hell, I romanticized it as being something like our “rooftop concert.”

And, you know: it was just that. Attendance was GREAT, all ages were represented, kids were dancing, the mature folks were bopping in their seats, and the stand-offish teens hung out on the perimeter for the duration lest they blow their cool. Most of all, the musicians played great and we did well as a group. We were complimented on the diversity of the music played, and to me, that meant a lot. After all, “diversity” is this city’s middle name.

H2Opus: Video Excerpts
(click lower right icon in the player to enlarge)

Things to watch for in the video:

1. Musicians struggling with their music against the wind
2. The clever editing around the kids riding scooters back and forth
3. Performing while adjusting the sound at the same time (when possible)
4. The sound of the wind into the mics (you can even see it in the trees)
5. Musicians leaning into their music more and more towards the end as the sun goes down and the light fades

Afterwards, having had a proper sound check, I wanted to do it all over. Not possible. That’s live performance. So ephemeral.

Having this repertoire together now, we all are looking forward to doing it again in some way in this upcoming season: INDOORS!

Patrick Grant