VIX Sound+Light+Motion

4. Gesture and Immersive Events

Gesture

In the Immersive Environment Instrument (IEI), the term “gesture” encompasses both the physical movement of the live performer and the musical definition of a phrase or section. These two ideas are linked because a physical gesture will initiate an Immersive Event, the name for a combined Sound+Light sequence. More on Immersive Events below.

Credit: Micheal Wheatley

The concept of gesture can be explained as using physical movement, as small as the wave of a hand or as large as a whole-body motion, to convey emotion to the audience and simultaneously activate a sensor. The sensor then triggers or controls an Immersive Event.
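To make that chain concrete, here is a minimal sketch in Python of how a sensor reading might fire an Immersive Event. It is an illustration under my own assumptions (the sensor name, threshold, and callback are all hypothetical), not the IEI's actual implementation.

```python
# Hypothetical sketch: a normalized sensor reading crossing a threshold
# launches an Immersive Event. Names and values are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GestureSensor:
    """Fires its callback once when the sensed motion crosses a threshold."""
    name: str
    threshold: float                     # normalized reading (0.0-1.0) that counts as a gesture
    on_trigger: Callable[[float], None]  # the Immersive Event to launch
    _armed: bool = True                  # re-arms only after the reading falls back down

    def update(self, reading: float) -> None:
        if self._armed and reading >= self.threshold:
            self._armed = False
            self.on_trigger(reading)      # gesture detected: start the Immersive Event
        elif reading < self.threshold * 0.5:
            self._armed = True            # hysteresis prevents rapid re-triggering

def start_immersive_event(intensity: float) -> None:
    print(f"Immersive Event launched at intensity {intensity:.2f}")

sensor = GestureSensor("right_hand_wave", threshold=0.6,
                       on_trigger=start_immersive_event)
for reading in [0.1, 0.3, 0.7, 0.9, 0.2, 0.8]:  # simulated sensor stream
    sensor.update(reading)
```

The hysteresis in update() reflects a practical concern with any motion sensor: a sustained gesture should read as one event, not a burst of re-triggers.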

During most of my years of performance as a musician, I played instruments where the audience could clearly see cause and effect: striking a guitar string, opening my mouth to sing, or playing a pattern on a drum. Since I started performing with notebook computers, that connection has been lost; the physical movement required to move a mouse or click a key is too small to be seen, and it is difficult to convey any emotional charge in a mouse click.

At left, I am pictured moving with my guitar while playing. This Motion is a full-body gesture that sends an emotional message to the audience. Even if I projected a live video feed of myself activating controls on my computer, it would not express emotional intensity the way a large physical gesture does.

To solve this problem, I have designed a sensor array that will allow me to activate Immersive Events by various physical means. The sensors and their functions are discussed on page 5, Sensor Array Technology.

Immersive Events

The term Immersive Event describes a Sound+Light+Motion group that has been designed to be presented simultaneously. Sound is obviously the audio component and can consist of combinations of natural field recordings, composed or improvised live music and/or highly processed synthetic sounds created in the computer. These selected audio materials will be paired with a visual companion, much as food and wine are combined to enhance each other.
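As a thought experiment, an Immersive Event can be modelled as a single named bundle whose audio and visual halves launch together. The Python sketch below uses field names and cue types that are my own assumptions for illustration, not the IEI's actual schema.

```python
# Hypothetical data model for an Immersive Event: paired Sound and Light
# cues designed to be presented simultaneously.
from dataclasses import dataclass, field

@dataclass
class SoundCue:
    source: str        # e.g. a field recording, live-input bus, or synth patch
    gain_db: float = 0.0

@dataclass
class LightCue:
    fixture: str       # e.g. "projector_1" or "led_wash_left"
    content: str       # video file, still image, or lighting preset
    dimmer: float = 1.0

@dataclass
class ImmersiveEvent:
    name: str
    sounds: list[SoundCue] = field(default_factory=list)
    lights: list[LightCue] = field(default_factory=list)

    def fire(self) -> None:
        # In a real rig these would go out as MIDI/OSC/DMX messages;
        # here we only show that both halves start together.
        for s in self.sounds:
            print(f"[sound] play {s.source} at {s.gain_db} dB")
        for lc in self.lights:
            print(f"[light] {lc.fixture}: show {lc.content} at {lc.dimmer:.0%}")

dream = ImmersiveEvent(
    "dreamscape",
    sounds=[SoundCue("field_recording/creek.wav", -6.0)],
    lights=[LightCue("projector_1", "slow_water_loop.mov", 0.7)],
)
dream.fire()
```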

Credit: Victoria Gibson

The visual component is called Light, but it also encompasses the shadow and darkness created by lights and video/still-image projection. The photo at left was taken during a rehearsal of the Lorita Leung Dance Company with the Orchid Ensemble before they performed in Ottawa in 2009. It is an example of how projection can be combined with human body Motion to enhance the effect. The projection was controlled by Ken Newby of Computational Poetics.

Most pairings of audio and video are produced as separate entities and then merged into one presentation. Many live performances have projections on screens that are controlled by a VJ (video DJ) or are live projections of the performers. Often, these visual elements seem disconnected from both the audio and the performers' motion.

With sophisticated technology, a single gesture can activate and control both Sound and Light parameters from the same Motion. Because the Immersive Event is triggered and manipulated by performer motion, choosing a sensor type that matches the intended emotion is part of the consideration when preparing the program. An abrupt gesture could immediately launch a new, intense emotional direction, or express rejection or anger with flashing, tight beams of light. In contrast, a gentle motion could introduce natural sounds and film with soothing lights, creating a dream-like environment as they blur the sharp edges of the projections.
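As a hedged sketch of that idea, a single normalized gesture-speed value could steer Sound and Light parameters at once; the threshold and parameter names below are invented for illustration.

```python
# Hypothetical gesture-to-parameters mapping: an abrupt motion yields a
# loud hit with tight, strobing beams; a gentle motion yields quiet
# ambience with wide, steady light.
def map_gesture(speed: float) -> dict:
    """speed: normalized gesture speed, 0.0 (still) to 1.0 (abrupt)."""
    if speed > 0.7:  # abrupt gesture
        return {"sound": {"sample": "impact", "gain": speed},
                "light": {"beam_width": 0.1, "strobe_hz": 8.0 * speed}}
    # gentle gesture
    return {"sound": {"sample": "ambient", "gain": 0.3 + 0.4 * speed},
            "light": {"beam_width": 0.9, "strobe_hz": 0.0}}

for speed in (0.2, 0.9):  # one gentle and one abrupt gesture
    print(speed, "->", map_gesture(speed))
```

Because both halves of the mapping read the same speed value, the Sound and Light changes stay locked to the performer's Motion by construction.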

The goal is to be able to improvise a non-linear arrangement of newly generated paired material, such as live video projection of the performer singing, with pre-composed Immersive Events. The emotion of the moment would inspire the performer to trigger the appropriate sensor and then to respond with an audio/visual improvisation. The audience becomes a participant as their response influences the motion/emotional triggering.

Cinematic Virtual Reality (CVR)

Cinematic Virtual Reality is a term I created for the visual component of the Immersive Environment Instrument.

Surround sound is commonplace in live performance and in cinemas, but surround video is trickier. I propose to set up separate projection nodes, much as a panoramic photo is really several separate photos stitched together. This also gives me the flexibility to soften the edges of each node with light to aid blending, or to use the nodes for unique, distinct images. Animations, still photos, and drawings can also be integrated into the cinematic projections, similar to the way they are fused in contemporary films. I think it is more cost-effective and realistic to project each node independently and let the instrument create the Cinematic Virtual Reality (CVR) illusion.
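One way to picture the node idea is the sketch below: each projection node covers a slice of the surround, and a feathered edge decides whether neighbouring nodes blend into one image or stay distinct. The geometry and numbers are hypothetical.

```python
# Hypothetical CVR node layout: four projectors on a surround ring, each
# with a softened (feathered) edge so adjacent slices can blend.
from dataclasses import dataclass

@dataclass
class ProjectionNode:
    name: str
    center_deg: float   # direction the projector faces, 0 = front
    width_deg: float    # horizontal coverage of this node
    feather_deg: float  # width of the softened edge on each side

    def brightness_at(self, angle_deg: float) -> float:
        """1.0 inside the node, ramping to 0.0 across the feather zone."""
        dist = abs(angle_deg - self.center_deg) % 360
        dist = min(dist, 360 - dist)             # wrap around the ring
        half = self.width_deg / 2
        if dist <= half - self.feather_deg:
            return 1.0
        if dist >= half:
            return 0.0
        return (half - dist) / self.feather_deg  # linear ramp in the feather

nodes = [ProjectionNode(f"node_{i}", center_deg=i * 90,
                        width_deg=100, feather_deg=15) for i in range(4)]
# At the 45-degree seam, two adjacent nodes overlap at reduced brightness:
print([round(n.brightness_at(45), 2) for n in nodes])  # [0.33, 0.33, 0.0, 0.0]
```

Widening feather_deg blends the seam into one continuous image; setting it near zero keeps each node a distinct frame.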

Forward to 5. Sensor Array Technology

Return to Index of Immersive Environment Instrument

See posts about the Immersive Environment Instrument
