Emerson 0.0000001: Preliminary thoughts for a low-cost, real-time, customizable digital puppeteering system & rehearsal methodology for live scripted performances


Background

I’ve long fantasized about bringing 3D digital characters to life on a theatrical stage to perform alongside human counterparts. In particular, I was really inspired by Bulbous Bouffant by The Vestibules, a Canadian sketch comedy troupe; the piece is a work of whimsical genius. I have hazy memories of sitting in my friend’s dorm room. “Listen to this,” he said cryptically, and I was promptly taken on a 4.5-minute ride of wonder. I’ve been obsessed with this piece ever since.

There are two main characters: one sounds like a strait-laced, normal human, and the other is undoubtedly a cartoon character. In my college days I thought long and hard about how I could adapt this to theatre for my Div III, with a human actor playing the normal dude and the cartoon character appearing as a digital projection of a 3D model, puppeteered or prerendered behind the scenes by an operator.

Playing back prerendered video of a digital character seemed antithetical to a live performance. That’s not to say video doesn’t work in theatre, but it doesn’t work well for a character that’s supposed to be interacting with a live actor. Even when delivering a scripted performance, an actor has all sorts of subtleties of timing and gesture that change with each audience: when a line is delivered, which parts of the line get emphasized, where exactly they fall on stage. Live puppeteering seemed like the only solution. However, I hate the look of digital puppets that just flap their mouths open and shut without making real lip shapes and facial expressions. The control systems that exist for fully articulated digital puppets, then and now, are complex, proprietary, and expensive; they require two or more people to operate and may need an engineer for initial configuration.

Since I’ve been focusing more and more on character animation over the last few years, rather than motion graphics and compositing, my interest in puppetry has rekindled in a big way, especially after watching the Elmo documentary and stumbling on Jeff Dunham’s standup on TV. I started searching for a hand puppet to play with, and a colleague and fellow puppet enthusiast turned me on to the Muppet Workshop at FAO Schwarz. You get to customize a Muppet extra using their web app and then THEY SEND IT TO YOU. Amazing!


Louai and Enzo. Together, at last.

His name is Enzo. Enzo is wonderful. Unlike a rigged 3D model, he responds in real time, puts me in character immediately, lets me experiment endlessly, doesn’t require a graph editor to get perfect arcs, and is completely WYSIWYG. Also unlike a rigged 3D model, he is wholly limited to how my hand and body can move and the range of my voice. So I would love to take the immediacy of single-performer hand puppetry and combine it with the control, articulation, and flexibility of an animated 3D model, then project the results in front of a live audience.

The System

The workflow would be something like this:

  1. Make your hand puppet.
  2. Make a 3D model and rig inspired by your hand puppet, or vice versa.
  3. Rehearse with your hand puppet and other human performers.
  4. Record a final performance with your puppet and other performers in front of a camera. The hand puppet’s performance is recorded as machine-friendly data using a variety of methods: a flex sensor in its mouth, a gyroscope on its head, tracking markers on the hands, and an ever-increasing number of other ways to capture physical movement (see the recording sketch after this list).
  5. Keyframe the 3D model’s animation in a 3D application, using the recording as reference, and continue to shoot reference as needed, both with the puppet and with your whole body. This finished animation is what the audience will see and what the human actors will interact with.
  6. Using a custom tool, denote the parts of the animation that will be modified and exaggerated during the live performance, and add cue points for when phrases of the animation should be triggered (see the cue/modifier sketch after this list).
  7. Showtime! You perform with your hand puppet, sensors attached, in front of a video camera. As you reach the cue points with modifiable attributes created in step 6, the animation plays back on whatever output medium you’ve chosen, e.g. a projector or a large display. As the performance of the hand puppet unfolds, it drives the playback of the 3D model with modifications to the keyframe data, such as the exact tilt of the head, how far the 3D model moves across the stage, how fast a walk is, or when exactly a line is delivered (one possible blending scheme is sketched after this list). The control system playing back the animation would render in real-time 3D, much like a game engine. What will be interesting is using machine-learning methods in the control system so it can match the physical puppet’s movements against the keyframe animation.
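
To make step 4 a little more concrete, here is a minimal sketch of what recording the puppet’s performance as machine-friendly data might look like. It assumes a hypothetical microcontroller streaming comma-separated flex-sensor and gyroscope readings over a serial port; the port name, baud rate, field layout, and file names are all placeholders, not a finished design.

```python
# Sketch: record flex-sensor (mouth) and gyroscope (head) readings to a
# timestamped JSON Lines file for later use as animation reference data.
# Assumes a microcontroller streaming lines like "512,0.10,-0.32,0.95"
# (flex value, then gyro x/y/z) over serial -- all names are placeholders.
import json
import time

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder serial port
BAUD = 115200

def record(outfile="puppet_take_01.jsonl", duration_s=60.0):
    start = time.time()
    with serial.Serial(PORT, BAUD, timeout=1) as link, open(outfile, "w") as out:
        while time.time() - start < duration_s:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                flex, gx, gy, gz = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed lines
            sample = {
                "t": round(time.time() - start, 4),  # seconds since start
                "mouth_flex": flex,                  # raw flex-sensor reading
                "head_gyro": [gx, gy, gz],           # head rotation rates
            }
            out.write(json.dumps(sample) + "\n")

if __name__ == "__main__":
    record()
```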

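For steps 6 and 7, the cue and modifier annotations could be as simple as a small data structure attached to the baked animation, with the live control loop bending the keyframed values by scaled sensor input at the attributes marked as modifiable. The sketch below is only a guess at one way to structure that; the attribute names, sensor names, scaling, and clamping are invented for illustration.

```python
# Sketch: cue points plus "modifiable attribute" annotations over baked
# keyframe data, and a playback step that blends live sensor input into
# the marked attributes. Attribute names and ranges are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Modifier:
    attribute: str          # e.g. "head_tilt_deg" or "stage_x"
    sensor: str             # which live input drives it, e.g. "head_gyro_x"
    scale: float = 1.0      # how strongly the live input bends the keyframes
    limit: float = 15.0     # clamp so the puppeteer can't break the pose

@dataclass
class Cue:
    name: str               # e.g. "enzo_entrance"
    start_frame: int
    end_frame: int
    modifiers: list = field(default_factory=list)

def blended_pose(keyframes, frame, cue, sensors):
    """Return the keyframed pose for `frame`, bent by live sensor input."""
    pose = dict(keyframes[frame])            # baked values, attribute -> float
    for mod in cue.modifiers:
        offset = sensors.get(mod.sensor, 0.0) * mod.scale
        offset = max(-mod.limit, min(mod.limit, offset))
        pose[mod.attribute] = pose.get(mod.attribute, 0.0) + offset
    return pose

# Example: during the entrance cue, the live head gyro nudges the 3D
# model's head tilt, and a hand tracker nudges its position on stage.
entrance = Cue(
    name="enzo_entrance",
    start_frame=0,
    end_frame=240,
    modifiers=[
        Modifier("head_tilt_deg", sensor="head_gyro_x", scale=0.5, limit=20.0),
        Modifier("stage_x", sensor="hand_marker_x", scale=0.01, limit=0.5),
    ],
)
```

The design idea in this sketch is that the keyframed animation stays authoritative and the live sensors only bend it within set limits, which would preserve the polished animation while still giving the puppeteer control over timing and emphasis in the moment.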
I have no idea how well this will work, but there must be a way to control digital puppets that is more natural than stepping inside a mech or button-mashing an Xbox controller.
