Matching camera movements between After Effects and Arduino

29 September, 2010

The core art of cinematic special effects lies in combining multiple sources of imagery into a single unified illusion. The individual sources of imagery can be everything from still photography and painting to computer imagery and live video.

In order to have a moving point of view in these kinds of composite images, the cameras (real or virtual) from every source of imagery must be synced up. The craft of achieving this is called match moving.

In contemporary film production, match moves typically take the form of trying to match a computer-generated special effects shot to existing footage taken with a moving camera. In other words, effects artists try to recreate the motion of a real camera with a virtual one that moves around their virtual 3D environment in order to get views of their rendered objects and creatures that fit the perspective of the footage shot on set.

Before the era of digital effects, match moves meant something different: reproducing an identical camera move over and over in order to shoot a series of different pieces of film that would later be combined by an optical printer into a single final shot. Cameras (and other moving objects such as spaceships) were moved by stepper motors controlled by a computer that could replay the same pattern of motion exactly on every pass. This allowed multiple passes of the camera to capture different elements at different scales or under different lighting conditions: the distant starfield, a planet, a spaceship, the spaceship's glowing engines.

One strategy that has become possible with today's extremely high-end special effects technology is a kind of combination of these two approaches: composing a camera move within a virtual, computer-generated space and then using a motion control rig to reproduce that move with a real camera in order to create digital and real-world shots that will match.

This week, I began working on a hand-crafted, DIY approach to this kind of workflow using basic desktop software and simple electronics: After Effects, Processing, an Arduino, and a servo motor. The goal is a workflow that goes like this:

1. Design a 3D set in After Effects and compose a camera move within that set.
2. Extract the camera position and orientation data from After Effects.
3. Render the animation out of After Effects.
4. Write a Processing sketch that imports the camera data and streams it to an Arduino while playing back the rendered movie, keeping the two in sync.
5. Have the Arduino translate the camera data into movements of a series of actuators (servos, steppers, etc.) that carry a real-world camera around to shoot physical elements in front of a green screen.
6. Bring the live image from that camera into the computer in real time and composite it with the footage being played in Processing (either within Processing itself or, better, within a nodal compositing system like Conduit).

The final result is a set of physical objects being live composited into pre-rendered animation with matching perspective and a moving camera.

Obviously, this workflow has a lot of steps and will take quite a bit of work to perfect. However, I currently have the first few steps solidly in hand: creating a 3D camera move in After Effects, exporting the camera data to Processing, and using Processing to keep the playing video in sync with an Arduino reproducing the physical camera move. I'll spend the rest of this post explaining the technical details of my current system.

I started by designing a very simple 3D set and camera move in After Effects. It consists of four elements and a camera move that takes place entirely along a single axis:

CALDIC in Front of 333 Ravenswood from Greg Borenstein on Vimeo.

Then, the next task was to extract the position of the virtual camera at every frame into an XML file I could import into Processing. In order to accomplish this I learned Adobe ExtendScript, Adobe's programming environment for scripting their apps. ExtendScript lets you write JavaScript programs that can access files and settings within any Creative Suite app. It's meant for building automated production pipelines, so it has access to every menu item and setting in all of the apps. ExtendScript provides a DOM-like API for each application, which makes programming in it feel very familiar to anyone experienced in JavaScript programming on the web. For example, here's the diagram of the After Effects Object Model from the After Effects CS3 Scripting Guide:

After a bit of trial and error, I managed to whip up a script that captures the X, Y, and Z position of the active camera in the selected comp at every frame and writes it to a file as XML. (Note: this script has hard-coded paths that are customized for my machine; if you wanted to run it yourself you'd need to edit the path or, better, improve it to prompt the user for where to save the file.) Here's the source:
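The core of the script boils down to something like this trimmed-down sketch (not the full script; the XML element names and the output path are just placeholders):

```javascript
// Sketch: walk the active comp's camera frame by frame and write its position as XML.
var comp = app.project.activeItem;          // the comp selected in the Project panel
var cam = comp.activeCamera;                // its front-most enabled camera layer
var numFrames = Math.round(comp.duration / comp.frameDuration);

var xml = "<cameramove>\n";
for (var f = 0; f < numFrames; f++) {
    var t = f * comp.frameDuration;                   // time of this frame in seconds
    var pos = cam.position.valueAtTime(t, false);     // [x, y, z]
    xml += "  <frame x=\"" + pos[0] + "\" y=\"" + pos[1] + "\" z=\"" + pos[2] + "\"/>\n";
}
xml += "</cameramove>\n";

// Write the XML out to a hard-coded (placeholder) path.
var outFile = new File("~/Desktop/camera_move.xml");
outFile.open("w");
outFile.write(xml);
outFile.close();
```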

Once I had the XML of the camera positions and a rendered animation, I wrote a Processing sketch that plays back the movie while simultaneously sending the Z-position of the camera over serial to an Arduino. Check out the code for that here: 1-Axis Motion Control Move (Processing Sketch). (Note: I tried to do this in Eclipse, but ran into a problem getting Eclipse to find the XMLElement library that's required to decode the XML; I'm sure there's a solution, but I wanted to spend my prototyping time on my prototype rather than wrestling with Eclipse, so I bailed out and went back to the Processing IDE.) On the other end, I wrote very basic Arduino code that simply reads a value over serial and moves a servo to the position indicated by that value. (Code here: 1-Axis Motion Control Move (Arduino Sketch).)
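In outline, the Processing side boils down to something like the sketch below. This is a simplified version rather than the linked code, written against the Processing 1.x video, serial, and XMLElement APIs; the file names, comp frame rate, and choice of serial port are assumptions you'd adjust for your own setup.

```java
import processing.video.*;
import processing.serial.*;

Movie movie;                  // the animation rendered out of After Effects (placeholder name)
Serial arduino;
float[] zPositions;           // camera Z position for each frame, from the XML
float zMin, zMax;             // used to map camera Z onto the servo's 0-180 range
float compFrameRate = 30.0;   // frame rate the comp was rendered at (assumption)

void setup() {
  size(640, 480);

  // Load the per-frame camera positions exported from After Effects.
  XMLElement xml = new XMLElement(this, "camera_move.xml");
  int n = xml.getChildCount();
  zPositions = new float[n];
  zMin = Float.MAX_VALUE;
  zMax = -Float.MAX_VALUE;
  for (int i = 0; i < n; i++) {
    zPositions[i] = xml.getChild(i).getFloatAttribute("z");
    zMin = min(zMin, zPositions[i]);
    zMax = max(zMax, zPositions[i]);
  }

  // Open the serial connection to the Arduino and start the movie.
  arduino = new Serial(this, Serial.list()[0], 9600);
  movie = new Movie(this, "camera_move.mov");
  movie.play();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(movie, 0, 0, width, height);

  // Work out which frame of the animation we're on and send the corresponding
  // camera Z position to the Arduino as a single byte (a servo angle, 0-180).
  int f = constrain(int(movie.time() * compFrameRate), 0, zPositions.length - 1);
  int angle = int(map(zPositions[f], zMin, zMax, 0, 180));
  arduino.write(angle);
}
```

The Arduino side can be as simple as reading that byte and handing it to the Servo library. Again, this is a minimal sketch rather than the linked code, and the servo pin is an assumption:

```cpp
#include <Servo.h>

Servo cameraServo;

void setup() {
  Serial.begin(9600);
  cameraServo.attach(9);   // servo signal wire on pin 9 (assumption)
}

void loop() {
  if (Serial.available() > 0) {
    int angle = Serial.read();           // one byte from Processing: 0-180
    angle = constrain(angle, 0, 180);    // guard against stray bytes
    cameraServo.write(angle);            // move the servo to that position
  }
}
```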

The result of all of these fiddly pieces was a servo that moved in sync with my After Effects animation:

A few notes about this video. First of all, even though the servo is moving from right to left in this demo, imagine that it's carrying a camera forward towards some subject that would fit into the environment represented in the animation once the match move was composited. Second, the mechanism that converts the servo's rotation into linear motion is called a scotch yoke, and I'll have a full post about it and its construction soon. And finally, there's a bit of a glitch at the beginning where the servo doesn't start moving when the video does and then suddenly jumps into position, after which it keeps up. This (and the slightly glitchy video playback) are issues with my Processing script that I'll be improving in the next week or so.

So, at this point, I've got the beginnings of a motion control system that could produce a reproducible match move with a physical camera that corresponds perfectly with a digitally animated shot. Stay tuned for more on the mechanical portion, the compositing, and the actual aesthetic ideas I'm trying to explore with this work.