Siggraph '91, Las Vegas
"VBK - A Moviemap of
Mediated Reality and the Consciousness of Place
I. Something Extraordinary Happens
It's last October. I'm in San Francisco sitting
in a video editing room looking at the film transfer of the Karlsruhe
footage, which I shot the previous month from the front of a tramway
car. During the next few days I will edit it for a videodisc. I
am watching hundreds of little snippets of straight and turn sequences
whiz by at hypnotic speeds. For a few minutes it's kind of fun.
Then it all starts to look alike. Dang, I can't believe I'm doing
this again. It isn't quite like making a linear movie with an aesthetic
of montage - the purpose is to organize the material to minimize
disc search time and to make programming efficient. What a drag.
But it's gotta get done.
Around the third day of nonstop logging and editing
something extraordinary happens: I know where I am in the footage.
Always. You can show me any of the forty-some thousand frames of
the 108 kilometers shot in both directions and I can point to the
corresponding place on a map, as long as I can "move" back and forth
a bit. It actually seemed to happen all of a sudden, like I was
astral-projected to see Karlsruhe from a God's-eye view.
I had formed a mental map. "Karlsruhe" became
a singular thing, an almost living entity with which I could now
relate. Once that happened, every frame had its place.
A colleague once told me he believed he could
tell what section of Paris he was in purely by the quality of light.
I suspect this sense of wholeness, of consciousness
of place, can be conveyed in a very fast and highly impressionistic
way with such emerging interactive media. Not by simulating reality
- a trap perpetuated by the believers in the objective and ultimately
a losing battle. But by abstracting reality - creating experiences
otherwise impossible in the real world. These experiences,
when done artfully, will make you appreciate really being there.
II. Moviemap Basics
"My" definition of a moviemap is:
user-controlled seamless navigation
through a real or created place via optical disc;
optical disc as lookup medium (no realtime
user has realtime control of one-dimensional
speed and direction;
user has occasional control of two-dimensional
choices but only at intersections (I call this "1.1D");
there may or may not be additional
non-seamless hypermedia information (ie., "destinations," tied together
by the surrogate travel "routes").
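The "1.1D" control model above can be sketched in code: continuous one-dimensional control of speed and direction along a route, plus a discrete branch choice that only matters when an intersection frame is reached. The class and method names here are invented for illustration, not part of any actual moviemap system.

```python
# Hypothetical sketch of "1.1D" moviemap navigation: 1D speed/direction
# control along a route segment, with a discrete left/center/right choice
# honored only at intersections (segment boundaries).

class Route:
    def __init__(self, n_frames, branches=None):
        self.n_frames = n_frames        # frames along this route segment
        self.branches = branches or {}  # "left"/"center"/"right" -> next Route

class Moviemap:
    def __init__(self, route):
        self.route = route
        self.frame = 0

    def step(self, speed, choice="center"):
        """Advance by `speed` frames (negative = backward).
        Reaching the end of a segment (an intersection) follows `choice`."""
        self.frame += speed
        if self.frame >= self.route.n_frames and self.route.branches:
            self.frame -= self.route.n_frames
            self.route = self.route.branches.get(choice, self.route.branches["center"])
        self.frame = max(0, min(self.frame, self.route.n_frames - 1))
        return self.route, self.frame
```

Between intersections the `choice` input is simply ignored, which is exactly what makes the model "1.1D" rather than fully two-dimensional.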
In late 1977, the first prototype laserdisc players
were introduced to a small group of research institutions, including
M.I.T.'s Architecture Machine Group. I recall the day it came (I
was a grad student at the art center across the street, straddling
it and several labs). "ArcMac," at the time, was often viewed
as a "computer graphics lab," but was more a vehicle toward understanding
deeper processes, evolving out of Nicholas Negroponte's original
credo that "computers should know their users." Past and current
mega-projects at that time included "Graphical Conversation Theory,"
"Spatial Data Management Systems," and "Mapping By Yourself," so
it was natural to investigate the videodisc's potential for making moviemaps.
The following spring, Peter Clay, an undergrad,
shot some single-frame film footage travelling through the M.I.T.
hallways with some help from Bob Mohl (a grad student who went on
to write his PhD dissertation on moviemaps) and me. By the summer
of 1978 we were ready to shoot something more. A real environment
was selected - Aspen, Colorado (in part because of its distinctive character and manageable size).
During 1978 and 1979 Aspen went through a quiet
media "sweep." Under the direction of Andy Lippman and with additional
help from wildlife cinematographer John Borden, cognitive psychologist
Kristina Hooper, filmmaker Ricky Leacock, and others, streets were
shot with four 16mm stop-frame film cameras (pointing front, back,
left, and right) triggered to fire every 10 feet by a fifth wheel
on the back of our vehicle. The camera pod was stabilized by an
expensive gyro platform. We also shot with a 360° fisheye-style
lens. In addition to filming the routes, we shot stillframes of
every facade in town (twice - both in summer and winter), stillframe
"slideshows" of many interiors, short movies, and audio interviews.
Rebecca Allen and Steve Gregory recorded binaural sound. Scott Fisher
reshot historic photos from the same points of view.
Back at the lab, Steve Yellick digitally "de-warped"
the fisheye footage. Also, the entire town was hand-digitized by
Walter Bender into a crude 3D cartoon-like model. The "basic system"
required at least two videodisc players both running into a switcher
so that when one player was playing, the other was cueing, thus
eliminating any blanking during searches.
The Aspen Moviemap was funded by the Cybernetics
Technology Office of DARPA. Several other (mostly military) moviemaps
were sponsored afterward, most of them (I was once told by one producer)
"cheap and dirty" compared to Aspen.
In 1985 I directed production of a moviemap of
a section of downtown Paris for the Paris Metro. With Bob Mohl,
I shot using a custom 35mm film camera along sidewalks on a modified
golfcart, triggering one frame every 2 meters. Rather than filming
turns, we hired a mime to stand in the intersections and point.
The system would then cut from the pointing mime to the direction
she was pointing. The theory was to replace visual seamlessness
with cinematic seamlessness (a la Eisensteinian montage theory).
The Paris "VideoPlan" was on public exhibition at the Madeleine
Metro stop for two years.
"Palenque" was filmed later in 1985 by the Bank
Street College for the (then RCA) Sarnoff Labs as a prototype DVI
application. Under the direction of Kathy Wilson, both Bob Mohl
and John Borden helped shoot footage of walking trails. Palenque
is an extensive multimedia package, including stillframes and text
information about the site, as well as semi-realtime "dewarping"
of fisheye images.
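The "dewarping" in both Aspen and Palenque amounts to mapping each pixel of a desired rectilinear (pinhole) view back to a source position in the fisheye image. A sketch under an assumed equidistant fisheye model (radius proportional to angle off the optical axis); real lenses would need calibration, and none of these names come from the actual systems:

```python
import math

# Inverse-mapping sketch for fisheye "de-warping": given an output pixel
# offset (x, y) from the center of a rectilinear view with focal length
# f_rect, find the source position in an equidistant fisheye image with
# focal length f_fish and center (cx, cy).

def dewarp_lookup(x, y, f_rect, f_fish, cx, cy):
    rho = math.hypot(x, y)            # distance from center in the output
    theta = math.atan2(rho, f_rect)   # angle off the optical axis
    r = f_fish * theta                # equidistant fisheye: r = f * theta
    if rho == 0.0:
        return (cx, cy)               # the center maps to the center
    return (cx + r * x / rho, cy + r * y / rho)
```

A delivery system would precompute this lookup for every output pixel once, then resample each frame through the table, which is what makes "semi-realtime" dewarping plausible on modest hardware.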
In 1987 I conceived and directed the "Golden Gate
Interactive Videodisc," commissioned by Advanced Interaction, Inc.,
currently on display at the Exploratorium. A 10 by 10 mile grid
at one mile intervals was carefully shot from a helicopter with
a special gyro-stabilized camera system, centered on the Golden
Gate Bridge. The input device was a trackball; software designed
by Ken Carson created a feeling of realtime control, or
"tight linkage," between what you do and what you see. This system
also required two videodisc players and a switcher.
The Karlsruhe Moviemap was commissioned by the
Zentrum für Kunst und Medientechnologie (ZKM), a state-funded arts
and media lab under construction in the town of Karlsruhe, Germany.
Karlsruhe has a well-known tramway system, with over 100 km of track
snaking from the downtown pedestrian area out to neighboring villages
at the edge of the Black Forest.
The entire tram system was shot in both directions
from a tramcar outfitted with a 16mm film camera triggered by the
tram's electronic odometer (at 2, 4 and 8 meters per frame depending
on location). The tracks assure unrivaled stability and seamless turns.
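The frame counts mentioned at the opening ("forty-some thousand frames of the 108 kilometers shot in both directions") check out against these trigger intervals. The exact mix of 2, 4, and 8 meter spacings is not stated, so an average of roughly 5 meters per frame is an assumption for illustration:

```python
# Back-of-the-envelope check of the Karlsruhe frame count.
track_m = 108_000       # 108 km of track
directions = 2          # shot both ways
avg_spacing_m = 5       # assumed average of the 2/4/8 m trigger intervals

frames = track_m * directions // avg_spacing_m
print(frames)           # -> 43200, i.e. "forty-some thousand"
```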
Using the tramway line as the basis for a moviemap
of the town has its drawbacks. It doesn't go everywhere. For example,
Karlsruhe has two large parks which are barely visible in the footage.
Also, the presence of the rails, while perhaps adding a sense of
visual continuity during travel, may distract the viewer from "looking around."
Yet the tramway routes are there for reasons of
history, culture, politics, and geography: not a bad basis for
sampling the place.
The delivery system is controlled by a Mac II
computer using Hypercard with software designed by Christoph Dorhmann.
It consists of a large projected video image from a single Pioneer
8000 videodisc player (whose built-in frame buffer eliminates blanking
during searches), a graphic map-and-cursor display, and a custom-built
input consisting of a broomstick-size lever for controlling speed
and direction (zero to mach three) and three footswitches (left,
center, and right) for choosing which direction to go at each intersection.
The installation is intended to be transparent
in its responsiveness (no significant lags) and culture-independent
(e.g., no text). Each input device has an indicator light on it.
When an input is active, its light flashes until it is used. While
it is being used it stays lit. When it is inactive the light goes out.
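The indicator-light behavior is a tiny state machine: each input is either inactive, active but not yet used, or in use, and its light is off, flashing, or steadily lit accordingly. A sketch, with all names invented:

```python
# Indicator-light logic for one input device of the installation:
# off when the input is inactive, flashing when it is active and
# waiting to be tried, steadily lit while it is actually in use.

def light_state(active, in_use):
    if not active:
        return "off"        # this input does nothing right now
    if in_use:
        return "on"         # stays lit while being used
    return "flashing"       # invites the visitor to try it
```

Because the lights are the only feedback besides the image itself, this is what lets the installation stay culture-independent, with no text anywhere.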
V. The Future: Shooting for Cyberspace
Immersion in ortho-stereoscopic imagery with unconstrained
head motion and realtime manipulation is often considered the essence
of "virtual reality" or "cyberspace." Today it is primarily restricted
to the cartoon-land of computer generated images.
The future of moviemaps lies entirely in how they
integrate into 3D computer models. Eventually, camera input will
be used as the basis for such computer models (see M. Bove's PhD
dissertation "Synthetic Movies Derived from Multi-Dimensional Image
Sensors," MIT Media Lab 1989). Whatever was not shot will be interpolated,
not an easy task, particularly when shooting in the field. Similarly,
the issue of when to use a 3D realtime computer in the final delivery
system and when to use a pre-stored version (or anything in between)
will be a function of cost, state of the technology, and as always: