
Audiovisual Software Art: A Partial History

by Golan Levin, 9 May 2009

Author's Note: The following article was commissioned as one of approximately 25 chapters on different forms of synaesthetic and audiovisual art for "See This Sound", an encyclopedic catalogue accompanying an exhibition and symposium of the same name at the Lentos Kunstmuseum, Linz, in late 2009. As this encyclopedia has many chapters by others covering e.g. audiovisual installations, games, performances, films, videos and animations, it should be understood that the scope of my own article is correspondingly limited, and some of my favorite and/or important works are covered elsewhere in the volume.

Keywords and Key Phrases:
Audiovisual software art, audio visual software, history of audiovisual art, history of software art, history of computer art, history of generative art, history of synaesthetic art, abstract cinema, dynamic abstraction, interactive audiovisuals, interactive abstraction, synaesthesia, synaesthetic software, audiovisual performance systems, audiovisual games, game art, generative audiovisuals, computer animation, sound visualization, image sonification, audiovisualization, sons et lumieres, sound and light, audiovisual mappings, sound-image mappings, screensavers, skins, visualizers, lightshows, games, tools, instruments, toys, notation systems, sound-responsive software, graphic interface to sound.

Introduction

Audiovisual software art relies on computer software as its medium, and is primarily concerned with (or is articulated through) relationships between sound and image. Such works are produced for diverse social contexts, and the objectives served by these works are many. In the field at large, and in the examples discussed in this article, we note software artworks that serve some of the same aims as cinema, performances, installations, interior design, games, toys, instruments, ‘screensavers’, diagnostic tools, research demonstrations, or even aids for psychedelic hallucination—though many projects blur these boundaries to the extent that categorization may not be very productive. Likewise, audiovisual software artworks continue to emerge from plural and only occasionally intersecting communities of research scientists, new media artists, software developers, musicians, and isolated individuals working outside the institutions of the laboratory, school, museum, or corporation.

Owing to this diversity of origins and intents, the formal scope of what might be considered audiovisual software art is quite large as well. Some works generate images or animations from live or pre-recorded sounds. Other projects generate music or sounds from static images or video signals, or use screen-based graphic interfaces to interactively control musical processes. Other artworks generate both sound and imagery from some non-audiovisual source of external information (such as stock trading data, human motion capture data, etc.) or from some internal random process. Still other systems involve no sound at all, but are concerned instead with exploring the possibilities of “visual music”, meaning an analogue to music in the visual domain. Many threads of influence and inspiration in the history of audiovisual software cut across these formal and technical distinctions. For this reason, in the sections that follow, audiovisual software art is considered according to the principles of visualization and notation, transmutability, performativity, and generativity which frequently motivate this kind of work. These sections are first introduced by a brief treatment of the early history of audiovisual computer art, and are concluded by a survey of the online communities which have arisen to support and preserve this artform.

Predecessors and Pioneers of Audiovisual Software Art

Although there are thousands, or perhaps even tens of thousands, of audiovisual software art practitioners today, these practices sprang from the work of just a handful of artists who obtained access to computer laboratories in the mid-1960s. The work of California-based filmmaker and animation pioneer John Whitney is a reasonable starting point for this history. While most of Whitney’s contemporaries in early computer art (such as Georg Nees, Frieder Nake, Manfred Mohr, or Chuck Csuri) were focused on software-generated plotter prints, Whitney was strictly interested in the music-like qualities of time-based imagery. Computers in the 1960s were too slow to generate complex images in real-time, however, so Whitney instead used the computer to output frames of animation to film. In animations like Permutations (1966-1968, developed in collaboration with IBM researcher Jack Citron) and Arabesque (1975, created with the assistance of Larry Cuba), Whitney explored the ways in which the kinetic rhythms of moving dots could produce perceptual effects which were strongly analogous to modulations of musical tension. Whitney’s films of this period were generally accompanied by non-electronic music [35]; only later, with the advent of personal computing and real-time graphics in the early 1980s, did Whitney’s focus shift to the development of a software instrument with which he could compose imagery and music simultaneously, as demonstrated in his animations Spirals (1987) and MoonDrum (1989).

Although the late 1960s and early 1970s witnessed rapidly accelerating technical advances in both computer graphics and computer music, the capacity to generate both media in real-time was still several years away. For this reason, many significant experiments that would lay the conceptual groundwork for purely computer-based real-time audiovisuals were nonetheless assembled in the offline context of the film studio. This can be seen in the work produced at Bell Laboratories by the American computer artist Lillian Schwartz (1927-), who collaborated with notable computer musicians on abstract film animations like MATHOMS (1970, with music by F. Richard Moore); MUTATIONS (1972, with music by Jean-Claude Risset); and MIS-TAKES (1972, with music by Max V. Mathews). Later, some artists combined computer-based synthesis in one medium with traditional live performance in another. German graphics pioneer Herbert W. Franke, writing in 1975, described the production of two ten-minute ‘graphic music’ films (Rotations and Projections, ca. 1974), in which live jazz musicians freely improvised in response to simultaneously projected patterns of abstract animated lines [11]. In Lillian Schwartz’s 1976 performance, ON-LINE, a live dancer and musicians performed in real-time while Schwartz, playing a QWERTY keyboard, created special graphic effects on a computer-controlled video system [29].

Possibly the first computer system capable of synthesizing both animation and sound in real-time was the VAMPIRE (Video And Music Program for Interactive Real-time Exploration/Experimentation) system developed by Laurie Spiegel between 1974 and 1976 on a DDP-224 computer at Bell Laboratories in New Jersey. VAMPIRE offered a drawing tablet, foot pedal, and a large number of continuous knobs and pushbuttons—all of which could be used to modulate and perform a wide variety of image and/or sound parameters [31]. Built on top of Max Mathews’ computer music research system, GROOVE (Generating Real-time Operations On Voltage-controlled Equipment), VAMPIRE, according to Spiegel,

“was an instrument for composing abstract patterns of change over time by recording human input into a computer via an array of devices the interpretation and use of each of which could be programmed and the data from which could be stored, replayed, reinterpreted and reused. The set of time functions created could be further altered by any transformation one wished to program and then used to control any parameter of image or of sound (when transferred back to GROOVE’s audio-interfaced computer by computer tape or disk). Unfortunately, due to the requirement of separate computers in separate rooms at the Labs, it was not physically possible to use a single set of recorded (and/or computed) time functions to control both image and sound simultaneously, though in principle this would have been possible.” [31]

A partial list of other significant software-based or software-generated audiovisual art of the late 1960s through the early 1980s includes the computer animations of Stan Vanderbeek, Ken Knowlton, Tom DeFanti and Larry Cuba; the computer-controlled laser projections of Paul Earls and Ron Pellegrino; and the interactive installations of Myron Krueger and Ed Tannenbaum. The introduction of the personal computer significantly broadened the landscape of audiovisual arts, making way for new forms like the digital video performance work of Don Ritter and Tom DeWitt, and the interactive desktop software creations of Adriano Abbado and Fred Collopy.

Sound Visualization and Notation

Forty-three years after John Whitney’s early experiments, real-time audiovisual software now comes as a standard component in every major computer operating system. At the time of this writing, a person’s first encounter with audiovisual software is most likely to be with a ‘screensaver’ (a software utility that prevents burn-in on some kinds of computer displays) or with a ‘music visualization plug-in’ for a computer-based media player. In many cases, these functions are combined into a single piece of software. The aesthetics of such systems are more than occasionally targeted to a broad casual audience with an interest in psychedelic visual culture. The influential screensaver and visualizer Cthugha, for example, created by the Australian software developer Kevin “Zaph” Burfitt between 1993 and 1997, advertised itself as “an oscilloscope on acid” and as a “form of visual entertainment, useful for parties, concerts, raves, and other events as well as just vegging out to mesmerizing, hypnotizing displays” [5]. Despite this colorful language, Cthugha’s self-description as an “oscilloscope” is actually quite accurate from a technical standpoint. An oscilloscope is a tool for viewing the waveform (or time-domain representation) of a signal, such as music, in real-time—and Cthugha is essentially an elaborated oscilloscope, which decorates a sound’s waveform by rendering it into richly colored variations of video feedback. Sound waveforms are the simplest possible information that can be extracted from digital audio data, and have therefore been used as the basis for numerous other visualizers as well, such as Geiss (1998-2008) and MilkDrop (2001-2007) by Ryan M. Geiss, G-Force by Andy O'Meara (which has been licensed for use in Apple’s iTunes music player), Advanced Visualization Studio by Nullsoft, and ProjectM by Pete Sperl and Carmelo Piccione [22].
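
The core of such a time-domain visualizer is almost trivially simple: each animation frame plots the most recent block of audio samples as a curve, exactly as an oscilloscope does. The following sketch illustrates the principle in Python; the synthetic input and the character-cell ‘screen’ are illustrative assumptions, not code from Cthugha or its successors.

    import math

    SAMPLE_RATE = 8000
    BLOCK = 64           # samples drawn per frame
    ROWS, COLS = 16, 64  # a tiny character-cell "screen"

    def synth(n, freq=220.0):
        # Stand-in for a live audio input: a plain sine wave.
        return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

    def draw_frame(block):
        # Sample index -> x position; amplitude -> y position.
        screen = [[' '] * COLS for _ in range(ROWS)]
        for x, s in enumerate(block[:COLS]):
            y = int((s + 1.0) / 2.0 * (ROWS - 1))  # [-1, 1] -> [0, ROWS-1]
            screen[ROWS - 1 - y][x] = '*'
        return '\n'.join(''.join(row) for row in screen)

    print(draw_frame(synth(BLOCK)))

Everything beyond this (color palettes, video feedback, beat detection) is decoration applied to the same underlying waveform trace.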

Some software artists have approached the challenge of visualizing music, not to produce entertaining or entrancing aesthetic experiences, but instead to provide analytic insight into the structure of a musical signal. These works exchange the ‘expressive’ visual languages of painting and abstract cinema for the conventions of ‘legibility’ found in diagrams and music notation systems. An early example of this is Stephen Malinowski’s Music Animation Machine (1982-2001), a software artwork which generated scrolling “piano roll” representations of MIDI sound files as a real-time graphic accompaniment to the music’s playback [20]. The earliest versions of the Music Animation Machine represented notes with colored bars whose vertical position corresponded to their pitch. Later variations of Malinowski’s project incorporated additional visual schema for representing the harmonic or dissonant qualities of musical chords, the spans of melodic intervals, and the timbres of different instrument tracks.

[Picture of Stephen Malinowski’s Music Animation Machine.]

Malinowski’s system for showing note pitches is an example of a frequency-domain representation, which, alongside the (time-domain) waveform, is the other principal mainstay of sound visualization systems. Frequency-domain representations take many forms, including piano-rolls (so called because they resemble the paper scores used in early player pianos), spectrograms, sonograms, graphic equalizer displays, ‘spectral waterfall’ displays, 3D surface spectrograms, and (when applied to voice signals) ‘voiceprints’.
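
Where waveform displays plot amplitude against time, frequency-domain displays rest on the short-time Fourier transform, which slices a signal into overlapping windowed frames and measures the energy in each frequency band. The following generic Python/numpy sketch shows the computation; it is a textbook illustration, not the code of any particular artwork.

    import numpy as np

    def spectrogram(signal, frame_len=512, hop=256, sample_rate=44100):
        # Returns magnitudes (frames x bins) plus the bin center frequencies.
        window = np.hanning(frame_len)
        frames = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frames.append(np.abs(np.fft.rfft(signal[start:start + frame_len] * window)))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        return np.array(frames), freqs

    # A 440 Hz test tone peaks in the bin nearest 440 Hz
    # (about 431 Hz here, given the ~86 Hz bin spacing).
    t = np.arange(44100) / 44100.0
    mags, freqs = spectrogram(np.sin(2 * np.pi * 440 * t))
    print(freqs[mags[0].argmax()])

A piano-roll like Malinowski’s is, in effect, a symbolic cousin of such a display, with discrete MIDI notes standing in for the Fourier analysis.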

It is very common for audio visualization artworks, whether aesthetic or analytic, to present real-time animated graphics as an accompaniment to sound. Such systems typically display time-based representations of perceptual phenomena like pitch, loudness, and other relatively instantaneous auditory features. An interesting exception to this real-time trend is Martin Wattenberg’s The Shape of Song (2001), a software artwork that produces static images from MIDI music in order to reveal its long-scale and multi-scale temporal structures. For The Shape of Song, Wattenberg introduced an entirely new visualization method, termed “arc diagrams”, which displays the ways in which constituent passages and phrases are repeated in a larger piece of music. The Shape of Song is necessarily a non-real-time visualization of music, as any real-time version would require perfect future knowledge of repetitions yet to happen.

The Transmutability of Data: Mapping Input Signals to Sounds and Images

A significant theme in many audiovisual software artworks is the transmutability of digital data, as expressed by the “mapping” of some input data stream into sound and graphics. For these works, the premise that any information can be algorithmically sonified or visualized is the starting point for a conceptual transformation and/or aesthetic experience. Such projects may or may not reveal the origin of their input data in an obvious way, and indeed, the actual source of the transformed data may not even matter. This proposition is made particularly evident in Data Diaries (2002) by Cory Arcangel, in which the artist used Apple’s QuickTime movie player to straightforwardly interpret his computer’s entire hard drive as if it were an ordinary movie file [2]. As Alex Galloway writes in the project’s introductory notes, “[Arcangel’s] discovery was this: take a huge data file—in this case his computer’s memory file—and fool Quicktime into thinking it’s a video file. Then press play.” [2] Although Arcangel’s process in Data Diaries posits a nearly total rejection of artistic “craft,” the results of his readymade technique nonetheless delineate a pure “glitch” aesthetic with a colorful and surprisingly musical quality.
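
The principle at work here is easily stated: digital data has no inherent medium, so any byte stream whatsoever can be reshaped into frames of ‘video’. The sketch below illustrates the idea generically; Arcangel’s piece relied on QuickTime’s own forgiving file parser rather than on any code of his own, and the frame size used here is an arbitrary assumption.

    import sys
    import numpy as np

    WIDTH, HEIGHT = 64, 48
    FRAME_BYTES = WIDTH * HEIGHT * 3  # 24-bit RGB

    def bytes_to_frames(raw):
        # Reinterpret successive chunks of raw data as (H, W, 3) RGB frames.
        usable = len(raw) - len(raw) % FRAME_BYTES
        data = np.frombuffer(raw[:usable], dtype=np.uint8)
        return data.reshape(-1, HEIGHT, WIDTH, 3)

    # Any sufficiently large file becomes "found footage"; its headers,
    # tables, and code segments become visual texture.
    with open(sys.executable, 'rb') as f:
        frames = bytes_to_frames(f.read())
    print(len(frames), "frames recovered from", sys.executable)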

Most commonly, the transmutability of data per se is not itself the primary subject of a work, but is rather used as a means to an end, in enabling some data stream of interest to be understood, experienced, or made perceptible in a new way. In such cases, the artist typically gives special attention to the aesthetics (and sometimes the legibility) of the audiovisually-rendered information. The software artworks in the Emergent City series by the British artist Stanza are representative of this approach; these projects employ data collected from urban spaces as the basis for generating audiovisual experiences themed around cities. In Datacity (2004), a browser-based Shockwave application, sounds and video are collected in real time from multiple cameras around the city of Bristol, and are then collaged and manipulated to produce a “painterly interpretation of the landscape” [32]; in Sensity (2004-2009), measurement signals from a network of wireless environmental sensors, deployed by the artist throughout his neighborhood, are used to generate audiovisual layers in an interactive map display [33]. Both projects provide the user with interfaces that allow further personalization of the audio mix and visual experience. Other artists have developed software art based on audiovisual mappings derived from e.g. weather data, network traffic (Carnivore, 2001, by Alex Galloway and the Radical Software Group [26]), seismic activity (Mori, 1999, by Ken Goldberg et al. [13]), Ebay user data (The Sound of Ebay, 2008, by Übermorgen [34]), topographic data (G-Player, 2004, by Jens Brand [4]), and casualty statistics from the American military action in Iraq (Hard Data, 2009, by R. Luke DuBois [10]), to name just a few examples.

The voyeuristic software installation Listening Post, by Mark Hansen and Ben Rubin, produces a particularly moving audiovisual experience from text fragments culled “in real time from thousands of unrestricted Internet chat rooms, bulletin boards and other public forums. The texts are read (or sung) by a voice synthesizer, and simultaneously displayed across a suspended grid of more than two hundred small electronic screens” [14]. By rendering these otherwise disembodied texts into sounds and animated typography, this project literally “gives voice” to the unspoken words of thousands of people, placing its viewer at the center of a maelstrom of desires, opinions, chatter and solicitations collected from around the world.

The design space of data-mapping projects has been humorously summarized in Jim Campbell’s Formula for Computer Art, 1996-2003, an animated cartoon diagram, which mischievously implies that the inputs to many data-mapping artworks may be fundamentally arbitrary and thus interchangeable [6].

Mappings based on Human Action: Instruments

A variety of performative software systems use participatory human action as a primary input stream for controlling or generating audiovisual experiences. These systems range from screen-based musical “games”, to deeply expressive audiovisual “instruments”, to puzzling and mysterious audiovisual “toys” whose rule-sets must be decoded gradually through interaction. In many cases the boundaries between these forms are quite blurry. Some of these systems are commercial products; others are museum installations or browser-based Internet experiences; and some projects have moved back and forth between these forms and contexts. What these applications all share is a means by which a feedback loop can be established between the system and its user(s)—allowing a user or visitor to collaborate with the system’s author in exploring the possibility-space of an open work, and thereby to discover their own potential as actors.

The category of performative audiovisual software games is extremely large, and is treated in depth elsewhere in this volume. We briefly note instances where such games may also be intended or regarded as ‘artworks’, as with Masaya Matsuura’s Vib-Ribbon (1999), a rhythm-matching game, or art/game ‘mods’ like RC (1999) by Retroyou (Joan Leandre), in which the code of a racing car game has been creatively corrupted and repurposed. One particularly notable game-like system is Music Insects (1991-2004) by Toshio Iwai, which functions simultaneously as a paint program and a real-time musical composition system, and which Iwai has presented in a variety of formats, including museum installations and commercial game versions.

Numerous audiovisual ‘instruments’ have been created which allow for the simultaneous performance of real-time imagery and sound. Many of these screen-based programs use the gestural, temporal act of drawing as a starting point for constructionist audiovisual experiences. A pioneering example of this was Iannis Xenakis’s UPIC (1977-1994), which allowed users to gesturally draw, edit and store spectrogram images using a graphics tablet; a 1988 version offered fully real-time performance and improvisation of spectral events [18]. Whereas the UPIC was developed to be a visually based instrument for composing and performing sound, other audiovisual performance systems have been explicitly framed as ‘open works’ or ‘meta-artworks’—that is, artworks in their own right, which are only experienced properly when used interactively to produce sound and/or imagery. A good example is Scott Snibbe’s Motion Phone (1991-1995), a software artwork that allows its user to interactively create and perform ‘visual music’ resembling the geometric abstract films of Oskar Fischinger or Norman McLaren. The Motion Phone records its participant’s cursor movements and uses these to animate a variety of simple shapes (such as circles, squares, and triangles), producing silent but expressive computer-graphic animations [30]. A related artwork, Golan Levin’s Audiovisual Environment Suite, or AVES (2000), presents a collection of cursor-based interactions by which a user can gesturally perform both dynamic animation and synthetic sound, simultaneously, in real-time. Based on the metaphor of an “inexhaustible, infinitely variable, time-based, audiovisual ‘substance’ which can be gesturally created, deposited, manipulated and deleted in a free-form, non-diagrammatic image space”, Levin’s system uses recordings of the user’s mouse gestures to influence particle simulations, and then applies time-varying properties of these simulations to govern both visual animations and real-time audio synthesis algorithms [17]. Amit Pitaru’s Sonic Wire Sculptor (2003) likewise produces both synthetic sound and animated graphics from the user’s mouse gestures, but shifts the representational metaphor from a 2D canvas to a 3D space populated by the user’s ribbon-like drawings [25]. Josh Nimoy’s popular BallDroppings (2003) departs from free-form gestural interaction, presenting instead an elegant mouse-operated construction kit wherein “balls fall from the top of the screen and bounce off the lines you are drawing with the mouse. The balls make a percussive and melodic sound, whose pitch depends on how fast the ball is moving when it hits the line” [21]. Nimoy articulately summarizes the hybrid nature of such work: “BallDroppings is an addicting and noisy play-toy. It can also be seen as an emergence game. Alternatively this software can be taken seriously as an audio-visual performance instrument.”
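
Nimoy’s description of BallDroppings implies a direct mapping from collision physics to pitch. The following sketch renders that mapping in a few lines; its units, constants, and pitch range are invented for illustration, and do not reproduce BallDroppings’ actual tuning.

    GRAVITY = 9.8  # illustrative units
    DT = 0.01

    def impact_speed(ball_y, line_y):
        # Integrate free fall until the ball crosses the line.
        v = 0.0
        while ball_y > line_y:
            v += GRAVITY * DT
            ball_y -= v * DT
        return v

    def speed_to_midi(v, lo=36, hi=96, v_max=20.0):
        # Faster impact -> higher pitch, clamped to a MIDI note range.
        t = min(v / v_max, 1.0)
        return int(lo + t * (hi - lo))

    for height in (1.0, 5.0, 20.0):
        v = impact_speed(ball_y=height, line_y=0.0)
        print("drop from", height, "m -> MIDI note", speed_to_midi(v))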

Another genre of performative audiovisual software dispenses with drawing altogether, in favor of a screen space populated (usually a priori) with manipulable graphic objects. Users adjust the visual properties (such as size, position, or orientation) of these objects, which in turn behave like mixing faders for a collection of (often) pre-recorded audio fragments. This can be seen in Stretchable Music (1998), an interactive system developed at MIT by Pete Rice, in which each of a heterogeneous group of responsive graphical objects represents a track or layer in a pre-composed looping MIDI sequence [17]. Other examples of this interaction principle can be seen in John Klima’s interactive Glasbead artwork (2000), “a multi-user persistent collaborative musical interface which allows up to 20 online players to manipulate and exchange sound samples” [16], or more recently in Fijuu2 (2004-2006) by Julian Oliver and Steven Pickles, whose adjustable graphical objects allow more dramatic audio manipulations.

The systems described above were designed for use with the ubiquitous but limited interface devices of desktop computing: the computer mouse and keyboard. The use of comparatively more expressive user interface devices, such as video cameras or custom tangible objects, considerably expands the expressive scope of instrumental audiovisual software systems, but also pulls them towards the formats (and physical dimensions) of performances and/or installations. Finnish artist-researcher Erkki Kurenniemi’s landmark 1971 DIMI-O system (“Digital Music Instrument, Optical Input”) synthesized music from a live video image by scanning the camera signal as if it were a piano-roll [18]. David Rokeby’s Very Nervous System or VNS (1986-1990) explored the use of camera-based full-body interactions for controlling the simultaneous generation of sound and image. Other audiovisual software instruments have employed custom tangible objects as their principal interface, such as the Audiopad (2003) by James Patten or ReacTable (2003-2009) by Sergi Jordà, both of which use real-time data about the positions and orientations of special objects on a tabletop surface to generate music and visual projections [3].
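
The piano-roll scanning that Kurenniemi pioneered reduces to a simple loop: sweep across the image column by column, and let each bright pixel trigger a note whose pitch is fixed by that pixel’s row. A schematic sketch follows; the grid size, threshold, and pitch mapping are illustrative assumptions rather than DIMI-O’s documented behavior.

    import numpy as np

    ROWS, COLS = 12, 16
    # Stand-in for a thresholded camera frame: True = bright pixel.
    image = np.random.default_rng(0).random((ROWS, COLS)) > 0.8

    def scan(image, base_midi=48):
        # Yield (time step, MIDI notes) as the scan line sweeps left to right.
        rows, cols = image.shape
        for col in range(cols):
            notes = [base_midi + (rows - 1 - r) for r in range(rows) if image[r, col]]
            yield col, notes

    for step, notes in scan(image):
        print(step, notes)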

Generative Audiovisual Systems

The above sections have discussed several genres of audiovisual software artworks, including systems that use music to generate aesthetic or analytic visualizations; artworks that map real-world data signals to graphics and sound; and artworks that use human performances to govern the synthesis of animation and music. Artworks in a fourth significant genre, known as generative artworks, produce animations and/or sound autonomously—from their own intrinsic rule-sets. These rules may range from trivial forms of randomness, to sophisticated algorithmic techniques that simulate complex organic processes or even implement “artificial intelligence” models of visual and musical composition. In this short space, two examples of this large genre must suffice. One influential example of such an autonomous artwork is Scott Draves’s Bomb (1993-1998), a free software system that produces “fluid, textured, rhythmic, animated, and generally non-representational” visual music [8]. Bomb uses recursive and non-linear iterated systems, such as cellular automata algorithms (often used to simulate animal population behavior), reaction-diffusion equations (used to simulate organic pattern formations, such as leopard spots and zebra stripes), and video feedback. According to Draves, one of the most important innovations in Bomb was the idea of “having multiple CA [cellular automaton] rules interacting with each other”, which allowed the program to generate and evolve a truly vast range of organic graphic configurations [9].

[Screenshots from Bomb (1993-1998), by Scott Draves.]
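
Of the techniques Draves mentions, cellular automata are the easiest to illustrate: a grid of cells is repeatedly rewritten by a purely local neighborhood rule, from which rich global patterns emerge. The toy rule below (Conway’s Life) is only a stand-in; Bomb’s own rules were far more varied and, as noted above, could interact with one another.

    import numpy as np

    def life_step(grid):
        # One update of Conway's Life, via toroidal neighbor counting.
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        return (n == 3) | (grid & (n == 2))

    rng = np.random.default_rng(1)
    grid = rng.random((32, 32)) < 0.3
    for _ in range(10):
        grid = life_step(grid)
    print(grid.sum(), "live cells after 10 steps")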

Whereas Bomb is silent, Antoine Schmitt’s Nanoensembles (2002) uses simple generative techniques to produce both sound and animation simultaneously [28]. In Nanoensembles, a number of small animated visual elements move back and forth across the canvas, each at their own rate, each producing a simple looping sound whose volume is related to their speed and position. Because each element has its own unique cycle period, their motions eventually go out of phase – as do their sounds. The result is an ever-changing and effectively endless audiovisual composition.
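
The phasing logic of Nanoensembles can be stated compactly: each element loops with its own period, so the composite pattern repeats only after the least common multiple of all the periods, which for even modestly mismatched periods is effectively ‘never’. A sketch, with assumed periods and an assumed volume rule:

    from math import cos, sin, pi, lcm  # lcm requires Python 3.9+

    PERIODS = [3.0, 4.0, 5.0]  # seconds per cycle, one per element

    def element_state(t, period):
        # Position in [-1, 1]; volume rises with the element's speed.
        phase = 2 * pi * t / period
        return cos(phase), abs(sin(phase)) * 2 * pi / period

    print("composite repeats every", lcm(3, 4, 5), "seconds")
    for t in (0.0, 1.0, 2.0):
        print(t, [round(element_state(t, p)[0], 2) for p in PERIODS])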

Archives and Online Communities for Audiovisual Software and Net.Artworks

As global interest in audiovisual software has grown, so have efforts to preserve this ephemeral artform and support the communities of people who create and enjoy it. Some of the most significant efforts to do so have been led not by institutions or museums, but by the artists themselves. The artist-directed web site Turbulence.org is one of the longest-running such projects, and is especially notable for providing financial support for the creation of new works. Founded in 1996 by Helen Thorington and Jo-Anne Green, the site has commissioned 173 projects, nearly half of which (including Cory Arcangel's Data Diaries, Martin Wattenberg's The Shape of Song, and R. Luke DuBois' Hard Data, discussed above) deal in some way with problems in audiovisuality. Another such online collection is the Runme.org software art repository, founded in 2003 by Amy Alexander, Olga Goriunova, Alex McLean and Alexei Shulgin. Although Runme.org is dedicated to preserving contributions from the entire spectrum of software art, its collections of specifically audiovisual “artistic tools” and audiovisual “data transformation” systems are two of its largest. A third and particularly substantial archive, focused exclusively on audiovisual software and net-based artworks, has been compiled and documented since 1997 by Stanza at the website Soundtoys.net. Billing itself as the “Internet's leading space for the exhibition of exciting new works by a growing community of audiovisual artists, while also providing a forum for discourse around new technologies and the nature of soundtoys”, the site provides downloadable and browser-based versions of hundreds of significant software artworks, contributed by the artists themselves, as well as interviews, links to resources, and critical texts about interactive, generative and audiovisual arts. By making otherwise obscure software projects available for direct experience and evaluation, websites such as Turbulence.org, Runme.org, and Soundtoys.net both consolidate and educate their artist communities, and are likely to have a continuing and accelerating influence on audiovisual software art in the near future.

Bibliography

1.    Abbado, Adriano. “Perceptual correspondences of abstract animation and synthetic sound”. M.S. Thesis, Massachusetts Institute of Technology, 1988. http://www.abbado.com/wp-content/uploads/2007/08/thesis.pdf.
2.    Arcangel, Cory. Data Diaries, 2002. http://www.turbulence.org/Works/arcangel/.
3.    Blaine, Tina. “New Music for the Masses.” Think Tank, http://www.adobe.com/designcenter/thinktank/ttap_music/.
4.    Brand, Jens. G-Player, 2004. http://g-turns.com.
5.    Burfitt, Kevin “Zaph”. Cthugha. Winamp visualization plug-in, 1993-1997. http://www.afn.org/~cthugha/.
6.    Campbell, Jim. Formula for Computer Art. 1996-2003. http://www.jimcampbell.tv/formula/index.html.
7.    Collopy, Fred. Imager. Interactive Mac software, 1998. http://rhythmiclight.com/studios/index.html.
8.    Draves, Scott. Bomb. Sound-responsive software, 1993-1998. http://draves.org/bomb/.
9.    Draves, Scott. Notes on Bomb, 1993-1998. http://draves.org/blog/archives/000361.html.
10.    DuBois, R. Luke. Hard Data. Interactive Flash applet, 2009. http://transition.turbulence.org/Works/harddata.
11.    Franke, Herbert W. In Leavitt, Ruth (ed.), Artist and Computer. Harmony Books, 1976, p. 83.
12.    Geiss, Ryan M. Geiss. Windows software, 1998-2008. http://www.geisswerks.com/geiss/.
13.    Goldberg, Ken, Randall Packer, Gregory Kuhn, and Wojciech Matusik. Mori: an Internet-Based Earthwork, 1999. http://www.ieor.berkeley.edu/~goldberg/art/mori/.
14.    Hansen, Mark and Ben Rubin. Listening Post. Software installation, 2001. http://www.earstudio.com/projects/listeningpost.html.
15.    Ishii, Haruo. Hyperscratch. Interactive software, 1993-2003. http://www.land-net.co.jp/~stone.
16.    Klima, John. Glasbead. Interactive networked software, 2000. http://www.cityarts.com/glasbeadweb/glasbead.htm.
17.    Levin, Golan. “Painterly Interfaces for Audiovisual Performance.” M.S. Thesis, Massachusetts Institute of Technology, 2000. http://www.flong.com/texts/publications/thesis.
18.    Levin, Golan. "The Table is The Score: An Augmented-Reality Interface for Real-Time, Tangible, Spectrographic Performance." Proceedings of the International Conference on Computer Music 2006 (ICMC'06). New Orleans, November 6-11, 2006.
19.    Limiteazero. laptop_orchestra, 2004. http://limiteazero.net/l_o/index.html.
20.    Malinowski, Stephen. Time-Line of The Music Animation Machine, 1970-2001. http://www.musanim.com/mam/mamhist.htm.
21.    Nimoy, Josh. Ball Droppings. Interactive software, 2003. http://www.balldroppings.com.
22.    “Music visualization”. In Wikipedia: The Free Encyclopedia. Retrieved 27 April 2009.
23.    Pellegrino, Ronald. The Electronic Arts of Sound and Light. Van Nostrand Reinhold Company Inc., 1983.
24.    Pichlmair, Martin and Fares Kayali. “Levels of Sound: On the Principles of Interactivity in Music Video Games.” in Situated Play, Proceedings of the Digital Games Research Association (DiGRA) 2007 Conference, p. 424-430. http://www.digra.org/dl/db/07311.14286.pdf.
25.    Pitaru, Amit. Sonic Wire Sculptor, 2003. http://www.pitaru.com/sonicWireSculptor/.
26.    Radical Software Group (RSG). Carnivore, 2001. http://r-s-g.org/carnivore/.
27.    Ritter, Don. “Interactive Video as a Way of Life,” Musicworks 56, Fall 1993, p. 48-54.
28.    Schmitt, Antoine. Nanoensembles. Macromedia Shockwave application, 2002. http://www.gratin.org/as/nanos/index.html, http://soundtoys.net/toys/nanoensembles.
29.    Schwartz, Lillian. Summary of works at http://www.lillian.com/films/.
30.    Snibbe, Scott. “The Motion Phone.” Proceedings of Ars Electronica ‘96. Christine Schopf, ed. http://kultur.aec.at/lab/futureweb/english/prix/prix/1996/E96azI-motion.html.
31.    Spiegel, Laurie. “Graphical Groove: Memorium for a Visual Music System”. In Organised Sound 3(3): p.187-191. Cambridge University Press, August 1998. http://retiary.org/ls/writings/vampire.html.
32.    Stanza. Datacity. 2004. http://soundtoys.net/toys/datacity-2004.
33.    Stanza. Sensity. 2004-2009. http://www.stanza.co.uk/sensity.
34.    Ubermorgen. The Sound of Ebay. Net.Art, 2008. http://www.sound-of-ebay.com/100.php.
35.    Whitney, John. Digital Harmony: on the Complementarity of Music and Visual Art. McGraw-Hill, 1980.
36.    Whitney, John. “Fifty Years of Composing Computer Music and Graphics: How Time’s New Solid-State Tactability Has Challenged Audio Visual Perspectives.” Leonardo v. 24, 5 November 1991, pp. 597-599.
37.    Whitney, John. Permutations. Animation, 1968. http://www.youtube.com/watch?v=BzB31mD4NmA.

 


Case Study 1: John Whitney's Permutations (1966-1968)

One of the first artists to tackle the challenges of audiovision with the new medium of software was the American abstract animator John Whitney, Sr. (1917-1995). After studying twelve-tone composition in Paris in the late 1930s, Whitney became interested in the possibility of discovering a set of formal principles which could structure a time-based visual analogue to music. Whitney articulated his core research question, during this period, thus: “Can motion, of a kind of abstract neoplasticism, bear the burden of content in a visual cinema as it obviously does in the realm of musical experience?” [Whitney 1980, p.204]. Inspired by the abstract films of Oskar Fischinger, but seeking an alternative to the idiosyncratic and labor-intensive results of Fischinger’s methods of hand-animation, Whitney invented animation methods which attempted to leverage automation without being mechanistic [Whitney 1980, p.218]. Many of Whitney’s films from the 1940s through the early 1960s were created with custom motion-producing machines, beginning with simple pendulums (used in his Film Exercises #1-5, 1943-1944) and culminating in a unique “Mechanical Analogue Computer” (used in his Catalogue, 1961), which was constructed from repurposed anti-aircraft gun directors, and which could perform a wide range of trigonometric calculations and complex spatial movements. By the mid-1960s, Whitney had achieved a significant reputation throughout Hollywood as a master of mathematically-structured visual animation, propelled by successful commercial productions for clients such as the Eames Studio, Saul Bass, Bob Hope and Douglas Aircraft. It was in this context that, in 1966, the IBM Los Angeles Scientific Center granted Whitney its first “artist in residence” status in order to explore the expressive possibilities of the IBM System/360 computer and IBM 2250 Graphic Display Console.

Whitney’s work at IBM was supported by Dr. Jack Citron, a programmer and graphics researcher. Citron and others had developed a library of subroutines called GRAF (GRaphic Additions to FORTRAN) to support enhanced display capabilities on the IBM 360 [Hurwitz 1967, p.553]; this library would become Whitney’s primary animation tool for the remainder of the decade. A preliminary collaboration between Whitney and Citron yielded a three-minute animation study, Homage to Rameau (1967), after which Citron was given formal responsibility for supporting Whitney’s residency at IBM. In 1968, Whitney and Citron completed Permutations, an eight-minute computer graphic film wholly consisting of the independent circular movements of 281 colored dots [Whitney 1980, p.196]. For Whitney, the kinetic rhythms and phasing relationships of the dots’ movements produced perceptual effects which were strongly analogous to modulations of tension in music:

“Every one of the points in Permutations is moving at a different rate and moving in an independent direction in accord with natural laws as valid as Pythagoras’, while moving within their circular field. Their action produces a phenomenon which is more or less equivalent to musical harmonics. When the dots reach certain numerical (harmonic) relationships with other parameters in the equation, they form elementary many-lobed figures. Then they go off into a non-simple numerical relationship and appear to be random again. I think of this as an order-disorder phenomenon that suggests the harmony-dissonance effect of music” [Whitney 1980, p.218].
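
The ‘order-disorder phenomenon’ Whitney describes can be loosely sketched as a field of dots advancing at harmonically related angular rates: whenever the accumulated rotation passes a simple fraction of a full turn, the dots briefly align into lobed figures. The parameters below are purely illustrative, and bear no relation to the actual Whitney-Citron GRAF code.

    import math

    N = 281            # dot count borrowed from Permutations
    BASE_RATE = 0.01   # radians per frame for the slowest dot

    def dot_positions(frame):
        # Dot i advances at i * BASE_RATE; radius spreads the dots out.
        pts = []
        for i in range(1, N + 1):
            theta = i * BASE_RATE * frame
            r = i / N
            pts.append((r * math.cos(theta), r * math.sin(theta)))
        return pts

    # When BASE_RATE * frame nears a simple fraction of 2 * pi, the dots
    # momentarily organize into many-lobed, rose-like figures.
    print(dot_positions(100)[:3])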

Each frame of Permutations took approximately two seconds to compute, and was photographed to 16mm film directly from the black-and-white IBM 2250 display. Color was added later, through the use of optical printing techniques. Upon its completion, Permutations was one of only a small handful of entirely computer-animated films in existence, and was therefore highly influential for subsequent animators interested in exploring the computer as a creative medium.

Given Whitney’s skills as a composer and his abiding concern with music, it is perhaps surprising that the actual soundtrack of Permutations – a selection of traditional South Indian tabla music – was adapted to his film from a pre-existing recording by [Sundaram] “Balachander, on World Pacific Records”. As Whitney explained, in an interview during this period, “I am not composing music right now simply because I have my hands full with what I’m doing about the graphic formal problems” [Whitney 1980, p.218]. In retrospect, we may understand Whitney’s soundtrack selection as a product of its time, reflecting an Orientalist preoccupation with “Eastern” musics and philosophies prevalent throughout California during the late 1960s and early 1970s. Eight years later, Whitney’s landmark computer animation Arabesque (1975) similarly juxtaposed his abstract computer-generated movement patterns against a recording of traditional Iranian santoor music. Reflecting on this choice in his 1980 book, Digital Harmony, Whitney wrote that “once again, as with so many works before, I was obliged to search for given music to fit the completed essay of my visual composition” [Whitney 1980, p.113]. The ultimate consequence of Whitney’s soundtrack selections for these films is that any actual “correspondences” perceived between the animation and musical selections are, in fact, purely serendipitous and constructed from whole cloth in the mind of the viewer. We see that, in his early computer works like Permutations and Arabesque, John Whitney was above all focused on creating music-like structures in dynamic visual form – and not on creating “mappings” between image and sound.

This changed after the publication of his book, Digital Harmony, in 1980. From this time until his death in 1995, Whitney’s focus indeed shifted to the development of a software instrument on which he could compose visual and musical output simultaneously, in real time. Working in collaboration with programmer Jerry Reed, Whitney developed an audiovisual composing system, the Whitney-Reed RDTD (“Radius-Differential Theta-Differential”), which allowed him to create "musical design intertwined with color design tone-for-tone, played against action-for-action" [Whitney 1991]. With this software Whitney created his animations Spirals (in 1987-1988), and MoonDrum (in 1989-1995).

In the latter decades of his career, Whitney distilled his model for understanding temporal structures to the notion of “computational periodics,” or harmonic agreements of periodic functions. He wrote:

"Rhythm, meter, frequency, tonality and intensity are the periodic parameters of music. There is a similar group of parameters that set forth a picture domain as valid and fertile as the counterpoised domain of sound. This visual domain is defined by parameters which are also periodic. 'Computational periodics' then is a new term which is needed to identify and distinguish this multidimensional art for eye and ear that resides exclusively within computer technology. For notwithstanding man's historic efforts to bridge the two worlds of music and art through dance and theater, the computer is his first instrument that can integrate and manipulate image and sound in a way that is as valid for visual, as it is for aural, perception." [John Whitney in Leavitt, p. 80.]

Case Study 1: Bibliography

Greene, Rachel. Internet Art (World of Art).  Thames & Hudson, 2004.
Hurwitz, A., Citron, J.P., and Yeaton, J.B.  "GRAF: Graphic Additions to FORTRAN".  American Federation of Information Processing Societies (AFIPS) Joint Computer Conference, Proceedings of the April 18-20, 1967 Spring joint computer conference, Atlantic City, New Jersey, 1967. pp. 553-557. 
Leavitt, Ruth. Artist and Computer. Harmony Books, 1976.
Lovejoy, Margot. Digital Currents: Art in the Electronic Age. Routledge, 2004.
Moritz, William. "Digital Harmony: The Life of John Whitney, Computer Animation Pioneer." Animation World Magazine, Issue 2.5, August 1997. World Wide Web (retrieved 16 March, 2009): http://www.awn.com/mag/issue2.5/2.5pages/2.5moritzwhitney.html
Paul, Christiane. Digital Art (World of Art).  Thames & Hudson, 2003.
Popper, Frank. Art of the Electronic Age. HNA Books, 1993.
Post, Maaike. Book for the Electronic Arts. Art Data, 2000.
Reichardt, Jasia. The computer in art. Van Nostrand Reinhold, 1971.
Roads, Curtis. The Computer Music Tutorial. The MIT Press, 1996.
Whitney, John. Digital Harmony: on the Complementarity of Music and Visual Art. McGraw-Hill, 1980.
Whitney, John. "Fifty Years of Composing Computer Music and Graphics: How Time's New Solid-State Tactability Has Challenged Audio Visual Perspectives." Leonardo v.24, 5 November 1991, pp. 597-599.
Whitney, John. Permutations. Animation, 1968. http://www.youtube.com/watch?v=BzB31mD4NmA
“John Whitney (animator)”. In Wikipedia: The Free Encyclopedia. http://en.wikipedia.org/wiki/John_Whitney_(animator).


Case Study 2: Toshio Iwai's Music Insects (1991)

Japanese new-media artist Toshio Iwai is credited with inventing the genre of performative audiovisual games with Otocky, a game he designed for ASCII Corporation in 1987. Since that time, Iwai has gone on to create numerous influential software systems for playful audiovisual interaction. Notable among these is Music Insects (1991), which Iwai originally developed as a screen-based museum installation for the San Francisco Exploratorium, and which he has since reinterpreted several times in both installation and commercial ‘game’ formats (e.g. Nintendo Sound Fantasy, 1994; Maxis SimTunes, 1996; the interactive installation Composition on the Table: PUSH, 1999; and Nintendo Electroplankton, 2005).

Iwai’s Music Insects is singular for the seamless way in which it hybridizes a pixel-based paint program, a real-time system for composing and performing music, and a wholly visual programming environment for animated behaviors. The core interaction logic of this inventive software application is a graphic step-sequencer wherein animated graphical ‘bugs’ trigger musical notes and/or change their direction of movement when they encounter fat, colored pixels placed in their path by the user. These colored squares define the audiovisual terrain for the collection of four virtual bugs that crawl across the gridded surface of the canvas. When one of these bugs encounters a colored square, a diatonic musical note is triggered whose pitch is linked to the square’s color. Each bug represents a different musical instrument, and has its own timbre with which it sonifies the squares; thus one bug produces piano sounds when it collides with the pixels, while the other bugs produce percussion, bass guitar, or trumpet sounds.

The user can add, modify and delete pixels while the bugs are engaged in performing the score. Additional sophistication is possible through the use of certain specially-colored pixels, which have the effect of rotating or reversing the bugs which touch them. Using these special pixels, the user can cause the bugs to create looping rhythms, phasing polyrhythms, and complex passages which seem to never repeat at all. In this way, the colored squares define a musical score whose results may be generative, unpredictable, highly rhythmic, or some combination of these. And at the same time, of course, the user of Music Insects is also authoring an image.
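
The step-sequencer logic described above reduces to a few rules: a bug advances one cell per tick; an ordinary colored cell sounds a note from a diatonic scale; a special cell turns or reverses the bug. The sketch below walks one bug through such a grid; the particular colors, scale, and layout are assumptions for illustration, not the actual rules of Music Insects.

    # Sparse grid of colored cells, keyed by (x, y).
    GRID = {(2, 0): 'red', (4, 0): 'blue', (6, 0): 'TURN_RIGHT', (6, 3): 'red'}
    SCALE = {'red': 60, 'orange': 62, 'yellow': 64, 'green': 65, 'blue': 67}
    TURNS = {'TURN_RIGHT': lambda dx, dy: (-dy, dx),
             'REVERSE':    lambda dx, dy: (-dx, -dy)}

    def run(steps=12, pos=(0, 0), heading=(1, 0)):
        (x, y), (dx, dy) = pos, heading
        for step in range(steps):
            x, y = x + dx, y + dy
            cell = GRID.get((x, y))
            if cell in SCALE:
                print("step", step, "-> note", SCALE[cell], "at", (x, y))
            elif cell in TURNS:
                dx, dy = TURNS[cell](dx, dy)

    run()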

Iwai’s Music Insects comes remarkably close to offering a completely balanced solution for authoring image and sound simultaneously. Iwai overcomes many of the legibility problems often associated with symbolic and diagrammatic scores, for example, through the use of his animated bugs, which act as self-revealing and self-explanatory “playback heads” for the live sound. Because the system’s score-elements are elementary pixels as opposed to well-defined symbols, moreover, the fine granularity of Iwai’s audiovisual substance is well suited to the creation of abstract or representational images, and the system’s display screen may be read equally well as a painting or a score.

A playful approach to audiovisual interaction design and musical experience such as that in Music Insects is a hallmark of many Japanese software projects and can also be observed, for example, in software artworks like Haruo Ishii’s Hyperscratch (various versions, 1993-2003) and Masaki Fujihata’s Small Fish (2000).

Case Study 2: Bibliography

Brown, Azby. “Portrait of the Artist as a Young Geek.” Wired, No. 5.05, May 1997. http://www.wired.com/wired/archive/5.05/ff_iwai_pr.html.
“Electroplankton.” In Wikipedia: The Free Encyclopedia. Retrieved 15 April 2009.
Levin, Golan. "Painterly Interfaces for Audiovisual Performance". M.S. Thesis, MIT Media Laboratory, August 2000.
“Otocky”. In Wikipedia: The Free Encyclopedia. Retrieved 15 April 2009.
“SimTunes.” In Wikipedia: The Free Encyclopedia. Retrieved 15 April 2009.


Case Study 3: Martin Wattenberg's The Shape of Song (2001)

Martin Wattenberg (b. 1970) is an American artist and scientific researcher who creates visual treatments of “culturally significant data.”[1] Best known for creating interactive data visualizations such as his popular 2005 Baby Name Voyager (an online tool for exploring trends in baby names across a century of census data), Wattenberg is especially sensitive to the dialectic potential of information displays. His early works, such as StarryNight (1999), Apartment (2001) and Idea Line (2001, which was the first piece of Net.Art commissioned by the Whitney Museum of American Art), figured significantly in establishing the genre of “artistic information visualizations” which are so common today. Wattenberg’s designs are consistently easy to use and captivating to explore, a consequence of his motivation to provide “connection, insight, narrative and beauty” [2] in his work.

In 2001, with the help of a commission from Turbulence.org, Wattenberg turned his attention to visualizing the structure of music in his project, The Shape of Song [3]. While it is common for audiovisual software artists to attempt to visualize music with the help of real-time graphics, Wattenberg approached the problem as one of notation, and produced a project which develops a new form of static image that reveals patterns latent in the music’s score. Indeed, The Shape of Song is necessarily a non-real-time visualization of music, as any real-time version would require perfect future knowledge.

The Shape of Song is specifically designed to reveal repetitions of musical passages within MIDI files. The project takes the form of an online Java applet, within which visitors can select a pre-loaded musical composition (Bach, Beatles, Britney Spears, etc.) or upload a MIDI file of their own for visualization. The visualization method is straightforward. Wattenberg writes:

“The diagrams in The Shape of Song display musical form as a sequence of translucent arches. Each arch connects two repeated, identical passages of a composition. By using repeated passages as signposts, the diagram illustrates the deep structure of the composition. For example, the picture above was built from the first line of a very simple piece, Mary Had a Little Lamb. Each arch connects two identical passages. To clarify the connection between the visualization and the song, in this diagram the score is displayed beneath the arches. […] The resulting images reflect the full range of musical forms, from the deep structure of Bach to the crystalline beauty of Philip Glass.” [4]
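
Underneath the rendering, an arc diagram depends on finding pairs of identical passages. The brute-force sketch below reports every repeated passage of a fixed length in a note sequence; Wattenberg’s published algorithm is more selective, keeping only essential maximal repeats before drawing a translucent arc for each surviving pair [6].

    def repeated_passages(notes, length=4):
        # Map each passage to its first occurrence; later occurrences become arcs.
        seen, arcs = {}, []
        for i in range(len(notes) - length + 1):
            passage = tuple(notes[i:i + length])
            if passage in seen:
                arcs.append((seen[passage], i, length))
            else:
                seen[passage] = i
        return arcs

    # A "Mary Had a Little Lamb"-like toy melody, as MIDI note numbers:
    melody = [64, 62, 60, 62, 64, 64, 64, 62, 62, 62, 64, 62, 60, 62, 64]
    for a, b, w in repeated_passages(melody):
        print("passage of", w, "notes at", a, "repeats at", b)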

In a software landscape heavily populated by real-time displays of pitch, loudness, and other instantaneous auditory features, The Shape of Song continues to be a significant music visualization because of its ability to represent long-scale and multi-scale temporal structures in music. Since the project’s launch in 2001, Wattenberg’s “arc diagram” visualization method has been particularly influential in the broader field of information visualization as well, finding new uses in projects for visualizing such diverse datasets as internet chatting behavior, hypertext links between blogs, biblical cross-references, and campaign contributions. [5,6]

Case Study 3: Bibliography

[1] Wattenberg, Martin. Personal web site, http://www.bewitched.com/. Retrieved 1 April 2009.
[2] Ibid.
[3] Wattenberg, Martin. The Shape of Song. Online interactive artwork, 2001. http://turbulence.org/Works/song/.
[4] Wattenberg, Martin. Personal web site, http://www.bewitched.com/.
[5] http://www.visualcomplexity.com/vc/index.cfm?method=Arc%20Diagrams
[6] Wattenberg, Martin.  "Arc Diagrams: Visualizing Structure in Strings", Proceedings of the IEEE Symposium on Information Visualization (InfoVis'02), 2002. http://www.research.ibm.com/visual/papers/arc-diagrams.pdf