Born Digital Cultural Heritage – Angela Ndalianis & Melanie Swalwell

The collection and preservation of the ‘born digital’ has, in recent years, become a growing and significant area of debate. The honeymoon years are over, and institutions are finally giving serious consideration to best practice for digital preservation strategies and the establishment of digital collections. Digital technology emerges and disappears with incredible speed, as a once-new piece of hardware or software becomes old and is replaced by the next technological advancement. What happens to the videogame software and hardware of the 1980s and 90s? To the web browsers, blogs and social media sites and the content they once displayed? To the artworks that relied on pre-2000 computers? Are these – amongst many other – digital creations fated to be abandoned, surviving only as memories of individual experience? Are they to be collected by institutions as defunct objects? Or are they to be preserved and revived using new digital technology? These are but a few of the serious questions facing collecting institutions. The question of who is responsible for collecting, preserving and historicising born digital cultural heritage is a crucial one, as is the issue of best practice: what are the best ways to preserve and make accessible such born digital heritage?

In June 2014, our “Play It Again”[1] project team ran an international conference, “The Born Digital and Cultural Heritage”, that aimed to convene a forum where some of these issues could be discussed. “Play It Again” was a three-year project focused on the history and preservation of microcomputer games written in 1980s Australia and New Zealand, but as the first digital preservation project to be funded as research in this part of the world (at least to our knowledge), it also had a broader significance. We tried to use it to raise awareness of some of the threats facing born digital cultural production more broadly, beyond 1980s digital games. Two of the project’s aims were to “Enhance appreciation for the creations of the early digital period” and “To build capacity in both the academic and cultural sectors in the area of digital cultural heritage and the ‘born digital’”, both critical issues internationally. A two-day event held at the Australian Centre for the Moving Image, Melbourne, the conference thus had a remit deliberately wider than the focus of the Australian Research Council Linkage Project.

The need for cooperation between different stakeholders – legislative bodies, professionals working in different types of institutions, and the private sector – was a key recommendation of the 2012 “Vancouver Declaration,” a Memory of the World initiative (UNESCO). Born digital artefacts often require multiple sets of expertise; our call for papers therefore invited proposals from researchers and practitioners in a range of disciplines, spheres of practice and institutional contexts concerned with born digital heritage, including libraries, archives, museums, galleries, moving image institutions, software repositories, universities, and more besides. We wanted to create a space where communication could take place between the different types of professionals dealing with the preservation of born digital cultural heritage. Archivists, librarians, conservators, and moving image archivists share many challenges, yet, we suspect, they often attend profession-based conferences, which enforces a kind of siloing of knowledge. Particularly in small countries such as Australia and New Zealand, there is a need for conversations across professional boundaries, and so we sought to bring into contact people who perhaps don’t normally move in the same circles.

The presentations at the conference ranged in approach from theoretical, to practical, to policy-oriented. We gloried in the range of papers that were presented. There were game histories, and reflections on the demoscene, on net.art and other forms of media art, on born digital manuscripts, robots, Twitter accounts and website archiving. As well as papers addressing different forms of heritage materials, there were technical reports on the problems of hacking and patching disk images to get them to emulate, and on software migration, together with legal papers on copyright protection and the ‘right to be forgotten’. (Audio of many of the presentations is available here.) The variety of presentations made painfully visible the enormous task at hand in addressing born digital cultural heritage.

While Refractory focuses on entertainment media, in this issue we recognise that born digital entertainment media share many of the challenges of non-entertainment objects. Here, we have collected article versions of selected papers from the conference. The topics and subjects are varied – from those looking more broadly at approaches to born digital heritage and the preservation of digital art, to the documentation of and public discourse about early game histories, and to future creative writing practice facilitated through the collection of digital manuscripts.

In his paper “It Is What It Is, Not What It Was: Making Born Digital Heritage” (originally a keynote address), Henry Lowood examines the preservation and collection of digital media in the context of cultural heritage. Lowood is concerned with “the relationship between collections of historical software and archival documentation about that software” and poses the question “Who is interested in historical software and what will they do with it?” He argues that “answers to this fundamental question must continue to drive projects in digital preservation and software history”. Using the examples of ‘The Historian’, ‘The Media Archaeologist’ and ‘The Re-enactor’, his paper raises important questions about the function, purpose and varied approaches to the digital archive. The historian, he states, is interested in digital archival material in order to interpret, reconstruct and retell its story in history. For the media archaeologist, “media machines are transparent in their operation” and, rather than requiring interpretation, speak of their pastness by making possible the playback of “historical media on historical machines”. Finally, for ‘The Re-enactor’, ‘authenticity’ is a crucial factor for digital preservation; the question of authenticity, however, is fraught with debate. At one extreme, the re-enactor insists on a “fidelity of play” that engages with the technology (hardware and software) in its original state; at the other is the re-enactor willing to forgo the historical machine in favour of emulation and virtualisation that recreate an embodied experience of ‘playing’ with the original software, whether a game or a word processing program. In either case, as Lowood explains, “Re-enactment offers a take on born-digital heritage that proposes a commitment to lived experience.”

In their article “Defining the Experience: George Poonkhin Khut’s Distillery: Waveforming, 2012”, Amanda Pagliarino and artist George Poonkhin Khut present an account of Khut’s sensory artwork Distillery: Waveforming 2012, which uses the prototype iPad application ‘BrightHearts’ and was acquired by the Queensland Art Gallery. The Curator of Contemporary Australian Art requested that the artwork be “captured in perpetuity in its prototype state”. The authors explain that this biofeedback artwork is ‘iterative’: Khut continued to develop the work in further iterations, including updates to the BrightHearts app for touch screen devices. The article describes the development of the artwork, the issues that were addressed in its acquisition and archiving, and the consultations that took place between the artist and the collecting institution. As the writers argue, “to secure the commitment of the artist to engage in collaborative, long-term conservation strategies is extraordinary and this has resulted in the Gallery acquiring an unparalleled archival resource” that documents and describes the interactive principles and behaviour of the artwork in its early state and as it evolved in Khut’s art practice. This archival resource will make it possible for the work to be reinterpreted “at some point in the future when the original technology no longer functions as intended”. In this respect, Distillery: Waveforming is understood as a “legacy artwork intrinsically linked to past and future iterations” of Khut’s larger biofeedback project.

The next article, “There and Back Again: A Case History of Writing The Hobbit” by Veronika Megler, focuses on the iconic text adventure game The Hobbit (Melbourne House, 1981), which Megler co-wrote during the final year of her Bachelor of Science degree at Melbourne University. The paper is a case history of the development of The Hobbit (based on J. R. R. Tolkien’s novel of the same name) into a game that could run on the first generation of home computers, which were just beginning to hit the market. Little has been written about the development of the first generation of text-based computer games; this case history provides insight into this period in computer game history. Megler describes the development process, the internal design, and the genesis of the ideas that made The Hobbit unique. She compares the development environment and the resulting game to the state of the art in text adventure games of the time, and wraps up by discussing the game’s legacy and the recent revival of interest in it.

Jaakko Suominen and Anna Sivula’s article “Participatory Historians in Digital Cultural Heritage Process — Monumentalization of the First Finnish Commercial Computer Game” continues the focus on games, analysing how digital games become cultural heritage. Using examples of changing conceptualisations of the first commercial Finnish computer game, the article examines the amateur and professional historicisation of computer games. The authors argue that the production of cultural heritage is a process of constructing symbolic monuments that are often related to events of change or the beginning of a progressive series of events, and the article presents an account of the formation of games as symbolic cultural monuments within a Finnish context. Whilst many researchers and journalists have claimed that Raharuhtinas (Money Prince, 1984) for the Commodore 64 was the first Finnish commercial digital game, its status as such is controversial. As the authors explain, “in this paper, we are more interested in public discourse of being the first” and how this relates to the cultural heritage process. The case of the ‘first’ game, it is argued, illuminates how items are selected as building material for digital game cultural heritage.

In “Retaining Traces of Composition in Digital Manuscript Collections: a Case for Institutional Proactivity”, Millicent Weber turns to digital manuscripts, their collection, preservation and digital storage by collecting institutions. Weber argues that libraries, archives and scholars have not addressed the content of future digital or part-digital collections, or their capacity to support sustained scholarly research. This paper examines the potential content of future collections of poetry manuscripts and their capacity to support research into the process of composition. To predict this capacity, the article compares a study of compositional process, using handwritten and typewritten manuscripts, with a small-scale survey of early-career poets’ compositional habits. The draft manuscripts of three poems by the poet Alan Gould and three by the poet Chris Mansell are used to describe each poet’s compositional habits, while the survey component of the project obtained information about the drafting practices of 12 students of creative writing and poetry at the University of Canberra. Weber concludes that the results indicate both the great diversity of manuscript collections currently being created, and the importance of archival institutions adopting an active advocacy role in encouraging writers to create and maintain comprehensive and well-organised collections of digital manuscripts.

The collection and preservation of born digital cultural heritage is of critical importance. In the digital era, “Heritage refers to legacy from the past, what we live with today, and what should be passed from generation to generation because of its significance and value” (UNESCO/PERSIST Content Task Force 16). If we want to ensure that records and works from this era persist, we will need to substantially ramp up our efforts. Cooperation between different stakeholders is critical and the research sector has an important role to play, in undertaking collaborative research with cultural institutions to tackle some of the thornier challenges surrounding the persistence of born digital cultural heritage.

Works cited

UNESCO. “UNESCO/UBC Vancouver Declaration, The Memory of the World in the Digital Age: Digitization and Preservation.” N.p., 2012. Web. 17 Dec. 2012.

UNESCO/PERSIST Content Task Force. “The UNESCO/PERSIST Guidelines for the Selection of Digital Heritage for Long-Term Preservation.” 2016. Web.


[1] The “Play It Again” project received support under the Australian Research Council’s Linkage Projects funding Scheme (project number LP120100218). See our research blog and the “Popular Memory Archive” for more information on the project.


Bios

Associate Professor Melanie Swalwell is a scholar of digital media arts, cultures, and histories. She is the recipient of an ARC Future Fellowship for her project “Creative Micro-computing in Australia, 1976-1992”. Between 2011 and 2015, she was Project Leader and Chief Investigator on the ARC Linkage Project “Play It Again”. In 2009, Melanie was the Nancy Keesing Fellow (State Library of New South Wales). She has authored chapters and articles in both traditional and interactive formats, in such esteemed journals as Convergence, Vectors, and the Journal of Visual Culture. Melanie’s projects include:

  • “Creative Micro-computing in Australia, 1976-1992”. Watch the film here.
  • Australasian Digital Heritage, which gathers together several local digital heritage research projects. Follow us on Facebook & Twitter @ourdigiheritage
  • “Play It Again: Creating a Playable History of Australasian Digital Games, for Industry, Community and Research Purposes”, ARC Linkage, 2012-14. Follow us on Facebook & Twitter @AgainPlay, and visit the Popular Memory Archive.


Angela Ndalianis is Professor in Screen Studies at Melbourne University, and the Director of the Transformative Technologies Research Unit (Faculty of Arts). Her research interests include: genre studies, with expertise in the horror and science fiction genres; entertainment media and media histories; and the contemporary entertainment industry. Her publications include Neo-Baroque Aesthetics and Contemporary Entertainment (MIT Press 2004), Science Fiction Experiences (New Academia 2010), The Horror Sensorium: Media and the Senses (McFarland 2012) and The Contemporary Comic Book Superhero (editor, Routledge 2008). She is currently completing two books: Batman: Myth and Superhero and Robots and Entertainment Culture. She is also a Fellow of the Futures of Entertainment Network (US), and is the Hans Christian Andersen Academy’s Visiting Professor (2015-17), a position also affiliated with the University of Southern Denmark.

Defining the Experience: George Poonkhin Khut’s DISTILLERY: WAVEFORMING, 2012 – Amanda Pagliarino & George Poonkhin Khut


Abstract: George Poonkhin Khut’s sensory artwork Distillery: Waveforming 2012 was the winner of the 2012 National New Media Art Award. This immersive installation artwork is a biofeedback-controlled interactive that utilises the prototype iPad application ‘BrightHearts’. Khut had an interest in the continued development of the ‘BrightHearts’ app to the point of making it available as a download from the iTunes App Store, to be used in conjunction with specialised pulse-sensing hardware. The configuration of Distillery: Waveforming presented in 2012 at the Gallery of Modern Art, Brisbane, incorporated Apple iPad (3rd generation) devices running the ‘BrightHearts’ app, supported by Mac mini computers that processed data and mapped sound and visuals fed back to users as animations on the iPads. At the conclusion of the exhibition the artwork was acquired into the Queensland Art Gallery collection. The Curator of Contemporary Australian Art requested that the acquisition ensure the artwork was captured in perpetuity in its prototype state. The iPad devices were jailbroken to safeguard their independent operation and management, and to allow for the permanent installation of non-expiring copies of the ‘BrightHearts’ app. Source code for the ‘BrightHearts’ app was also archived into the collection. This paper describes the development of the artwork and the issues that were addressed in the acquisition and archiving of an iPad artwork.


Figure 1. George Poonkhin Khut, Australia b.1969, Distillery: Waveforming 2012. Custom software and custom heart rate monitor on iPad and Mac mini. Signal analysis software: Angelo Fraietta and Tuan M Vu; visual effects software: Jason McDermott, Greg Turner; electronics and design: Frank Maguire; video portraits: Julia Charles. Installed dimensions variable. The National New Media Art Award 2012. Purchased 2012 with funds from the Queensland Government. Image: Mark Sherwood.


George Poonkhin Khut’s digital artwork Distillery: Waveforming is a body-focused, biofeedback-controlled interactive experience. The artwork was acquired by the Queensland Art Gallery / Gallery of Modern Art (QAGOMA) in 2012 and has been the subject of an ongoing dialogue between the artist and the Gallery, through the Head of Conservation and Registration, regarding its long-term preservation. At the heart of the artwork is an individual, human experience, with certain intrinsic elements combining to create this experience. In their endeavour to provide a sound future plan for Distillery: Waveforming, artist and collecting institution have each questioned ‘the experience’ from their individual perspectives.

Distillery: Waveforming is both an independent artwork and an affiliated outcome of Khut’s long-running work with heart rate biofeedback. This unusual duality plays a significant role in the ways in which the artist and the institution perceive the artwork, its preservation and future installations. Since the artwork’s acquisition into the QAGOMA collection the artist has remained involved and interested in the Gallery’s management of Distillery: Waveforming. Khut’s progress on the biofeedback project has seen him make significant advances in software development, allowing him to release BrightHearts – the app still in development when Distillery: Waveforming was created – through the iTunes App Store. These advances in the biofeedback project provide current context to the dialogue and continue to shape the opinions of both artist and institution. Through this collaborative process QAGOMA has been able to build an extensive resource for the long-term preservation of Distillery: Waveforming.

HISTORY AND BACKGROUND: BIOFEEDBACK IN ART AND MEDICINE

George Poonkhin Khut’s biofeedback artwork Distillery: Waveforming was the winning entry in the 2012 National New Media Art Award (NNMA), held at the Gallery of Modern Art (QAGOMA 2012). The artwork entered the Queensland Art Gallery / Gallery of Modern Art collection at the conclusion of the exhibition. The Curator of Contemporary Australian Art requested that the artwork be acquired so as to accurately reflect its display in the NNMA exhibition – that is, as a prototype.

In 2011, when Khut was invited to enter the NNMA, he was working as Artist in Residence at the Children’s Hospital Westmead. In this residency Khut and his research colleagues commenced the BrightHearts Project, which aimed ‘to assess the potential of small, portable biofeedback-based interactive artworks to mediate the perception and performance of the body in paediatric care: as experienced by children undergoing painful recurrent procedures’ (Khut et al. 2011).

Apple iPads loaded with games were already in use for diversion and distraction purposes during painful procedures at the Children’s Hospital Westmead. Khut chose to adapt his work for iPad technology for the BrightHearts Project based on this ‘diversional’ precedent and the excellent optical qualities of the iPad display (Khut 2014). In realising Distillery: Waveforming, Khut channelled years of artistic practice in biofeedback and body-focused interactivity into the development of a cross-disciplinary artwork, at the core of which was the prototype BrightHearts application (app) for Apple iPad.

When Distillery: Waveforming was displayed in the NNMA exhibition, from August to November 2012, the BrightHearts app was still in development under a short-term Apple Developer licence. At this point in the provisioning, the prototype app generated the visuals on the iPad in response to a multilayered array of messages transmitted from a laptop or desktop computer over a network connection. This approach enabled Khut to quickly prototype a variety of visualisation ideas by adjusting parameters on the desktop computer, without needing to compile and install the app on the iPad each time. More importantly, this networked approach also enabled him to incorporate live heart rate sensor data in a way that was not then supported by the Apple operating system (iOS) – this was before the introduction of the Bluetooth 4.0 wireless standard – and to continue his work with the complex signal analysis, mapping and sonification algorithms that have been central to his work with body-focused interactions since 2003. Essentially, Distillery: Waveforming and the trial therapeutic devices at the Children’s Hospital Westmead were operating as ensembles that included iPads loaded with the prototype BrightHearts app, data collection devices, desktop/laptop computers and network routing systems.
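
While the original networking code is not reproduced here, the pattern Khut describes is straightforward to illustrate. The sketch below is a minimal, hypothetical reconstruction in C++ using openFrameworks and its ofxOsc addon, assuming OSC-style messages over UDP – an assumption suggested by the slash-delimited modulation source names, such as /beat/bang, quoted later in this article. The tablet app listens for layer-control messages and redraws accordingly, so parameters can be adjusted live from the desktop without recompiling the app. Message addresses and the port number are invented for the example.

```cpp
// Hypothetical sketch of the networked-prototyping pattern described above,
// not the BrightHearts prototype's actual code. Assumes OSC messages over
// UDP; built against openFrameworks (0.9+) with the ofxOsc addon.
#include "ofMain.h"
#include "ofxOsc.h"

class LayerDemoApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    float diameter = 100;             // layer parameters driven remotely
    ofColor colour = ofColor::red;

    void setup() override {
        receiver.setup(9000);         // listen for messages from the desktop
    }
    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            // The desktop patch adjusts parameters live; no recompile needed.
            if (m.getAddress() == "/layer/1/diameter") {
                diameter = m.getArgAsFloat(0);
            } else if (m.getAddress() == "/layer/1/hue") {
                colour.setHue(m.getArgAsFloat(0));
            }
        }
    }
    void draw() override {
        ofSetColor(colour);
        ofDrawCircle(ofGetWidth() / 2.0f, ofGetHeight() / 2.0f, diameter / 2.0f);
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new LayerDemoApp());
}
```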

DISTILLERY: WAVEFORMING

Acquiring Distillery: Waveforming to reflect its status as a prototype was a curatorial imperative. Khut describes his approach to the long-running biofeedback project as ‘iterative’ and in this regard the artwork is an incremental representation of Khut’s artistic practice and a model demonstration of the developmental BrightHearts app for touch screen devices (Khut and Muller 2005). In the future Distillery: Waveforming will become a legacy artwork intrinsically linked to past and future iterations in the biofeedback project.

Distillery: Waveforming derives from Khut’s earlier work on BrightHearts, which commenced in 2011, and his Cardiomorphologies series from 2004-2007. The mandala-like visuals were initially developed for Cardiomorphologies v.1 by John Tonkin using Java programming, which Khut controlled via Cycling ’74’s Max (version 4.5), a popular visual programming language for Apple and Windows computers. In 2005 the original visualisation software was expanded upon by Greg Turner for Cardiomorphologies v.2, using visuals generated from within the Max application. Turner used the C++ programming language to develop ‘Fireball’, a specialised graphics module (known in the Max programming environment as an ‘object’) that enabled Khut to control the visuals with messages to each layer – for example, drawing a red ring the width of the screen with a thickness of 20 pixels, or a green circle with a gradient and a diameter of 120 pixels (Khut 2014; Pagliarino 2015, pp. 68-69).

Then in 2011 Jason McDermott, a multi-disciplinary designer working in information visualisation and architecture, was engaged to re-write Greg Turner’s ‘Fireball’ visualisation software to enable it to run on hand-held devices with touch-sensitive controls. Using the open source C++ library openFrameworks with Apple’s Xcode (version 4), McDermott redesigned and expanded the potential of the software, developing BrightHearts as an application for the iOS 5 mobile operating system (McDermott 2013).

Development of the app continued when Khut received the NNMA prize, worth AU$75,000, and in April 2014 the BrightHearts app was released on the iTunes App Store. Heart rate data acquisition and processing is now integrated into the application software, and the only external device required in conjunction with the app is a Bluetooth 4.0 heart rate monitor that captures the real-time heart rate data. The app is categorised as a Health and Fitness product that can be used to assist with relaxation and body awareness (iTunes 2014).

This history of development, change, modification and repurposing creates a landscape in which Distillery: Waveforming is an important new media artwork. As a legacy artwork, the Gallery aims to maintain the component parts and software in their original form and function for as long as possible. Technologies change at such a rapid rate that, in the years to come, the work will read quite evidently as an artwork of 2012. Perhaps future users will consider what are at present beautifully rich and transcendent animations to be rudimentary, and the touch screen navigation amusing and unsophisticated. Perhaps future users will recapture the sense of appeal that early touch screen devices inspired in consumers. However, it is not the intention of the Gallery to create a sense of nostalgia but to offer insights into the balance between art, technology and science at this fixed point in time. As a legacy the artwork will be an authentic installation and will offer an unambiguous window into Khut’s interdisciplinary artistic practice.

In its presentation in the NNMA exhibition, Distillery: Waveforming comprised five iPad devices running the prototype BrightHearts app, built into a long, shallow, tilted table at which participants sat on low stools to interact with the artwork. Specifications set by the artist allow the Gallery to modify the configuration for smaller displays of no fewer than three stations in future installations. However, it is necessary that the ambience of the installation space evoke a sense of calm and contemplation, utilising low light levels, soft dark colours and discreet use of technology. In the original installation the spatial arrangement situated participants in front of three video portraits of the artwork in use (Fig 1). Distillery: Waveforming is a composite artwork incorporating the iPad devices loaded with the prototype BrightHearts app, external data collection and processing equipment, and video portraits displayed on monitors. The combined hardware and software systems include:

  • Five Apple iPads (3rd generation) running the iOS 5.1.1 operating system, with high-resolution Retina display (2,048 x 1,536 pixels at 264 ppi) and dual-core Apple A5X chip

Loaded with:

  • BrightHearts app (in-development)
  • Cydia – a software application that enables the user to search for and install applications on jailbroken iOS devices
  • Activator app – a jailbreak application launcher for mobile devices
  • IncarcerApp – an application that disables the home button, effectively locking the iPad on the BrightHearts app when in use and preventing the user from inadvertently exiting the app
  • Five heart rate sensors, each incorporating a Nonin PureSAT pulse oximeter sensor (ear-clip type), a Nonin OEM III pulse oximetry circuit and an Arduino Pro Mini 328 microcontroller running specially written code (OemPulseFrank.pde) to receive the pulse data from the pulse oximeter sensor and relay it to the Mac minis via a USB-serial connection
  • Five Mac mini (5,2) computers with 2.5 GHz dual-core Intel Core i5 processor and 4 GB RAM, running the OS X 10.7.5 (Lion) operating system

Running:

  • Max6 application (Cycling74, 2012)
  • Custom-written scripts, run from the OS X ‘Terminal’ utility, that receive pulse data from the sensors via USB port and pass it along to Max6 (a minimal sketch of this pattern follows this list)
  • One 5.0 GHz network router that transmits control data from the Max6 software on the Mac minis to the corresponding iPads.
  • Three digital portraits displayed on 40” LCD / LED monitors hung in portrait orientation
  • Video portrait files include MPEG-4, QuickTime ProRes and AVC file formats
  • Five headband-style stereo headsets
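
The data path described in the sensor and scripting items above – Arduino pulse data arriving over USB-serial and being passed along to Max – can be sketched as follows. This is an illustration only, not the Gallery’s archived scripts: the device path, baud rate and UDP port are placeholders, and Max is assumed to be listening with its [udpreceive] object.

```cpp
// Illustrative serial-to-Max bridge, sketching the pattern named in the list
// above. Device path, baud rate and port are hypothetical placeholders.
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Open the USB-serial device exposed by the Arduino (path is hypothetical).
    int serial = open("/dev/tty.usbserial-XXXX", O_RDONLY | O_NOCTTY);
    if (serial < 0) { perror("open serial"); return 1; }

    termios tio{};
    tcgetattr(serial, &tio);
    cfsetispeed(&tio, B9600);          // match the firmware's baud rate
    tio.c_cflag |= (CLOCAL | CREAD);
    tcsetattr(serial, TCSANOW, &tio);

    // UDP socket aimed at Max on the same machine (port is hypothetical).
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in max_addr{};
    max_addr.sin_family = AF_INET;
    max_addr.sin_port = htons(7400);
    inet_pton(AF_INET, "127.0.0.1", &max_addr.sin_addr);

    char buf[256];
    for (;;) {
        ssize_t n = read(serial, buf, sizeof buf);   // raw pulse data
        if (n <= 0) break;
        // Relay each chunk to Max for analysis, mapping and sonification.
        sendto(sock, buf, static_cast<size_t>(n), 0,
               reinterpret_cast<sockaddr*>(&max_addr), sizeof max_addr);
    }
    close(serial);
    close(sock);
    return 0;
}
```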

The prototype BrightHearts app for Distillery: Waveforming was written for Apple iPad (3rd generation) models running iOS 5.1.1, under a short-term Apple Developer licence that allowed for provisioning and testing of the app on multiple devices. The licensing arrangement for the app expired in July 2013, nine months after the artwork was acquired into the collection. A key aspect of archiving this artwork was the need to gain control of the app, and in advance of the expiration the Gallery implemented an archiving strategy developed through consultation between the conservator, the curator and the artist, who was in contact with the software developer.

The most challenging aspect of the acquisition for the collecting institution was the long-term management of the proprietary technology and software. At the time of acquisition the prototype BrightHearts app was capable of performing its function with external support but had no status as an independent, Apple-approved application; in fact, its completion and approval were still one and a half years away. It was also important for the iPad operating system to be locked down to iOS 5.1.1, as the prototype BrightHearts app for Distillery: Waveforming will only launch under this version. Through a consultative process it was agreed that, to administer the artwork as an authentic prototype, it was necessary to increase end-user control of the technology and software. This was achieved by jailbreaking the iPad devices and loading a non-expiring copy of the prototype BrightHearts app onto them (Pagliarino 2015).

MAINTAINING AN AUTHENTIC EXPERIENCE

Distillery: Waveforming has been acquired with the intention of maintaining authenticity, and as such the Gallery has archived a full complement of digital files for the artwork. Included in this is the source code for compiling the BrightHearts application with Xcode, and the source code for programming the pulse-sensing Arduino microcontrollers, for both of which Khut owns copyright (Khut 2012).

Source code – a program’s steps set out in human-readable text – specifies what is necessary to build software and make it function as intended, for example an app for an iPad. Source code has to be interpreted or compiled in order to create the necessary machine code, using a toolchain such as Xcode if the work is developed for Apple OS X or iOS. Acquiring source code is thought to be a means of future-proofing digital artworks (Collin and Perrin 2013, p. 52). This is undeniable, as without the source code there is very little that can be used as a structural guide. However, Laforet et al. (2010, p. 27) question whether source code can really act as a safety net for software artworks in an age where a strong commercial imperative drives the development of digital technologies at the expense of the conservation of data. The success of source code in future-proofing artworks relies on accurate interpretation and, in the context of an authentic experience, a complete lack of bias towards alternate or more efficient ways of programming the software to run the artwork as it was intended.

In cases where an artwork was developed using a suite of applications and programming languages, documenting source code becomes a complicated task in comparison to artworks where the source code is contained entirely within a single object-oriented programming environment. The programming for Distillery: Waveforming is distributed across three operating systems and four programming languages: Arduino for the sensor hardware; Objective-C and iOS 5 (via Xcode 4) for the BrightHearts app; OS X for the desktop computer that operates as a terminal emulator, running sensor data routing and analysis processes; and, most significantly, Max, the visual programming application used to perform the core analysis, mapping and sonification processes between the incoming heart rate data and the outgoing messages controlling the appearance of the various layers of the iPad visuals and sounds. Laforet et al. confirm that the difficulty faced with software artworks created by individual programmers is that:

These projects are relatively small efforts, putting the work created with it in a very fragile position. Unlike more popular software and languages, they are not backed up by an industry or a community demanding stability. The software only works under very specific conditions at a very specific time. Migrating such a work is a tremendous task, likely to involve the porting of a jungle of obscure libraries and frameworks. (Laforet et al. 2010, p. 29)

The complexity of combining multiple source codes from various programming platforms to work within one artwork significantly increases the risk of error in interpretation. In the case of Distillery: Waveforming it seems highly unlikely that source code alone would be sufficient to recreate the artwork in the future. Khut has recognised this and has considered alternate bespoke and existing documentation systems for both Distillery: Waveforming and BrightHearts for the purposes of preservation and representation.

Visually, the prototype BrightHearts app consists of 22 individually controlled graphic layers. Each layer comprises a single polygon that can be drawn as a solid shape or a ring; the edges of the shape can be blurred, and the colour can be varied according to hue, saturation, alpha and value (brightness). The layers are then blended using an ‘additive’ compositing process, so that the layers interact with one another – for example, a combination of overlapping red, green and blue shapes would produce white. This additive blending is a crucial aspect of the work’s visual aesthetic.
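
The additive compositing itself is easy to demonstrate. The minimal openFrameworks sketch below is offered as an illustration rather than the artwork’s own code: three overlapping red, green and blue discs drawn in additive blend mode sum towards white where they intersect, exactly as described above.

```cpp
// Minimal demonstration of additive compositing, as described above.
// Illustration only; built against openFrameworks.
#include "ofMain.h"

class AdditiveDemo : public ofBaseApp {
public:
    void draw() override {
        ofBackground(0);                      // additive blending over a dark ground
        ofEnableBlendMode(OF_BLENDMODE_ADD);  // colours sum where shapes overlap
        ofSetColor(255, 0, 0);
        ofDrawCircle(462, 384, 200);          // red
        ofSetColor(0, 255, 0);
        ofDrawCircle(562, 384, 200);          // green
        ofSetColor(0, 0, 255);
        ofDrawCircle(512, 300, 200);          // blue; triple overlap renders white
        ofDisableBlendMode();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new AdditiveDemo());
}
```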

While the visuals are rendered on the iPad by the app developed by Jason McDermott, using Xcode and the openFrameworks libraries, the actual moment-by-moment instructions regarding what shapes are to be drawn – their colour, size, brightness and so on – are all sent from the Max document.

The Max document, the top level ‘patch’ as it is referred to in the Max programming environment, is the heart of the work: the primary mediating layer between the sensor and display hardware that determines how changes in heart rate will control the appearance and sound of the work. It consists of an input section that receives sensor data, an analysis section that generates statistics from the heart rate measurements, and mapping layers that map these statistics to the various audio and graphic variables of colour, shape, volume etc.

The modular design methods used in the Max programming environment allow for the creation of modular units of code, referred to as ‘abstractions’ and ‘bpatchers’, that can be re-used in multiple instances to process many variables using a very simple set of instructions. The programming for Distillery: Waveforming makes extensive use of these modules, which are stored as discrete ‘.maxpat’ files within the Max folder on the Mac mini computer. These modules are used for many of the repetitive statistical processes used to analyse the participant’s heart rate, as well as the mappings used to create the highly layered visuals and sounds that are central to the aesthetic of Distillery: Waveforming.

In the analysis section of the programming, changes in average heart rate are calculated over different time frames: the average rate of the last four heart beats, the average rate of the last sixteen heart beats, then thirty-two heart beats and so on, as well as information about the direction of these changes, enabling the work to track when the participant’s heart rate is starting to increase or decrease.
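
The Max patch performs this analysis with its own modules, but the underlying idea can be paraphrased in a short, self-contained C++ sketch: maintain a rolling history of beats, average it over windows of 4, 16 and 32 beats, and compare short- and long-term averages to detect the direction of change. The class and values below are illustrative only, not extracted from the artwork.

```cpp
// Paraphrase of the multi-window heart rate analysis described above.
// The artwork performs this inside Max; this sketch only illustrates the idea.
#include <cstddef>
#include <deque>
#include <iostream>
#include <numeric>

class BeatAnalyser {
    std::deque<double> bpm_history;   // most recent instantaneous rates

    double average(std::size_t window) const {
        if (bpm_history.empty()) return 0.0;
        if (window > bpm_history.size()) window = bpm_history.size();
        return std::accumulate(bpm_history.end() - static_cast<long>(window),
                               bpm_history.end(), 0.0) / static_cast<double>(window);
    }

public:
    void addBeat(double interBeatIntervalMs) {
        bpm_history.push_back(60000.0 / interBeatIntervalMs);  // IBI -> bpm
        if (bpm_history.size() > 32) bpm_history.pop_front();  // keep last 32 beats
    }
    double avg4()  const { return average(4); }
    double avg16() const { return average(16); }
    double avg32() const { return average(32); }
    // Positive when the short-term average rises above the long-term one,
    // i.e. when the participant's heart rate is starting to increase.
    double trend() const { return avg4() - avg32(); }
};

int main() {
    BeatAnalyser analyser;
    for (double ibi : {850.0, 860.0, 870.0, 900.0, 920.0, 940.0}) {
        analyser.addBeat(ibi);        // slowing pulse: intervals lengthen
    }
    std::cout << "avg(4)=" << analyser.avg4() << " avg(32)=" << analyser.avg32()
              << " trend=" << analyser.trend() << "\n";
}
```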

Within Max, the twenty-two graphic layers of visuals used in the prototype BrightHearts app are each controlled by a corresponding ‘bpatcher’ layer-control module. Each of these ‘bpatcher’ modules contains 107 variables that determine how the parameters of its layer are controlled: that is, which aspect of the participant’s heart rate patterning it responds to, and how these changes are mapped to variables such as the diameter and colour of the layer in question.

Each layer-control module comprises sixteen sub-modules responsible for specific aspects of each layer’s appearance, such as diameter, hue, saturation and shape-type. In the programming of the layer-control modules, the boxes of numbers visible in each module describe how incoming data relating to heart rate is mapped to the behaviour of the layer – in this case its diameter – and what statistical information it will respond to, such as a running average of the last thirty-two heart beats, a normalised and interpolated waveform representing breath-related variations in heart rate, or the pulse of each heartbeat (Fig 2).

Figure 2: Four of the twenty-two layer-control mapping modules in Max – used to control the shapes drawn on the iPad by the BrightHearts (prototype) app.
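
The computation performed by one such sub-module can be paraphrased, in hedged form, as: scale an incoming heart rate statistic into the parameter’s output range, clamp it, and ease the parameter towards its target rather than jumping. The C++ sketch below illustrates this shape of mapping for a single layer’s diameter; the ranges and easing value are invented for the example, not taken from the artwork’s preset files.

```cpp
// Paraphrase of a single layer-control mapping of the kind shown in Figure 2.
// The real modules live inside Max 'bpatcher' files; this sketch only shows
// the shape of the computation, with illustrative values. Requires C++17.
#include <algorithm>
#include <iostream>

struct DiameterMapping {
    double inLow = 55.0,  inHigh = 85.0;    // expected heart rate range (bpm), illustrative
    double outLow = 40.0, outHigh = 600.0;  // diameter range in pixels, illustrative
    double easing = 0.1;                    // fraction of remaining distance per step
    double current = 40.0;

    double step(double bpm) {
        double t = std::clamp((bpm - inLow) / (inHigh - inLow), 0.0, 1.0);
        double target = outLow + t * (outHigh - outLow);
        current += easing * (target - current);   // smooth rather than jump
        return current;
    }
};

int main() {
    DiameterMapping ring;
    for (double bpm : {72.0, 70.0, 67.0, 64.0, 62.0}) {
        std::cout << "bpm " << bpm << " -> diameter " << ring.step(bpm) << "\n";
    }
}
```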

All of the variables controlling the appearance of each layer are stored and recalled as presets: a table of values describing which statistics each layer and variable responds to and how it interprets this input. These numbers are adjusted by the artist to produce the desired mapping and dynamic range, and are then stored in a ‘preset file’ in .json format that is read when the Max document is launched. These preset files document the precise mapping and scaling settings that determine the appearance and behaviour of each layer of the artwork. Together, these layered behaviours and the preset values that describe them produce the final interactive visual aesthetic of the artwork.

Figure 3: Example of one section of the .json ‘preset’ file containing preset data that is read by each of the graphics mapping modules – in this example showing all the parameters used to control the behaviour of the diameter for Layer 15.


For the artist, these preset tables are of central importance for documenting the appearance and interactive behaviour of the artwork for future interpretations, since it is these values that determine how the work responds to changes in the participant’s heart rate.

Strategies for hardware-independent migration and reinterpretation

Khut has begun the process of documenting and describing the interactive principles and behaviour of the artwork independent from current technologies to enable the work to be recreated in the future, based on the Variable Media Network approaches set out by Ippolito (2003a).

For creators working in ephemeral formats who want posterity to experience their work more directly than through second-hand documentation or anecdote, the variable media paradigm encourages creators to define their work independently from medium so that the work can be translated once its current medium is obsolete.
This requires creators to envision acceptable forms their work might take in new mediums, and to pass on guidelines for recasting work in a new form once the original has expired.

Variable Media Network, Definition – Ippolito, 2003b

For Khut, the essence of the artwork that would need to be preserved and recreated, independent of the specific technologies currently used, is the experience of having one’s breathing, nervous system, pulse and heart rate patterning represented in real time in an interactive audio visual experience, and the various optical and kinaesthetic sensations and correlations that are experienced during this interaction.

Taking an experience-centred approach, it is not the source code so much as the experience of the visuals and sounds changing in response to the live heart rate data that is most essential to recreating the artwork. The aesthetic experience of interacting with the artwork, and the manner in which it responds to changes in heart rate initiated through slow breathing and relaxation, is crucial to its authenticity.

The schematic approach: an open ‘score’ for reinterpretation

The simplest approach to documentation for future reinterpretation is the use of a very flexible set of instructions outlining the core interactive form and behaviour of the artwork. This approach leaves many aspects of the artwork’s appearance open to interpretation. Essentially what is preserved is the basic nature of the transformation – from breath, pulse and nervous system to colour, diameter, shape and sound. Such an approach would comprise the following instructions:

The visuals and sounds have been designed to respond to two forms of interaction:
1) gradual decreases in heart rate caused by a general increase in the participant’s ‘parasympathetic’ nervous system activity, which can be initiated through conscious relaxation of muscles in the face, neck, shoulders and arms,

2) breath-related variations in heart rate known as ‘respiratory sinus arrhythmia’ whereby slow inhalation causes an increase in heart rate, and slow exhalation causes a decrease in heart rate.

The result is a wave-like (sine) oscillation in heart rate, to which the work owes its name (wave forming).


Table 1 (below) lists the key heart rate variables and their mapping to the main visual and sonic representations. Each entry gives the feature extracted from the participant’s pulse and heart rate, the name of the modulation source controlling the sounds and visuals, the visual representation on the tablet surface, and the sonic representation heard through the headphones.

Pulsing heart beat
  • Modulation source: /beat/bang
  • Visual: Gently throbbing circular shapes that either contract subtly with each pulse, or darken slightly with each pulse, to create a visual effect of subtle pulsing.
  • Sonic: A deep and soft throbbing noise that gets louder and brighter as heart rate increases, and softer as heart rate decreases.

Breath-related variations in heart rate – normalised and rescaled to emphasise the slow, wave-like oscillations in heart rate that can be induced through recurrent slow breathing at around 6 breaths per minute
  • Modulation source: /IBI/dev-mean/4/normalised
  • Visual: Ring-shaped layers that expand when heart rate is increasing, and contract when heart rate is decreasing.
  • Sonic: Synthesized drone sound, modulated with a ‘phasor’ effect controlled by breath-related changes in heart rate.

Gradual changes in average heart rate (average of the last 32 beats), mediated by changes in the autonomic nervous system (stress/relaxation) and relaxation of the neck, shoulder and arm muscles
  • Modulation source: /IBI/how-slow/32
  • Visual: Colour of the background gradient – red for the fastest heart rates recorded since the start of the session, green for medium, and blue for the slowest average heart rate recorded since the start of the session.
  • Sonic: Pitch of a synthesized drone sound – crossfades through overlapping notes in the C melodic minor scale, from B6 to A2.

Threshold points triggered by decreases in heart rate, producing musical notes and bursts of colour
  • Modulation source: /IBI/how-slow/32
  • Visual: Circular, expanding bursts of colour from the centre, fading out when they reach the edge of the frame.
  • Sonic: Highly reverberant electric piano sounds triggered when a threshold is crossed, synchronised with the burst of colour; pitch descends in the C melodic minor scale according to the decrease in heart rate.

Resonant peak in heart rate variability – when participants sustain a slow, relaxed breath pattern at around 6 breaths per minute, frequency-domain analysis of heart rate variability will report the appearance of a ‘resonant peak’ around 0.1 Hz (6 breaths per minute). There are six thresholds: 25, 30, 35, 40, 45 and 50; each time one of these thresholds is crossed, a message is generated that is used to control an audio and visual event
  • Modulation source: /spectrum/resonant-peak-resonance
  • Visual: A large, soft-edged blue ring expands slowly out beyond the edges of the frame and then slowly fades away. Threshold colours: 25 = yellow; 30 = yellow-green; 35 = green-yellow; 40 = green; 45 = cyan; 50 = indigo.
  • Sonic: A very soft, muted and heavily reverberated piano note, with slow decay. Threshold notes: 25 = D#3; 30 = A#3; 35 = D#4; 40 = F4; 45 = G4; 50 = A#4.

Table 1: Key mappings in the heart rate-controlled artwork Distillery: Waveforming.

The most basic recreation of the work according to the scheme laid out in this table would still require instructions for obtaining and generating the modulation sources from the heart rate data: the algorithms that scale and interpolate the heart rate data and translate these beat-by-beat messages into smooth, continuous control signals.
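
The threshold behaviour itself reduces to a simple, reconstructible pattern: monitor a slowly varying statistic and fire one audio-visual event each time it crosses a fixed level. The C++ sketch below pairs each crossing with the colour and note listed in Table 1; the input series is invented, and the exact statistic and crossing direction are defined inside the Max patch, so upward crossings are shown here only for illustration.

```cpp
// Sketch of the threshold-crossing events listed in Table 1. The colour and
// note pairings come from the table; the input series is invented, and the
// real detection happens inside the Max patch.
#include <array>
#include <cstddef>
#include <iostream>
#include <string>

int main() {
    const std::array<double, 6> thresholds = {25, 30, 35, 40, 45, 50};
    const std::array<std::string, 6> colours = {
        "yellow", "yellow-green", "green-yellow", "green", "cyan", "indigo"};
    const std::array<std::string, 6> notes = {"D#3", "A#3", "D#4", "F4", "G4", "A#4"};

    double previous = 22.0;                       // illustrative input only
    for (double value : {27.0, 34.0, 42.0, 51.0}) {
        for (std::size_t i = 0; i < thresholds.size(); ++i) {
            // Fire one event per upward crossing of each threshold.
            if (previous < thresholds[i] && value >= thresholds[i]) {
                std::cout << "crossed " << thresholds[i] << ": ring colour "
                          << colours[i] << ", piano note " << notes[i] << "\n";
            }
        }
        previous = value;
    }
}
```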

The translation approach: calibration tools and resources

A second, more precise approach to reinterpretation provides a set of documents to help future developers interpret and translate the original code and .json preset data, to provide an aesthetic experience more closely aligned with the artwork at the time it was acquired by QAGOMA (Fig 3). This information is contained in a set of calibration images and accompanying tables that provide a crucial link for reinterpretations of the artwork, allowing future programmers to determine how values stored in the original preset files relate to the appearance of each of the work’s 22 graphic layers. Many aspects of the prototype BrightHearts app’s interpretation of these messages are not linear in their response, and it can be seen that the gradients for each shape blend differently according to hue (Fig 4). It is hoped that these calibration images will help future programmers to compare how their own code interprets the messages stored in the preset files against the behaviour and appearance of the original prototype BrightHearts iPad app.

Figure 4: Example of one of the calibration images and accompanying tables describing how the messages from the Max software are interpreted by the visualisation software of the BrightHearts (prototype) App on the iPad.



Summary of documentation strategy for future translation

Each documentation element is listed below, with a description of what it covers.

Broad schematic mapping of real-time heart rate statistics to sounds and visuals
  • Describes the basic interaction concept and interaction experience: images and sounds controlled by slow changes in heart rate that can be influenced through slow breathing and relaxation/excitement.

Experiential aims and conditions for interaction
  • Describes the environmental conditions prescribed by the artist to ensure optimum conditions for interaction, i.e. minimising audio-visual distractions.

Documentation of the Max patch
  • Annotations in each section of the Max code – the subsections (‘subpatches’, ‘abstractions’ and ‘bpatchers’) of the main file – describe the flow of information through each section. Each section is documented as a numbered image file, accompanied by notes describing how the information is being modified/transformed. The sections cover: heart rate analysis; sounds; and visuals – miscellaneous top-level controls that manage the storage and retrieval of preset data, the transition to ‘live’ visuals, and overall size, hue, position etc.

Annotated table of preset values, extracted from the .json presets, describing the mapping of heart rate information to the behaviour of the visuals
  • Describes how each layer responds to the various heart rate statistics, and the quality of the response over time (i.e. ‘easing’, non-linear scaling etc.).

iPad visuals – annotated calibration images and tables
  • Indicate how the visuals should look given specific layer-control messages (diameter, hue, alpha etc.), describing the idiosyncrasies of the visualisation code.

Table 2: George Khut’s documentation strategy for Distillery: Waveforming.


Conclusions

For Khut, Distillery: Waveforming is foremost an experiential artwork, and his ideas about documentation therefore focus on capturing its functionality and the aesthetics of the interaction. Khut sees the fundamental element of Distillery: Waveforming as something other than the source code and the technical hardware: namely, the mappings between breath- and relaxation-mediated changes in heart rate and the appearance of the sounds and visuals, and how these mappings give form to the subject’s experience of interactions between their breath, heart rate and autonomic nervous system.

The modular Max patch programming and the presets in the .json file form, for the artist, the compositional heart of Distillery: Waveforming. This programming draws the visuals in response to the real-time heart rate data: the key to the artwork. By further documenting the interactive principles independently of the current technology, drawing on approaches proposed in the Variable Media Questionnaire, Khut has developed reference documents that allow for the translation of the original preset data and calibration for future interpretations of the visualisation software. In this way Khut can describe the artwork with greater clarity in non-vernacular terms, opening up opportunities for the artwork to be recreated in alternate modes.

As an artwork in the QAGOMA collection, Distillery: Waveforming sets a precedent as the first prototype-artwork to be acquired. Technology-based digital artworks are prone to being superseded at a rapid pace and attempting to manage even the medium-term future for such artworks is perplexing. To gain the assistance of the artist at the time of acquisition is constructive and very beneficial, but to secure the commitment of the artist to engage in collaborative, long-term conservation strategies is extraordinary and this has resulted in the Gallery acquiring an unparalleled archival resource (Pagliarino 2015, p. 74). Although the Gallery maintains an interest and intention to preserve Distillery: Waveforming in its original developmental state, providing clear evidence of Khut’s ‘iterative’, evolving art practice, the archival resource provides scope to reinterpret the artwork at some point in the future when the original technology no longer functions as intended.

Through this process of defining the experience, the artist and the institution have collaboratively addressed their common and divergent interests in the future care of Distillery: Waveforming. These differing views have created an opportunity to better understand the artwork and its position as an asset within a state collection and a physical, historical link to an ongoing, evolving artistic practice. Khut’s continued interest in the preservation of Distillery: Waveforming and his participation in dialogues about this artwork and other iterations of the biofeedback project have provided the Gallery with an extraordinary reference and flexibility to manage and display the artwork long into the future.


Works Cited

Collin, JD and Perrin, V 2013, ‘Collecting Digital Art: Reflections and Paradoxes – Ten Years’ Experience at the Espace multimedia gantner’, in Bernhard Serexhe (ed.), Digital Art Conservation – Preservation of Digital Art: Theory and Practice, ZKM Centre for Art and Media, Karlsruhe, Germany.

Cycling ’74 2012, Max (version 6.0.8), computer software, Walnut, California, accessed 22 September 2014.

Ippolito, Jon 2003a, ‘Accommodating the Unpredictable’, in The Variable Media Approach: Permanence Through Change, Guggenheim Museum Publications, New York, and The Daniel Langlois Foundation for Art, Science & Technology, Montreal, pp. 46-53, accessed 22 September 2014.

Ippolito, Jon 2003b, ‘Variable Media Network, Definition’, accessed 22 September 2014.

iTunes 2014, ‘BrightHearts by Sensorium Health’, viewed 29 August 2014.

Khut, George Poonkhin 2014, ‘On Distillery: Waveforming (2012)’, Born Digital and Cultural Heritage conference, Melbourne, Australia, 19-20 June 2014, viewed 29 August 2014.

Khut, George Poonkhin 2014, personal communication, interview 5th September 2014.

Khut, George Poonkhin 2012, Distillery: Waveforming 2012 user’s manual (draft), in the possession of the Queensland Art Gallery, Brisbane.

Khut, George Poonkhin, Morrow, A & Watanabe, MY 2011, ‘The BrightHearts Project: a new approach to the management of procedure-related paediatric anxiety’, Proceedings of OzCHI 2011: The Body In Design. Design, Culture & Interaction, The Australasian Computer Human Interaction Conference, Canberra, Australia, 28-29 November 2011, pp. 17-21.

Khut, George Poonkhin & Muller, L 2005, ‘Evolving creative practice: a reflection on working with audience experience in Cardiomorphologies’, in Lyndal Jones, Pauline Anastasious, Rhonda Smithies and Karen Trist (eds), Vital Signs: Creative Practice and New Media Now, Australian Centre for the Moving Image, Melbourne, Australia, RMIT Publishing.

Laforet, Anne, Mansoux, Aymeric & de Valk, Marloes 2010, ‘Rock, paper, scissors and floppy disks’, in Annet Dekker (ed.), Archive 2020: Sustainable Archiving of Born Digital Cultural Content, Virtueel Platform, pp. 25-36, viewed 4 February 2014.

McDermott, J 2013, ‘Bright Hearts (2011)’, jmcd, viewed 4 September 2013, <http://www.jasonmcdermott.net/portfolio/bright-hearts>

Pagliarino, Amanda 2015, ‘Life beyond legacy: George Poonkhin Khut’s Distillery: Waveforming’, AICCM Bulletin, vol. 36, no. 1, pp. 67-75.

Queensland Art Gallery / Gallery of Modern Art, 2012, National New Media Award 2012 – George Poonkhin Khut 2012 NMA winner, QAGOMA, viewed 15 October 2014.


Bios

Amanda PAGLIARINO is Head of Conservation at the Queensland Art Gallery / Gallery of Modern Art, Brisbane.  Since 2003 she has worked on the conservation of audiovisual and electronic artworks in the Gallery’s collection. Amanda received a Bachelor of Visual Arts from the Queensland University of Technology in 1991 and a Bachelor of Applied Science, Conservation of Cultural Material from the University of Canberra in 1995.

George Poonkhin Khut is an artist and interaction designer working across the fields of electronic art, interaction design and arts-in-health. He lectures in art and interaction design at UNSW Art & Design (University of New South Wales, Faculty of Art & Design) in Sydney, Australia. Khut’s body-focussed interactive and participatory artworks use bio-sensing technologies to re-frame experiences of embodiment, health and subjectivity. In addition to presenting his works in galleries and museums, George has been developing interactive and participatory art through exhibitions and research projects in hospitals, starting with “The Heart Library Project” at St Vincent’s Public Hospital in 2009, and more recently with the “BrightHearts” research project – a collaboration with Dr Angie Morrow, Staff Specialist in Brain Injury at The Children’s Hospital at Westmead, Kids Rehab, evaluating the efficacy of his interactive artworks as tools for helping to reduce the pain and anxiety experienced by children during painful and anxiety-provoking procedures.


Volume 27, 2016

Themed Issue: Born Digital Cultural Heritage

Edited by Angela Ndalianis & Melanie Swalwell

Introduction: Born Digital Heritage – Angela Ndalianis & Melanie Swalwell

  1. It Is What It Is, Not What It Was: Making Born Digital Heritage – Henry Lowood
  2. Defining the Experience: George Poonkhin Khut’s Distillery: Waveforming, 2012 – Amanda Pagliarino & George Poonkhin Khut
  3. There and Back Again: A Case History of Writing The Hobbit – Veronika Megler
  4. Participatory Historians in Digital Cultural Heritage Process: Monumentalization of the First Finnish Commercial Computer Game – Jaakko Suominen and Anna Sivula
  5. Retaining Traces of Composition in Digital Manuscript Collections: a Case for Institutional Proactivity – Millicent Weber

It Is What It Is, Not What It Was – Henry Lowood

Abstract: The preservation of digital media in the context of heritage work is both seductive and daunting. The potential replication of human experiences afforded by computation and realised in virtual environments is the seductive part. The work involved in realising this potential is the daunting side of digital collection, curation, and preservation. In this lecture, I consider two questions. First, is the lure of perfect capture of data, or the reconstruction of “authentic” experiences of historical software, an attainable goal? And if not, how might reconsidering the project as moments of enacting, rather than re-enacting, provide a different impetus for making born digital heritage?

Keynote address originally delivered at the Born Digital and Cultural Heritage Conference, Melbourne, 19 June 2014

Let’s begin with a question. When did libraries, archives, and museums begin to think about software history collections? The answer: In the late 1970s. The Charles Babbage Institute (CBI) and the History of Computing Committee of the American Federation of Information Processing Societies (AFIPS), soon to be a sponsor of CBI, were both founded in 1978. The AFIPS committee produced a brochure called “Preserving Computer-Related Source Materials.” Distributed at the National Computer Conference in 1979, it is the earliest statement I have found about preserving software history. It says,

If we are to fully understand the process of computer and computing developments as well as the end results, it is imperative that the following material be preserved: correspondence; working papers; unpublished reports; obsolete manuals; key program listings used to debug and improve important software; hardware and componentry engineering drawings; financial records; and associated documents and artifacts. (“Preserving …” 4)

Mostly paper records. The recommendations say nothing about data files or executable software, only nodding to the museum value of hardware artefacts for “esthetic and sentimental value.” The brochure says that artefacts provide “a true picture of the mind of the past, in the same way as the furnishings of a preserved or restored house provides a picture of past society.” One year later, CBI received its first significant donation of books and archival documents from George Glaser, a former president of AFIPS. Into the 1980s, history of computing collections meant documentation: archival records, publications, ephemera and oral histories.

Software preservation trailed documentation and historical projects by a good two decades. The exception was David Bearman, who left the Smithsonian in 1986 to create a company called Archives & Museum Informatics (AMI). He began publishing the Archival Informatics Newsletter in 1987 (later called Archives & Museum Informatics). As one of its earliest projects, AMI drafted policies and procedures for a “Software Archives” at the Computer History Museum (CHM), then located in Boston. By the end of 1987, Bearman published the first important study of software archives under the title Collecting Software: A New Challenge for Archives & Museums. (Bearman, Collecting Software; see also Bearman, “What Are/Is Informatics?”)

In his report, Bearman alternated between frustration and inspiration. Based on a telephone survey of companies and institutions, he wrote that “the concept of collecting software for historical research purposes had not occurred to the archivists surveyed; perhaps, in part, because no one ever asks for such documentation!” (Bearman, Collecting Software 25-26.) He learned that nobody he surveyed was planning software archives. Undaunted, he produced a report that carefully considered software collecting as a multi-institutional endeavor, drafting collection policies and selection criteria, use cases, a rough “software thesaurus” to provide terms for organizing a software collection, and a variety of practices and staffing models. Should some institution accept the challenge, here were tools for the job.

Well, here we are, nearly thirty years later. We can say that software archives and digital repositories finally exist. We have made great progress in the last decade with respect to repository technology and collection development. Looking back to the efforts of the 1980s, one persistent issue raised as early as the AFIPS brochure of 1979 is the relationship between collections of historical software and archival documentation about that software. This is an important issue. Indeed, it is today, nearly forty years later, still one of the key decision points for any effort to build research collections aiming to preserve digital heritage or serve historians of software. Another topic that goes back to Bearman’s report is a statement of use cases for software history. Who is interested in historical software and what will they do with it? Answers to this fundamental question must continue to drive projects in digital preservation and software history.

As we consider the potential roles to be played by software collections in libraries and museums, we immediately encounter vexing questions about how researchers of the future will use ancient software. Consider that using historical software now in order to experience it in 2014 and running that software in 2014 to learn what it was like when people operated it thirty years ago are two completely different use cases. This will still be true in 2050. This may seem like an obvious point, but it is important to understand its implications. An analogy might help. I am not just talking about the difference between watching “Gone with the Wind” at home on DVD versus watching it in a vintage movie house in a 35mm print – with or without a live orchestra. Rather I mean the difference between my experience in a vintage movie house today – when I can find one – and the historical experience of, say, my grandfather during the 1930s. My experience is what it is, not what his was. So much of this essay will deal with the complicated problem of enacting a contemporary experience to re-enact a historical experience and what it has to do with software preservation. I will consider three takes on this problem: the historian’s, the media archaeologist’s, and the re-enactor’s.

Take 1. The Historian

Take one. The historian. Historians enact the past by writing about it. In other words, historians tell stories. This is hardly a revelation. Without meaning to trivialize the point, I cannot resist pointing out that “story” is right there in “hi-story” or that the words for story and history are identical in several languages, including French and German. The connections between story-telling and historical narrative have long been a major theme in writing about the methods of history, that is, historiography. In recent decades, this topic has been mightily influenced by the work of Hayden White, author of the much-discussed Metahistory: The Historical Imagination in Nineteenth-Century Europe, published in 1973.

White’s main point about historians is that History is less about subject matter and source material and more about how historians write.

He tells us that historians do not simply arrange events culled from sources in correct chronological order. Such arrangements White calls Annals or Chronicles. The authors of these texts merely compile lists of events. The work of the historian begins with the ordering of these events in a different way. White writes in The Content of the Form that in historical writing, “the events must be not only registered within the chronological framework of their original occurrence but narrated as well, that is to say, revealed as possessing a structure, an order of meaning, that they do not possess as mere sequence.” (White, Content of the Form 5) How do historians do this? They create narrative discourses out of sequential chronicles by making choices. These choices involve the form, effect and message of their stories. White puts choices about form, for example, into categories such as argument, ideology and emplotment. There is no need in this essay to review all of the details of every such choice. The important takeaway is that the result of these choices by historians is sense-making through the structure of story elements, use of literary tropes and emphasis placed on particular ideas. In a word, plots. White thus gives us the enactment of history as a form of narrative or emplotment that applies established literary forms such as comedy, satire, and epic.

In his book Figural Realism: Studies in the Mimesis Effect, White writes about the “events, persons, structures and processes of the past” that “it is not their pastness that makes them historical. They become historical only in the extent to which they are represented as subjects of a specifically historical kind of writing.” (White, Figural Realism 2.) It is easy to take away from these ideas that history is a kind of literature. Indeed, this is the most controversial interpretation of White’s historiography.

My purpose in bringing Hayden White to your attention is to insist that there is a place in game and software studies for this “historical kind of writing.” I mean writing that offers a narrative interpretation of something that happened in the past. Game history and software history need more historical writing that has a point beyond adding events to the chronicles of game development or putting down milestones of the history of the game industry. We are only just beginning to see good work that pushes game history forward into historical writing and produces ideas about how these historical narratives will contribute to allied works in fields such as the history of computing or the history of technology more generally.

Allow me one last point about Hayden White as a take on enactment. Clearly, history produces narratives that are human-made and human-readable. They involve assembling story elements and choosing forms. How then do such stories relate to actual historical events, people, and artifacts? Despite White’s fondness for literary tropes and plots, he insists that historical narrative is not about imaginary events. If historical methods are applied properly, the resulting narrative according to White is a “simulacrum.” He writes in his essay on “The Question of Narrative in Contemporary Historical Theory,” that history is a “mimesis of the story lived in some region of historical reality, and insofar as it is an accurate imitation, it is to be considered a truthful account thereof.” (White, “The Question of Narrative …” 3.) Let’s keep this idea of historical mimesis in mind as we move on to takes two and three.

Take 2. The Media Archaeologist

My second take is inspired by the German media archaeologist Wolfgang Ernst. As with Hayden White, my remarks will fall far short of a critical perspective on Ernst’s work. I am looking for what he says to me about historical software collections and the enactment of media history.

Hayden White put our attention on narrative; enacting the past is storytelling. Ernst explicitly opposes Media Archaeology to historical narrative. He agrees in Digital Memory and the Archive that “Narrative is the medium of history.” By contrast, “the technological reproduction of the past … works without any human presence because evidence and authenticity are suddenly provided by the technological apparatus, no longer requiring a human witness and thus eliminating the irony (the insight into the relativity) of the subjective perspective.” (Ernst, Loc. 1053-1055) Irony, it should be noted, is one of White’s favourite tropes for historical narrative.

White tells us that historical enactment is given to us as narrative mimesis, with its success given as the correspondence of history to some lived reality. Ernst counters by giving us enactment in the form of playback.

In an essay called “Telling versus Counting: A Media-Archaeological Point of View,” Ernst plays with the notion that, “To tell as a transitive verb means ‘to count things’.” The contrast with White here relates to the difference between the German words erzählen (narrate) and zählen (count), but you also find it in English: recount and count. Ernst describes historians as recounters: “Modern historians … are obliged not just to order data as in antiquaries but also to propose models of relations between them, to interpret plausible connections between events.” (Ernst, Loc. 2652-2653) In another essay, aptly subtitled “Method and Machine versus the History and Narrative of Media,” Ernst adds that mainstream histories of technology and mass media as well as their counter-histories are textual performances that follow “a chronological and narrative ordering of events.” He observes succinctly that, “It takes machines to temporarily liberate us from such limitations.” (Ernst, Loc. 1080-1084)

Where do we go with Ernst’s declaration in “Telling versus Counting” that “There can be order without stories”? We go, of course, directly to the machines. For Ernst, media machines are transparent in their operation, an advantage denied to historians. We play back historical media on historical machines, and “all of a sudden, the historian’s desire to preserve the original sources of the past comes true at the sacrifice of the discursive.” We are in that moment directly in contact with the past.

In “Method and Machine”, Ernst offers the concept of “media irony” as a response to White’s trope of historical irony. He says,

Media irony (the awareness of the media as coproducers of cultural content, with the medium evidently part of the message) is a technological modification of Hayden White’s notion that “every discourse is always as much about discourse itself as it is about the objects that make up its subject matter.” (Ernst, Loc. 1029-1032)

As opposed to recounting, counting in Ernst’s view has to do with the encoding and decoding of signals by media machines. Naturally, humans created these machines. This might be considered as another irony, because humans have thereby “created a discontinuity with their own cultural regime.” We are in a realm that replaces narrative with playback as a form of direct access to a past defined by machine sequences rather than historical time. (Ernst, Loc. 1342-1343)

Ernst draws implications from media archaeology for his closely connected notion of the multimedia archive. In “Method and Machine,” he says, “With digital archives, there is, in principle, no more delay between memory and the present but rather the technical option of immediate feedback, turning all present data into archival entries and vice versa.” In “Telling versus Counting,” he portrays “a truly multimedia archive that stores images using an image-based method and sound in its own medium … And finally, for the first time in media history, one can archive a technological dispositive in its own medium.” (Ernst, Loc. 1745-1746; 2527-2529) Not only is the enactment of history based on playback inherently non-discursive, but the very structure of historical knowledge is written by machines.

With this as background, we can turn to the concrete manifestation of Ernst’s ideas about the Multimedia Archive. This is the lab he has created in Berlin. The website for Ernst’s lab describes The Media Archaeological Fundus (MAF) as “a collection of various electromechanical and mechanical artefacts as they developed throughout time. Its aim is to provide a perspective that may inspire modern thinking about technology and media within its epistemological implications beyond bare historiography.” (Media Archaeological Fundus) Ernst explained the intention behind the MAF in an interview with Lori Emerson as deriving from the need to experience media “in performative ways.” So he created an assemblage of media and media technologies that could be operated, touched, manipulated and studied directly. He said in this interview, “such items need to be displayed in action to reveal their media essentiality (otherwise a medium like a TV set is nothing but a piece of furniture).” (Owens) Here is media archaeology’s indirect response to the 1979 AFIPS brochure’s suggestion that historical artifacts serve a purpose similar to furnishings in a preserved house.

The media-archaeological take on enacting history depends on access to artifacts and, in its strongest form, on their operation. Even when its engagement with media history is reduced to texts, these must be “tested against the material evidence.” This is the use case for Playback as an enactment of software history.

Take 3. The Re-enactor

Take three. The Re-enactor. Authenticity is an important concept for digital preservation. As in any archive, a key feature of a digital archive over the preservation life-cycle of its documents and software objects is the auditing and verification of authenticity. Access also involves authenticity, as any discussion of emulation or virtualization will bring up the question of fidelity to an historical experience of using software.

John Walker (of Autodesk and Virtual Reality fame) created a workshop called Fourmilab to work on personal projects such as an on-line museum “celebrating” Charles Babbage’s Analytical Engine. This computer programming heritage work includes historical documents and a Java-based emulator of the Engine. Walker says, “Since we’re fortunate enough to live in a world where Babbage’s dream has been belatedly realised, albeit in silicon rather than brass, we can not only read about The Analytical Engine but experience it for ourselves.” The authenticity of this experience – whatever that means for a machine that never existed – is important to Walker. In a 4500-word essay titled “Is the Emulator Authentic?”, he tells us that, “In order to be useful, an emulator program must be authentic—it must faithfully replicate the behaviour of the machine it is emulating.” By extension, the authenticity of a preserved version of the computer game DOOM in a digital repository could be audited by verifying that it can properly run a DOOM demo file. The same is true for Microsoft Word and a historical document in the Word format. This is a machine-centered notion of authenticity; we used it in the second Preserving Virtual Worlds project as a solution to the significant properties problem for software. (Walker, “Introduction;” Walker, “Analytical Engine.”)

All well and good. However, I want to address a different authenticity. Rather than judging authenticity in terms of playback, I would like to ask what authenticity means for the experience of using software. Another way of putting this question is to ask what we are looking for in the re-enactment of historical software use. So we need to think about historical re-enactment.

I am not a historical re-enactor, at least not the kind you are thinking of. I have never participated in the live recreation or performance of a historical event. Since I have been playing historical simulations – a category of boardgames – for most of my life, perhaps you could say that I re-enact being a historical military officer by staring at maps and moving units around on them. It’s not the same thing as wearing period uniforms and living the life, however.

Anyway, I need a re-enactor. In his 1998 book Confederates in the Attic, Tony Horwitz described historical re-enactment in its relationship to lived heritage. (Horwitz) His participant-journalist reportage begins with a chance encounter with a group of “hard-core” Confederate re-enactors. Their conversation leads Horwitz on a year-long voyage through the American South. A featured character in Confederates in the Attic is the re-enactor Robert Lee Hodge, a waiter turned Confederate officer. He took Horwitz under his wing and provided basic training in re-enactment. Hodge even became a minor celebrity due to his role in the book.

Hodge teaches Horwitz the difference between hard-core and farby (i.e., more casual) re-enactment. He tells Horwitz about dieting to look sufficiently gaunt and malnourished, the basics of “bloating” to resemble a corpse on the battlefield, what to wear, what not to wear, what to eat, what not to eat, and so on. It’s remarkable how little time he spends on martial basics. One moment sticks out for me. During the night after a hard day of campaigning, Horwitz finds himself in the authentic situation of being wet, cold and hungry. He lacks a blanket, so he is given basic instruction in the sleeping technique of the Confederate infantryman: “spooning.” According to the re-enactor Scott Cross, “Spooning is an old term for bundling up together in bed like spoons placed together in the silver chest.” (Cross) Lacking adequate bedding and exposed to the elements, soldiers bunched up to keep warm. So that’s what Horwitz does, not as an act of mimesis or performance per se, but in order to re-experience the reality of Civil War infantrymen.

It interested me that, of all the re-enactment activities Horwitz put himself through, spooning reveals a deeper commitment to authenticity than any of the combat performances he describes. It’s uncomfortable and awkward, and so requires dedication and persistence. Sleep becomes self-conscious, not just in order to stick with the activity, but because the point of it is to recapture a past experience of sleeping on the battlefield. Since re-enacting a battle requires greater numbers of participants than sleeping does, more farbs (the less dedicated re-enactors) show up and the general level of engagement declines. During staged battles, spectators, scripting, confusion and accidents all interfere with the experience. Immersion breaks whenever dead soldiers pop up on the command, “resurrect.” In other words, performance takes primacy over the effort to re-experience. It is likely that many farbs dressed up for battle are content to find a hotel to sleep in.

Specific attention to the details of daily life might be a reflection of recent historical work that emphasizes social and cultural histories of the Civil War period, rather than combat histories. But that’s not my takeaway from the spooning re-enactors. Rather, it’s the standard of authenticity that goes beyond performance of a specific event (such as a battle) to include life experience as a whole. Horwitz recalled that,

Between gulps of coffee—which the men insisted on drinking from their own tin cups rather than our ceramic mugs—Cool and his comrades explained the distinction. Hardcores didn’t just dress up and shoot blanks. They sought absolute fidelity to the 1860s: its homespun clothing, antique speech patterns, sparse diet and simple utensils. Adhered to properly, this fundamentalism produced a time travel high, or what hardcores called a ‘period rush.’ (Horwitz, Loc. 153-157)

Stephen Gapps, an Australian curator, historian, and re-enactor, has spoken of the “extraordinary lengths” re-enactors go to in order to “acquire and animate the look and feel of history.” Hard-core is not just about marching, shooting and swordplay. I wonder what a “period rush” might be for the experience of playing Pitfall! in the mid-21st century. Shag rugs? Ambient New Wave radio? Caffeine-free cola? Will future re-enactors of historical software seek this level of experiential fidelity? Gapps, again: “Although reenactors invoke the standard of authenticity, they also understand that it is elusive – worth striving for, but never really attainable.” (Gapps 397)

Re-enactment offers a take on born-digital heritage that proposes a commitment to lived experience. I see some similarity here with the correspondence to lived historical experience in White’s striving for a discursive mimesis. Yet, like media archaeology, re-enactment puts performance above discourse, though it is the performance of humans rather than machines.

Playing Pitfalls

We now have three different ways to think about potential uses of historical software and born digital documentation. I will shift my historian’s hat to one side of my head now and slide up my curator’s cap. If we consider these takes as use cases, do they help us decide how to allocate resources to acquire, preserve, describe and provide access to digital collections?

In May 2013, the National Digital Information Infrastructure and Preservation Program (NDIIPP) of the U.S. Library of Congress (henceforth: LC) held a conference called Preserving.exe. The agenda was to articulate the “problems and opportunities of software preservation.” In my contribution to the LC conference report issued a few months later, I described three “lures of software preservation.” (Lowood) These are potential pitfalls as we move from software collections to digital repositories and from there to programs of access to software collections. The second half of this paper will be an attempt to introduce the three lures of software preservation to the three takes on historical enactment.

  1. The Lure of the Screen

Let’s begin with the Lure of the Screen. This is the idea that what counts in digital media is what is delivered to the screen. This lure pops up in software preservation when we evaluate significant properties of software as surface properties (graphics, audio, haptics, etc.).

This lure of the screen is related to what media studies scholars such as Nick Montfort, Mark Sample and Matt Kirschenbaum have dubbed (in various but related contexts) “screen essentialism.” If the significant properties of software are all surface properties, then our perception of interaction with software tells us all we need to know. We check graphics, audio, responses to our use of controllers, etc., and if they look and act as they should, we have succeeded in preserving an executable version of historical software. These properties are arguably the properties that designers consider as the focus of user interaction and they are the easiest to inspect and verify directly.

The second Preserving Virtual Worlds project was concerned primarily with identifying significant properties of interactive game software. On the basis of several case sets and interviews with developers and other stakeholders, we concluded that isolating surface properties, such as image colourspace, is not a particularly useful approach for game software, even though such properties are significant for other media such as static images. With interactive software, significance appears to be variable and contextual, as one would expect from a medium in which content is expressed through a mixture of design and play, procedurality and emergence. It is especially important to note that software abstraction levels are not “visible” on the surface of play. It is difficult if not impossible to monitor procedural aspects of game design and mechanics, programming and technology by inspecting properties expressed on the screen.

The preservation lifecycle for software is likely to include data migration. Access to migrated software will probably occur through emulation. How do we know when our experience of this software is affected by these practices? One answer is that we audit significant properties, and as we now know, it will be difficult to predict which characteristics are significant. An alternative or companion approach for auditing the operation of historical software is to verify the execution of data files. The integrity of the software can be evaluated by comparison to documented disk images or file signatures such as hashes or checksums. However, when data migration or delivery environments change the software or its execution environment, this method is inadequate. We must evaluate software performance. Instead of asking whether the software “looks right,” we can check if it runs verified data-sets that meet the specifications of the original software. Examples range from word processing documents to saved game and replay files. Of course, visual inspection of the content plays a role in verifying execution by the software engine; failure will not always be clearly indicated by crashes or error messages. Eliminating screen essentialism does not erase surface properties altogether.
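The fixity half of this audit is simple enough to sketch. The following Python fragment is a minimal illustration of the idea rather than a description of any repository’s actual tooling (production systems generally rely on established fixity frameworks such as BagIt-style manifests), and the object names and recorded digests are hypothetical. It verifies that stored objects still match the digests documented at accession, and it deliberately stops short of the harder problem raised above: verifying execution after migration or under emulation.

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_fixity(manifest: dict[str, str], base_dir: Path) -> dict[str, bool]:
    """Compare each stored object against the digest recorded at accession."""
    return {name: sha256_of(base_dir / name) == expected
            for name, expected in manifest.items()}

# Hypothetical accession manifest: object name -> documented SHA-256 digest.
manifest = {
    "pitfall.a26": "a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2",
    "doom_demo.lmp": "0011223344556677889900112233445566778899001122334455667788990011",
}
for name, intact in audit_fixity(manifest, Path("objects")).items():
    print(name, "intact" if intact else "FAILED fixity check")

A passing audit of this kind establishes only that the bits are unchanged; it says nothing about whether a migrated or emulated environment still executes those bits as the original system did, which is why the performance test described above remains necessary.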

The three takes compel us to think about the screen problem in different ways. First, the Historian is not troubled by screen essentialism. His construction of a narrative mimesis invokes a selection of source materials that may or may not involve close reading of personal gameplay, let alone focus on surface properties. On the other hand, the Re-enactor’s use of software might lead repositories to fret about what the user sees, hears and feels. It makes sense with this use case to think about the re-enactment as occurring at the interface. If a repository aims to deliver a re-enacted screen experience, it will need to delve deeply into questions of significant properties and their preservation.

Screen essentialism is also a potential problem for repositories that follow the path of Media Archaeology. It is unclear to me how a research site like the MAF would respond to digital preservation practices based on data migration and emulation. Can repositories meet the requirements of media archaeologists without making a commitment to preservation of working historical hardware to enable playback from original media? It’s not just that correspondence to surface characteristics is a significant property for media archaeologists. Nor is the Lure of the Screen a criticism of Media Archaeology. I propose instead that it is a research problem. Ernst’s vision of a Multimedia Archive is based on the idea that media archaeology moves beyond playback to reveal mechanisms of counting. This machine operation clearly is not a surface characteristic. Ernst would argue, I think, that this counting is missed by an account of what is seen on the screen. So let’s assign the task of accounting for counting to the Media Archaeologist, which means showing us how abstraction layers in software below the surface can be revealed, audited and studied.

  2. The Lure of the Authentic Experience

I have already said quite a bit about authenticity. Let me explain now why I am sceptical about an authentic experience of historical software, and why this is an important problem for software collections.

Everyone in game or software studies knows about emulation. Emulation projects struggle to recreate an authentic experience of operating a piece of software such as playing a game. Authenticity here means that the use experience today is like it was. The Lure of the Authentic Experience tells digital repositories at minimum not to preserve software in a manner that would interfere with the production of such experiences. At maximum, repositories deliver authentic experiences, whether on-site or on-line. A tall order. In the minimum case, the repository provides the software and collects documentation of hardware specifications, drivers and support programs. Researchers use this documentation to reconstruct the historical look-and-feel of software to which they have access. In the maximum case, the repository designs and builds access environments. Using the software authentically would then probably mean a trip to the library or museum with historical or bespoke hardware. The reading room becomes the site of the experience.

I am not happy to debunk the Authentic Experience. Authenticity is a concept fraught not just with intellectual issues, but with registers ranging from nostalgia and fandom to immersion and fun. It is a minefield. The first problem is perhaps an academic point, but nonetheless important: authenticity is always constructed. Whose lived experience counts as “authentic” and how has it been documented? Is the best source a developer’s design notes? The memory of someone who used the software when it was released? A marketing video? The researcher’s self-reflexive use in a library or museum? If a game was designed for kids in 1985, do you have to find a kid to play it in 2050? In the case of software with a long history, such as Breakout or Microsoft Word, how do we account for the fact that the software was used on a variety of platforms – do repositories have to account for all of them? For example, does the playing of DOOM “death match” require peer-to-peer networking on a local area network, a mouse-and-keyboard control configuration and a CRT display? There are documented cases of different configurations of hardware: trackballs, hacks that enabled multiplayer via TCP/IP, monitors of various shapes and sizes, and so on. Which differences matter?

A second problem is that the Authentic Experience is not always that useful to the researcher, especially the researcher studying how historical software executes under the hood. The emulated version of a software program often compensates for its lack of authenticity by offering real-time information about system states and code execution. A trade-off for losing authenticity thus occurs when the emulator shows the underlying machine operation, the counting, if you will. What questions will historians of technology, practitioners of code studies or game scholars ask about historical software? I suspect that many researchers will be as interested in how the software works as in a personal experience deemed authentic. As for more casual appreciation, the Guggenheim’s Seeing Double exhibition and Margaret Hedstrom’s studies of emulation suggest that exhibition visitors actually prefer reworked or updated experiences of historical software. (Hedstrom, Lee, et al.; Jones)

This is not to say that original artefacts – both physical and “virtual” – will not be a necessary part of the research process. Access to original technology provides evidence regarding its constraints and affordances. I put this to you not as a “one size fits all” decision but as an area of institutional choice based on objectives and resources.

The Re-enactor, of course, is deeply committed to the Authentic Experience. If all we offer is emulation, what do we say to him, besides “sorry”? Few digital repositories will be preoccupied with delivering authentic experiences as part of their core activity. The majority are likely to consider a better use of limited resources to be ensuring that validated software artefacts and contextual information are available on a case-by-case basis to researchers who do the work of re-enactment. Re-enactors will make use of documentation. Horwitz credits Robert Lee Hodge with an enormous amount of research time spent at the National Archives and Library of Congress. Many hours of research with photographs and documents stand behind his re-enactments. In short, repositories should let re-enactors be the re-enactors.

Consider this scenario for software re-enactment. You are playing an Atari VCS game with the open-source Stella emulator. It bothers you that viewing the game on your LCD display differs from the experience with a 1980s-era television set. You are motivated by this realization to contribute code to the Stella project for emulating a historical display. It is theoretically possible that you could assemble everything needed to create an experience that satisfies you – an old television, adapters, an original VCS, the software, etc. (Let’s not worry about the shag rug and the lava lamp.) You can create this personal experience on your own, then write code that matches it. My question: Is the result less “authentic” if you relied on historical documentation such as video, screenshots, technical specifications, and other evidence available in a repository to describe the original experience? My point is that repositories can cooperatively support research by re-enactors who create their version of the experience. Digital repositories should consider the Authentic Experience as more of a research problem than a repository problem.
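To make the scenario concrete, here is a toy sketch of what “code that matches it” might look like – emphatically not Stella’s actual television-effects implementation, which performs far more sophisticated NTSC emulation, but a naive post-processing pass written for illustration (Python with the Pillow imaging library; the file names are hypothetical):

from PIL import Image, ImageFilter

def crt_approximation(frame, scale=3, scanline_strength=0.35):
    """Crude CRT look: nearest-neighbour upscale, slight blur, darkened scanlines."""
    img = frame.resize((frame.width * scale, frame.height * scale), Image.NEAREST)
    img = img.filter(ImageFilter.GaussianBlur(radius=0.8))  # soften hard pixel edges
    pixels = img.load()
    for y in range(0, img.height, scale):  # darken one row per source scanline
        for x in range(img.width):
            r, g, b = pixels[x, y][:3]
            pixels[x, y] = (int(r * (1 - scanline_strength)),
                            int(g * (1 - scanline_strength)),
                            int(b * (1 - scanline_strength)))
    return img

# Hypothetical usage with a frame captured from an emulator:
# crt_approximation(Image.open("pitfall_frame.png").convert("RGB")).save("pitfall_crt.png")

Every constant in the sketch – the blur radius, the depth of the scanline darkening, the choice to darken rather than bloom – is a claim about the historical display, and each such claim would need to be grounded in exactly the kind of evidence (video, screenshots, technical specifications) a repository can hold.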

  3. The Lure of the Executable

The Lure of the Executable evaluates software preservation in terms of success at building collections of software that can be executed on-demand by researchers.

Why do we collect historical software? Of course, the reason is that computers, software, and digital data have had a profound impact on virtually every aspect of recent history. What should we collect? David Bearman’s answer in 1987 was the “software archive.” He distinguished this archive from what I will call the software library. The archive assembles documentation; the library provides historical software. The archive was a popular choice in the early days. Margaret Hedstrom reported that attendees at the 1990 Arden Conference on the Preservation of Microcomputer Software “debated whether it was necessary to preserve software itself in order to provide a sense of ‘touch and feel’ or whether the history of software development could be documented with more traditional records.” (Hedstrom and Bearman) In 2002, the Smithsonian’s David Allison wrote about collecting historical software in museums that, “supporting materials are often more valuable for historical study than code itself. They provide contextual information that is critical to evaluating the historical significance of the software products.” He concluded that operating software is not a high priority for historical museums. (Allison 263-65; cf. Shustek)

Again, institutional resources are not as limitless as the things we would like to do with software. Curators must prioritize among collections and services. The choice between software archive and library is not strictly binary, but choices still must be made.

I spend quite a bit of my professional life in software preservation projects. The end-product of these projects is at least in part the library of executable historical software. I understand the Lure of the Executable and the reasons that compel digital repositories to build collections of verified historical software that can be executed on-demand by researchers. This is the Holy Grail of digital curation with respect to software history. What could possibly be wrong with this mission, if it can be executed? As I have argued on other occasions, there are several problems to consider. Let me give you two. The first is that software does not tell the user very much about how it has previously been used. In the best case, application software in its original use environment might display a record of files created by previous users, such as a list of recently opened files found in many productivity titles like Microsoft Office. The more typical situation is that software is freshly installed from data files in the repository and thus completely lacks information about its biography, for want of a better term.

The second, related problem is fundamental. Documentation that is a prerequisite for historical studies of software is rarely located in software. It is more accurate to say that this documentation surrounds software in development archives (including source code) and records of use and reception. It is important to understand that this is not just a problem for historical research. Documentation is also a problem for repositories. If contextual information such as software dependencies or descriptions of relationships among objects is not available to the repository, and all the retired software engineers who knew the software inside and out are gone, it may be impossible to get old software to run.

Historians, of course, will usually be satisfied with the Archive. Given limited resources, is it reasonable to expect that the institutions responsible for historical collections of documentation will be able to reconcile such traditional uses with other methods of understanding historical computing systems? The Re-enactor will want to run software, and the Media Archaeologist will not just want access to a software library, but to original media and hardware in working order. These are tall orders for institutional repositories such as libraries and archives, though possibly a better fit to the museum or digital history centre.

In Best Before: Videogames, Supersession and Obsolescence, James Newman is not optimistic about software preservation and he describes how the marketing of software has in some ways made this a near impossibility. He is not as pessimistic about video game history, however. In a section of his book provocatively called “Let Videogames Die,” he argues that a documentary approach to gameplay might be a more pragmatic enterprise than the effort to preserve playable games. He sees this as a “shift away from conceiving of play as the outcome of preservation to a position that acknowledges play as an indivisible part of the object of preservation.” (Newman 160) In other words, what happens when we record contemporary use of software to create historical documentation of that use? Does this activity potentially reduce the need for services that provide for use at any given time in the future? This strikes me as a plausible historical use case, but not one for re-enactment or media archaeology.

Software archives or software libraries? That is the question. Is it nobler to collect documentation or to suffer the slings and arrows of outrageous software installations? The case for documentation is strong. The consensus among library and museum curators (including myself) is almost certainly that documents from source code to screenshots are a clear win for historical studies of software. Historians, however, will not be the only visitors to the archive, and there are other reasons to collect documentation. One of the most important, which I briefly noted above, is that software preservation itself requires such documentation. In other words, successful software preservation activities are dependent upon technical, contextual and rights documentation. And of course, documents tell re-enactors how software was used and can help media archaeologists figure out what their machines are showing or telling them. But does documentation replace the software library? Is it sufficient to build archives of software history without libraries of historical software? As we have seen, this question was raised nearly forty years ago and remains relevant today. My wish is that this question of the relationship between documentation and software as key components of digital heritage work will stir conversation among librarians, historians, archivists and museum curators. This conversation must consider that there is likely to be a broad palette of use cases such as the historian, media archaeologist and re-enactor, as well as many others not mentioned here. It is unlikely that any one institution can respond to every one of these use cases. Instead, the more likely result is a network of participating repositories, each of which will define priorities and allocate resources according to both their specific institutional contexts and an informed understanding of the capabilities of partner institutions.


References

Allison, David K. “Preserving Software in History Museums: A Material Culture Approach.” History of Computing: Software Issues. Ed. Ulf Hashagen, Reinhard Keil-Slawik and Arthur L. Norberg. Berlin: Springer, 2002. 263-272.

Bearman, David. Collecting Software: A New Challenge for Archives and Museums. Archival Informatics Technical Report #2 (Spring 1987).

— “What Are/Is Informatics? And Especially, What/Who is Archives & Museum Informatics?” Archival Informatics Newsletter 1:1 (Spring 1987): 8.

Cross, Scott. “The Art of Spooning.” Atlantic Guard Soldiers’ Aid Society. 13 July 2016. Web. http://www.agsas.org/howto/outdoor/art_of_spooning.shtml. Originally published in The Company Wag 2, no. 1 (April 1989).

Ernst, Wolfgang. Digital Memory and the Archive. Minneapolis: Univ. Minnesota Press, 2012. Kindle edition.

Gapps, Stephen. “Mobile monuments: A view of historical reenactment and authenticity from inside the costume cupboard of history.” Rethinking History: The Journal of Theory and Practice, 13:3 (2009): 395-409.

Hedstrom, Margaret L., Christopher A. Lee, Judith S. Olson and Clifford A. Lampe, “‘The Old Version Flickers More’: Digital Preservation from the User’s Perspective.” The American Archivist, 69: 1 (Spring – Summer 2006): 159-187.

Hedstrom, Margaret L., and David Bearman, “Preservation of Microcomputer Software: A Symposium,” Archives and Museum Informatics 4:1 (Spring 1990): 10.

Horwitz, Tony. Confederates in the Attic: Dispatches from the Unfinished Civil War. New York: Pantheon Books, 1998. Kindle Edition.

Jones, Caitlin. “Seeing Double: Emulation in Theory and Practice. The Erl King Study.” Paper presented to the Electronic Media Group, 14 June 2004. Electronic Media Group. Web. http://cool.conservation-us.org/coolaic/sg/emg/library/pdf/jones/Jones-EMG2004.pdf

Lowood, Henry. “The Lures of Software Preservation.” Preserving.exe: Toward a National Strategy for Software Preservation (October 2013): 4-11. Web. http://www.digitalpreservation.gov/multimedia/documents/PreservingEXE_report_final101813.pdf

Media Archaeological Fundus. Web. 21 Jan. 2016. http://www.medienwissenschaft.hu-berlin.de/medientheorien/fundus/media-archaeological-fundus

Newman, James. Best Before: Videogames, Supersession and Obsolescence. London: Routledge, 2012.

Owens, Trevor. “Archives, Materiality and the ‘Agency of the Machine’: An Interview with Wolfgang Ernst.” The Signal: Digital Preservation. Web. 8 February 2013. http://blogs.loc.gov/digitalpreservation/2013/02/archives-materiality-and-agency-of-the-machine-an-interview-with-wolfgang-ernst/

“Preserving Computer-Related Source Materials.” IEEE Annals of the History of Computing 1 (Jan.-March 1980): 4-6.

Shustek, Len. “What Should We Collect to Preserve the History of Software?” IEEE Annals of the History of Computing, 28 (Oct.-Dec. 2006): 110-12.

Walker, John. “Introduction.” The Analytical Engine: The First Computer. Fourmilab, 21 March 2016. Web. http://www.fourmilab.ch/babbage/

— “The Analytical Engine: Is the Emulator Authentic?” Fourmilab, 21 March 2016. Web. http://www.fourmilab.ch/babbage/authentic.html

White, Hayden. The Content of the Form: Narrative Discourse and Historical Representation. Baltimore: Johns Hopkins Univ. Press, 1987.

— Figural Realism: Studies in the Mimesis Effect. Baltimore: Johns Hopkins Univ. Press, 2000.

— “The Question of Narrative in Contemporary Historical Theory.” History and Theory 23: 1 (Feb. 1984): 1-33.


Bio

Henry Lowood is Curator for History of Science & Technology Collections and for Film & Media Collections at Stanford University. He has led the How They Got Game project since 2000 and is the co-editor of The Machinima Reader and Debugging Game History, both published by MIT Press. Contact: lowood@stanford.edu


Digital Memories: The McCoys’ Electronic Sculptures – Wendy Haslem

Abstract: This article investigates the connections between history and new forms of memory that are produced, configured and mapped with the tools of digital media. Digital memories are contained within, and inspired by, Jennifer and Kevin McCoy’s electronic sculptures. The article explores the potential for new media technologies to re-imagine the intersection between history and memory as digital ‘lieux de mémoire’, a version of Pierre Nora’s memory sites that block the possibility of forgetting by remembering for us.

The Eternal Return – The McCoys (2003)

The explosion and expansion of digital tools and communication transforms definitions of memory and refines the intersection of memory and history. Digital natives and digitally literate adopters have tools at hand to practice as cartographers, genealogists, archivists, chroniclers, even historians. This results in the creation of new connections and communities, archives that become virtual as well as material, histories that might be both real and imagined. Image- and text-based sites like YouTube, Vimeo, Flickr, Wikipedia, Pinterest, Twitter, Facebook and the range of pervasive blogging sites across the Internet provide new ways to produce and disseminate digitally configured histories and memories. The influence of the virtual on material sites of exhibition, particularly galleries, museums, cinematheques and public sites, is evidenced by an increased reliance on digital tools, particularly digital screens, to reconfigure memory and history. Digital technologies enable myriad approaches to history, expanding definitions beyond the dominance of the empirical or sequential and bringing memory into contact with history. The obsession with the present in status updates, uploads, new blog posts and the seemingly immediate availability of content brings memory into the present. Exceeding the acceleration of history characteristic of Fredric Jameson’s definition of postmodern culture (1991), the present is rapidly superseded by an immediate future/past or the past eternally returning. Concurrently, definitions of memory produced, distributed and exhibited by digital technologies result in the proliferation of innovative forms of remembering and new ways to imagine histories by prioritizing memory. The electronic sculptures produced by Jennifer and Kevin McCoy reveal how digital technologies can be used to reflect processes of memory and to map new connections. Installations produced by the McCoys frame and direct memories; they don’t remember for us but instead reveal how memories are indebted to, provoked by and shaped by aspects drawn from the archive of visual cultures. But this interrelationship between memory and history, mapped and imagined through digital technologies, was not always perceived as entwined, let alone contingent.

The historian Pierre Nora argues that history and memory exist in violent opposition (1989, p. 8). He describes history as a static, incomplete attempt to reconstruct a past that no longer exists, whilst memory is more fluid, involved in a process of rediscovery that “remains in permanent evolution” (Nora 1989, p. 8). In Nora’s words, “history is perpetually suspicious of memory, and its true mission is to suppress and destroy it” (1989, p. 9). This dynamic collision between history and memory is the result of the acceleration of history at a time that Nora defines as “a turning point where consciousness of a break with the past is bound up with the sense that memory has been torn – but torn in such a way as to pose the problem of the embodiment of memory in certain sites where a sense of historical continuity persists” (1989, p. 7). The notion of memory as ‘torn’, no longer complete, singular, trusted, no longer emanating from a defined date, moment or time, provides impetus for memory as imagined, embodied and defined in sites beyond the scope of the traditional archive. Nora identifies memory as fluid and transformed by its passage through history: “Memory remains in permanent evolution open to the dialectic of remembering and forgetting, unconscious of its successive deformations, vulnerable to manipulation and appropriation, susceptible to being long dormant and periodically revived” (Nora, 1989, p. 8). In Nora’s words, memory’s vocation is to record, and, whilst delegating to the archive the responsibility of remembering, “it sheds its signs upon depositing them there, as a snake sheds its skin” (1989, p. 13). Nora perceives modern memory as, above all, archival, relying on the materiality of the trace, the immediacy of the recording, the visibility of the image (1989, p. 13). Whilst Nora’s argument focuses predominantly on French national identity and politics, his discussion of the transformation of history and memory offers a particularly pertinent approach for an investigation of the impact of digital media on remembering. New forms of communications media provide increased access to memory, creating very specific structures for sites of remembering and producing an illusion of memory as immediate, reflexive and interconnected. The digital reshapes the production, distribution, dissemination and exhibition of memory, producing innovative approaches to mapping, interacting with memory, and new sites of remembrance. Once captured, memory may contribute another strand of history.

For Nora, lieux de mémoire are sites where “memory crystallizes and secretes itself”, memory sites that block the possibility of forgetting and act to remember for us (1989, p. 7). Nora writes that the “most fundamental purpose of the lieu de mémoire is to stop time, to block the work of forgetting… all of this in order to capture a maximum of meaning in the fewest of signs” (1989, p. 19). In these sites history besieges memory as “moments of history torn away from the movement of history, then returned” (Nora 1989, p. 12). Memory sites can be actual spatial forms like archives, exhibitions and personal shrines, and they can be tangible objects: collections of photographs, objects, diary entries, notes, tickets and souvenirs. They can also be more ephemeral, taking the form of thoughts, reminiscences and spoken word stories. Lieux de mémoire originate with the sense that “there is no spontaneous memory, that we must deliberately create archives, maintain anniversaries, organize celebrations, pronounce eulogies, and notarize bills because such activities no longer occur naturally” (Nora 1989, p. 12). Nora’s conception of lieux de mémoire arises from the acceleration of history. He writes that “if history did not besiege memory, deforming and transforming it, penetrating and petrifying it there would be no lieux de mémoire” (Nora, 1989, p. 12).

Extending Nora’s lieu de mémoire into the realm of the visual historical archive, these sites can be reimagined as electronic databases, multimedia projections, or interactive exhibits, sites that preserve, but also revise, reshape, and inspire new memories. Electronic lieux de mémoire are used to construct and deconstruct memory in the multimedia artworks produced by the American artists Jennifer and Kevin McCoy. Across their oeuvre, the artworks build on and complicate definitions of memory, situating it in relation to an archive of recent popular cultural history. Laura U. Marks describes the McCoys’ web-based work Airworld.net (1999) as noophagic – sucking in, eating information from other sites, reprocessing images and text and emerging with the advertising, branding and even the special offers of a new, artificial corporation mined from corporate language on the web (2002, p. 189). Airworld.net is a web-based artwork that trawls commercial sites and creates networks of text, jargon, still images and footage from security cameras in the workplace. Lev Manovich defined Jennifer and Kevin McCoy as postmodern media artists who “accept the impossibility of an original, unmediated vision of reality; their subject matter is not reality itself, but a representation of reality by media, and the world of media itself” (2002a, np). Manovich develops the notion of soft cinema as database art, a specific type of new media art that is indebted to the archive, but reverses the opposition traditionally associated with the syntagmatic and the paradigmatic (2002b, pp. 230-231). For Manovich, database art values the paradigmatic – the tangible range of possible choices, options or possibilities – over the syntagmatic, the virtual flow of words and images. Producing art that prioritizes memory but refuses to narrativize it in classical form, the early installations can be understood as offering a matrix of impressionistic sequences, reflecting the illogical, sensuous workings of memory. In their database art installations, the McCoys’ work opposes rigidity and linearity, even when it is derived from the delay-filled, repetitious parallelisms characteristic of serial narrative form. Instead, the installations offer multiple possibilities and perspectives, splitting and fracturing spectatorship, creating new ways to map memories and a diverse range of possible narrative forms. The ‘electronic sculptures’ created by the McCoys rely on paradigmatic contingency to complicate the notion of memory as personal and individual by reworking and interweaving popular visual histories into their artwork. The resulting new media artworks reinvent Nora’s lieux de mémoire using miniaturized cinematic technologies, database narration and electronic sculptural dioramas.

The expansion of cinema towards the digital and into the art gallery produces new ways of mapping, engaging and exhibiting memory. Anne Friedberg identifies the transition towards the digital as resulting in an increasingly mobilized, virtual experience of visual cultures (2006). Jennifer and Kevin McCoy’s collaboration creates art that juxtaposes personal with collective memories, offering viewers an interactive experience and encouraging intervention through the deconstruction and reconstruction of popular narrative. Their art creates a matrix of reference points drawn from some very recognizable popular iconography to exhibit an inextricable connection between individual and collective memory. The installations that were created at the turn of the millennium combine the interactive potential of the database with the seemingly endless array of visual motifs, generic tropes and narrative threads recognizable from popular televisual serials. Every Shot, Every Episode (2001) is a deconstruction and recreation of the Starsky and Hutch (1975-1979) series, resulting in a new taxonomy consisting of two hundred and seventy-eight categories. Individual shots and scenes from the series were excised and reorganized to feature new paradigmatic, aesthetic and cinematographic categories including: ‘Every Bloody Clothing’, ‘Every Yellow Volkswagen’, ‘Every Sexy Outfit’, ‘Every Stabbing’, ‘Every Character Looks Left’, ‘Every Insult’, ‘Every Speculation’, ‘Every Extreme Close Up’, ‘Every Pan Right’, ‘Every Tilt Down’, ‘Every Zoom In’ and ‘Every Reaction Shot’. This approach reveals the degree of repetition and the importance of generic tropes and conventions, the foundations of serial television. The shelves of DVDs mounted on the gallery wall, positioned next to a suitcase containing the small DVD player and screen, offer an impression of open access to secretive imagery. Every Shot, Every Episode reconstructs imagery in ways that blur the division between individual and collective memory. Viewing and re-viewing sequences, visitors become interactive cartographers, mapping and re-mapping as they select and view paradigmatic sequences from Starsky and Hutch. Nora’s description of memory as “intensely retinal and powerfully televisual” (1989, p. 17) is resonant in the ways that these sequences reflect the fragmented, impressionistic workings of memory. The linearity of the television series is here reconceptualised by prioritizing the elements that comprise narrative form. Every Shot, Every Episode points to the tendency to prioritise moments, sensations, effects, color, action or gesture in recalling the larger structure. Paradigmatic selection parallels the ways that specific impressions might be remembered whilst larger narrative structures are forgotten. In turn, the archive of images and narrative that forms the referent – in this case Starsky and Hutch – is modified and transformed by Every Shot, Every Episode.

Every Shot, Every Episode (2001)

This approach was elaborated in Every Anvil (2001). In this interactive installation, Looney Tunes (1942-1969) cartoons are deconstructed and reimagined according to generic tropes and violent themes including: ‘Every Explosion’, ‘Every Poisoning’, ‘Every Whacking’, ‘Every Evil Genius’, ‘Every Beg and Plead’, ‘Every Kiss’, ‘Every Slipping and Sliding’, ‘Every Sneaking’, ‘Every Flattening Character’, ‘Every Cooking a Character’ and ‘Every Tornado Spin’. Each individual action, aesthetic or cinematographic sign is excised from the animated series, altering the temporal framework to highlight the preeminence of the moment over continuity across the series. Every Anvil and Every Shot, Every Episode, along with a further installation, 448 Is Enough (2002), a deconstruction of episodes of Eight Is Enough (1977-1981), display the McCoys’ interest in dissecting syntagmatic logic whilst recombining imagery to highlight paradigmatic selection. The new media database helps to develop incursions into conventional narrative form, resulting in sequences reconfigured according to impressionistic structures more common to dreams or memories. The McCoys’ subsequent installations use miniature forms to interrogate the exhibition of time, space, narrative, scale and identity. All of these installations situate popular culture as pivotal to the production of memory.

Every Anvil (2001)

Memory, according to Maurice Halbwachs, exists unconsciously in the mind as psychic states of recollection, where each act of recollection involves the reconstruction of the memory in the context of the present (1992, p. 24). Memories are constructed and facilitated in association with (or in contrast to) other individuals. That memories are reconstructions, rather than faithful recreations of the past, is the crucial point in this context. Halbwachs argues that, paradoxically, an individual remembers by placing himself in the perspective of the group, while, by contrast, the memory of a group realizes and manifests itself in individual memories (1992, p. 22). The McCoys’ practice involves accessing, researching and deconstructing large sources, provoking memories contingent upon popular culture. In an interview, Jennifer McCoy reveals the focus on interactivity and the connection between visual culture and memory in their artwork when she suggests that “one’s memory of a show are placed next to real memories and become part of your mental collection” (2006, np).

Soft Rains (2003-2004) is a serialized collection of six installations that use miniature figures and dioramas as the bases of electronic sculptures. The miniature static sets appear as single fragments of time, or frames of film. These tiny sculptures freeze time into instants, each miniature figure representing a single moment without any indication of preceding or succeeding events. These instants are resonant. The conflation of narrative, or genre, into instants reiterates the selectivity of memory. In On Longing: Narratives of the Miniature, the Gigantic, the Souvenir, the Collection, Susan Stewart writes that “miniature time transcends the duration of everyday life” (2003, p. 66). Miniatures, for Stewart, offer “a narrativity and history outside the given field of perception”; this “is a constant daydream that the miniature presents. This is the daydream of the microscope: the daydream of life inside life, of significance multiplied infinitely within significance” (2003, p. 54). Each of the six installations that comprise Soft Rains has its own thematic focus.

Soft Rains (2003-2004)

In Soft Rains #6: Suburban Horror (2004), the miniature imagery becomes decidedly Gothic. A diorama built on the melodramatic iconography of a 1950s scene, imagined through a dark cinematic aesthetic, reveals suburban settings and suggests a surrounding menace. A woman stares longingly out of a kitchen window, suggesting entrapment and a desire for escape. A car traveling down a road indicates a journey to a remote cabin, but the scene at the cabin contains details of blood and dismemberment, revealing a couple murdered with an axe. The installation incorporates fragmented imagery signifying isolation, alienation, multiple time frames and the darker side of the imagination. Screens display low-resolution imagery in which colors are blocked and blurred, drawing on the aesthetic of colorized postcards, or perhaps the saturated, low-definition imagery characteristic of 8mm film projections. Soft Rains was inspired by the surrealist aesthetic of David Lynch, and the gruesome imagery also recalls slasher films like the Friday the 13th series. Each diorama is surrounded by lights and tiny cameras suspended and directed onto the scenes via flexible metal arms. Shots are illuminated and filmed by the miniature technologies surrounding the tiny scene, then projected onto an adjacent screen in the gallery. Exposing the sets, lights and cameras alongside the fantasy projected on the screen deconstructs the illusion, defamiliarizing and reinventing the contemporary Gothic narrative. Suburban Horror draws from the archive of familiar Gothic tropes and imagery to produce disarming miniature fragments, moments that resonate with memorable sequences from the history of cinema. This series of installations relies on tropes of the Gothic and horror genres: impressionistic, distilled, miniaturized and deconstructed. The play with scale reflects how memory often distorts proportion, as miniature objects are enlarged on screen. In its allusions to iconic cinematic tropes, genres and aesthetics, Suburban Horror mimics the potential for memory (and the database) to create a dialogue across time.

How We Met was originally exhibited at Postmasters Gallery and then briefly shown in a decommissioned terminal at JFK Airport in 2004. It is an elaborate series of miniature sculptural dioramas, each representing a moment in time. At first glance the platforms seem to depict the memory of Jennifer and Kevin’s first meeting, as both reach for the same suitcase circling a carousel at an airport in France. The dioramas that form the base of How We Met are constellations of small gestures and figures, actions suspended in time, their stillness highlighted by the revolving carousel. These fragmented moments are reminiscent of Nora’s description of ‘true memory’ as comprised of gestures, habits, unspoken knowledge and unstudied reflexes (1989, p. 18). On one platform a miniature figure of Jennifer waits for her bag to emerge whilst Kevin stands to her left, seemingly distracted by a mysterious blonde woman in a red dress. At the edge of the diorama, their moment of connection is depicted through a simple gesture as two disembodied hands reach for the same suitcase. On another platform, a cab waits outside the airport terminal, offering a hint of a transition towards a new space. One camera, positioned to shoot within the actual airport space, incorporates impressions of human-sized viewers alongside the miniatures. Customized computer software receives and connects the ‘live’ images, projecting a seemingly random range of sequences onto the screen. This combination of the static miniature diorama with the spectator entwines past with present. Further, this baring of the devices of illumination, recording and projection produces a fractured but all-encompassing vision of moving image and apparatus. The result is that the memory depicted is deconstructed and reconstructed, expanding time into instants and exploding space across the dioramas. Reconstructing the experience in miniature renders the projected sequence dreamlike and impressionistic.

Whilst the title How We Met promises a cause-and-effect sequence, the constellation of images that emerges from the pivotal central gesture opens up a matrix of connections. Mary Ann Doane perceives cinematic time as diachronic and contingent (2002). She writes about divergent temporal registers linked by chance and contingency, a relationship characteristic of the cinema. Chance and coincidence become powerful forces in How We Met; however, this avowal of memory (in the title, in the reconstruction, in the autobiographical presence of the artists in miniature) is playfully recontextualized by the revelation of the extent to which the artwork is indebted to the cinema. How We Met consciously references and remixes the bag-swapping sequence from Peter Bogdanovich’s screwball comedy What’s Up, Doc? (1972). What appears coincidental is in fact memory depicted through the prism of popular film. The slippage between intimate, personal memories and popular visual histories is nowhere more evident in the McCoys’ oeuvre than in How We Met. In this electronic sculpture, sequences from film history are inextricably entwined with personal memory.

How We Met – detail (2004)

The recreation of real and imaginary spaces plays an important role in the function of memory. Spaces including airports, taxi ranks, the cinema, the dance hall and the gallery become actual and imagined sites of remembrance in the McCoys’ installations. These very public, transitory spaces are described by Marc Augé as ‘non-places’, locations created through the excesses of logic, space and information that characterize ‘supermodernity’ (1995). Supermodernity arises through an excess and extension of time, and in spaces that result from a shift in global scale, as distance is reduced by immediate and effective communications technologies. Non-places are essentially empty spaces, locations of solitude even when they are full of people. They are places of movement and transit with little sense of community or connection. The non-place exists as an urban space with little or no distinct identity or particular history. These are temporary, sometimes provisional spaces. Non-places can also be generic spaces of consumption like airports, transit lounges, supermarkets or petrol stations. In the installations produced by the McCoys, however, non-places become sites of memory.

Our Second Date (2004) also interweaves the McCoys’ memories with iconic sequences from the history of film. Memory here is contingent upon French New Wave cinema. This electronic sculpture presents personal histories imagined as sequences from film, creating a memory site that reveals the influence of cinema in both content and form. In Our Second Date miniature scenes are positioned at various points on a large tabletop diorama. Each of these scenes blurs the distinction between memory and film, particularly when miniature models of Jennifer and Kevin appear inside a tiny cinema watching Jean-Luc Godard’s Weekend (1967). Weekend becomes the key visual source for the remembrance of their second date. The table also features a large, slowly spinning disc, a recreation of the traffic jam, complete with carnage, from the film. As the road revolves, the illusion of movement is projected onto the screen. The heightened colors, combined with the now familiar use of a soft focus that blurs outlines, produce a dreamlike sequence of moving images. Memories are expressed through screen memories in these exhibits. Digital technologies are used to capture celluloid, and possibly personal, memories, highlighting the non-linearity crucial to the film, to the exhibit and to the McCoys’ memories. Our Second Date uses cinematic processes such as narration, projection and exhibition to provide the framework for, and signifiers of, memory. Like Godard’s cinema, the McCoys’ Date series defamiliarizes processes of narration, reworking the counter-narrative experiments of the French New Wave to produce a beginning, a middle and an end, just not in that order.

Our Second Date (2004)

The McCoys create electronic lieux de mémoire by interweaving public, collective and personal, individual memories within the history of visual culture. It is this blurring of public and private, individual and collective memory that distinguishes their new media art. The memories exhibited by the electronic sculptures need not have an actual referent in the viewer’s memory, or even in the McCoys’ experience. Alison Landsberg describes ‘prosthetic memory’ as a link to histories that do not originate in direct, lived experience (2004, p. 26). Prosthetic memories are derived from media engagement and arise through a direct connection to screen imagery. Landsberg defines prosthetic memories as “memories that circulate publicly, that are not organically based, but that are nonetheless experienced with one’s own body – by means of a wide range of cultural technologies” (2004, pp. 25-26). Prosthetic memories are at once direct and indirect: direct in their audio-visual presentation as images, and indirect in that they always refer to another spatio-temporal realm. They are collective, but also individual, in that they become part of a specific range of experiences, virtual and real. Prosthetic memories produce an experiential relationship based on a virtual world rather than a ‘real’ world experience. They are created, produced, received and shared via technologies that consciously construct memories through processes of presentation and representation. Echoing the description provided by Jennifer McCoy, Landsberg suggests that prosthetic memories “become part of one’s personal archive of experience”, and that the memories cinema affords might be as significant in constructing, or deconstructing, the spectator’s identity as any lived experience (2004, p. 26). The McCoys’ installations position memory as contingent on the history of visual cultures. In the case of How We Met, the artwork is indebted to popular film, and the difference between embodied and prosthetic memory becomes indistinct. It is possible that the bag sequence, heavy with the romantic tropes of chance and coincidence from What’s Up, Doc?, stands in for, and could even be entirely unrelated to, the memory of how Jennifer and Kevin McCoy actually met. Accordingly, whilst referencing cinema, exposing the machinations of the apparatus and reworking counter-narrative, Our Second Date may well also define memory as selective, constructed and prosthetic.

The power of the fragmentary detail within photography is well known from Roland Barthes’ writing on the ‘punctum’ (1984, pp. 25-62). Writing at the end of the 1970s, Barthes defines history as outside his lived experience, but inextricably linked to his maternal bloodline. He explores the importance of subjectivity and emotion in his encounter with history via photography. Barthes conceptualizes photography as working according to a dual system of representation. He perceives the ‘studium’ as those coded, recognizable signs that are open to everyone, whilst the punctum is specific and subjective (Barthes 1984, p. 27). Barthes argues that the apprehension of the punctum is a sudden recognition of meaning that exceeds normal boundaries. This excess becomes an encounter with the self and history. Barthes’ punctum refers to the fragmentary detail of the photograph, the detail that holds such significance that it overwhelms the context. He describes the effect of the punctum as akin to a sting, a recognition felt with visceral physical intensity. It is this focus on detail, on smaller memory fragments, miniature signs or metonymic symbols that open out onto more expansive revelations of the interconnection between memory and history, that structures the McCoys’ electronic sculptures. Whilst Barthes’ punctum refers to a detail within a photograph that linked him to his blood relations, prosthetic memories can provide a similar affective ‘pinch’, provoking memories that arise from a viewer’s visual literacy in the popular culture archive. Prosthetic memories can also link viewers across cultures and across histories. These media images allow identification, perhaps even a visceral response, to emerge from the virtual or the imagined. In a larger, perhaps more utopian context, prosthetic memories can forge the ground for new identifications and new political realignments through recognition, identification and empathy. Landsberg argues that “prosthetic memories have the potential to generate something like public spheres of memory” (2004, p. 21). The potential for cinema to generate and disseminate memories is highlighted in the work of Marita Sturken, who argues that films contribute to the development of ‘technologies of memory’ through which memories are shared, produced, archived and given meaning (1997).

Eternal Return (2003) inspires the creation of prosthetic memories by depicting anonymous miniature figures caught up in the rapture of dance. The presence of Jennifer and Kevin McCoy is less visible in this exhibit, though perhaps evident in the forms and concepts that spin out of the installation. Featuring a nostalgic black, white and sepia-toned dance hall, this elaborate sculptural diorama depicts a scenario set entirely within a distant past. The emphasis on cyclic rotation and repetition, performed by unidentified miniature figures adorned in formal ball gowns and tuxedos, creates an invitation to become swept up in the nostalgia and romance of the exhibit. The wedding-cake couples dance and spin in a wistful symbolization of the wheel of time. More than any other electronic sculpture, Eternal Return offers numerous entry points into the past. Encompassing imagined scenes of a 1930s dance hall, with iconography and choreography akin to the films of Busby Berkeley, all mirrored in reflective surfaces, Eternal Return, as the title suggests, is a pure fantasy of another time and space. This ‘pure’ memory site renders its temporality cyclic through the repetition of movements and gestures, enhanced by the revolutions of the dioramas and the giddy miniature figures. There is no identifiable narrative in this installation, no recognizable characters, endpoint or closure, just endless cycles of repetition.

Eternal Return (2003)

As with the other installations’ projected sequences, images of the diorama are filmed and edited, repeating and returning in combinations ordered by bespoke computer software. More than in any other exhibit, the complexities of the apparatus on display in Eternal Return become part of the spectacle. Exhibiting the intricacies of the technologies involved demystifies the exhibition, yet their partial concealment also re-mystifies it. Such a kinetic spectacle, featuring unidentified dancers, imagined spaces and a distant past, points to history and memory, but also incorporates the present amid the swirl of contingent temporalities. Quoting Gilles Deleuze’s third synthesis of time in its title, Eternal Return refers to the complex return of difference, of that which may not have existed previously (1989). The installation also manifests the achronological history Walter Benjamin imagined in The Arcades Project, the incomplete work in which Benjamin visualized history by creating a collage of quotes and reassembling fragments through montage, defining history as connection rather than linear taxonomy (1999). Eternal Return is built on endless repetition and eternally returning fragments of projected pasts and futures. Quotes from early film history inspire a montage of prosthetic memories, external memories that may not have materialized previously.
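
A similarly hedged sketch can suggest how such an endlessly returning projection loop might behave: the fragments repeat forever, but each cycle is reshuffled, so what returns is never quite the same sequence. This is a hypothetical illustration rather than the McCoys’ bespoke software; the fragment names and the generator function are invented, and only the logic of return-with-difference reflects the installation.

```python
import itertools
import random

# Hypothetical sketch of an endlessly returning projection loop in the
# spirit of Eternal Return; the fragment names are invented for illustration.
fragments = ["waltz turn", "mirrored couple", "carousel pan", "chandelier glint"]

def eternal_return(clips, rng=random):
    """Yield clips forever, reshuffling on every cycle: repetition with difference."""
    while True:
        cycle = list(clips)
        rng.shuffle(cycle)
        yield from cycle

if __name__ == "__main__":
    # Show two full "cycles" of the endless projection.
    for clip in itertools.islice(eternal_return(fragments), 8):
        print("project", clip)
```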


The notion of artist or auteur is insufficient to account for the McCoys’ oeuvre. Whilst Jennifer and Kevin McCoy create an impression of quite intimate work in installations like How We Met and Our Second Date, the idea of an individual, coherent worldview expressed across a body of work cannot account for the dual dioramas presented back to back in Double Fantasy (2005). Double Fantasy stages differences between childhood dreams using miniature models, with images from each randomly selected and projected onto a screen. There are two contrasting impulses in Double Fantasy: the doubled diorama emphasizes difference, but the screened stills juxtapose and interweave projections of disparate dreams. Dream Sequence (2006) extends this doubling and splitting further, projecting dual visions of dream imagery emanating from two revolving dioramas onto adjacent screens, and incorporating impressions of the miniature dreamers below their dreams. Dreams, fantasies and memories are at once individual, shared and collective.

Dream Sequence (2006)

The McCoys’ lieux de mémoire inspire new ways to perceive and imagine history and memory. Miniature scenes and narrative forms expand the realm of memory by highlighting connections to film, television, nostalgic fantasies and projected histories. The conflation of real and imaginary spaces reflects the potential for locations to provoke memories. Airports, taxi ranks, the cinema, the dance hall and the gallery become sites of remembrance in the McCoys’ exhibitions. Non-places like the airport represent the location of a first meeting, a miniature cinema becomes the place of a second date, and the revolving imagined space of the dance hall distills memories through movement, gesture and sound, reproducing them as memory sites.

Their electronic sculptures and database art revise and exhibit memory by incorporating intertextual references to the history of cinema and visual culture. The re-vision of memory through multimedia technologies can instill a sense of hyper-engagement, connecting viewers with personal or public histories. It can also blur the distinction between prosthetic and embodied memories. Whilst many of these works emerge from archives of popular culture and are exhibited in art galleries, they are also made accessible via the McCoys’ website (mccoyspace.com) and Flickr, which include documentation and images of the live feeds of their installations. Such multiple forms of exhibition extend the scope and lifetime of each artwork and, simultaneously, feed the imagery back into the database. In the gallery space and in the virtual world, the McCoys’ art situates the viewer centrally and actively within a matrix of visual references, paradigmatic associations and generic conventions, highlighting the strength of the currents connecting popular iconography with personal memory.

The memories exhibited and inspired by the work of Jennifer and Kevin McCoy have the potential to tease out the hard edges of ‘true’ memory (Nora 1989, p. 13). The electronic sculptures display memory “in permanent evolution” (Nora 1989, p. 8). Furthermore, these artworks display memory as linked not exclusively to an individual, but by association and contingency. The McCoys’ electronic sculptures actively exhibit memories as evidence of the recent shifts Nora describes “from the idea of a visible past to an invisible one; from a solid and steady past to our fractured past; from a history sought in the continuity of memory to a memory cast in the discontinuity of history” (1989, p. 17). Memory is inspired, produced and exhibited according to images outside of the self: popular visual histories. Nora suggests that “the lieu de mémoire is double: a site of excess closed upon itself, concentrated in its own name, but also forever open to the full range of possible significations” (1989, p. 24). With new and multiple forms of digital technology, the McCoys’ electronic sculptures illustrate precisely such a doubling, whilst emphasizing the increasingly intimate proximity between memory, screen memories and the history of visual culture.


This is an extended and expanded version of ‘Exhibiting Miniature Memories: The McCoy’s Electronic Sculptures’, AntiThesis, March, 2009, pp. 7-11.


References

Augé, M 1995 Non-Places: Introduction to an Anthropology of Supermodernity, London, Verso.

Barthes, R 1984 Camera Lucida: Reflections on Photography, translated by Richard Howard, London, Flamingo.

Benjamin, W 1999 The Arcades Project, Cambridge, Mass, Harvard University Press.

Deleuze, G 1989 [c1985] Cinema 2: The Time Image, Minneapolis, University of Minnesota Press.

Doane, MA 2002 The Emergence of Cinematic Time: Modernity, Contingency, the Archive, Cambridge, Harvard University Press.

Friedberg, A 2006 The Virtual Window: From Alberti to Microsoft, Cambridge, Mass., MIT Press.

Halbwachs, M 1992 On Collective Memory, edited, translated and with an introduction by Lewis A. Coser, Chicago, University of Chicago Press.

Himmelsbach, S 2006 ‘Interview With Jennifer and Kevin McCoy’, Automatic Update: MOMA, Viewed 1st of February, 2009 http://www.moma.org/interactives/exhibitions/2007/automatic_update/subs_wrapper.php?section=mccoy_interview.html

Jameson, F 1991 Postmodernism, Or The Cultural Logic of Late Capitalism, Durham, Duke University Press.

Landsberg, A 2004 Prosthetic Memory: The Transformation of American Remembrance in the Age of Mass Culture, New York, Columbia University Press.

Manovich, L 2002a ‘Generation Flash’, Viewed 1st of February, 2009 www.manovich.net/DOCS/generation_flash.doc

Manovich, L 2002b The Language of New Media, Cambridge, Mass., MIT Press.

Marks, LU 2002 Touch: Sensuous Theory and Multisensory Media, Minneapolis, University of Minnesota Press.

Nora, P 1989 ‘Between Memory and History: Les Lieux de Mémoire’, Representations, 26, Spring, pp. 7-24.

Stewart, S 2003 On Longing: Narratives of the Miniature, the Gigantic, the Souvenir, the Collection, Durham, NC, Duke University Press.

Sturken, M 1997 Tangled Memories: The Vietnam War, the AIDS Epidemic, and the Politics of Remembering, Berkeley, University of California Press.

Sturken, M, Thomas, D & Ball-Rokeach, SJ (eds) 2004 Technological Visions: The Hopes and Fears that Shape New Technologies, Philadelphia, Temple University Press.

A/V:

Airworld.net (Jennifer and Kevin McCoy, 1999)

Double Fantasy (Jennifer and Kevin McCoy, 2005)

Dream Sequence (Jennifer and Kevin McCoy, 2006)

Eternal Return (Jennifer and Kevin McCoy, 2003)

Every Anvil (Jennifer and Kevin McCoy, 2001)

Every Shot, Every Episode (Jennifer and Kevin McCoy, 2001)

How We Met (Jennifer and Kevin McCoy, 2004)

Our Second Date (Jennifer and Kevin McCoy, 2004)

Soft Rains (Jennifer and Kevin McCoy, 2003-2004)

Weekend (Jean-Luc Godard, 1967)

What’s Up, Doc? (Peter Bogdanovich, 1972)


Bio

Wendy Haslem is a lecturer in Screen Studies & Cultural Management and Director of Undergraduate Studies at the University of Melbourne. She is currently researching and writing Gothic Projections: From Méliès to New Media, an investigation of the evolution of Gothic narrative and aesthetics from silent film to digital media.
