Volume 27, 2016

Themed Issue: Born Digital Cultural Heritage

Edited by Angela Ndalianis & Melanie Swalwell

Introduction: Born Digital Heritage – Angela Ndalianis & Melanie Swalwell

  1. It Is What It Is, Not What It Was: Making Born Digital Heritage – Henry Lowood
  2. Defining The Experience: George Poonkhin Khut’s Distillery: Waveforming, 2012 – Amanda Pagliarino & George Poonkhin Khut
  3. There and Back Again: A Case History of Writing The Hobbit – Veronika Megler
  4. Participatory Historians in Digital Cultural Heritage Process: Monumentalization of the First Finnish Commercial Computer Game – Jaakko Suominen and Anna Sivula
  5. Retaining Traces of Composition in Digital Manuscript Collections: a Case for Institutional Proactivity – Millicent Weber

There and Back Again: A Case History of Writing The Hobbit – Veronika M. Megler

Abstract: In 1981, two Melbourne University students were hired part-time to write a text adventure game. The result was the game The Hobbit (Melbourne House, 1981), based on Tolkien’s book (Tolkien), which became one of the most successful text adventure games ever. The Hobbit was innovative in its use of non-deterministic gameplay, a full-sentence parser, the addition of graphics to a text adventure game and finally “emergent characters” – characters exhibiting apparent intelligence arising out of simple behaviours and actions – with whom the player had to interact in order to “solve” some of the game’s puzzles. This paper is a case history of developing The Hobbit, and covers the development process, the internal design, and the genesis of the ideas that made The Hobbit unique.

 

Figure 1.  C64/128 The Hobbit (disk version). Melbourne House.

Introduction

This paper is a case history of the development of the text adventure game The Hobbit (Melbourne House, 1981). The game translated Tolkien’s novel of the same name (Tolkien) into a form that could run on the first generation of home computers, which were just beginning to hit the market.

As co-developer of The Hobbit, I offer my recollections of the development process, the internal design, and the genesis of the ideas that made the game unique. Those ideas included the use of non-deterministic gameplay – the game played differently every time and sometimes could not be completed due to key characters being killed early in the game – very different to other games, which had only a single path through the game and responded the same way each time they were played. The Hobbit contained a full-sentence parser that understood a subset of natural language, dubbed Inglish, as compared to the simple “verb noun” constructions accepted by other adventure games of the time. There were graphic renditions of some of the game locations, another groundbreaking addition to a text adventure game. And finally, “emergent characters” – non-player characters exhibiting apparent personalities and intelligence – with whom the player had to interact in order to solve some of the game’s puzzles. In combination, these features led to a game experience that transformed the industry.

Little has been written about the development of the first generation of text-based computer games; this case history provides insight into this developmental period in computer game history. I compare the development environment and the resulting game to the state-of-the-art in text adventure games of the time. Lastly, I discuss the legacy and recent revival of interest in the game.

“Let us not follow where the path may lead.
Let us go instead where there is no path,
And leave a trail.”

– Japanese Proverb

The Tenor of the Times 

It was early 1981. I was a Bachelor of Science student at Melbourne University, majoring in Computer Science (CS) and just starting my last year. These were the early days of Computer Science education, and the curricula required today for undergraduate Computer Science students had not yet been developed. In our classes we were studying topics like sort algorithms and data structures and operating systems such as BSD Unix. Another class focused on calculating rounding and truncation errors occurring as a result of a series of digital calculations. We were taught software development using a systems analysis method called HIPO[1] – Hierarchical Input-Process-Output, the best practice in structured programming – and that documenting our code was a good practice. Object-oriented programming was still in the future.

During our first couple of years in the CS program, programming projects were written using “mark sense cards”, which we marked up with pencils and fed into card readers after waiting in a long queue of students – sometimes for an hour or two to get a single run. You had to get the program running within a certain number of runs or the card reader would redistribute the lead across the cards, making them illegible.

By the time we reached the last year of the Bachelor’s degree, in our CS classes we were actually allowed to log onto a Unix machine in the lab and work there, if we could get access to a terminal (which often meant waiting for hours, or booking a timeslot, or waiting till late in the evening). We programmed in Pascal, Fortran, Assembler, C (our favorite), and Lisp. Our favorite editor was, universally, Vi. I remember programming a PDP8 in Assembler to run a toy train around a set of tracks, switching the tracks as instructed; we hand-assembled the program, typed it in and debugged it using a hexadecimal keypad.

By this time I’d built my own PC, from a project in an electronics hobbyist magazine. I’d purchased the motherboard, which came as a peg-board with a printed circuit on it, minus any components or cross-wiring. I would go to the electronics parts store with my list of chips, resistors, capacitors and diodes, and solder for my soldering iron. In the store they’d say, “tell your boyfriend we don’t have these” – it was not even considered possible that I might be the person purchasing them. The system had a small number of bytes – around 128 bytes, I believe (that is not a misprint) – of free memory, and used a black and white TV as a monitor. For this system we wrote programs out on paper in a simple Assembler, hand-assembled them and typed them in using a hexadecimal keypad. There was no save function, so whenever the system restarted we had to re-type the program. It was quite impressive to see the programs we could develop in that amount of space.

I was used to being one of around 2-4 women in my university classes, whether it was a smaller class of 30 students or one of the massive Physics classes holding perhaps two or three hundred. Sexism was alive and kicking. The norm for women – for most of the fellow students at my all-girl high school, MacRobertson – was to become secretaries or nurses (although my closest friend for many of those years became a lawyer, traveling to the ‘Stans to negotiate for oil companies, and is now chairman of the board). One fellow student (luckily, I don’t remember who) gave me the ultimate compliment: “you’re bright, for a girl!” In self-defense, I partnered with another woman – Kerryn – for any pair projects. Whenever we had 4-person group projects we joined with another frequent pair, Phil Mitchell and Ray, who were amongst the few men willing to partner with us; these group experiences later led to me recruiting the other three to work at Melbourne House.

My game-playing experience was very limited. There was a Space Invaders arcade game in the lobby of the student union at the university that I sometimes played. For a while there was a game of Pong there, too. The Unix system featured an adventure game we called Adventure – Colossal Cave (CRL, 1976), also often referred to as Classic Adventure. In our last year I played it obsessively for some time, mapping out the “maze of twisty little passages”, until I had made it through the game once. At that point it lost all interest for me, and I don’t believe I ever played it again. I was not aware of any other computer games.

State-of-the-art PC games were a very new thing – PCs were a very new thing – and at the time were written in interpreted Basic by hobbyists. Sometimes the games were printed in magazines, taking maybe a page or two at most, and you could type them into any computer that had a Basic interpreter and play them. The code was generally written as a long list of if-then-else statements, and every action and the words to invoke that action were hard-coded. The game-play was pre-determined and static. Even if you purchased the game and loaded it (from the radio-cassette that it was shipped on), you could generally solve the puzzles by reading the code. The rare games that were shipped as compiled Basic could still be solved by dumping memory and reading the messages from the dump.

Getting the Job

I was working early Sunday mornings as a part-time computer operator, but wanted a job with more flexibility. On a notice board I found a small advertisement looking for students to do some programming, and called. I met Alfred (Fred) Milgrom, who had recently started a company he called “Melbourne House”, and he hired me on the spot to write a game for him. Fred was a bit of a visionary in thinking that students with a Computer Science background could perhaps do a better job than the self-taught hobbyists who represented the state of the art.

Fred’s specifications to me were: “Write the best adventure game ever.” Period.

I told Phil Mitchell about the job, as I thought he had the right skills. I brought him along to talk to Fred, who hired him to work on the game with me. Kerryn and Ray joined us later that year to write short games in Basic for publication in the books that Melbourne House was publishing. These books featured a series of games, most of them about a page or two in length. The books were often sold along with a radio-cassette from which you could load the game rather than having to type it in yourself. Ray only stayed briefly, but Kerryn, I think, stayed for most of the year and wrote many games. She’d sit at the keyboard and chuckle as she developed a new idea or played a game she’d just written.

Software Design, Cro-Magnon Style

So, what would “the best adventure game ever” look like? I started with the only adventure game I’d ever played: Classic Adventure. What did I not like about it? Well, once I’d figured out the map and solved the puzzles, I was instantly bored. It played the same way every time. Each Non-Player Character (NPC) was tied to a single location, and always did the same thing. Lastly, you had to figure out exactly the incantation the game expected; if the game expected “kill troll”, then any other command – “attack the troll”, for example – would get an error message. You could spend a long time trying to figure out what command the game developer intended you to issue; as a result, most adventure games tended to have the same actions, paired with the same vocabulary.

Phil and I split the game cleanly down the middle, with clearly defined interfaces between the two halves. I took what today we would call the game engine, physics engine and data structures (although those terms did not exist then). Phil took the interface and language portion. I don’t remember who had the original idea of a much more developed language than the standard “kill troll” style of language used by other text adventures of the time; my thinking stopped at the level of having synonyms available for the commands. I had almost no involvement in the parser; I remember overhearing conversations between Fred and Phil as the complexity of what they were aiming at increased. For a time, Stuart Richie was brought in to provide language expertise. However, his thinking was not well suited to what was possible to develop in Assembler in the space and time available, so, according to what Phil told me at the time, none of his design was used – although I suspect that being exposed to his thinking helped Phil crystallize what eventually became Inglish. No matter what the user entered – “take the sharp sword and excitedly hack at the evil troll”, say – he’d convert it to a simple (action, target) pair to hand off to me: “kill troll”, or perhaps, “kill troll with sword”. Compound sentences would become a sequence of actions, so “take the hammer and hit Gandalf with it” would come to me as two actions: “pick up hammer”, followed by a next turn of “hit Gandalf with hammer”.
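
As a toy illustration of that hand-off (this is nothing like the real Inglish parser; the splitting rules and names are mine), a compound command can be reduced to the simple per-turn tuples that the game engine consumed:

    # Toy illustration only (not the real Inglish parser): split a compound command
    # on "and"/"then", drop articles, and emit one simple action tuple per turn.
    ARTICLES = {"the", "a", "an"}

    def parse(command):
        clauses, current = [], []
        for word in command.lower().split():
            if word in ("and", "then"):
                if current:
                    clauses.append(current)
                current = []
            elif word not in ARTICLES:
                current.append(word)
        if current:
            clauses.append(current)
        return [tuple(clause) for clause in clauses]

    print(parse("take the lamp and kill the troll with the sword"))
    # -> [('take', 'lamp'), ('kill', 'troll', 'with', 'sword')]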

I put together the overall design for a game that would remove the non-language-related limitations within a couple of hours on my first day on the job. I knew I wanted to use generalized, abstracted data structures, with general routines that processed that structure and with exits for “special cases”, rather than the usual practice of the time of hard-coding the game-play.  My intent was that you could develop a new game by replacing the content of the data structures and the custom routines – a “game engine” concept I did not hear described until decades later. We even talked about developing a “game editor” that would allow gamers to develop their own adventure games by entering items into the data structures via an interface, but I believe it was never developed. I very early on decided that I wanted randomness to be a key feature of the game – recognizing that that meant the game could not always be solved, and accepting that constraint.

I envisaged three data structures to be used to support the game: a location database, a database of objects and a database of “characters”. The location “database” (actually, just a collection of records with a given structure) was pretty straightforward, containing a description of the location and, for each direction, a pointer to the location reached. There could also be an override routine to be called when going in a direction. The override allowed features or game problems to be added to the game map: for example, a door of limited size (so you could not pass through it while carrying too many items) or a trap to be navigated once specific constraints had been met. There’s a location (the Goblin’s Dungeon) that uses this mechanism to create a dynamic map, rather than having fixed connections to other locations: for each direction, an override routine is called that randomly picks a “next location” for the character to arrive in from a given list of possible locations. Another innovation in the location database occurred when Phil added pictures to specific locations, and drew them when the player entered one of those locations. Rather than representing the entire map of Middle-earth in the game (as I might do today), I simplified it into a set of individual locations where noteworthy events occurred in the story, and represented those as a linked set of locations, with the links oriented in the directions as laid out on the map. So, for example, “go North” from one location would immediately take you to the next location North in the game where a significant event occurred. I did not then have a notion of variable travel time based on distance between the two locations.
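
A rough sketch of the idea in modern Python (all names are hypothetical, and nothing like the original Assembler records): each exit maps either to a fixed destination or to an override routine, which is how a dynamic map such as the Goblin’s Dungeon can be expressed.

    import random

    # Hypothetical reconstruction of a location record: each exit maps to either a
    # fixed destination or an override routine that computes one at run time.
    class Location:
        def __init__(self, name, description, exits=None):
            self.name = name
            self.description = description
            self.exits = exits or {}       # direction -> location name, or a callable override

        def go(self, direction):
            target = self.exits.get(direction)
            if target is None:
                return None                # no exit that way
            if callable(target):
                return target()            # override routine decides the destination
            return target                  # ordinary fixed link

    # A dynamic room in the spirit of the Goblin's Dungeon: every exit drops the
    # character into a randomly chosen location from a fixed list.
    def random_dungeon_exit():
        return random.choice(["goblin_gate", "narrow_passage", "dark_cavern"])

    dungeon = Location("goblins_dungeon", "A dark, confusing dungeon.",
                       exits={d: random_dungeon_exit for d in ("north", "south", "east", "west")})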

Similarly, I conceived of an object database with a set of abstract characteristics and possible overrides, rather than hard-coding a list of possible player interactions with specific objects as was done in other games. Each object had characteristics and constraints that allowed me to treat them generically: weight, size, and so on – in effect, a simple (by today’s standards) physics engine. An object could have the capability to act as a container, and a container could be transparent or opaque; a transparent container’s contents could be seen without having to open it first. There were generic routines that could be applied to all objects: for example, any object could be picked up by something bigger and stronger than it, or put into a bigger container (if there was enough room left in it). Some routines could be applied to any object that matched some set of characteristics; an object could also have a list of “special” routines associated with it that overrode the general routines. There was a general “turn on” routine that applied to lamps, for example, that could also be overridden for a magic lamp by a different, more complex “turn on” routine. I went through the book noting where objects were used to further the plot (swords, lamps, and most obviously, the ring), then added those objects to the game, with appropriate generic characteristics and actions (weight, the ability for lamps to be turned on) and special routines as needed (for example, the ring’s ability to make the wearer invisible).
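
A comparable sketch of an object record (again hypothetical, for illustration only): generic physical attributes, optional container behaviour, and per-verb special routines that override the generic ones.

    # Hypothetical sketch of an object record: generic attributes plus optional
    # per-verb overrides that replace the generic routines.
    class GameObject:
        def __init__(self, name, weight, size, capacity=0, transparent=False, overrides=None):
            self.name = name
            self.weight = weight
            self.size = size
            self.capacity = capacity           # > 0 means the object can act as a container
            self.transparent = transparent     # contents visible without opening it first
            self.contents = []
            self.overrides = overrides or {}   # verb -> special routine

    class Actor:
        pass

    def generic_turn_on(actor, obj):           # generic routine, applies to any lamp-like object
        return "The " + obj.name + " is now on."

    def magic_ring_wear(actor, obj):           # special routine attached to one particular object
        actor.invisible = True
        return "You slip on the ring and fade from sight."

    def perform(verb, actor, obj, generic_routines):
        # A special routine wins if the object has one; otherwise fall back to the generic routine.
        routine = obj.overrides.get(verb, generic_routines.get(verb))
        return routine(actor, obj) if routine else "You can't do that."

    ring = GameObject("ring", weight=1, size=1, overrides={"wear": magic_ring_wear})
    print(perform("wear", Actor(), ring, {"turn on": generic_turn_on}))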

Each non-player character (NPC) was also an object that began in an “alive” state, but could, due to events in the game, stop being alive – which allowed a player to, for example, use a dead dwarf as a weapon, in the absence of any other weapon. However, the physics engine caused “kill troll with sword” to inflict more damage than “kill troll with (dead) dwarf”.

In addition to regular object characteristics, each NPC had a “character”, stored in the third database. I conceived of an NPC’s character as being a set of actions that the NPC might perform, a sequence in which they generally performed them and a frequency of repetition. The individual actions were simple and were generally the same actions that a player could do (run in a given direction, attack another character, and so on); but again, these routines could be overridden for a specific character. The sequence could be fixed or flexible: an action could branch to a different part of the sequence and continue from there, or even jump to a random location in the sequence. The apparent complexity of the character comes from the length and flexibility of its action sequence; the character “emerges” as a result. For example, Gandalf’s short attention span and kleptomania were represented by a sequence like: “[go] <random direction>. [Pick up] <random object> [Say, “what’s this?”]. [Go] <random direction>. [Put down] <random object>.”
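
A minimal sketch of how such a profile might be represented (modern Python; world.perform and the data layout are my assumptions, not the original Assembler structures):

    import random

    # Hypothetical character profile: a looping sequence of simple (verb, argument)
    # actions, started at a random position so that every game plays out differently.
    GANDALF_PROFILE = [
        ("go", None),            # wander in a random direction
        ("pick_up", None),       # pick up a random object in the current location
        ("say", "What's this?"),
        ("go", None),
        ("put_down", None),      # drop a random carried object
    ]

    class NPC:
        def __init__(self, name, profile):
            self.name = name
            self.profile = profile
            self.alive = True
            self.pointer = random.randrange(len(profile))     # random starting point

        def take_turn(self, world):
            verb, arg = self.profile[self.pointer]
            world.perform(self, verb, arg)                    # the same generic routines the player uses
            self.pointer = (self.pointer + 1) % len(self.profile)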

The division between inanimate object and NPC was left intentionally a little blurry, giving extra flexibility. For example, the object overrides could also be used to modify character behaviour. I actually coded an override where, if the player typed “turn on the angry dwarf”, he turned into a “randy dwarf” and followed the player around propositioning him.  If he was later turned off, he’d return to being the angry dwarf and start trying to kill any live character. Fred and Phil made me take that routine out.

In order to develop each character, I went through the book and, for each character, tried to identify common sequences of behavior that I could represent through a sequence of actions that would capture the “texture” of that character. Some characters were easy; for a troll, “{If no alive object in current location} [go] <random direction> {else} [kill] <random object with status ‘alive’>” was pretty much the whole list. Others were harder, such as characterizing Thorin; and yes, I did write the now-classic phrase, “Thorin sits down and starts singing about gold.” (I hereby apologize for how frequently he said that; short character-action list, you see.) An action could invoke a general routine which was the same for all NPCs – like, choose a random direction and run, or choose a live object in the location and kill it; or, it could be an action specific only to this NPC, as with Thorin’s persistent singing (as seen in Figure 2). For Gandalf, the generic “pick up” routine was used under the covers, but overridden so that he would also utter “what’s this?”.

Figure 2. Gandalf and Thorin exhibit classic behavior. Courtesy Winterdrake.

Sometimes an alternate behaviour list could be chosen based on events, as can be seen in Figure 2. For example, the friendly dwarf would become violent once he’d been attacked (or picked up). For a while, before I got the character profiles better adjusted, we had terrible trouble with all the NPCs showing up in one location and then killing each other before the player had the chance to work his way through the game. One character would attack another, and once a battle was in progress any (otherwise friendly) character entering that location would be attacked and end up joining in. The same mechanism was used to allow the player to request longer-running actions from NPCs, such as asking a character to follow you when you needed them to help solve a puzzle in a (sometimes far) different location from where they were when you found them. In general the NPCs were programmed to interact with “another”, and did not differentiate whether the “other” was the player or not unless there was a game-related reason for doing so. The NPCs exhibited “emergent behaviour”; they just “played” the game themselves according to their character profile, including interacting with each other. In essence, the NPCs would do to each other almost anything that they could do to or with the player.

Phil programmed the interface to accept input from the player, and after each turn he would hand control to the NPC system, which would allow each (remaining) alive character to take a turn, as can be seen in Figures 2 and 3. For the time, this design was revolutionary; the model then was to have a single, non-mobile NPC in a single location, with only a couple of specific actions that were invoked once the player entered that location, and behaving the same way each time you played the game. Even in the arcade games of the time, we were able to identify that each object the player interacted with behaved the same way each time, and they did not interact with each other at all.
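
In outline, a turn might be sketched like this (hypothetical names; the original was hand-coded Assembler interleaved with the parser):

    def game_turn(world, player_command=None):
        # The player's parsed (action, target) command runs first; a command of None
        # is the case where the player simply waits, and the world moves on anyway.
        if player_command:
            world.execute(world.player, player_command)
        for npc in world.characters:
            if npc.alive:                     # dead characters revert to being ordinary objects
                npc.take_turn(world)          # NPCs act everywhere, not just in the player's location
        world.describe_visible_events()       # only events in the player's location are printed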

Figure 3. The player modifies Thorin’s default behavior – to the player’s cost.

At the beginning of the game, we would generate, for each NPC, a random starting point in that NPC’s action list, giving the game much of its random nature. This combination of factors led to the “emergent characters”; or, seen another way, “a bunch of other characters just smart enough to be profoundly, infuriatingly stupid” (Maher).

I quickly transitioned to the concept of the player merely being another character, with a self-generated action list. At some point I experienced the emergent nature of the characters while trying to debug and was joking about the fact that the characters could play the game without the player being there; that discussion led naturally to the famous “time passes” function, where, if the player took too long in taking his next action (or, chose to “wait”, as in Figure 1), the characters would each take another turn. This feature, which Melbourne House trademarked as “Animaction” (Addison-Wesley Publishing Company, Inc.), was another innovation not seen in prior text adventures, where game-play depended wholly on the player’s actions. (It is also noteworthy how many of the game’s innovations began as jokes. I now believe this to be true of much innovation; certainly it has been, for the innovations I’ve been involved in.)

The next, seemingly obvious step to me was to allow – or even require – the player to ask the NPCs to perform certain tasks for him (as seen in Figure 4), and to set up puzzles that required this kind of interaction in order to solve them. This added another layer of complexity to the game. As commented by one fan, “As most veteran Hobbit players know, a good way to avoid starvation in the game is to issue the command “CARRY ELROND” whilst in Rivendell. In the game Elrond is a caterer whose primary function is to give you lunch and if you carry him then he will continue to supply you with food throughout the game.”[2] Another had a less tolerant view: “Sometimes they do what you ask, but sometimes they’re feeling petulant. Perhaps the seminal Hobbit moment comes when you scream at Brand to kill the dragon that’s about to engulf you both in flames, and he answers, “No.” After spending some time with this collection of half-wits, even the most patient player is guaranteed to start poking at them with her sword at some point.”[3]

Figure 4. The Hobbit starting location, and a player action that I never thought of.

The non-determinism of the overall game meant that it was not, in general, possible to write down a solution to the game. There were specific puzzles in the game, however, and solutions to these puzzles could be written down and shared. However, people also found other ways to solve them than I’d anticipated. For example: “A friend of mine has discovered that you can get and carry both Elrond and Bard. Carrying Elrond with you can be quite useful as he continuously distributes free lunches. And, to be honest, carrying Bard is the only way I’ve found of getting him to the Lonely Mountain. There must be a better way.” (“Letters: Gollum’s Riddle”) As commented by a retrospective, “And actually, therein sort of lies the secret to enjoying the game, and the root of its appeal in its time. It can be kind of fascinating to run around these stage sets with all of these other crazy characters just to see what can happen — and what you can make happen.” (Maher)

Inglish

While I worked on the game, Phil designed, developed and wrote the language interpreter, later dubbed Inglish. I had little interest in linguistics, so I generally tuned out the long discussions that Fred and Phil had about it – and was supported in doing so by the encapsulation and simple interface between the two “halves” of the game, which prevented me needing to know any more.

Figure 5. Opening scene from one of many foreign language versions.

Every word was stored in the dictionary, and since only 5 of a byte’s 8 bits are needed to distinguish the letters of the lower-case English alphabet, the other 3 bits were used by Phil to encode other information: the part of speech (verb, adjective, adverb, noun), valid word usages, what pattern to use when pluralizing, and so on. I’ve seen screen images from versions of the game in other languages (e.g., Figure 5), but I do not know how the translations were done or how the design worked with these other languages.
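
A rough illustration of the packing technique (the specific bit assignments here are my invention, not the game’s actual layout):

    # Hypothetical bit layout: the low 5 bits index the letter (1 = 'a' ... 26 = 'z'),
    # and the top 3 bits carry flags such as part of speech or end-of-word.
    FLAG_NOUN, FLAG_VERB, FLAG_LAST_LETTER = 0x20, 0x40, 0x80

    def pack_word(word, flags=0):
        data = bytearray()
        for i, ch in enumerate(word):
            b = (ord(ch) - ord('a') + 1) & 0x1F          # 5-bit letter code
            if i == len(word) - 1:
                b |= FLAG_LAST_LETTER | flags            # spare bits mark the word end and word class
            data.append(b)
        return bytes(data)

    def unpack_word(data):
        return "".join(chr((b & 0x1F) - 1 + ord('a')) for b in data)

    packed = pack_word("troll", flags=FLAG_NOUN)
    assert unpack_word(packed) == "troll"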

 

Phil translated player commands into simple “verb object” commands to hand to me, with some variations allowed for different action results. For example, I seem to remember that “viciously kill” would launch a more fierce attack, and use up more strength as a result, than just “kill”. Rather than a set of hard-coded messages (as was the norm), we generated the messages “on the fly” from the dictionary and a set of sentence templates. At the end of some action routine, I would have a pointer to a message template for that action. The template would contain indicators for where the variable parts of the message should be placed. I would then pass the message, the subject and object to the language engine. The engine would then generate the message, using, once again, spare bits for further customization. To take a simple example, “Gandalf gives the curious map to you” used the same template as, say, “Thorin gives the axe to the angry dwarf”.
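
A minimal sketch of template-driven message generation (the placeholder syntax and names are mine; the real engine worked from packed dictionary entries rather than Python strings):

    # Hypothetical message templates: an action routine supplies a template id plus
    # the subject, object and recipient; the language engine fills in the blanks.
    TEMPLATES = {
        "gives": "{subject} gives the {object} to {recipient}",
    }

    def render(template_id, **parts):
        return TEMPLATES[template_id].format(**parts)

    print(render("gives", subject="Gandalf", object="curious map", recipient="you"))
    print(render("gives", subject="Thorin", object="axe", recipient="the angry dwarf"))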

We were so limited by memory that we would adjust the size of the dictionary to fit the game into the desired memory size; so the number of synonyms available would sometimes decrease if a bug fix required more lines of code. It was a constant trade-off between game functionality and language richness. As a result of all the encoding, dumping memory – a common method of solving puzzles in other text adventures – provided no information for The Hobbit.

Software Development, Cro-Magnon Style

Our initial development environment was a Dick Smith TRS80 look-alike, with 5¼ inch floppy drives. Initially I believe we used a 16k machine, then a 32k, and towards the end a 48k or perhaps 64k machine. Our target machine for the game was initially a 32k TRS80. During development, the ZX Spectrum was announced, and that became our new target. Game storage was on a cassette tape, played on a regular radio-cassette player. As the other systems became available we continued using the TRS80 platform as the development environment, and Phil took on the question of how to port the game to other platforms.

We had a choice of two languages to use for development: Basic, or Assembler. We chose Assembler as we felt the added power offset the added difficulty in using the language.

During initial development, the only development tool available was a simple Notepad-like text editor, and the majority of code was written that way. Later I believe a Vi-like editor became available; even later, I have faint memories of a very early IDE that allowed us to edit, assemble the code and step through it (but that also inserted its own bugs from time to time).

We initially worked with the system’s existing random number generator, but realized that its pseudo-random nature made the game play the same way each time – against what I hoped to achieve. Phil then spent some time writing a “true” random number generator, experimenting with many sources of seed values before he was successful. He tried using the contents of various registers, but discovered that these were often the same values each time. He tried using the time, but the TRS80 did not have a built-in battery or clock, and most people did not set the time each time they started the system – so again, if someone turned the machine on and loaded the game, we would get the same results each time. After some experimentation he finally succeeded, and the game – for better or worse, and sometimes for both – became truly random.
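
The underlying problem, and one common style of remedy, can be sketched as follows (the remedy shown is an assumption for illustration, not necessarily the method Phil finally used):

    import random, time

    # A pseudo-random generator seeded with the same value replays the same game.
    rng_a, rng_b = random.Random(1234), random.Random(1234)
    assert [rng_a.randrange(6) for _ in range(5)] == [rng_b.randrange(6) for _ in range(5)]

    # One remedy is to fold in something that genuinely differs between runs, such
    # as how long the player takes to press the first key.
    def seed_from_player(wait_for_keypress):
        start = time.perf_counter_ns()
        wait_for_keypress()                   # human reaction time varies from run to run
        return time.perf_counter_ns() - start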

Debugging was a nightmare. Firstly, we were debugging machine code, initially without the advantage of an IDE; we ran the program, and when it crashed we tried to read the memory dumps. In Assembler, especially when pushing the memory limit of the system, the Basic programmer’s technique of inserting “print” statements to find out what is happening is not available. We had characters interacting with each other in distant parts of the game, and only actions in the current location were printed on the game player’s console. In one of several cases where a game feature was originally developed for other reasons, we initially wrote the “save” mechanism to help us debug parts of the game without having to start from the beginning each time. It then became part of the delivered version, allowing players to take advantage of the same function.

At some point, the idea of adding graphics came up, I think from Phil. Fred commissioned Kent Rees to draw the pictures, and Phil figured out how to draw them on the various systems; I do know that he adapted the pictures from the originals Kent provided in order to make them easier to draw. The first version of his code always drew the picture when you entered a location that had one; however, it was so slow and annoyed us (me) so much that Phil quickly added a switch to turn them off.

Sidelines

In between coding The Hobbit, we occasionally took time to work on other games. Fred would give us $20 to go and play arcade games, sometimes as often as each week, to see what other folk were doing and what the state of the art was in that industry. Someone in our group of four wrote a version of Pac-Man. We spent hours with one person playing Pac-Man, trying to get up to higher levels in the game, while the others leant over the arcade machine trying to figure out the algorithms that caused prizes to appear and how the behaviour changed across the game levels. We didn’t see it as piracy, as arcade games and home computers were at that time seen as being completely unrelated industries – it was more in the spirit of gaining ideas from another industry for application into ours.

Another game that we wrote was Penetrator (Melbourne House, 1981). Phil was the clear lead on that game while I worked on some pieces of it, and I think Kerryn may have worked on it a bit too.  It was a copy of the arcade game Scramble (Konami, 1981). Because of the speed (or lack thereof) of the processors at the time, we had to ensure that each separate path through the game took the same amount of time; even a difference of one “tstate” (processor state) between one path of an “if-then-else” to another would interfere with smooth motion, so we spent significant time calculating (by hand) the time taken by each path and choosing different Assembler instructions that would compensate for the differences (and given that “NO-op” took 2 tstates, it was not always easy). Another difficulty was getting the radars to turn smoothly, while handling the variable number of other activities taking place in the game. It took forever to get it “right”.

Figure 6. Screen shot from the game Penetrator

At the beginning we drew the screen bitmaps for all the landscapes on graph paper and then hand-calculated the hexadecimal representations of each byte for the screen buffer, but that became so tedious so quickly that Phil wrote an editor that we could use to create the landscapes. In the end the landscape editor was packaged with the game, as a feature.

Another “pressing” issue for shooter games of the time was that of keyboard debounce. At the time a computer keyboard consisted of an electrical grid, and when a key was pressed the corresponding horizontal and vertical lines would register a “high”. You checked the grid at regular intervals, and if any lines were registering high you used a map of the keyboard layout to identify the key that had been pressed. However, you had to stall for just the right amount of time before re-reading the keyboard; if you waited too long, the game seemed unresponsive, but if you read too quickly, you would read several key presses for each key press that the player intended. While it was possible to use the drivers that came with the keyboard, they did not respond quickly enough to use for interactive games. “Getting it right” was a tedious matter of spending hours fiddling with timings and testing.
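
A sketch of the scan-and-report loop in modern terms (the intervals, read_matrix and keymap are invented for illustration; the original read the hardware grid directly in Assembler):

    import time

    DEBOUNCE_SECONDS = 0.05     # too long and the game feels unresponsive; too short and one press repeats

    def poll_keyboard(read_matrix, keymap):
        """Scan the key matrix at a fixed interval and report each press once."""
        last_key, last_time = None, 0.0
        while True:
            rows = read_matrix()                       # snapshot of the electrical grid
            key = keymap.get(rows)                     # map active row/column lines to a key
            now = time.monotonic()
            if key and (key != last_key or now - last_time >= DEBOUNCE_SECONDS):
                yield key                              # hand the key press to the game loop
                last_key, last_time = key, now
            time.sleep(0.005)                          # re-scan interval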

Perhaps A Little Too Random

In addition to all the other randomness it exhibited, The Hobbit was also known to crash seemingly randomly. There were a number of reasons for this. Firstly, The Hobbit was a tough game to test. It was a much bigger game than others of the time. Unlike the other games, it was approximately 40k of hand-coded Assembler[4], as opposed to the commonly used interpreted Basic (a few more advanced games were shipped in compiled Basic). It was written without the benefit of formalized testing practices or automated test suites. The assembly and linking programs we used were also relatively new, and during development, we would find bugs in them. I remember spending hours debugging one time only to discover that the assembler had optimized away a necessary register increment, causing an infinite loop; I had a lot of trouble trying to invent a different coding sequence that prevented the assembler from removing the required increment. Altogether, I took away lessons about not letting your application get too far ahead of the ability of your infrastructure to support it.

Secondly, the game was non-deterministic; it was different every time it was played. It exhibited its own manifestation of chaos theory: small changes in starting conditions (initial game settings, all generated by the random number generator) would lead to large differences in how the game proceeded. Due to the “emergent characters”, we constantly had NPCs interacting in ways that had never been explicitly programmed and tested, or even envisioned. The game could crash because of something that happened in another location that was not visible to the player or to the person testing the game, and we might never be able to identify or recreate the sequence of actions that led to it.

It was possible to have an instance of the game that was insoluble, if a key character required to solve a specific puzzle did not survive until needed (often due to having run into a dwarf on the rampage); this was a constraint I was happy to accept, though it frustrated some players. The ability to tell the NPCs what to do also meant that people told them things to do that we hadn’t accounted for. The very generality of the game engine – the physics, the language engine, and the ability for the player to tell characters what to do – led players to interact with the game in ways I’d never thought of, and that were certainly never tested. In some cases, they were things I didn’t realize the game was capable of.

Epilogue

The Hobbit was released in 1982 in Australia and the U.K. Figure 7 shows typical packaging. It was an instant hit; amongst other awards, it won the Golden Joystick Award for Strategy Game of the Year in 1983, and came second for Best Game of the Year, after Jet-Pac. Penetrator came second in the Golden Joystick Best Arcade Game category, and Melbourne House came second for Best Software House of the Year, after Jet-Pac’s publisher (“Golden Joystick Awards”). A couple of revisions were published with some improvements, including better graphics. Due to licensing issues it was some time before a U.S. release followed. The book was still covered by copyright and so the right to release had to be negotiated with the copyright holders, which were different in each country. The U.S. copyright holder had other plans for a future game. As a result, future book-based game ideas specifically chose books (such as the Sherlock Holmes stories) that were no longer covered by copyright.

Figure 7. The Hobbit. Game release package.

At the end of 1981, I finished my Bachelor’s degree. We were beginning to discuss using the Sherlock Holmes mysteries as a next games project; I was not sure that the adventure game engine I’d developed was a good fit for the Sherlock style of puzzle solving, although there were definitely aspects that would translate across. However, I was also ready to start something new after a year of coding and debugging in Assembler. I’d proved that my ideas could work, and believed that the result Phil and I had produced was the desired one – an adventure game that solved all my frustrations with Classic Adventure, and in my mind (if not yet in other people’s) met Fred’s target of “the best adventure game ever”.

I interviewed with several major IT vendors, and took a job at IBM, as did Ray. Kerryn took a job in a mining company in Western Australia. Phil stayed on at Melbourne House (later Beam Software), the only member of our university programming team to continue on in the games industry. We eventually all lost touch.

During this time, I was unaware that the game had become a worldwide hit. Immersed in my new career, I lost touch with the nascent games industry. At IBM, I started at the same level as other graduates who had no experience with computers or programming; developing a game in Assembler was not considered professional or relevant experience. Initially I became an expert in the VM operating system (the inspiration and progenitor for VMware, I’ve heard), which I still admire for the vision, simplicity and coherence of its design, before moving into other technical and consulting positions. In late 1991 I left Australia to travel the world. I eventually stopped in Portland, Oregon, with a plan to return to Australia after 2 years – a plan that has been much delayed.

A 3-year stint in a global Digital Media business growth role for IBM U.S. in the early 2000s brought me back in contact with games developers just as the movie and games industries were moving from proprietary to open-standards-based hardware and infrastructure. The differences in development environments, with large teams and sophisticated supporting graphics and physics packages, brought home to me how far the games industry had come. But while I appreciate the physics engines and the quality of graphics that today can fool the eye into believing they are real, the basis of a good game has not changed: simple, compelling ideas still captivate and enchant people, as can be seen in the success of, for example, Angry Birds. I also believe that the constraints of limited resources – such as small memories and slow processors – can lead to a level of innovation that more generous resources do not.

And Back Again

As the Internet era developed, I started receiving letters from fans of The Hobbit. The first person I recall tracking me down emailed me with an interview request for his Italian adventure fan-site in 2001, after what he said was a long, long search. The subsequent years made it easier to locate people on the Internet, and the emails became more frequent. At times I get an email a week from people telling me the impact the game had on the course of their lives.

In 2006, the Australian Centre for the Moving Image (ACMI) held an exhibition entitled “Hits of the 80s: Aussie games that rocked the world” (Australian Centre for the Moving Image), featuring The Hobbit. It felt a little like having a museum retrospective while still alive: a moment of truth about how much things have changed, and at the same time how little. The games lab curator, Helen Stuckey, has since written a research paper about the challenge of collecting and exhibiting videogames for a museum audience, using The Hobbit as an example (Stuckey).

In late 2009 I took an education leave of absence from IBM US to study for a Masters/PhD in Computer Science at Portland State University. (IBM and I have since parted company.) When I arrived one of the PhD students, who had played The Hobbit in Mexico as a boy, recognized my name and asked me to present on it. While searching the Internet for graphics for the presentation, I discovered screen shots in many different languages and only then began to realize the worldwide distribution and impact the game had had. Being in a degree program while describing work I’d done during my previous university degree decades before caused many conflicting emotions. I was also amazed at the attendance and interest from the faculty and other students.

In 2012, the 30-year anniversary of the release, several Internet sites and magazines published retrospectives; a couple contacted me for interviews, while others worked solely from published sources. The same year I was contacted by a fan who had been inspired by a bug (“this room is too full for you to enter”) to spend time over the intervening decades in reverse-engineering the machine code into a “game debugger” of the kind I wish we’d had when we originally developed it: Wilderland (“Wilderland: A Hobbit Environment”). It runs the original game code in a Spectrum emulator, while displaying the position and state of objects and NPCs throughout the game. His eventual conclusion was that the location is left over from testing (and I even have a very vague memory of that testing). That a game I spent a year writing part-time could cause such extended devotion is humbling.

In retrospect, I think we came far closer to Fred’s goal of “the best adventure game ever” than we ever imagined we would. The game sold in many countries over many years, and by the late 1980s had sold over a million copies (DeMaria) – vastly outselling most other games of the time. During one interview, the interviewer told me that in his opinion, The Hobbit transformed the genre of text adventure games, and that it was the last major development of the genre: later games merely refined the advances made. Certainly Beam Software’s games after The Hobbit did not repeat its success.

While many of the publications, particularly at the time of release, focused on the Inglish parser, it is the characters and the richness of the gameplay that most people who contact me focus on. I believe that just as the game would have been less rich without Inglish, so any other adventure game of the time fitted with the Inglish parser would in no way have resembled the experience of playing The Hobbit, nor would it have had the same impact on the industry or on individuals.

In 2013, the Internet Archive added The Hobbit to its Historical Software Collection[5] – which, in keeping with many other Hobbit-related events, I discovered via a colleague’s email. Late that year, ACMI contacted me to invite me to join the upcoming Play It Again project[6], a game history and preservation project focused on ANZ-written digital games in the 1980s. That contact led to this paper.

As I complete this retrospective – and my PhD – I am struck again by the power a few simple ideas can have, especially when combined with each other. It’s my favorite form of innovation. In the words of one fan, written 30 years after the game’s release, “I can see what Megler was striving toward: a truly living, dynamic story where anything can happen and where you have to deal with circumstances as they come, on the fly. It’s a staggeringly ambitious, visionary thing to be attempting.” (Maher) A game that’s a fitting metaphor for life.

Disclaimer

This paper is written about events of 35 years ago, as accurately as I can remember them. With that gap in time, some errors will necessarily have crept in; I take full responsibility for them.

 

 

References

Addison-Wesley Publishing Company, Inc. The Hobbit: Guide to Middle-Earth. 1985.

Australian Centre for the Moving Image. “Hits of the 80s: Aussie Games That Rocked the World.” N.p., May 2007. Web. 24 Feb. 2014.

Crowther, Will. Colossal Cave. CRL, 1976. Print.

DeMaria, Rusel, and Johnny L. Wilson. High Score!: The Illustrated History of Electronic Games. Berkeley, Cal.: McGraw-Hill/Osborne, 2002. Print.

“Golden Joystick Awards.” Computer and Video Games Mar. 1984: 15. Print.

“Letters: Gollum’s Riddle.” Micro Adventurer Mar. 1984: 5. Print.

Maher, Jimmy. “The Hobbit.” The Digital Antiquarian. N.p., Nov. 2012. Web. 24 Feb. 2014.

Mitchell, Phil, and Veronika Megler. Penetrator. Melbourne, Australia: Beam Software / Melbourne House, 1981. Web. <Described in: http://www.worldofspectrum.org/infoseekid.cgi?id=0003649>.

—. The Hobbit. Melbourne, Australia: Beam Software / Melbourne House, 1981. Web. <Described in: http://en.wikipedia.org/wiki/The_Hobbit_%28video_game%29>.

Stuckey, Helen. “Exhibiting The Hobbit: A Tale of Memories and Microcomputers.” History of Games International Conference Proceedings. Ed. Carl Therrien, Henry Lowood, and Martin Picard. Montreal: Kinephanos, 2014. Print.

Tolkien, J. R. R. The Hobbit, Or, There and Back Again. Boston: Houghton Mifflin, 1966. Print.

“Wilderland: A Hobbit Environment.” N.p., 2012. Web. 24 Feb. 2014.

 

 

Notes:

[1] https://en.wikipedia.org/wiki/HIPO

[2] http://solearther.tumblr.com/post/38456362341/thorin-sits-down-and-starts-singing-about-gold

[3] http://www.filfre.net/2012/11/the-hobbit/

[4] An analysis by the Wilderland project (“Wilderland: A Hobbit Environment”) shows the following code breakdown: game engine and game, 36%; text engine for input and output, the dictionary, the graphics engine, and the parser, 22%; graphics data, 25%; character set, 3%; buffers, 8%; and 6% as yet unidentified.

[5] https://archive.org/details/The_Hobbit_v1.0_1982_Melbourne_House

[6] https://www.acmi.net.au/collections-research/research-projects/play-it-again/

 

Bio

Veronika M. Megler now works for Amazon Web Services in the U.S. as a Senior Consultant in Big Data and Analytics. She recently completed her PhD in Computer Science at Portland State University, working with Dr. David Maier in the emerging field of “Smarter Planet” and big data. Her dissertation research enables Information-Retrieval-style search over scientific data archives. Prior to her PhD, she helped clients of IBM U.S. and Australia adopt a wide variety of emerging technologies. She has published more than 20 industry technical papers and 10 research papers on applications of emerging technologies to industry problems, and holds two patents, including one on her dissertation research. Her interests include applications of emerging technologies, big data and analytics, scientific information management and spatio-temporal data. Ms. Megler was in the last year of her B.Sc. studies at Melbourne University when she co-wrote The Hobbit. She currently lives in Portland, Oregon, and can be reached at vmegler@gmail.com.

It Is What It Is, Not What It Was – Henry Lowood

Abstract: The preservation of digital media in the context of heritage work is both seductive and daunting. The potential replication of human experiences afforded by computation and realised in virtual environments is the seductive part. The work involved in realising this potential is the daunting side of digital collection, curation, and preservation. In this lecture, I will consider two questions. First, is the lure of perfect capture of data or the reconstruction of “authentic” experiences of historical software an attainable goal? And if not, how might reconsidering the project as moments of enacting rather than re-enacting provide a different impetus for making born digital heritage?

Keynote address originally delivered at the Born Digital and Cultural Heritage Conference, Melbourne, 19 June 2014

Let’s begin with a question. When did libraries, archives, and museums begin to think about software history collections? The answer: In the late 1970s. The Charles Babbage Institute (CBI) and the History of Computing Committee of the American Federation of Information Processing Societies (AFIPS), soon to be a sponsor of CBI, were both founded in 1978. The AFIPS committee produced a brochure called “Preserving Computer-Related Source Materials.” Distributed at the National Computer Conference in 1979, it is the earliest statement I have found about preserving software history. It says,

If we are to fully understand the process of computer and computing developments as well as the end results, it is imperative that the following material be preserved: correspondence; working papers; unpublished reports; obsolete manuals; key program listings used to debug and improve important software; hardware and componentry engineering drawings; financial records; and associated documents and artifacts. (“Preserving …” 4)

Mostly paper records. The recommendations say nothing about data files or executable software, only nodding to the museum value of hardware artefacts for “esthetic and sentimental value.” The brochure says that artefacts provide “a true picture of the mind of the past, in the same way as the furnishings of a preserved or restored house provides a picture of past society.” One year later, CBI received its first significant donation of books and archival documents from George Glaser, a former president of AFIPS. Into the 1980s history of computing collections meant documentation: archival records, publications, ephemera and oral histories.

Software preservation trailed documentation and historical projects by a good two decades. The exception was David Bearman, who left the Smithsonian in 1986 to create a company called Archives & Museum Informatics (AHI). He began publishing the Archival Informatics Newsletter in 1987 (later called Archives & Museum Informatics). As one of its earliest projects, AHI drafted policies and procedures for a “Software Archives” at the Computer History Museum (CHM) then located in Boston. By the end of 1987, Bearman published the first important study of software archives under the title Collecting Software: A New Challenge for Archives & Museums. (Bearman, Collecting Software; see also Bearman, “What Are/Is Informatics?”)

In his report, Bearman alternated between frustration and inspiration. Based on a telephone survey of companies and institutions, he wrote that “the concept of collecting software for historical research purposes had not occurred to the archivists surveyed; perhaps, in part, because no one ever asks for such documentation!” (Bearman, Collecting Software 25-26.) He learned that nobody he surveyed was planning software archives. Undaunted, he produced a report that carefully considered software collecting as a multi-institutional endeavor, drafting collection policies and selection criteria, use cases, a rough “software thesaurus” to provide terms for organizing a software collection, and a variety of practices and staffing models. Should some institution accept the challenge, here were tools for the job.

Well, here we are, nearly thirty years later. We can say that software archives and digital repositories finally exist. We have made great progress in the last decade with respect to repository technology and collection development. Looking back to the efforts of the 1980s, one persistent issue raised as early as the AFIPS brochure in 1978 is the relationship between collections of historical software and archival documentation about that software. This is an important issue. Indeed, it is today, nearly forty years later, still one of the key decision points for any effort to build research collections aiming to preserve digital heritage or serve historians of software. Another topic that goes back to Bearman’s report is a statement of use cases for software history. Who is interested in historical software and what will they do with it? Answers to this fundamental question must continue to drive projects in digital preservation and software history.

As we consider the potential roles to be played by software collections in libraries and museums, we immediately encounter vexing questions about how researchers of the future will use ancient software. Consider that using historical software now in order to experience it in 2014 and running that software in 2014 to learn what it was like when people operated it thirty years ago are two completely different use cases. This will still be true in 2050. This may seem like an obvious point, but it is important to understand its implications. An analogy might help. I am not just talking about the difference between watching “Gone with the Wind” at home on DVD versus watching it in a vintage movie house in a 35mm print – with or without a live orchestra. Rather I mean the difference between my experience in a vintage movie house today – when I can find one – and the historical experience of, say, my grandfather during the 1930s. My experience is what it is, not what his was. So much of this essay will deal with the complicated problem of enacting a contemporary experience to re-enact a historical experience and what it has to do with software preservation. I will consider three takes on this problem: the historian’s, the media archaeologist’s, and the re-enactor’s.

Take 1. The Historian

Take one. The historian. Historians enact the past by writing about it. In other words, historians tell stories. This is hardly a revelation. Without meaning to trivialize the point, I cannot resist pointing out that “story” is right there in “hi-story” or that the words for story and history are identical in several languages, including French and German. The connections between story-telling and historical narrative have long been a major theme in writing about the methods of history, that is, historiography. In recent decades, this topic has been mightily influenced by the work of Hayden White, author of the much-discussed Metahistory: The Historical Imagination in Nineteenth-Century Europe, published in 1973.

White’s main point about historians is that History is less about subject matter and source material and more about how historians write.

He tells us that historians do not simply arrange events culled from sources in correct chronological order. Such arrangements White calls Annals or Chronicles. The authors of these texts merely compile lists of events. The work of the historian begins with the ordering of these events in a different way. White writes in The Content of the Form that in historical writing, “the events must be not only registered within the chronological framework of their original occurrence but narrated as well, that is to say, revealed as possessing a structure, an order of meaning, that they do not possess as mere sequence.” (White, Content of the Form 5) How do historians do this? They create narrative discourses out of sequential chronicles by making choices. These choices involve the form, effect and message of their stories. White puts choices about form, for example, into categories such as argument, ideology and emplotment. There is no need in this essay to review all of the details of every such choice. The important takeaway is that the result of these choices by historians is sense-making through the structure of story elements, use of literary tropes and emphasis placed on particular ideas. In a word, plots. White thus gives us the enactment of history as a form of narrative or emplotment that applies established literary forms such as comedy, satire, and epic.

In his book Figural Realism: Studies in the Mimesis Effect, White writes about the “events, persons, structures and processes of the past” that “it is not their pastness that makes them historical. They become historical only in the extent to which they are represented as subjects of a specifically historical kind of writing.” (White, Figural Realism 2.) It is easy to take away from these ideas that history is a kind of literature. Indeed, this is the most controversial interpretation of White’s historiography.

My purpose in bringing Hayden White to your attention is to insist that there is a place in game and software studies for this “historical kind of writing.” I mean writing that offers a narrative interpretation of something that happened in the past. Game history and software history need more historical writing that has a point beyond adding events to the chronicles of game development or putting down milestones of the history of the game industry. We are only just beginning to see good work that pushes game history forward into historical writing and produces ideas about how these historical narratives will contribute to allied works in fields such as the history of computing or the history of technology more generally.

Allow me one last point about Hayden White as a take on enactment. Clearly, history produces narratives that are human-made and human-readable. They involve assembling story elements and choosing forms. How then do such stories relate to actual historical events, people, and artifacts? Despite White’s fondness for literary tropes and plots, he insists that historical narrative is not about imaginary events. If historical methods are applied properly, the resulting narrative according to White is a “simulacrum.” He writes in his essay on “The Question of Narrative in Contemporary Historical Theory,” that history is a “mimesis of the story lived in some region of historical reality, and insofar as it is an accurate imitation, it is to be considered a truthful account thereof.” (White, “The Question of Narrative …” 3.) Let’s keep this idea of historical mimesis in mind as we move on to takes two and three.

Take 2. The Media Archaeologist

My second take is inspired by the German media archaeologist Wolfgang Ernst. As with Hayden White, my remarks will fall far short of a critical perspective on Ernst’s work. I am looking for what he says to me about historical software collections and the enactment of media history.

Hayden White put our attention on narrative; enacting the past is storytelling. Ernst explicitly opposes Media Archaeology to historical narrative. He agrees in Digital Memory and the Archive that “Narrative is the medium of history.” By contrast, “the technological reproduction of the past … works without any human presence because evidence and authenticity are suddenly provided by the technological apparatus, no longer requiring a human witness and thus eliminating the irony (the insight into the relativity) of the subjective perspective.” (Ernst, Loc. 1053-1055) Irony, it should be noted, is one of White’s favourite tropes for historical narrative.

White tells us that historical enactment is given to us as narrative mimesis, with its success given as the correspondence of history to some lived reality. Ernst counters by giving us enactment in the form of playback.

In an essay called “Telling versus Counting: A Media-Archaeological Point of View,” Ernst plays with the notion that, “To tell as a transitive verb means ‘to count things’.” The contrast with White here relates to the difference in the German words erzählen (narrate) and zählen (count), but you also find it in English: recount and count. Ernst describes historians as recounters: “Modern historians … are obliged not just to order data as in antiquaries but also to propose models of relations between them, to interpret plausible connections between events.” (Ernst, Loc. 2652-2653) In another essay, aptly subtitled “Method and Machine versus the History and Narrative of Media,” Ernst adds that mainstream histories of technology and mass media as well as their counter-histories are textual performances that follow “a chronological and narrative ordering of events.” He observes succinctly that, “It takes machines to temporarily liberate us from such limitations.” (Ernst, Loc. 1080-1084)

Where do we go with Ernst’s declaration in “Telling versus Counting,” that “There can be order without stories”? We go, of course, directly to the machines. For Ernst, media machines are transparent in their operation, an advantage denied to historians. We play back historical media on historical machines, and “all of a sudden, the historian’s desire to preserve the original sources of the past comes true at the sacrifice of the discursive.” We are in that moment directly in contact with the past.

In “Method and Machine”, Ernst offers the concept of “media irony” as a response to White’s trope of historical irony. He says,

Media irony (the awareness of the media as coproducers of cultural content, with the medium evidently part of the message) is a technological modification of Hayden White’s notion that “every discourse is always as much about discourse itself as it is about the objects that make up its subject matter.” (Ernst, Loc. 1029-1032)

As opposed to recounting, counting in Ernst’s view has to do with the encoding and decoding of signals by media machines. Naturally, humans created these machines. This might be considered another irony, because humans have thereby “created a discontinuity with their own cultural regime.” We are in a realm that replaces narrative with playback as a form of direct access to a past defined by machine sequences rather than historical time. (Ernst, Loc. 1342-1343)

Ernst draws implications from media archaeology for his closely connected notion of the multimedia archive. In “Method and Machine,” he says, “With digital archives, there is, in principle, no more delay between memory and the present but rather the technical option of immediate feedback, turning all present data into archival entries and vice versa.” In “Telling versus Counting,” he portrays “a truly multimedia archive that stores images using an image-based method and sound in its own medium … And finally, for the first time in media history, one can archive a technological dispositive in its own medium.” (Ernst, Loc. 1745-1746; 2527-2529) Not only is the enactment of history based on playback inherently non-discursive, but the very structure of historical knowledge is written by machines.

With this as background, we can turn to the concrete manifestation of Ernst’s ideas about the Multimedia Archive. This is the lab he has created in Berlin. The website for Ernst’s lab describes The Media Archaeological Fundus (MAF) as “a collection of various electromechanical and mechanical artefacts as they developed throughout time. Its aim is to provide a perspective that may inspire modern thinking about technology and media within its epistemological implications beyond bare historiography.” (Media Archaeological Fundus) Ernst explained the intention behind the MAF in an interview with Lori Emerson as deriving from the need to experience media “in performative ways.” So he created an assemblage of media and media technologies that could be operated, touched, manipulated and studied directly. He said in this interview, “such items need to be displayed in action to reveal their media essentiality (otherwise a medium like a TV set is nothing but a piece of furniture).” (Owens) Here is media archaeology’s indirect response to the 1979 AFIPS brochure’s suggestion that historical artifacts serve a purpose similar to furnishings in a preserved house.

The media-archaeological take on enacting history depends on access to artifacts and, in its strongest form, on their operation. Even when its engagement with media history is reduced to texts, these must be “tested against the material evidence.” This is the use case for Playback as an enactment of software history.

Take 3. The Re-enactor

Take three. The Re-enactor. Authenticity is an important concept for digital preservation. A key feature of any digital archive over the preservation life-cycle of its documents and software objects is the auditing and verification of authenticity, as in any archive. Access also involves authenticity: any discussion of emulation or virtualization will bring up the question of fidelity to an historical experience of using software.

John Walker (of Autodesk and Virtual Reality fame) created a workshop called Fourmilab to work on personal projects such as an on-line museum “celebrating” Charles Babbage’s Analytical Engine. This computer programming heritage work includes historical documents and a Java-based emulator of the Engine. Walker says, “Since we’re fortunate enough to live in a world where Babbage’s dream has been belatedly realised, albeit in silicon rather than brass, we can not only read about The Analytical Engine but experience it for ourselves.” The authenticity of this experience – whatever that means for a machine that never existed – is important to Walker. In a 4500-word essay titled “Is the Emulator Authentic?”, he tells us that, “In order to be useful, an emulator program must be authentic—it must faithfully replicate the behaviour of the machine it is emulating.” By extension, the authenticity of a preserved version of the computer game DOOM in a digital repository could be audited by verifying that it can properly run a DOOM demo file. The same is true for Microsoft Word and a historical document in the Word format. This is a machine-centered notion of authenticity; we used it in the second Preserving Virtual Worlds project as a solution to the significant properties problem for software. (Walker, “Introduction;” Walker, “Analytical Engine.”)

All well and good. However, I want to address a different authenticity. Rather than judging authenticity in terms of playback, I would like to ask what authenticity means for the experience of using software. Another way of putting this question is to ask what we are looking for in the re-enactment of historical software use. So we need to think about historical re-enactment.

I am not a historical re-enactor, at least not the kind you are thinking of. I have never participated in the live recreation or performance of a historical event. Since I have been playing historical simulations – a category of boardgames – for most of my life, perhaps you could say that I re-enact being a historical military officer by staring at maps and moving units around on them. It’s not the same thing as wearing period uniforms and living the life, however.

Anyway, I need a re-enactor. In his 1998 book Confederates in the Attic, Tony Horwitz described historical re-enactment in its relationship to lived heritage. (Horwitz) His participant-journalist reportage begins with a chance encounter with a group of “hard-core” Confederate re-enactors. Their conversation leads Horwitz on a year-long voyage through the American South. A featured character in Confederates in the Attic is the re-enactor Robert Lee Hodge, a waiter turned Confederate officer. He took Horwitz under his wing and provided basic training in re-enactment. Hodge even became a minor celebrity due to his role in the book.

Hodge teaches Horwitz the difference between hard-core and farby (i.e., more casual) re-enactment. He tells Horwitz about dieting to look sufficiently gaunt and malnourished, the basics of “bloating” to resemble a corpse on the battlefield, what to wear, what not to wear, what to eat, what not to eat, and so on. It’s remarkable how little time he spends on martial basics. One moment sticks out for me. During the night after a hard day of campaigning, Horwitz finds himself in the authentic situation of being wet, cold and hungry. He lacks a blanket, so he is given basic instruction in the sleeping technique of the Confederate infantryman: “spooning.” According to the re-enactor Scott Cross, “Spooning is an old term for bundling up together in bed like spoons placed together in the silver chest.” (Horwitz) Lacking adequate bedding and exposed to the elements, soldiers bunched up to keep warm. So that’s what Horwitz does, not as an act of mimesis or performance per se, but in order to re-experience the reality of Civil War infantrymen.

It interests me that, of all the re-enactment activities Horwitz put himself through, spooning reveals a deeper commitment to authenticity than any of the combat performances he describes. It’s uncomfortable and awkward, and so requires dedication and persistence. Sleep becomes self-conscious, not just in order to stick with the activity, but because the point of it is to recapture a past experience of sleeping on the battlefield. Since re-enacting a battle requires far more participants than re-enacting sleep, more farbs (the less dedicated re-enactors) show up and the general level of engagement declines. During staged battles, spectators, scripting, confusion and accidents all interfere with the experience. Immersion breaks whenever dead soldiers pop up on the command, “resurrect.” In other words, performance takes primacy over the effort to re-experience. It is likely that many farbs dressed up for battle are content to find a hotel to sleep in.

Specific attention to the details of daily life might be a reflection of recent historical work that emphasizes social and cultural histories of the Civil War period, rather than combat histories. But that’s not my takeaway from the spooning re-enactors. Rather, it’s the standard of authenticity that goes beyond performance of a specific event (such as a battle) to include life experience as a whole. Horwitz recalled that,

Between gulps of coffee—which the men insisted on drinking from their own tin cups rather than our ceramic mugs—Cool and his comrades explained the distinction. Hardcores didn’t just dress up and shoot blanks. They sought absolute fidelity to the 1860s: its homespun clothing, antique speech patterns, sparse diet and simple utensils. Adhered to properly, this fundamentalism produced a time travel high, or what hardcores called a ‘period rush.’ (Horwitz, Loc. 153-157)

Stephen Gapps, an Australian curator, historian, and re-enactor, has spoken of the “extraordinary lengths” re-enactors go to “acquire and animate the look and feel of history.” Hard-core is not just about marching, shooting and swordplay. I wonder what a “period rush” might be for the experience of playing Pitfall! in the mid-21st century. Shag rugs? Ambient New Wave radio? Caffeine-free cola? Will future re-enactors of historical software seek this level of experiential fidelity? Gapps, again: “Although reenactors invoke the standard of authenticity, they also understand that it is elusive – worth striving for, but never really attainable.” (Gapps 397)

Re-enactment offers a take on born-digital heritage that proposes a commitment to lived experience. I see some similarity here with the correspondence to lived historical experience in White’s striving for a discursive mimesis. Yet, like media archaeology, re-enactment puts performance above discourse, though it is the performance of humans rather than machines.

Playing Pitfalls

We now have three different ways to think about potential uses of historical software and born digital documentation. I will shift my historian’s hat to one side of my head now and slide up my curator’s cap. If we consider these takes as use cases, do they help us decide how to allocate resources to acquire, preserve, describe and provide access to digital collections?

In May 2013, the National Digital Information Infrastructure and Preservation Program (NDIIPP) of the U.S. Library of Congress (henceforth: LC) held a conference called Preserving.exe. The agenda was to articulate the “problems and opportunities of software preservation.” In my contribution to the LC conference report issued a few months later, I described three “lures of software preservation.” (Lowood) These are potential pitfalls as we move from software collections to digital repositories and from there to programs of access to software collections. The second half of this paper will be an attempt to introduce the three lures of software preservation to the three takes on historical enactment.

1. The Lure of the Screen

Let’s begin with the Lure of the Screen. This is the idea that what counts in digital media is what is delivered to the screen. This lure pops up in software preservation when we evaluate significant properties of software as surface properties (graphics, audio, haptics, etc).

This lure of the screen is related to what media studies scholars such as Nick Montfort, Mark Sample and Matt Kirschenbaum have dubbed (in various but related contexts) “screen essentialism.” If the significant properties of software are all surface properties, then our perception of interaction with software tells us all we need to know. We check graphics, audio, responses to our use of controllers, etc., and if they look and act as they should, we have succeeded in preserving an executable version of historical software. These properties are arguably the properties that designers consider as the focus of user interaction and they are the easiest to inspect and verify directly.

The second Preserving Virtual Worlds project was concerned primarily with identifying significant properties of interactive game software. On the basis of several case sets and interviews with developers and other stakeholders, we concluded that isolating surface properties, such as image colourspace, while significant for other media such as static images, is not a particularly useful approach to take for game software. With interactive software, significance appears to be variable and contextual, as one would expect from a medium in which content is expressed through a mixture of design and play, procedurality and emergence. Significantly, software abstraction levels are not “visible” on the surface of play. It is difficult if not impossible to monitor procedural aspects of game design and mechanics, programming and technology by inspecting properties expressed on the screen.

The preservation lifecycle for software is likely to include data migration. Access to migrated software will probably occur through emulation. How do we know when our experience of this software is affected by these practices? One answer is that we audit significant properties, and as we now know, it will be difficult to predict which characteristics are significant. An alternative or companion approach for auditing the operation of historical software is to verify the execution of data files. The integrity of the software can be evaluated by comparison to documented disk images or file signatures such as hashes or checksums. However, when data migration or delivery environments change the software or its execution environment, this method is inadequate. We must evaluate software performance. Instead of asking whether the software “looks right,” we can check if it runs verified data-sets that meet the specifications of the original software. Examples range from word processing documents to saved game and replay files. Of course, visual inspection of the content plays a role in verifying execution by the software engine; failure will not always be clearly indicated by crashes or error messages. Eliminating screen essentialism does not erase surface properties altogether.
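As a concrete illustration of the bit-level half of this audit, here is a minimal sketch that computes a checksum for a preserved disk image and compares it to a digest documented in the repository's records. The file name and the recorded digest are hypothetical, and, as the paragraph above notes, a matching checksum only establishes that the bits are the ones the repository documented; it says nothing about whether migrated or emulated software still performs correctly.

```python
# Minimal sketch: bit-level fixity check for a preserved disk image.
# The file name and the documented digest below are hypothetical placeholders.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_fixity(image: Path, documented_digest: str) -> bool:
    """Compare the computed digest against the value documented by the repository."""
    return sha256_of(image) == documented_digest.strip().lower()


if __name__ == "__main__":
    image = Path("preserved_disk_image.img")          # hypothetical object
    documented = "documented-sha256-value-goes-here"  # from repository metadata
    if not image.exists():
        print("no disk image found; nothing to audit")
    elif verify_fixity(image, documented):
        print("fixity intact: image matches documented digest")
    else:
        print("fixity FAILED: image does not match documented digest")
```

Checksum comparison of this kind covers only the stored object; evaluating the performance of the software, for example by running a saved game or replay file, remains a separate and harder verification step.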

The three takes compel us to think about the screen problem in different ways. First, the Historian is not troubled by screen essentialism. His construction of a narrative mimesis invokes a selection of source materials that may or may not involve close reading of personal gameplay, let alone focus on surface properties. On the other hand, The Re-enactor’s use of software might lead repositories to fret about what the user sees, hears and feels. It makes sense with this use case to think about the re-enactment as occurring at the interface. If a repository aims to deliver a re-enacted screen experience, it will need to delve deeply into questions of significant properties and their preservation.

Screen essentialism is also a potential problem for repositories that follow the path of Media Archaeology. It is unclear to me how a research site like the MAF would respond to digital preservation practices based on data migration and emulation. Can repositories meet the requirements of media archaeologists without making a commitment to preservation of working historical hardware to enable playback from original media? It’s not just that correspondence to surface characteristics is a significant property for media archaeologists. Nor is the Lure of the Screen a criticism of Media Archaeology. I propose instead that it is a research problem. Ernst’s vision of a Multimedia Archive is based on the idea that media archaeology moves beyond playback to reveal mechanisms of counting. This machine operation clearly is not a surface characteristic. Ernst would argue, I think, that this counting is missed by an account of what is seen on the screen. So let’s assign the task of accounting for counting to the Media Archaeologist, which means showing us how abstraction layers in software below the surface can be revealed, audited and studied.

2. The Lure of the Authentic Experience

I have already said quite a bit about authenticity. Let me explain now why I am sceptical about an authentic experience of historical software, and why this is an important problem for software collections.

Everyone in game or software studies knows about emulation. Emulation projects struggle to recreate an authentic experience of operating a piece of software such as playing a game. Authenticity here means that the experience of using the software today is like it was in the past. The Lure of the Authentic Experience tells digital repositories at minimum not to preserve software in a manner that would interfere with the production of such experiences. At maximum, repositories deliver authentic experiences, whether on-site or on-line. A tall order. In the minimum case, the repository provides the software and collects documentation of hardware specifications, drivers and support programs; researchers use this documentation to reconstruct the historical look-and-feel of the software to which they have access. In the maximum case, the repository designs and builds access environments. Using the software authentically would then probably mean a trip to the library or museum with historical or bespoke hardware. The reading room becomes the site of the experience.

I am not happy to debunk the Authentic Experience. Authenticity is a concept fraught not just with intellectual issues, but with registers ranging from nostalgia and fandom to immersion and fun. It is a minefield. The first problem is perhaps an academic point, but nonetheless important: Authenticity is always constructed. Whose lived experience counts as “authentic” and how has it been documented? Is the best source a developer’s design notes? The memory of someone who used the software when it was released? A marketing video? The researcher’s self-reflexive use in a library or museum? If a game was designed for kids in 1985, do you have to find a kid to play it in 2050? In the case of software with a long history, such as Breakout or Microsoft Word, how do we account for the fact that the software was used on a variety of platforms – do repositories have to account for all of them? For example, does the playing of DOOM “death match” require peer-to-peer networking on a local area network, a mouse-and-keyboard control configuration and a CRT display? There are documented cases of different configurations of hardware: trackballs, hacks that enabled multiplayer via TCP/IP, monitors of various shapes and sizes, and so on. Which differences matter?

A second problem is that the Authentic Experience is not always that useful to the researcher, especially the researcher studying how historical software executes under the hood. The emulated version of a software program often compensates for its lack of authenticity by offering real-time information about system states and code execution. The trade-off for losing authenticity, then, is that the emulator can show the underlying machine operation, the counting, if you will. What questions will historians of technology, practitioners of code studies or game scholars ask about historical software? I suspect that many researchers will be as interested in how the software works as in a personal experience deemed authentic. As for more casual appreciation, the Guggenheim’s Seeing Double exhibition and Margaret Hedstrom’s studies of emulation suggest that exhibition visitors actually prefer reworked or updated experiences of historical software. (Hedstrom, Lee, et al.; Jones)

This is not to say that original artefacts – both physical and “virtual” – will not be a necessary part of the research process. Access to original technology provides evidence regarding its constraints and affordances. I put this to you not as a “one size fits all” decision but as an area of institutional choice based on objectives and resources.

The Re-enactor, of course, is deeply committed to the Authentic Experience. If all we offer is emulation, what do we say to him besides “sorry”? Few digital repositories will be preoccupied with delivering authentic experiences as part of their core activity. The majority are likely to consider a better use of limited resources to be ensuring that validated software artefacts and contextual information are available on a case-by-case basis to researchers who do the work of re-enactment. Re-enactors will make use of documentation. Horwitz credits Robert Lee Hodge with an enormous amount of research time spent at the National Archives and Library of Congress. Many hours of research with photographs and documents stand behind his re-enactments. In short, repositories should let re-enactors be the re-enactors.

Consider this scenario for software re-enactment. You are playing an Atari VCS game with the open-source Stella emulator. It bothers you that viewing the game on your LCD display differs from the experience with a 1980s-era television set. You are motivated by this realization to contribute code to the Stella project for emulating a historical display. It is theoretically possible that you could assemble everything needed to create an experience that satisfies you – an old television, adapters, an original VCS, the software, etc. (Let’s not worry about the shag rug and the lava lamp.) You can create this personal experience on your own, then write code that matches it. My question: Is the result less “authentic” if you relied on historical documentation such as video, screenshots, technical specifications, and other evidence available in a repository to describe the original experience? My point is that repositories can cooperatively support research by re-enactors who create their version of the experience. Digital repositories should consider the Authentic Experience as more of a research problem than a repository problem.

3. The Lure of the Executable

The Lure of the Executable evaluates software preservation in terms of success at building collections of software that can be executed on-demand by researchers.

Why do we collect historical software? Of course, the reason is that computers, software, and digital data have had a profound impact on virtually every aspect of recent history. What should we collect? David Bearman’s answer in 1987 was the “software archive.” He distinguished this archive from what I will call the software library. The archive assembles documentation; the library provides historical software. The archive was a popular choice in the early days. Margaret Hedstrom reported that attendees at the 1990 Arden Conference on the Preservation of Microcomputer Software “debated whether it was necessary to preserve software itself in order to provide a sense of ‘touch and feel’ or whether the history of software development could be documented with more traditional records.” (Hedstrom and Bearman) In 2002, the Smithsonian’s David Allison wrote about collecting historical software in museums that, “supporting materials are often more valuable for historical study than code itself. They provide contextual information that is critical to evaluating the historical significance of the software products.” He concluded that operating software is not a high priority for historical museums. (Allison 263-65; cf. Shustek)

Again, institutional resources are not as limitless as the things we would like to do with software. Curators must prioritize among collections and services. The choice between software archive and library is not strictly binary, but choices still must be made.

I spend quite a bit of my professional life in software preservation projects. The end-product of these projects is at least in part the library of executable historical software. I understand the Lure of the Executable and the reasons that compel digital repositories to build collections of verified historical software that can be executed on-demand by researchers. This is the Holy Grail of digital curation with respect to software history. What could possibly be wrong with this mission, if it can be executed? As I have argued on other occasions, there are several problems to consider. Let me give you two. The first is that software does not tell the user very much about how it has previously been used. In the best case, application software in its original use environment might display a record of files created by previous users, such as a list of recently opened files found in many productivity titles like Microsoft Office. The more typical situation is that software is freshly installed from data files in the repository and thus completely lacks information about its biography, for want of a better term.

The second, related problem is fundamental. The documentation that is a prerequisite for historical studies of software is rarely located in the software itself. It is more accurate to say that this documentation surrounds software in development archives (including source code) and records of use and reception. It is important to understand that this is not just a problem for historical research. Documentation is also a problem for repositories. If contextual information such as software dependencies or descriptions of relationships among objects is not available to the repository, and all the retired software engineers who knew the software inside and out are gone, it may be impossible to get old software to run.
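To make the second problem concrete, the following sketch shows the kind of contextual record a repository might keep alongside a software object so that dependencies and relationships are not lost when the engineers who knew them are gone. The field names, identifiers and values are hypothetical illustrations, not a metadata standard.

```python
# Hypothetical contextual metadata for a preserved software object.
# Field names, identifiers and values are illustrative only, not a repository standard.
preserved_object = {
    "title": "Example word processor",             # hypothetical title
    "version": "2.1",
    "platform": "MS-DOS 3.3",                      # operating-system dependency
    "hardware": ["8086-class PC", "CGA display"],  # hardware the software assumes
    "dependencies": ["hypothetical runtime library 1.0"],
    "related_objects": {
        "source_code": "accession-1987-042",       # hypothetical accession numbers
        "user_manual": "accession-1987-043",
    },
    "rights": "in copyright; on-site research use only",
}

# A curator's script could flag the gaps that make old software hard to run,
# such as missing platform or dependency information.
missing = [field for field in ("platform", "dependencies") if not preserved_object.get(field)]
print("missing context:", missing or "none")
```

The exact schema matters less than the point that this information lives outside the executable and must be collected deliberately.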

Historians, of course, will usually be satisfied with the Archive. Given limited resources, is it reasonable to expect that the institutions responsible for historical collections of documentation will be able to reconcile such traditional uses with other methods of understanding historical computing systems? The Re-enactor will want to run software, and the Media Archaeologist will not just want access to a software library, but to original media and hardware in working order. These are tall orders for institutional repositories such as libraries and archives, though possibly a better fit to the museum or digital history centre.

In Best Before: Videogames, Supersession and Obsolescence, James Newman is not optimistic about software preservation and he describes how the marketing of software has in some ways made this a near impossibility. He is not as pessimistic about video game history, however. In a section of his book provocatively called “Let Videogames Die,” he argues that a documentary approach to gameplay might be a more pragmatic enterprise than the effort to preserve playable games. He sees this as a “shift away from conceiving of play as the outcome of preservation to a position that acknowledges play as an indivisible part of the object of preservation.” (Newman 160) In other words, what happens when we record contemporary use of software to create historical documentation of that use? Does this activity potentially reduce the need for services that provide for use at any given time in the future? This strikes me as a plausible historical use case, but not one for re-enactment or media archaeology.

Software archives or software libraries? That is the question. Is it nobler to collect documentation or to suffer the slings and arrows of outrageous software installations? The case for documentation is strong. The consensus among library and museum curators (including myself) is almost certainly that documents from source code to screenshots are a clear win for historical studies of software. Historians, however, will not be the only visitors to the archive. But there are other reasons to collect documentation. One of the most important reasons, which I briefly noted above, is that software preservation requires such documentation. In other words, successful software preservation activities are dependent upon technical, contextual and rights documentation. And of course, documents tell re-enactors how software was used and can help media archaeologists figure out what their machines are showing or telling them. But does documentation replace the software library? Is it sufficient to build archives of software history without libraries of historical software? As we have seen, this question was raised nearly forty years ago and remains relevant today. My wish is that this question of the relationship between documentation and software as key components of digital heritage work stir conversation among librarians, historians, archivists and museum curators. This conversation must consider that there is likely to be a broad palette of use cases such as the historian, media archaeologist and re-enactor, as well as many others not mentioned here. It is unlikely that any one institution can respond to every one of these use cases. Instead, the more likely result is a network of participating repositories, each of which will define priorities and allocate resources according to both their specific institutional contexts and an informed understanding of the capabilities of partner institutions.

 

References

Allison, David K. “Preserving Software in History Museums: A Material Culture Approach.” History of Computing: Software Issues. Ed. Ulf Hashagen, Reinhard Keil-Slawik and Arthur L. Norberg. Berlin: Springer, 2002. 263-272.

Bearman, David. Collecting Software: A New Challenge for Archives and Museums. Archival Informatics Technical Report #2 (Spring 1987).

— “What Are/Is Informatics? And Especially, What/Who is Archives & Museum Informatics?” Archival Informatics Newsletter 1:1 (Spring 1987): 8.

Cross, Scott. “The Art of Spooning.” Atlantic Guard Soldiers’ Aid Society. 13 July 2016. Web. http://www.agsas.org/howto/outdoor/art_of_spooning.shtml. Originally published in The Company Wag 2, no. 1 (April 1989).

Ernst, Wolfgang. Digital Memory and the Archive. (Minneapolis: Univ. Minnesota Press, 2012). Kindle edition.

Gapps, Stephen. “Mobile monuments: A view of historical reenactment and authenticity from inside the costume cupboard of history.” Rethinking History: The Journal of Theory and Practice, 13:3 (2009): 395-409.

Hedstrom, Margaret L., Christopher A. Lee, Judith S. Olson and Clifford A. Lampe, “‘The Old Version Flickers More’: Digital Preservation from the User’s Perspective.” The American Archivist, 69: 1 (Spring – Summer 2006): 159-187.

Hedstrom, Margaret L., and David Bearman, “Preservation of Microcomputer Software: A Symposium,” Archives and Museum Informatics 4:1 (Spring 1990): 10.

Horwitz, Tony. Confederates in the Attic: Dispatches from the Unfinished Civil War. New York: Pantheon Books, 1998. Kindle Edition.

Jones, Caitlin. “Seeing Double: Emulation in Theory and Practice. The Erl King Study.” Paper presented to the Electronic Media Group, 14 June 2004. Web. http://cool.conservation-us.org/coolaic/sg/emg/library/pdf/jones/Jones-EMG2004.pdf

Lowood, Henry. “The Lures of Software Preservation.” Preserving.exe: Toward a National Strategy for Software Preservation (October 2013): 4-11. Web. http://www.digitalpreservation.gov/multimedia/documents/PreservingEXE_report_final101813.pdf

Media Archaeological Fundus. Web. 21 Jan. 2016. http://www.medienwissenschaft.hu-berlin.de/medientheorien/fundus/media-archaeological-fundus

Newman, James. Best Before: Videogames, Supersession and Obsolescence. London: Routledge, 2012.

Owens, Trevor. “Archives, Materiality and the ‘Agency of the Machine’: An Interview with Wolfgang Ernst.” The Signal: Digital Preservation. Web. 8 February 2013. http://blogs.loc.gov/digitalpreservation/2013/02/archives-materiality-and-agency-of-the-machine-an-interview-with-wolfgang-ernst/

“Preserving Computer-Related Source Materials.” IEEE Annals of the History of Computing 1 (Jan.-March 1980): 4-6.

Shustek, Len. “What Should We Collect to Preserve the History of Software?” IEEE Annals of the History of Computing, 28 (Oct.-Dec. 2006): 110-12.

Walker, John. “Introduction.” The Analytical Engine: The First Computer. Fourmilab, 21 March 2016. Web. http://www.fourmilab.ch/babbage/

— “The Analytical Engine: Is the Emulator Authentic?” Fourmilab, 21 March 2016. Web. http://www.fourmilab.ch/babbage/authentic.html

White, Hayden. The Content of the Form: Narrative Discourse and Historical Representation. Baltimore: Johns Hopkins Univ. Press, 1987.

— Figural Realism: Studies in the Mimesis Effect. Baltimore: Johns Hopkins Univ. Press, 2000.

— “The Question of Narrative in Contemporary Historical Theory.” History and Theory 23:1 (Feb. 1984): 1-33.

 

Bio

Henry Lowood is Curator for History of Science & Technology Collections and for Film & Media Collections at Stanford University. He has led the How They Got Game project at Stanford University since 2000 and is the co-editor of The Machinima Reader and Debugging Game History, both published by MIT Press. Contact: lowood@stanford.edu