Daniel L. Golden (Eötvös Loránd University, Budapest)

 

The Electronic Turn: Changes in Textual Structure

 

 

It has been a commonplace for nearly two decades now that the shift from the old ways of writing to the new, electronic ones has serious theoretical and practical consequences. We already know a great deal about the particularities of working with a word processor, producing material for a multimedia CD-ROM, and being connected to the endless textual universe of the Internet.

 

Most discussion of the impact of that shift has focused on issues such as the fate of linear narrative and changing notions of authorship, readership, copyright and so on. Only rarely does the electronic text itself become a topic for philological investigation, although more and more knowledge is represented only or mainly in that form. In this paper I would like to concentrate on the text itself and to raise some questions from the point of view of the philologist, questions that are generally left out of consideration. I will take a closer look at electronic texts, list some of their important features, and draw conclusions about their philological status.

 

 

1. Inconceivable

 

One of the essential differences between printed and electronic texts is apparent at first glance: the former is visible, while the latter is not visible at all. I have my lecture here with me in two written forms. One is in print, which I can transform here and now into speech without any problem. The other is electronic, on a floppy disk. To achieve the same result with that one, I would have to carry out a sequence of complicated operations involving a computer.

 

There is no information per se; every human product of culture has its own vehicle transporting it. But communication systems differ in how heavily mediated they are. To read a traditional book we need only our eyes. To view a film recorded on a videocassette we also need a video player and a television. The case of electronic texts is the same, only even more complicated: to have an e-text appear properly, I must have the right version of the right word processor for the right operating system on the right hardware. The printed text was right in front of our faces; to make an e-text readable we need a whole set of interfaces.
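
To make this dependence concrete, here is a minimal sketch in Python (the sample word is chosen only for illustration): the very same stored bytes yield different text depending on which character encoding the reading interface assumes.

```python
# The same byte sequence, decoded under two different assumed encodings.
raw = "Petőfi".encode("latin2")   # bytes as one system might store them

print(raw.decode("latin2"))       # Petőfi -- the intended text
print(raw.decode("latin1"))       # Petõfi -- same bytes, wrong assumption
```

The bytes on the disk never change; only the interface's assumption about them does, and with it the text we see.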

 

In the age of printing, “texts” stand side by side on a bookshelf. They are visible and touchable; they have a directly perceivable physical reality. Electronic texts lie hidden somewhere on the hard disk of a computer. They are somewhat like elementary particles: we have never really seen them, and we know about their existence only indirectly, because whenever we perform the same experiment on them they react in the same way (with the exception of some word processors made by worldwide multinational companies).

 

The importance of this change is that the evidence of the visible text was the basis of all philological discussion. It is a commonplace that the birth of the humanities is closely connected to the appearance of literacy, when different parts of texts became comparable. In the case of electronic literacy one can never be sure whether the text one is reading is its proper form, or whether it has been modified by one of the interfaces used. While a copy of a printed book seems to be “given” once and for all, an electronic text changes its face from platform to platform.

 

 

2. Unreachable

 

Recently the Internet has become the eminent home of electronic texts. It offers many new and attractive possibilities, one of which is the phenomenon of hypertext. Indeed, the Internet as a whole can be perceived as one great hypertext system.

 

Hence we often hear the fascinating sentence: a hypertext structure has no limits. That is, of course, not entirely true. Since such a structure is created by means of linking, we can precisely follow all the connections placed by the author of the work. There is, however, no guarantee that the avid reader will not cross that border, the inner limit formed by the author's links, and wander far away from the original starting point. So while the quoted claim is false from the point of view of the author and the structure of the text, it can be true in the experience of the reader. Browsers give little help in identifying one's place in virtual space. The common reader does not care about obscure URLs, which are the only signals giving precise knowledge of where we are and what we are reading. The total homogeneity of the Net hides any difference between one site and another, one text and another. There have been many attempts to give electronic documents more solid identifiers, but so far none of them has come into really wide use.
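
The author's “inner limit” can even be made operational. The following minimal sketch in Python (the page and the addresses are hypothetical) collects the links of a document and separates those staying on the author's site from those leading the reader across the border.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Gather the href targets of all <a> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify_links(base_url, html_text):
    """Split a page's links into those inside the author's site
    and those crossing its inner limit."""
    collector = LinkCollector()
    collector.feed(html_text)
    base_host = urlparse(base_url).netloc
    inside, outside = [], []
    for href in collector.links:
        absolute = urljoin(base_url, href)
        (inside if urlparse(absolute).netloc == base_host else outside).append(absolute)
    return inside, outside

page = '<a href="chapter2.html">next</a> <a href="http://elsewhere.org/">away</a>'
print(classify_links("http://example.org/chapter1.html", page))
# (['http://example.org/chapter2.html'], ['http://elsewhere.org/'])
```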

 

There is another way to feel “lost in space”. One of the most important changes in electronic texts is the disappearance of the page as a structural entity. It is replaced by the screen, which works as a window that can be scrolled up and down over the whole text. This change has no (or almost no) effect on the content of the text, but it has a very great one on its reader. Our old ways of finding our way around a book were closely tied to the page structure: we could locate particular parts of a text by browsing through it and looking, for example, “at the top of a left-hand page”. That kind of visual memory no longer works in the world of electronic texts. The tool of human memory has to be replaced by a technical one, namely search programs. These programs struggle with the same problems as, for example, electronic library catalogues: they cannot reproduce the stochastic character of the human mind's search methods, which are so effective. So the human mind must accommodate itself to the changing external reality: instead of the old visual memory it has to develop a kind of keyword memory; it has to remember the word combination that is characteristic only of the very part of the text being looked for.
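
What such a keyword memory amounts to in practice can be shown in a few lines of Python (the sample text is invented): instead of remembering a place on a page, the reader remembers a characteristic word combination and lets a program retrieve its surroundings.

```python
import re

def find_passage(text, phrase, context=40):
    """Locate a remembered word combination and return it with some
    surrounding text -- the substitute for 'top of a left-hand page'."""
    match = re.search(re.escape(phrase), text, re.IGNORECASE)
    if match is None:
        return None
    start = max(match.start() - context, 0)
    return text[start:match.end() + context]

text = ("Electronic texts lie hidden on a hard disk. They are somewhat "
        "like elementary particles: known only through experiments.")
print(find_passage(text, "elementary particles"))
```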

 

The hypertext theorist George P. Landow makes a distinction between the availability and the accessibility of an electronic document (Hypertext: The Convergence of Contemporary Critical Theory and Technology). The first means that the source exists in electronic form in an electronic archive. But the more important question is the second: is the text in a well-known place, so that readers can really gain access to it? Hence recording data is only one part of the work of the modern philologist. After that, someone has to take care of those records: to make sure that they find their place on a server, and to make changes that keep them up to date in both content and form.

 

This process is described by Esther Dyson in her book Release 2.0 as an important shift in the essence of intellectual work. She argues that the intellectual product (an article, a book) is being replaced by intellectual provision: a continuous attending to the reader.

 

In the traditional comparison there were two ways of working with letters: to work on the unchanging past as a philologist, or to live in the eternal present as a journalist. Today the former has to come closer to the latter: the requirement of being up to date has become as pressing for the editors of scholarly databases as for the managers of Internet newspapers.

 

 

3. Indigestible

 

From the very beginning of the use of computers in the humanities, there has been the expectation that they will finally realize one of mankind's greatest dreams: a system (book or machine) containing the totality of human knowledge. The Internet is the latest candidate for that holy role: everything should be made available via the Net, as the old task is reformulated today.

 

But that kind of limitlessness may prove more frightening than inspiring. The rational limits of intellectual work are endangered. From now on, before preparing a scholarly paper one has to decide how much material not to read. Too much information is in fact no information: the third, electronic edition of George Landow's book on hypertext, for example, contains over fifty comments by his students on specific parts of his work. Does anyone intend to read them all...? The holism of the Internet as a form of global knowledge turns into relativism on the side of individuals: decisions about including or excluding something remain the problem of the actual reader/writer.

 

 

4. Unpreservable

 

Let us suppose that we are all now deeply frightened by the possibility of losing information represented in electronic texts. Perhaps the same feeling once made the old librarians of Alexandria try to collect and preserve the documents of the past for the future. We too want to do that, against all difficulties. But what are the real possibilities?

 

Let me distinguish two types of electronic documents: a) originally printed, non-digital documents that have merely been digitized, and b) documents created in digital form from the start. Archiving the first type necessarily means rewriting: we have an unchanged printed original, which can be reproduced electronically in many different ways. That too causes a lot of problems, but far fewer, I would claim, than the other group.

 

We have methods for archiving printed texts; that is, we know how to distinguish important from unimportant information. Confronting the new genres of digital documents, we have to reconsider our beliefs on that topic. What should be archived of a hypertext document, a multimedia CD-ROM or a poem generator, for example? Rewriting such complicated structures probably takes almost as much time as creating them. In the documents of the new medium the graphic component becomes more and more important. Does this mean that we should make only “facsimiles” of digital documents?

 

And what about Internet documents that change rather often? Should the archivist define a temporal limit and archive such a document every month? Or rather every week? On what basis can this be decided? It seems we can choose between two bad alternatives: there will be no past at all (we give up archiving), or there will be too many pasts (we make backups automatically every five minutes, which will serve only security purposes, not philological investigation).
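
There is at least a technical compromise between the two: archive a document only when its content has actually changed. A minimal sketch in Python (the address is hypothetical, and questions of consent, completeness and format are ignored):

```python
import hashlib
import time
import urllib.request

def snapshot_if_changed(url, last_digest, archive):
    """Fetch a document and store a copy only if its content differs
    from the previous snapshot, so the archive records versions of
    the text rather than ticks of the clock."""
    body = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(body).hexdigest()
    if digest != last_digest:
        archive.append((time.strftime("%Y-%m-%d %H:%M"), digest, body))
    return digest

# Run, say, once a day, this keeps one copy per version of the page:
# digest = snapshot_if_changed("http://example.org/journal.html", digest, archive)
```

Even so, such a mechanism only records that something has changed; it cannot tell the archivist which versions are philologically significant.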

 

What is catastrophic for archiving is that while our methods of reading a book have remained unchanged for thousands of years, our methods of reading an e-text change every two or three months.

For example: if someone became interested in personal computers when they first appeared, and decided to use one of them to facilitate his work as an author, what can he do today with his electronic texts saved on floppy disks formatted for, let us say, a Commodore 64...?

 

The most successful way of archiving is perhaps to preserve the original in its original form. That means that a real electronic archive should in some way also be a museum: a collection of every kind of hardware, in order to make it possible to read the old electronic texts made on it.

 

The problem of encoding is even more fundamental in the case of databases. The structure of a database must be planned before the recording work begins, and that work can take decades. We may call this the “paradox of database-making”: you have to plan a suitable structure for material you do not yet really know. As a consequence, you risk being obliged to change and rewrite your whole database whenever a new species appears with features not considered until then.
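
In the language of today's database tools the paradox looks like this (a minimal sketch using SQLite; the table and its fields are invented): the structure must be fixed first, and the arrival of a “new species” forces the whole table to be altered and every record entered so far to be revisited.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE document (id INTEGER PRIMARY KEY,"
            " title TEXT, author TEXT)")
con.execute("INSERT INTO document (title, author)"
            " VALUES ('Release 2.0', 'Esther Dyson')")

# Years into the recording work a new kind of document appears, one
# that changes over time and therefore needs a last-revised date.
# The structure must be altered, and the old records lack the datum.
con.execute("ALTER TABLE document ADD COLUMN last_revised TEXT")
con.execute("UPDATE document SET last_revised = 'unknown'")
```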

 

We should also not forget the “trap of simplicity” in text encoding. To the challenge of diverse and fast-changing platforms, some electronic archive projects respond with a so-called “encoding minimum” (for example, plain vanilla ASCII), which, in their opinion, will remain comprehensible on any kind of technical background. Using such a restricted encoding system necessarily means cutting off much useful information that cannot be expressed in the new code. That is the paradox of archiving: if you want to represent the complexity of the original, you have to use a more complex encoding system, which in turn will be accessible to fewer readers.
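
What the “encoding minimum” actually costs is easy to demonstrate (a minimal sketch in Python; the names serve only as examples): forced into plain ASCII, a line of Hungarian either loses its letters outright or keeps only an approximation of them.

```python
import unicodedata

line = "Győr, Petőfi Sándor, Eötvös Loránd"

# Restricting to ASCII destroys what cannot be expressed in it:
print(line.encode("ascii", errors="replace").decode("ascii"))
# Gy?r, Pet?fi S?ndor, E?tv?s Lor?nd

# A "best effort" strips the diacritics instead: readable, but the
# original spelling cannot be reconstructed from the result.
approx = unicodedata.normalize("NFKD", line).encode("ascii", "ignore")
print(approx.decode("ascii"))
# Gyor, Petofi Sandor, Eotvos Lorand
```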

 

Of course, in most cases we can have an automatic program convert data from one coding system to another. But that is an automatism: public enemy number one for philologists. In a new philology of electronic texts, a separate chapter will have to deal with the mistakes arising from automatic procedures used during word processing (e.g. search and replace).
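
One classic specimen of such mistakes can be reproduced in a few lines of Python (the sentence is invented): a global replacement that ignores word boundaries quietly corrupts every word that merely contains the search term.

```python
import re

text = "The art of the artist: partial remarks on Descartes."

# A naive global replacement, as a word processor might perform it:
print(text.replace("art", "ART"))
# The ART of the ARTist: pARTial remarks on DescARTes.

# A word-boundary-aware replacement avoids this particular damage:
print(re.sub(r"\bart\b", "ART", text))
# The ART of the artist: partial remarks on Descartes.
```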

 

So the great question about electronic documents is this: who will have the energy and money to rewrite the whole of human culture in a new code every five or ten years? The conclusion must be that the new age of electronic information will mean a great loss of information: a large part of human knowledge will drop out of the mainstream of everyday information exchange, as less and less of it is properly rewritten in the new codes.

 

In this way, in a few years our whole cultural heritage will consist of cultural inclusions: databases of times and structures already forgotten. It is indeed an interesting question whether some kind of computer archaeology can be developed to decipher those antiquities.

 

In his book The Cultural Memory, Jan Assmann describes a great shift at the birth of European culture: a change from ritual to textual coherence. Knowledge takes its place in texts instead of rites. At the same time, knowledge becomes something mortal, hidden, encoded, which needs reviving: interpretation. The goal of rites is to reproduce the symbolic order without any change (the system of symbolic allusions gets its meaning with the help of remembrance, which makes the past present). Texts, by contrast, need interpretation. While the letters remain as recorded, reality changes, so interpretation becomes the way (the principle) by which cultural coherence and identity are remade.

 

That means, says Assmann, that the text is a very “risky” way of perpetuating sense, because it allows sense to be withdrawn from circulation and communication, something the rite never permits.

 

Electronic writing, with its distinctive features of general uncertainty, seems an even riskier way of doing so.

 

The final conclusion of this paper has to be rather pessimistic (and perhaps also a bit provocative): working with great enthusiasm on the development of mankind's electronic memory, we may achieve just the opposite: paving the way to oblivion.