Video: “BOOKISH”

Please enjoy this curator’s introduction to “BOOKISH: Artist Books from the Collection of the Rotch Library of Architecture and Planning, 1960-present.” Mounted in conjunction with “Unbound: Speculations on the Future of the Book,” BOOKISH explored the means and methods by which artist books challenge the idea of the book as traditionally conceived.

Video no longer available

BRAVER, NEWER LITERARY WORLDS

The following video (in two parts) was part of my presentation at the Louisville Conference of Literature, February 2012. I am presenting a more extensive multimedia paper at the International Book Conference in Barcelona, June 29-July 2, 2012.

PART ONE:

http://www.youtube.com/watch?v=XPJwlCd1PvE

PART TWO:

http://www.youtube.com/watch?v=DwzjYZu1sCw&feature=relmfu

Jaded Ibis Productions (and its imprint Jaded Ibis Press) will be moving into research on how literature can and may manifest in brain-computer interfaces, while continuing to publish print mashups.

Follow our news on Facebook, or at http://jadedibisproductions.com

Debra Di Blasi, Founding Publisher

A Brief Snapshot of eBooks, Videotext, and Future Shock

a guest post by Linda VandeVrede

Ebooks have come a long way in the last three decades. In 1983-84, when I was researching my master’s thesis at Boston University, ebooks were only just emerging and were referred to as “electronic novels.” Many of us studying the industry foresaw their future popularity, but it took a bit longer than we anticipated, and screen clarity and ease of access have far surpassed our original predictions.

The first ebook – 1983   

The very first electronic novel was created in 1983 at a computer show in Toronto. Author Burke Campbell wrote a “suspense novelette” on an Apple III and sent it to the Source Telecomputing Corporation, a commercial enterprise based in McLean, Virginia, that provided information along telephone lines. Campbell created a 19-chapter, 20,000-word book, which was then edited by The Source and made available to subscribers only three hours later. Subscribers had the choice of reading “Blind Pharaoh” on their video display terminal, printing it, or storing it on a floppy disk. The cost to download the story was just over $2.00 at nighttime rates, cheaper than purchasing a paperback novel.

Videotext was a two-way channel

Videotext began in Europe in the 1970s as a government-subsidized service, and for that reason it took off faster there than in the United States, where it was a commercial enterprise. It was a two-way interactive system that transmitted information along telephone or cable lines to a specially adapted television set or home computer. Books as a service, as opposed to time-sensitive data, were considered rather a novelty at the time.

Factors That Slowed Acceptance

Acceptance was slow in part because the television sets, video display terminals, and home computers that received the information were very primitive. There was a lot of flickering on the screen, which made long-term reading very challenging: refresh rates were only about 30-60 Hz. Extended reading off the screen was therefore not a pleasant experience. Most people at the time could see electronic delivery as viable only for time-sensitive information. Indeed, Grolier’s Academic American Encyclopedia was offered online in the early 1980s through CompuServe, another information retrieval system, and this was deemed a perfect delivery vehicle because it allowed easy updating and changes and dealt in smaller chunks of information.

Another factor that slowed acceptance of ebooks was the high cost of accessing the novels. Fees for The Source were $20.75/hour during the day and $7-10/hour in the evening. There was both private and public Videotext at the time, but the costs were significant, and there was a learning curve for subscribers. Subscribers to The Source tended to be more affluent and more educated, based on demographic surveys conducted at the time. This limited the initial acceptance and proliferation of the technology.
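
To put those hourly fees in perspective (assuming a typical reading speed of around 250 words per minute): a 20,000-word novelette like “Blind Pharaoh” takes roughly 80 minutes to read, so reading it online at the evening rate would have cost on the order of $9-13 in connect time alone, which is why downloading the file for a couple of dollars and reading it offline was the more attractive option.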

The big question facing publishers then was where videotext would fit in relation to print publishing: what would be published, and for whom? The concept of being able to interact with the data, request changes, ask for more details, and customize the information was revolutionary.

The next big development did not arrive until 2000, when Michael Crichton and Stephen King released electronic books – the first big-name authors to do so. They were part of an evolution in the growing online exchange of media, spurred on by Napster and other services. Even as late as 2008, pundits were still saying that digital books couldn’t compete with the cozy familiarity of curling up by a fireside with a traditional print book. Recent statistics, however, show ebook formats surpassing traditional formats in sales. The changes to and acceptance of ebooks since 2008 have been exponential.

What does the future hold?

More than 40 years ago, Alvin Toffler anticipated this kind of change in his book “Future Shock.” Future shock, he wrote, is a time phenomenon: “a product of the greatly accelerated rate of change in society. It arises from the superimposition of a new culture on an old one. It is culture shock in one’s own society.” Children under the age of 12 have grown up around the concept of Kindles and Nooks. What major changes are in store for them in terms of their understanding of what a book is, as they grow older? Interesting to ponder.


Haptics, Lovely Objects, Archives, and e-readers

On Friday during the first panel, and particularly during its Q&A, there was a great deal of concern about the loss of haptic experience when we read electronically, the ways that archival research is changed by digitization, and where the book as an object (especially a beloved, sensuous object) is left as these shifts happen.

A few things struck me as odd about this conversation and the way it was framed. The first is the assumption that reading on an e-reader is not a haptic experience. It is; that is why, when the iPad came out, people rushed to the store to pick it up and hold it. It is also why so much time is spent considering how to adorn e-readers: do we want them to be nubbly, leather-bound, or bamboo? Each person can make more choices about what they want their book to feel like now than they could thirty years ago. Of course, we don’t change the cover based on whether we are reading an electronic copy of Stephen King’s The Stand or an electronic copy of Whitman’s Leaves of Grass (and I will grant that presents a problem, but perhaps not an unsolvable one).

This issue of the haptic is already being solved, bit by bit. As Reid-Cunningham noted, books might become more “boutique.” Nouvella, a small press in California, prints small runs of novellas by emerging authors, offers them for a week as a “launch,” and then continues to sell them as e-books after that. Their books are lovely objects. I adore the one I have. It is small enough to fit into the back pocket of my jeans, with a butter-smooth cover and a lovely design. Also, it’s a signed first edition by an author whose career I believe in. Of course I’ll hang on to that! And I’ll encourage my friends to buy the ebook; the object, but not the content, can be boutique.

I do not worry about people falling in love with reading, or falling in love with the objects associated with reading. I don’t worry about archives. Instead, I am amazed by the possibilities that digitization allows. To discuss this I have two examples, both of which show how easily the gateway is held open between the digital and the archival, paper page. In both cases, the haptic archival moment is expanded and made more accessible. When that gateway is open, and librarians, scholars, and writers work together, the best things about each mode are preserved.

Recently, my partner visited the Maine Women Writers Collection at the University of New England in Portland, Maine, to see the papers of Margaret Mussey Sweat. Sweat wrote perhaps the first American lesbian novel, and the archive is not much visited. He spent a day and a half looking through her papers, and because his time with the archive was limited, he took pictures of many things with his iPhone. He digitized an archive that had never been digitized before, and posted a page of it on Facebook. Posting this one page on Facebook meant that there were probably twenty of his friends and colleagues working with him to figure out some spidery script.

Poet Jill McDonough hoards digitized primary sources and then shares them with her students. Here she describes just how transformative that experience is for the poems her students write and for her own writing. As a former student of hers, I can say that it makes me more, not less, likely to seek out the physical archive. She mentioned to me recently that there is a shelf of old anatomy books in the basement of the Athenaeum, and I want to be there now instead of writing this.

Friday morning, visiting the archive exhibit, the aesthetic choices of the past resonated deeply with me, in ways that their creators might not have been able to anticipate. They were just considering the priorities of their moment. Similarly, we can’t say what our aesthetic priorities will be in the future, but we’ll still have them. Reading will continue to be a thing we do with our bodies. Finding ways not just to preserve but also to expand reading practices and the haptic experience of the page can be one of the opportunities, not just one of the anxieties, of the book now.

From Constantinople to Silicon Valley: A Byzantine Approach to iBooks

In 1947, the art historian Kurt Weitzmann published his seminal text on Byzantine manuscript illustration, Illustrations in Roll and Codex: A Study of the Origin and Method of Text Illustration. His text postulated and explored a fundamental shift in pictorial logic with the advent of the codex in the early Byzantine period.

As demonstrated by his reconstruction of a roll of the Odyssey from the third century BCE, Weitzmann concluded that early codex illustrations originally adopted the in-line, ‘frieze-like’ logic of the roll. He reconstructed the archetype of the Vienna Genesis as a witness to this change, given that he perceived the extant copy as bearing vestiges of the previous scroll-based system of illustration. Weitzmann’s conclusions and models have repeatedly been disproven by art historians, particularly given his romantic philological approach, which postulated lost archetypes for known typologies. However, Weitzmann’s work implicitly acknowledged that a radical rupture in production and visuality had occurred with the advent of the codex. His methodological attempt to bridge the roll and the codex itself bears testament to a recognition of, and respect for, the change in medium, which affects not only the production of images but also their reception, interaction, and conception within and as part of a text.

A similar shift becomes evident with the advent of smartphones and touchscreen tablets, such as the iPad, whose software design demonstrates a conflict between the book as a typological medium, based on horizontal swiping, and the webpage’s vertical scroll.

For example, when a PDF is accessed online on the iPad via Safari, it adheres to the scroll-based logic of the webpage: the individual pages of the PDF progress vertically up the screen.

However, when downloaded into iBooks, the PDF can only be viewed page by page, progressing horizontally as one would through a book. Nevertheless, one does not necessarily “scroll” or swipe through a book per se. A physical book engages a complex variety of sensual experiences as one touches and lifts the page, hearing the crinkling of the sheet and seeing the play of light on it as it makes its 180-degree rotation in space. The iBook app experience is therefore highly stylized, yet it nevertheless acknowledges the horizontality of the reading experience.

After all, the iBook app homepage, if we can even call it that, is presented as a varnished wooden bookshelf, replete with texture and knots in the wood. One scrolls down this virtual bookshelf just as one would run one’s eyes down a physical one.

When reading an actual “book” purchased through iTunes, the conflation of the digital and the codex experience becomes most pronounced: the iBook is replete with a cover, which peeks out past the margins of the pages, and these virtual pages animatedly fold across rather than just swiping past the user. The program makes a concerted effort to capture the complex experience of the book in this new medium. Museums and libraries had already applied such simulacral viewing conditions to their digital collections before the iPad. The British Library, for example, allows one to browse through select digitized manuscripts through a system that replicates the visual and sonic experience of flipping through the pages of their codices, whose gold illumination glimmers in the light as the page is turned. Nevertheless, the iPad’s viewing experience does not merely try to replicate the book; it also incorporates the expected benefits of digital browsing.

This book’s table of contents is hyperlinked for increased accessibility, and there is the option of using a scroll at the bottom of the page for quick browsing. These techniques, and similar ones in other book-based apps, bear testament to the hybridity of media implicit in the iPad’s design and potential.

Apple, which prides itself on its user-friendly design, deploys a gamut of mimetic, representational strategies in the iPad precisely to mediate between media and their errant iterations.

The Notes app carefully mimics a yellow notepad, originally deploying a Comic Sans-like font as an allusion to handwriting. It imitates torn sheets left from previous notes and circles the note being edited with a textured red-pencil mark — all set into a businesslike leather binder.

The Contacts app, which also represents a book, even cites the structure of the codex by depicting the address book as perpetually open to the center of a quire — the group of folios sewn together through the middle and gathered with others to form a book. This is made evident by the visible thread running down the gutter of the open book.

The Calendar app, which again depicts a codex, does not show the same open quire, but instead shows the book always open to a spread elsewhere in a quire. This can be discerned from the app’s depiction of the stress placed on the sheet of paper by the thread holding the text block together.

These details may seem inconsequential, since they are peripheral and lack any direct function; nevertheless, they instantiate a field of known practices and options for the user.

The Notes app, for example, not only articulates its usable areas and functions through the familiar space of the business planner and notebook, but also distinguishes itself from the relatively unremarkable Pages app. Pages presents the viewer with a typical, albeit reduced, word-processing system. Its mundane appearance speaks not to “bad” design, but rather to its ready comprehension in a digital medium. The Notes app partakes of a now “primitive” medium of yellow pads and pens, which gives it a more impromptu, shorthand utility. Instead of a word processor intended for the composition of long texts, the Notes app advertises itself as an on-the-go tool intended for jotting down quick, brief notes — a practice that until now was associated with pens and paper rather than computers. These stylistic differences thus create a language and logic of use and application for these two apps.

The codex’s influence is not unintentional feedback or some vestigial trace of an older medium, but rather the active utilization of this prior language in order to make the new system readily accessible. Yet the iPad comes with its own unique visual language that feeds back into such pre-digital technologies. One hears of the iPhone user who accidentally “pinched” a physical picture in an attempt to get a closer look. As I recently observed at a Barnes & Noble, non-touchscreen, keyboard-accessible computers in public places sometimes have to be labeled in order to remind users to use the keyboard or simply be content with the information being broadcast. While Kurt Weitzmann’s focus on the change in illustration techniques from roll to codex was primarily geared toward questions of narrative and time, such an investigation leads me to ponder the changing epistemic foundations of the image and its resultant ontologies. In order to address this investigation, I wish to turn to Byzantine image theory and practices around icons as a crucial reference point. The Byzantines too experienced the radical shift from roll to codex, and likewise had to mediate between material and ethereal images, whose presence in the world structured itself more as an event than as a materially bound form.

However, before investigating the Byzantine case, I wish first to remark on the crucial foundation of the image in the nineteenth century, when the medium of photography reworked the semiotic possibilities of the image, much as the digital age has done for us.

In analyzing the French photographer Félix Nadar’s late-nineteenth-century photographs of the stock character Pierrot, art historian Rosalind Krauss focuses on the question of the photographic medium itself as capable of capturing the indexical trace of its image. Of the figure of Pierrot, Krauss writes:

[T]he costume of Pierrot worn by the mime becomes the white field into which cast shadows are thrown, creating a secondary set of traces that double two of the elements crucial to the image. One of these is the Pierrot’s hand as it points to the camera; the other is the camera itself, the apparatus that is both the subject of the mime’s gesture and the object of recording it. On the surface of the mime’s clothing, these shadows, which combine the conventional language of gesture (pointing) and the technical mechanism of recording (camera) into a single visual substance, have the character of merely ephemeral traces. But the ultimate surface on which the multiple traces are not simply registered, but fixed, is that of the photograph itself.

Krauss’s reading of Nadar’s image is as specific as it is general. It sits within a longstanding discussion of medium-specificity and medium-reflexivity in the history of modern and contemporary art. For Nadar’s image, the focus is on the trace, on the power of the photograph to capture, by the physical contact of light, the objects that it represents. In our present, the notion of the trace, which was crucial to the nineteenth century’s conceptualization of the photograph, is supplanted by a cultural logic in which matter and form, image and original, are no longer prevalent models for conceptualizing the image. There may still be a lust for originality, both creative and material, of the kind experienced by museum visitors who seek out both aesthetic and artifactual experiences, but in the realm of YouTube and iTunes, these technologies exceed such logics.

Marie-José Mondzain, writing in French, adeptly uses the term l’imaginaire to address the construction of Byzantine visuality in relation to the iconomachy, the Byzantine debates on images between the eighth and ninth centuries. The “imaginary” in French, as her English translator Rico Franses points out, does not denote an opposition between the real and the fictive. Instead, the term has close ties to the discipline of psychoanalysis, where it encompasses the process of perceiving or imagining the world in images. In her book Image, Icon, Economy: The Byzantine Origins of the Contemporary Imaginary, as the title suggests, Mondzain excavates a medieval, Byzantine origin for the contemporary conceptualization of images. Unlike Mondzain, however, I employ the Byzantine image cosmology as a useful theory for comprehending the present’s image economy, rather than as a genealogically inherited trait or origin myth. For the Byzantine world, the debate on the epistemological foundations and ontology of the icon was central to the preservation of the Empire and its constituents — as it arguably is in our present, where images structure not only social relations but also religious and political action. In conceiving the reproducibility of images and the very possibility of depicting the image of holy persons, particularly that of Christ, the Byzantines were led to consider the limits and potentialities of the image. In order to answer whether it was proper to depict a divinity, it was necessary to investigate the image as a production of form through colors and to consider what types of information the resulting forms could present, particularly in relation to a divinity that was uncircumscribable in mere matter. The resulting thesis, building on the early Church Fathers and on Platonic and Aristotelian philosophy, was that the honor offered to the image is passed on to its prototype, as St. Basil suggested. Thus, the image functioned as what Charles Barber has called a “directed absence,” presenting a conduit to the divine rather than re-presenting the divinity. As was constantly reiterated and enforced throughout the Byzantine Empire, for this understanding of the image to function, it required the perception of a separation between form and matter.

Like a seal pressed into virgin wax, the image was constantly being impressed on different matter, but matter itself partook of the divine only so long as the image was present. The notion of an artifact carrying a trace inherent in matter was linked to the relic, a bone or belonging of a holy person, but not to the icon. Today, images exist as archetypal codes, preserved in multiples on memory sticks and servers. At any given moment, this code is visualized on an iPhone or computer screen, which partakes of the image. The image is both wholly embodied and wholly digital; its form may be circumscribed, but its digital existence is uncircumscribable. In Byzantium, Christ is the Logos, a Greek term that has no definite English translation but spans a range of meanings, potentially rendered as “word, utterance, or discourse.” By virtue of the incarnation, the Logos is made into human form, an image, through the flesh of the Virgin Mary, whose virgin body is equated to the virgin wax on which a seal may be impressed and which, given its purity, is able to embody the image without any other images interfering in its countenance. Thus, the doctrine of the incarnation is the same as the doctrine of the image. This identity is subsumed by the concept of the economy (oikonomia), often translated in religious studies as the “divine dispensation,” which addresses the incarnation and the creation of the image of Christ as one and the same: Christ is discourse partaking of flesh, but not limited by it.

Stripping these concepts of their definite theological argument, I argue that the contemporary image participates in a similar economy: a code incarnated in technology, with neither matter nor image confined by the other. The digital code of the image stands in for the divinity of Christ, while the iPad embodies the Virgin’s flesh in which the image is impressed. Unlike Krauss’s Pierrot, whose body captures the projected indexicality of the shadow, the Byzantine image economy presents a body in whose flesh the archetypal image is incarnated. This is an economy focusing on a mimesis that is not imitation, but making manifest. It is not about representation, but about presentation without the indexical trace, returning the faith of presence to the network, to the reproducible image. In this proposed model for our contemporary image economy, the form is the digital code, the computer is the flesh, and from this combination emerges the image. Therefore, this figurative and literal body becomes the unseen nexus of investigation, since it is the receptive body that is able to house the viral image, and once that virus invades, the body fades in service of the faithful representation of the image through it. Recently, touching upon similar issues but not fully engaging with the Byzantine legacy, W.J.T. Mitchell’s Cloning Terror: The War of Images, 9/11 to the Present argues for a parallel between the fear of cloning and the viral distribution of images, particularly those of terrorist acts. Following September 11, Jacques Derrida compared the distributed and broadcast image of terrorism to an “autoimmune disorder,” whereby the body’s own defenses attack itself. How, then, does the notion of the viral fit into our discourse on the traditionally materialized logos, i.e. the book? How has the “book” as a formal entity been subsumed into the viral economy?

So far I have presented what I believe is a crucial nexus for understanding the radical effects of the image with the advent of digital technologies, particularly smartphones and touchscreen tablets. Byzantium, because of the religious, social, and political motivations of the icon, presents us with a robust body of evidence through which to articulate a working theory of the contemporary image’s foundations, while recent studies on viral images and terrorism demonstrate art history’s capability of becoming a political science — a sentiment which art historian David Joselit shares in his 2007 book on television and political activism, Feedback.

On the Byzantine side of things, and as a Byzantinist myself, I would go as far as to argue that until the advent of digital touchscreen media, scholars had been unable to fully grasp the poly-sensory potentialities of the icon in the Byzantine Empire. It is not surprising, then, that recent studies of Byzantine icons by the art historians Liz James in 2004 and Bissera Pentcheva in 2010 have been the first to embrace the synaesthetic, sensual dimensions of the icon as key factors in its constitution. The haptic processes of Byzantine worship required the viewer to contemplate the icon while touching, kissing, and embracing it — even waiting for it to respond miraculously to the venerator’s entreaties. The iPad likewise requires the user to contemplate it, stroke it, and develop a kinesthetic language of gestures in order to use it.

It is precisely this sensual experience that was sold in the first iPad’s advertising campaign. This series of advertisements featured casually dressed bodies using the iPad in a variety of ways that showcased its various tools. In typical Apple fashion, the iPad ads would often be displayed serially, thus conveying the gamut of possibilities offered by the new technology. The bodies were headless; it was the viewer who was placed in the position of the user, as if proleptically playing with their new gadget in their encounter with the image.

The iPad 2, on the other hand, had only a couple of ads, with little variation. This time there were no humans involved, only the product. This time Apple was not selling a new experience, but rather a better version of it. The message was simple: “iPad 2 Thinner. Lighter. Faster. FaceTime. Smart Covers. 10-Hour Battery.” This campaign folded the previous, well-known campaign into its rhetoric through its prevalent use of comparatives. The product was better, but the same experience was being sold. Most recently, the ad campaign for the iPad “3,” the “new” iPad, or simply the iPad, has done the same thing, focusing on the high-resolution “Retina display” screen as its major selling point.

The touchscreen experience — which subsumes the “book” as one of its manifestations — has presented itself as a notable site of resistance, perhaps most recently embodied in the Twitter Revolutions of the 2011 Arab Spring.

On a street in New York several months back, I encountered an iPad (1) ad with the phrase “iHomeless” scribbled on it. This piece of graffiti plays with the distinctive “i” — a linguistic shifter — in Apple’s iTechnology branding, which advocates the very personalization of experience that the iPad ad conveys. This scribbling on an advertisement for the latest gadget is a poignant reminder of poverty in light of excessive consumer consumption and commodity fetishes — particularly in a city like New York. More importantly, it positions the Apple brand and its technologies as a locus for image-based socio-political activism.

Following the invasion of Iraq and the Abu Ghraib scandal, a series of viral posters emerged featuring images of Abu Ghraib and of soldiers, mimicking the then-popular and equally viral color-and-silhouette iPod ads. The iRaq posters, as they were called, were interspersed among the Apple advertising campaign as a form of viral resistance, a tactic which paralleled Apple’s own strategies.

MADtv even had a skit featuring a Steve Jobs impersonator announcing a new Apple product, the iRack — along with the upcoming “iRan” running sneakers — as a commentary on the US invasion of Iraq.

Thus, the book, in its integration into this new medium, emerges as a site of resistance structured around an expanded notion of what constitutes a book — one that pushes past a material essence to various medial iterations. The sensual experience of the book has been formally and tactically abstracted into the logic and conception of a new medium for the logos. The book, like the Ancient Greek and Early Christian Logos, can no longer be explained merely as “word,” but partakes of a formal, visual logic across a variety of media — it brings with it both a doctrine of incarnation and a doctrine of image-making. This makes the expanded field of the book a crucial discursive and political space — and the historian and artist crucial players in this nexus.

 

A version of this paper was presented at the University of British Columbia on 3 October 2011 as part of a conference entitled “From Scroll to Screen: Translation and Reading from Ancient to Modern.” Roland Betancourt is a Doctoral Candidate and Teaching Fellow in the Department of the History of Art at Yale University.

The Future of Permanent Collection Catalogues

A guest post by Brooke Kellaway, Getty Fellow, Visual Arts, Walker Art Center.

In the Walker Art Center’s library there are shelves of collection catalogues from museums around the world, dating from the mid-1900s. Since museums have the means to publish these books only once every several years (or every decade), the care put into each is sometimes so intensive that the books themselves seem as special as the art written about inside them. They capture events—cultural moments based on the stories told, works featured, design decisions made, and contributing writers selected.

What will become of collection catalogues in print when collection catalogue websites become increasingly prevalent? I’d like to think of the latter not as replacements for the book (long live it!) or upgrades of database-driven websites, but as the result of the best of both formats remade into something new and great…

The Walker Art Center’s next collection catalogue will launch on collections.walkerart.org later this year. We’ve radically expanded the book model and are completely revamping the collection website to create a dynamic media-rich space for vast (and free) information on works of art in the Walker’s collection.

The online catalogue entries will provide interested art historians, professionals, students, and the general public with updated material and a range of critical perspectives on the works. For example, the online entry for Yves Klein’s Suaire de Mondo Cane (Mondo Cane Shroud) (1961) will provide multiple high-resolution images of the painting with close-ups of the International Klein Blue pigment on the gauze fabric, several videos about the work’s conservation treatment and the process of installing it in the gallery, and an on-camera interview with the curator who researched the work before its acquisition. Scholarly essays will be included, as well as a detailed presentation history with floor plans and checklists, a bibliography with cited texts hyperlinked or embedded, and the work’s provenance.

Yves Klein's Suaire de Mondo Cane (Mondo Cane Shroud), 1961. Pigment, synthetic resin on gauze. 108 x 118.5 inches.

This new collection catalogue will be perpetually in production, featuring new entries with every new acquisition. It’s a sensible step for the Walker, given that in the past several years the generation and storage of information on artworks has happened primarily through electronic systems (from typed wall labels to digital photography and video to artist correspondence). It’s the same at other museums, as evidenced by the collection pages of the San Francisco Museum of Modern Art and the Museum of Contemporary Art Australia, and by online catalogues from the Corcoran and the Rijksmuseum. With ever more sophisticated collection management databases, digital asset management systems, web publishing software, and interactive technologies, the documentation and interpretation of collection works is happening in the virtual world much more frequently.

The Walker’s catalogue is being built with the support of the Getty Foundation’s Online Scholarly Catalogue Initiative (OSCI) grant. The nine participating museums have considered the elements that make their printed books excellent resources (from thoroughly researched essays to useful glossaries and maps) and are incorporating these aspects into their collection websites, with content that is current and searchable and that links out to a wider spectrum of both the museum’s activities and scholarship originating elsewhere. Some of them emulate the look and feel of a book, while others explore alternative interfaces. For more on their progress, the Getty has just released an OSCI report that gathers the museum grantees’ experiences in publishing these new, multifaceted collection catalogues.

I recently read the New York Times article “A Vast Museum That You Can Carry,” reviewing the Metropolitan Museum of Art’s new 449-page guidebook. Ken Johnson wrote, “How useful the new guide will be when anything you want to know about the Met and its holdings can be quickly accessed on the museum’s world-class Web site is an interesting question.” It’s an interesting question indeed, and one that we at the Walker look forward to investigating as we work on melding the best of the book with the amazing possibilities offered by digital publishing.

“Ceci tuera cela”: Narrative Games and the Future of Books

Today’s post comes courtesy of MIT GAMBIT researcher, game author, and Shakespeare scholar Clara Fernandez-Vara:

Fourteen years ago, Umberto Eco wrote an essay on the future of the book in which he invoked a passage from Victor Hugo’s The Hunchback of Notre-Dame:

“As you no doubt remember, in Hugo’s Hunchback of Notre Dame, Frollo, comparing a book with his old cathedral, says: “Ceci tuera cela” (The book will kill the cathedral, the alphabet will kill images). McLuhan, comparing a Manhattan discotheque to the Gutenberg Galaxy, said “Ceci tuera cela.” One of the main concerns of this symposium has certainly been that ceci (the computer) tuera cela (the book).”

The future Eco wrote about is now. His concept of the computer is somewhat reductionist; rather, we have to talk about digital media. Computers are everywhere, from phones to rice cookers to fridges. The print book industry has been revolutionized by the widespread use of e-book readers and tablets, which allow us not only to have instant access to a lot of books, but also to carry around more books than we could read in a lifetime. Books will not be killed by computers; rather, computers are giving new life to books by providing a new technology for accessing them. It turns out that books are pretty resilient to technological change.

Speaking of media and killing, the media form that may threaten books is videogames, which are routinely accused of doing horrible things to people. The threat digital games pose is (supposedly) that they absorb you in their worlds and make you dumb, making you forget about other people and about having a life. Literature prefigured this supposed media effect long ago: chivalry novels dried out Don Quijote’s brains so that he couldn’t distinguish reality from fantasy.

Even today, people still think of digital games as a frivolous pastime, discounting their narrative possibilities. Playing videogames requires specialized literacy: in the same way that novels require not only knowing how to read but also understanding genre conventions and intertextual references, videogames require being able to navigate a virtual space and being familiar with the rules of different game genres, amongst other things. Games and books may have more to do with each other than one would think, because they both absorb the reader or player into their worlds, trap them in their narratives, and require specialized knowledge.

A media form is not going to kill another, in spite of what Frollo said, but it can certainly transform it. Videogames can change how and why we read books. We can read books in games, where we find bestiaries of the creatures that haunt the dungeons we traverse as the heroes of games like Castlevania: Symphony of the Night. Books can open gates to new worlds, as in the Myst series, where we read diaries of the previous inhabitants of the world we explore, and we can jump into them literally in order to enter other parts of the world. Videogames are another medium that novels and short stories can be adapted to: we can become the protagonist of The Great Gatsby, avoid drunk partygoers, and fight the disembodied eyes in glasses that seem to watch the action of the movie. We can also become Moby Dick itself and decimate the merciless whalers, earning our reputation as the killer whale.

Videogames and books will never be at odds; they are already part of the same media ecology, along with movies, websites, magazines, and television. They are all gates to worlds that we participate in and experience. One can lead us to another: the Myst games were complemented by a series of novels that expanded on the story of the game’s world. Dante’s Inferno can lead players to read the poem it is allegedly based on; then players can be horrified at the distorted notion of what adaptation means. Games and books as media forms are already in dialogue: we have books about games, not only fiction (Neal Stephenson’s Snow Crash), but also hint books to help players know everything about their favourite videogames, and accounts of people’s playing experience (Sudnow’s Pilgrim in the Microworld, or Bissell’s Extra Lives). The challenge remaining is making more games about books, not only adaptations but also games about reading (Gregory Weir’s Silent Conversation). In the same way we have books to help us become better at games, we could make games that help us be better at reading books.

Videogames will not kill books, although there may be a bit of a friendly scuffle. The day when we read games and we play books is not far.

Other Electronic Books: Print Disability and Reading Machines

A guest post by Mara Mills, Assistant Professor of Media, Culture, and Communication, NYU. Mills is currently researching the history of talking books and reading machines.

The demand for “print access” by blind people has transformed the inkprint book. Some scholars today distinguish between e-books and p-books, with the “p” standing for print, yet already by the early twentieth century blind people and blindness researchers had partitioned “the book” and “reading” into an assortment of formats and practices, including inkprint, raised print, braille, musical print, and talking books. In turn, electrical reading machines—which converted text into tones, speech, or vibrations—helped bring about the e-book through their techniques for scanning, document digitization, and optical character recognition (OCR).

The first such reading machine, the Optophone, was designed in London by Edmund Fournier d’Albe in 1913. A “direct translator,” it scanned print and generated a corresponding pattern of tones.  Vladimir Zworykin (now known for his work on television) visited Fournier d’Albe in London in the 19-teens and saw a demonstration of the Optophone. At RCA in the 1940s, he built a reading machine that operated on the same principles, followed by an early OCR device that spelled out words letter by letter using a pre-recorded voice on magnetic tape.  John Linvill began working on an optical-to-tactile converter—the Optacon—in 1963, partly as an aid for his blind daughter.  Linvill soon became chair of the electrical engineering department at Stanford, and the Optacon project became central to early microelectronics research at the university. Linvill and his collaborator, Jim Bliss, believed that a tactile code was easier to learn than an audible one, because the analogy between visible and vibratory print was more direct (both formats being two-dimensional). Extending the technique of character recognition (rather than direct translation), in 1973 Raymond Kurzweil launched the Kurzweil Reading Machine for the Blind, a text-to-speech device with multi-font OCR. As he recalls in The Age of Spiritual Machines, “We subsequently applied the scanning and omni-font OCR to commercial uses such as entering data into data bases and into the emerging word processing computers. New information services, such as Lexis (an on-line legal research service) and Nexis (a news service) were built using the Kurzweil Data Entry Machine to scan and recognize written documents.”
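
To get an intuition for what “direct translation” means in contrast to character recognition, here is a rough sketch in Python (the five tone frequencies, the bitmap, and the scanning logic are simplifications for illustration, not a description of Fournier d’Albe’s actual device): each vertical slice of a scanned line of print is mapped to a chord of tones, so the listener hears the shape of the letters rather than their identity.

    import numpy as np

    # Five illustrative tone frequencies (Hz), one per tracking point in a
    # vertical slice of the scanned line. The real Optophone used a small,
    # fixed set of musical tones; these particular values are placeholders.
    TONES_HZ = [440, 494, 523, 587, 659]

    def column_to_chord(column):
        """Frequencies that sound for one scanned column: a tone plays
        wherever its tracking point falls on ink."""
        return [f for f, inked in zip(TONES_HZ, column) if inked]

    def scan_line(image):
        """Sweep left to right across a 5-row binary image and report the
        chord heard at each column. No character recognition happens here;
        the sound simply traces the printed shape."""
        return [column_to_chord(col) for col in np.asarray(image, dtype=bool).T]

    # A crude 5x4 bitmap standing in for a printed letter "E" (illustrative only).
    letter = [
        [1, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 1, 1, 0],
        [1, 0, 0, 0],
        [1, 1, 1, 1],
    ]

    for i, chord in enumerate(scan_line(letter)):
        print(f"column {i}: {chord or 'silence'}")

A character-recognition machine, by contrast, would decide which letter the bitmap represents and then spell or speak it, which is the step added by Zworykin’s later device and, with full text-to-speech, by Kurzweil’s machine.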

Harvey Lauer, one of the foremost experts on twentieth-century reading machines, was the blind rehabilitation and technology transfer specialist at the Hines VA Hospital for over thirty years. Colleagues Robert Gockman and Stephen Miyagawa have called him “the ‘father’ of modified electronic devices for the blind and the ‘Bionic Man’ of the Central Blind Rehabilitation Center.” Lauer attended the Janesville State School for the Blind, where he studied music and tinkered with electronics and audio components. He earned his B.A. in Sociology from the University of Wisconsin-Milwaukee in 1956 and his M.S. in Vocational Counseling from Hunter College the following year.  Shortly before his retirement from the VA in 1997, Lauer wrote a speculative paper on the “Reading Machine of the Future.” By that time, personal computers were common and flatbed scanners were becoming affordable for home use. Text-to-speech software was beginning to replace the standalone reading machine. Yet the increasing complexity of graphical user interfaces inhibited blind computer users, and a conservative approach to reading (i.e. tying print to speech) was embedded in commercial OCR software. Lauer advocated a “multi-modal reading aid” with braille, tonal, vibratory, and speech outputs for translating text and graphics. With Lauer’s permission, I’ve excerpted the following selection from his unpublished article.

 

READING MACHINE OF THE FUTURE
BUT THE FUTURE WON’T JUST HAPPEN

Harvey Lauer
September 12, 1994

From 1964 to the present, I have used, tested and taught fourteen reading machines and many more devices for accessing computers.  Working for the Department of Veterans Affairs, formerly the Veterans Administration, I saw much progress and several lessons forgotten.

The system I feel we really need will have a choice of modalities—speech, Braille, large print and dynamic graphic displays.  It will be configurable according to the user’s needs and abilities. It will scan pages into its memory, process them as best it can, and then allow us to read them in our choice of medium.  Automatic sequencing would be our first choice for easily-scanned letters, articles and books.  But it will also let us examine them with a keyboard, a tablet, a mouse or perhaps tools from Virtual Reality. It will offer us any combination of speech, refreshable braille or large print as well as a verbal description of the format or layout.  Because we will be able to use that description to locate what we want to read, it will be easier to use than current OCR machines, but not larger. When we also need to examine shapes, we will switch on tonal and/or vibratory (graphical) outputs.  As I have noted, examining the shape of a character or icon is far easier than reading with such an output.

In short, the system will offer a three-level approach to reading.  The first choice is to have a page or screenful of text recognized and presented either as a stream of data or as data formatted by the machine.  We can now do that with OCR machines.  At the second level, we can choose to have the machine describe items found on pages or screen displays and their locations.  We can have either brief descriptions or descriptions in “excruciating detail.”  We can then choose items by name or characteristics. That won’t always be sufficient, so we will have a third choice.  We can choose to examine portions of the page or individual items found by the machine, using speech, braille characters, a display of tones, an array of vibrators, a graphic braille-dot display or magnified and enhanced images. Once the basic system is developed, it will constitute a “platform” for people like us to test its practical values and for researchers to test new ideas for presenting information to humans.

It’s 1997.  You place a page on your scanner.  It could be a recipe, a page from a textbook or part of a manual.  You direct the machine to scan it into memory.  You suspect that it isn’t straight text, so you don’t first direct the machine to present it in speech or braille.  You request a description of the format and learn that the machine found two columns of text at the top, a table, and a picture with a caption.  It also noted there were some tiny unidentified shapes, possibly fractions.

You then turn to your mouse (or other tracking device) which you move on an X/Y tablet.  (This concept of a tablet was best articulated by Noel Runyan of Personal Data Systems in Sunnyvale, California.) You switch to freehand tracking and examine the rest of the page for gross features, without zooming.  You find the table, plus what appears to be a diagram and some more text.  With the mouse at the top of that text, you switch to assisted tracking.  Now the system either corrects for mistracking or the mouse offers resistance in one or the other direction, depending upon your choices.  As you scan manually, the text is spoken to you.  After reading the block of text, you read the caption and examine the table.  You find that some of the information needs to be read across columns, and some makes sense only when read as columns.  You are thankful that you don’t have an old-fashioned OCR, screen reader and Optacon to tackle this job.

Then you find a longer piece of data you want to copy, so you “block and copy” it to a file.  In examining the diagram, you find tiny print you want to read, but the OCR can’t recognize it, so you zoom in (magnify) and switch to the mode in which shapes can be examined.  Depending on your equipment and your abilities, you can have them presented as vibrating patterns on an Optacon array, as tone patterns, as a graphic, dot image on a rapidly-refreshing array of braille dots, or as a combination of those modalities.  You may or may not have the skill to read in this way; few people make the effort to develop it nowadays.  What you do is examine the characters slowly and trace the lines of drawings in which you are interested.

With the new instrument, we won’t have to give up nearly as often and seek sighted assistance.  Optacon users will no longer have to remove the page and search about with camera in hand as if reading a map through a straw.  Computer users will still have our screen access software.  OCR users will still have their convenient, automatic features.  However, when you use a current OCR machine to scan a page with a complex format, the data is frequently rearranged to the point where it’s unusable.  Such items as titles, captions and dollar amounts are frequently scrambled together.  It makes me feel as if I am eating food that someone else has first chewed. With the proposed system, when its automatic features scramble or mangle our data, we can examine it as I have described.

The exciting point is this:  The proposed integrated system with several optional modules would harness available technology to allow us to apply the wide gamut of human abilities among us to a wide gamut of reading tasks.  In 1980, I presented this idea in a paltry one-page document added to an article about reading machines.  I then called it the Multi-dimensional Page Memory System.  I’ve given it a new name—the Multi-modal Reading Aid.

 

Two Directions for Material Books

I am a book artist who spent twelve years hand printing and binding a book entitled Pictorial Webster’s. I then spent a year with Chronicle Books making it into a trade edition that would sell for $35.00 and yet still retain some of the qualities of a finely printed book of yore. I have been on both sides of the physical book world. I am one of the very tiny publishers most concerned with making beautiful books using quality materials and bookbinding technique, allowing the price to be dictated by time and materials. Not having an intermediary agent, I worked directly with Chronicle and gleaned some insight into the larger publishers’ continual march toward the cheapest product.

There is a segment of society that still craves a beautifully bound book, and this segment also tends to have a great deal of wealth. It is much like the turn of the 20th century: the Linotype and Monotype helped speed the mass production of books, but they also spawned the Arts and Crafts movement and small publishers such as the Kelmscott Press, which published lavish, traditional books. Modern-day publishers such as 21st Editions have capitalized on this same feeling that beautiful books are about to become extinct. (21st Editions has even registered the phrase “The Art of the Book” as their own!?) I came to publishing from the angle of a book artist. Pictorial Webster’s was first a small run of 100 hand-printed, leather-bound books that took me twelve years to produce, and all of the copies have yet to be bound. I have noticed that I have made many more sales to private individuals than I have to institutions.

Because Pictorial Webster’s is such a visual book, it lives best as a physical book, which accounts for its great sales as a trade edition. When I was ready for the mass-produced book to be printed, I was in negotiations with two publishers. One was Chronicle Books, the publisher I thought from day one would be a good fit for selling an “Artist’s Book,” as they had published Griffin and Sabine, Aunt Sally’s Lament, and other books that were way off the norm. I had also been approached by Melcher Media, a book packager in NYC. Melcher Media’s lure was greater control over the final product. Mr. Melcher did his best to give me the impression we would produce the book using the most responsible production techniques and materials, as environmental and economic concerns are important to me. I believe sustainability should be a concern for all disciplines. It is sad that many companies have been using paper pulp made from clearcut rainforests in Indonesia. I had decided that I would not print my book commercially if it was to be produced in China. My dream was to buy paper from the Mohawk paper company, which is responsible about using post-consumer paper and produces all of its energy with wind power. My ultimate printer would have been Stinehour Press, perhaps with binding by Acme Bookbinding in the Boston area.

I tried to convince the publishers that the consumers who would buy my book would also be willing to pay more, but I heard about research showing that there is a big cutoff at $30 that many book buyers will not cross. I began Pictorial Webster’s in 1996 and planned from day one to try to have a trade edition made. In those days Merriam-Webster had expressed interest in publishing the book and had given me the green light to find a printer that could do it for under $4 per book. (I was told they liked production costs to be 1/8 of list price.) I was therefore sourcing materials, sending samples to various printers, and scouring the shelves of libraries and bookstores to see what was possible in commercial printing. Although there continue to be bright spots in commercially produced books, much of what I have experienced is depressing.

At my first real meeting with Melcher to discuss the production of the book, it became clear that China was what they had in mind for everything. “Come on, get real,” was his response to my complaint. “You will never get this printed domestically. It’s never going to make money.” Thankfully, Chronicle delivered production in Canada. It wasn’t ideal, but it was the best they could do. Few books are sewn in the United States, and many of the printers I had originally contacted for quotes had gone out of business, including Stinehour Press. One of the stumbling blocks for production in China, it turns out, was the cream-colored paper I desired. At the time we printed Pictorial Webster’s, the only way to get off-colored paper in China was to have an initial print run with a cream-colored ink!

I wrote an article in Ampersand Magazine detailing my struggle to get my book produced in a way that I thought would make it a pleasing product. As a bookbinder I wanted my book to retain what I thought were the most important qualities of a book: good printing, a pleasing feel in the hands, and good design that works in a book. As much as I agonized, Chronicle was very good at making that happen. Unfortunately, though I thought I had convinced Chronicle Books that making a beautiful book would help drive sales, I bought a copy of their fifth reprint only to discover that production had been shipped overseas to China! I had had an understanding with Chronicle that this would not happen. (Was it in the contract? I can’t recall.) I had given them .5% of my royalties, in fact, to help keep production local. The new book’s cover is not printed on the Environmental Fiber paper that we used for the first book, and the printing on the cover is nowhere near the quality of the original. . . . I will update this with news as this story evolves.

As much as I thought there was hope at some point for mass-produced books, I find I am agreeing more and more with William Morris: if you want a beautiful book, you may as well make it yourself. Most physical books for the trade will probably be more and more cheaply made, while a very small segment of publishers – folks such as David Godine – will continue to make books of quality design and manufacture, as there will continue to be a demographic that craves good books. And as Book Arts continue to flourish in college art departments, handcrafted, self-published books will increase as well.

Angels & Books

(A guest blog by Erika Boeckeler, Northeastern University)

“An angel..vnderstondyth and knowyth sodaynly wythout collacion of one thynge to a nother.”

-Bartholomew de Glanville, De Proprietatibus Rerum (13th c, translated 1495) ii. xviii. 43

Somehow any quotation about angels evokes the most lovely romantic notions: fluttery wings, a breath of air, purity, beauty as knowledge and knowledge as beauty, the color white or the rainbow wings of painted Renaissance angels or just a sparkling light of the “Let there be light” variety…

What I love about this quotation is that it describes a fantasy of angelic epiphany that evokes all those notions. We cherish those moments in which we suddenly, instantaneously just know and understand, with a knowledge so pure that we flutter free from the weight of the world’s thingyness.

But the word I want to focus on in this quotation is collation. The Oxford English Dictionary, which lists this quotation as an early example of the word, defines collation as “the action of bringing together and comparing; comparison,” with the more specific definition given as the “textual comparison of different copies of a document; critical comparison of manuscripts or editions with a view to ascertain the correct text, or the perfect condition of a particular copy.” Angels don’t need to collate–they already have the perfect text–but we mortals do. The practice of collation seems a fitting way to think about what we’re doing as we experience the once and future history of the book.

I owe my knowledge of this quotation to the bibliographer Carter Hailey, inventor of a collating machine called Hailey’s COMET (see below), whom I met in person during a seminar on the first printed version of Hamlet last weekend at the annual meeting of the Shakespeare Association of America. Hailey’s paper for the seminar describes two kinds of collation that bibliographers–people who meticulously describe books–perform: horizontal and vertical. Horizontal collation involves the comparison of multiple copies from the same printed edition of a given work. You may wonder why anyone would need to do this; aren’t all books in an edition exactly alike? In fact, early modern books from a single print run may differ wildly from each other! Corrections would be made while the book was in press, but paper and labor were expensive, so pages with mistakes would be bound up with other pages further along in the run. The three main printed versions of Hamlet contain hundreds of differences apiece, including one that calls the title character “Hamlee.” While that example may be merely surprising, others can be more substantial and radically alter the way we understand a passage.

What I’m interested in exploring here is the second kind of collation: vertical collation involves comparing different manuscripts and/or different printed editions across time. In the case of Hamlet, this involved noticing that successive readers encountered a play with characters first named Corambis and then Polonius, Ofelia then Ophelia, Rossencraft and Gilderstone then Rosencraus and Guyldensterne then Rosencrance and Guildenstein, Gertrard then Gertrude. Would the real Hamlee please stand up?
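
For a sense of what such a comparison looks like when a computer, rather than an optical instrument, does the aligning, here is a small Python sketch using the standard difflib module. It simply pairs up the character-name variants listed above (the grouping into two “versions” is illustrative; the machines pictured below work optically on whole page images, not on transcribed text):

    import difflib

    # Character-name readings from different printed versions of Hamlet,
    # paired up for illustration from the variants mentioned above.
    version_a = ["Corambis", "Ofelia", "Rossencraft", "Gilderstone", "Gertrard"]
    version_b = ["Polonius", "Ophelia", "Rosencraus", "Guyldensterne", "Gertrude"]

    for old, new in zip(version_a, version_b):
        # SequenceMatcher scores how much of the spelling survives between readings.
        ratio = difflib.SequenceMatcher(None, old, new).ratio()
        marker = "==" if old == new else "<>"
        print(f"{old:>13} {marker} {new:<13}  similarity {ratio:.2f}")

Every pair here diverges, which is exactly the kind of signal vertical collation is after; horizontal collation would run the same comparison across copies of a single edition, where most readings match and the rare mismatch marks a correction made while the book was in press.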

Working across time in ways that readers rarely or never did, collators bring invisible histories of books to light. Randall McLeod, bibliographer and mad-scientist inventor of another collation machine, reads Renaissance books that were never printed–the faint indented contours of type never inked–and books hiding within other books–the traces of ink from one page dried upon the pages of another. Sometimes the findings of collators spawn heated debates in the halls of academia; sometimes they spur actual revolutions. The great religious schism of seventeenth-century Russia, in which thousands died, was caused by–yes!–an act of collation: someone discovered that fundamental religious texts had been radically altered over centuries of copying and recopying.

In the study of the future of the book, the work of vertical collation is inevitable. In some ways, our non-angelic brains are constantly collating, measuring the experience of one book form against another. Books now are becoming less weighty thynges, or not even thynges at all; collation technologies are becoming increasingly sophisticated, and scholars are performing more sophisticated kinds of reading with them. Collation may morph into a different kind of work altogether.

Collation in the twentieth century has been intimately associated with technologies. Here are a few collation machines:

A fifteenth-century collator from the first European bestseller, Sebastian Brandt's _Ship of Fools_ (1494).
The Hinman Collator, invented by Charles Hinman in the late 1940s to collate Shakespeare's texts. It's gigantic.
The McLeod Portable Collator, designed by Randall McLeod in the early 1980s. The picture features McLeod himself.
Hailey's COMET, an even more portable collator designed by Carter Hailey in the 1990s. Image courtesy of Carter Hailey.