University of Virginia Library

1. Material Orientation

The attraction of the material orientation derives primarily from the fact that documents, regardless of one's orientation, are the "bottom line" or primary evidence for works. Efforts to "get behind the evidence" or to "sort out priorities among documents" involve critical analysis, inference, and argument, which introduce dispute and opinion. Since all documents are manufactured objects, subject to the human foibles of creation and production, any new document would, by that logic alone, take its place alongside the already extant documents as a document with a potentially variant text. If the new document's text is somehow inferior (or superior) to the texts of the historically extant documents, then mere documentary existence is not the only essential element for scholars who use the material argument. It is something else, such as recognition of the importance of one or more of the other textual elements, or some concept of authority that asks of the new document, "By what authority do you claim to be primary evidence?" This question must then be asked of each document, including drafts and manuscripts. Or differences in the value of one document over another could be related to temporal priority. Or value might arise from the particular agents who created the more valuable document. For most documentary editors there is something beyond its status as document that raises the value or authority of the text in an historical document above that of a new text created with the tools of modern scholarship.

The material orientation is usefully divided into two subsets: (a) lexical and (b) bibliographic. Both are depersonalized approaches to the document. Neither asks, "Who did this?" But, interestingly, only the lexical subset allows, logically, for editorial work.

(a) The lexical approach distinguishes between the document and the text far enough to allow the text to be replicated but, usually, not emended, because the "lexical text in the document" is the ultimate textual evidence.3 We call it lexical because what usually indicates to an editor that a flaw needs to be pointed out or corrected is a violation of lexical conventions. But authors who are known to have deliberately violated those conventions (famously Joyce, for example) pose serious difficulties for determining what is an error. The lexical text as found in a document is a historical fact—replicable, but only minimally emendable under this view.

(b) The bibliographic approach4 includes both the visual or iconic aspects of documents (which can be reproduced photographically) and the tactile or physical structure (which can be photographed but not reproduced).5 The bibliographic approach logically allows neither replication nor emendation.

Both the lexical and the bibliographic approaches are documentary in that they consider the physical document to be the basic unit of textual evidence, but where the lexical focuses on the text, the bibliographic focuses on the material object. Replication of any material aspect of a work in a new edition would entail new invention and, hence, failure to reproduce an essential aspect of the original object. For the bibliographic approach, facsimiles can produce similar effects and give some notion of the effects created by the original, but any emendation distorts the physical, historical record.


Some, notably among historical-critical editors, do emend "demonstrable errors," though in doing so, they must invoke a non-materialist evaluation.


The bibliographic approach, based primarily on Jerome McGann's commentary on the distinctions between lexical and bibliographic code, considers the design of the book (page layout, fonts, deployment of white space) and characteristics of the material object to be part of what is meant by "the work in this form." See Jerome J. McGann, The Textual Condition (Princeton: Princeton Univ. Press, 1991); and "Theories of the Text," London Review of Books (18 Feb. 1988): 20–21. McGann provides ample description of the types of meaning that the bibliographic grafts onto the lexical text. However, the logic of this view of the work dictates that any attempt to edit or replicate the work would have to adopt a new bibliographic code with new implications and would, thereby, forfeit any editorial claim to have replicated the original. We do not call it a bibliographic "code" because that suggests a semantics that can be codified. See Paul Eggert's argument about this in "Text as Algorithm and Process," Text and Genre in Reconstruction: Effects of Digitalization on Ideas, Behaviours, Products and Institutions, ed. Willard McCarty (Cambridge: Open Book, 2010): 183–202, esp. 189–191.