II

If discussion of the role of punctuation and spelling and of physical analysis in editing has been rather neglected in the theoretical writings devoted to early texts, the problem of how to determine the relationships among the surviving texts certainly has not been neglected. The great body of literature on the theory of the textual criticism of ancient and medieval writings has focused on what is after all at the heart of all critical editing: the question how to choose among variant readings, which in turn involves an assessment of the relationships among the witnesses and an evaluation of when a departure from all the variants would bring one closer to the author's intention. These matters have of course been debated at length by editors of modern works also, and the same central issues link all these discussions together. All of them in fact can be seen as variations on the theme of objectivity versus subjectivity. Some urge the desirability of as objective a system as possible,
in which the role of the scholar's own judgment is minimized; others argue for the superiority of taste and insight applied to individual cases over the attempt to follow a predetermined rule. One's position along this spectrum affects, directly or indirectly, how one will approach all other textual questions—such as how much authority one assigns to a "copy-text" or a "best text" and how much freedom one perceives to be justified in altering it (by drawing on other texts or on one's own conjectures). Fluctuations from one direction to the other have characterized editorial thinking in all fields; but the lack of interdisciplinary communication is reflected in the fact that the various fields have not fluctuated in unison.

Fredson Bowers, writing on "Textual Criticism" in the 1958 edition of the Encyclopaedia Britannica, illustrates this point by suggesting how the editing of modern texts has benefited from earlier work on the classics: "The acceptance of Housman's attitude and its extension, about the middle of the 20th century, to editing from printed texts constitutes one of the most interesting of modern developments in editorial theory." Bowers here takes Housman as the exponent of a movement away from the Lachmannian tradition of relying whenever possible on the archetype as established through genealogical reasoning. Although many have pointed out the fallacy of believing that a "best" text has the correct readings at points where it is not obviously in need of emendation, Housman's famous remark in the preface to his 1903 edition of the first book of the Astronomicon of Manilius must be regarded as the classic statement of it:

To believe that wherever a best MS. gives possible readings it gives true readings, and that only where it gives impossible readings does it give false readings, is to believe that an incompetent editor is the darling of Providence, which has given its angels charge over him lest at any time his sloth and folly should produce their natural results and incur their appropriate penalty. Chance and the common course of nature will not bring it to pass that the readings of a MS. are right wherever they are possible and impossible wherever they are wrong: that needs divine intervention; . . . .[30]
The reason that this fallacious approach (the "art of explaining corrupt passages instead of correcting them" [p. 41]) gained currency, according to Housman, is not only that "superstition" is more comfortable than truth but also that it was a reaction against an earlier age in which "conjecture was employed, and that by very eminent men, irrationally" (p. 43). Exactly the same sequence—but delayed by several decades—can be
observed in the history of editorial approaches to printed texts. R. B. McKerrow's edition of Thomas Nashe (1904), though it appeared at almost the same time as Housman's Manilius, represented the kind of distrust of eclecticism that Housman was attacking. McKerrow was one of a group of scholars of English Renaissance drama whose work would revolutionize the study of printed texts by showing the interdependence of physical and textual evidence; the analytical techniques that resulted did at times enable McKerrow and his colleagues to settle a textual point conclusively, as a matter of demonstrable fact, and to that extent editing was legitimately put on a more "scientific" basis. But in most cases there was still a large area in which the facts were not conclusive, and here McKerrow took the position that involved the least exercise of editorial judgment, the decision to adhere to the text chosen as copy-text. In so doing he was reacting, at least in part, against the undisciplined eclecticism that had characterized nineteenth-century editing in this field.[31] The event that Bowers refers to, in the mid-twentieth century, representing a reinstatement of editorial judgment—but, like Housman's, on a more responsible basis than previously—was W. W. Greg's "The Rationale of Copy-Text." Greg broke down the notion of a single authoritative text in two ways: the more novel way, which he was the first to suggest, was that the primary authority for accidentals might reside in a different text from that for substantives (generally an early text for the accidentals, a later one for the substantives); the less surprising way, in line with Housman's criticisms, was that an editor could judge individual readings on their own terms and did not have to accept all variants that were not manifestly impossible simply because they came from a text that was known to contain some authorial revisions.[32] Both Greg and Housman restore editorial judgment to a place of prominence; but that judgment is firmly directed toward the determination of what the author would have written, whereas the earlier proponents of eclecticism (against whom the immediate predecessors of Greg and Housman were rebelling) tended to be less scrupulous in distinguishing between what they themselves preferred and what the authors being edited would have preferred.

The hope of having a single text to rely on dies hard, however, and
one mark of the wisdom of Greg's essay is that he recognized the danger that he labeled "the tyranny of the copy-text."[33] Although his rationale for selecting a copy-text entailed choosing a text that could justifiably be accorded presumptive authority in cases where the variants seemed completely equal (particularly, in practice, in regard to accidentals), he understood that there had always been a temptation to let the weight of copy-text authority extend to readings that did not deserve such support. Greg's rationale does not (though some of its critics seem to think it does) provide timid editors with the opportunity to shirk, in the respectable name of conservatism, difficult decisions. Of course, it can rightly be regarded as conservative, and sensibly so, to retain a copy-text reading, even if one personally does not prefer it, when one is not convinced that any of the alternatives are authorial; Greg's point is simply that one should not be deterred, by whatever authority attaches to the copy-text, from altering it when one is convinced (through critical insight, in the light of all available evidence) that another reading is, or comes nearer to, what the author intended.[34] Sometimes editors, both of classical and of modern works, argue that the most they are justified in doing is to attempt to purge the copy-text, or archetype, or paradosis, of errors—not to try to restore what the author wrote. But this argument cannot be praised for its respect of historical evidence; rather, it confuses two kinds of edition, both legitimate, neither of which, when done properly, disregards the evidence. If one is interested in a text as it appeared at a particular time to a particular audience, a diplomatic or facsimile edition of it serves the purpose best; correcting errors in it—editing it critically—would be out of place, for the errors, though unintended, were part of what the contemporary readers saw in the text in front of them. If, on the other hand, one wishes to correct errors—to try to repair the damage done to the text in transmission, however famous or influential its corrupt form may be—then one is producing a text that differs from any now extant (probably from any that ever existed), and
the aim of the alterations is obviously not the preservation of a documentary form of the text but the construction of a text as close as possible (as close, that is, as surviving evidence permits) to the one the author intended.[35]

Some confusion on this point has been exhibited in the debate among editors of modern works over whether to choose an author's final manuscript as copy-text in preference to the first printed edition set from it. Of course, any attempt to fix a general rule on this matter is misguided, since situations vary greatly, and in some cases an author's revisions in proof may have been so thorough as to make the printed edition the proper choice. Some editors, however, prefer the first edition not for such reasons, but because it is the product of a historical moment; even though some aspects of its text may be the result of changes made in the publishing office or pressures brought to bear on the author by the publisher or others, the author accepted these conditions, they say, as part of the whole publishing process, and the text of the first edition is the one that emerged from a specific set of historical forces and the one that the public first read. This argument, however, leads only to the production of a facsimile edition; it has no relevance to a critical edition, although it is sometimes offered as if it did have, through a failure to think clearly about what the two approaches mean. Editors of earlier material do not encounter the problem in quite this form, since they do not deal with authorial manuscripts or authorially supervised printed texts, but the general issues are familiar to them. One manifestation of the exaggerated respect accorded to individual printed texts is the problem of the textus receptus of ancient writings. The text of the New Testament, or of other writings, that reached print was not, of course, necessarily more authoritative than other texts; but the controversy that sometimes surrounds editorial decisions to depart from the textus receptus suggests the irrationality with which a favored text can be defended. Clearly there are many differences between this situation and the question, faced by editors of modern works, whether to turn from printed book to manuscript for copy-text. But there is an essential similarity as well: in both cases the scholar's responsibility is to examine all the evidence in an effort to come as close as possible to the text intended by the author,[36] however many or few steps removed
such a text may be from the texts that survive. Deciding whether an author's intention includes acquiescence to changes made by the publisher is a problem of more immediate concern to editors of modern writings; even so, such an editor's decision to follow a first edition may look just as foolish as the hesitation to depart from the textus receptus on the part of an editor of earlier material.

Greg's rationale for selecting a copy-text was of course set forth in the first instance for editors of printed texts that are not far removed from authorial manuscripts; and near the beginning of his essay he distinguishes his approach (growing out of McKerrow's) from that appropriate for the classics. In the latter, he says, "it is the common practice, for fairly obvious reasons, to normalize the spelling," whereas in the editing of English texts "it is now usual to preserve the spelling of the earliest or it may be some other selected text":

Thus it will be seen that the conception of "copy-text" does not present itself to the classical and to the English editor in quite the same way; indeed, if I am right in the view I am about to put forward, the classical theory of the "best" or "most authoritative" manuscript, whether it be held in a reasonable or in an obviously fallacious form, has really nothing to do with the English theory of "copy-text" at all. (p. 375)
It is true that a concern for incorporating in an edition documentary punctuation and spelling led to Greg's perception that the text with authority for accidentals might not be the same as the one with authority for substantives and to his statement that "the copy-text should govern (generally) in the matter of accidentals" (p. 381). In fact, however, the distinction between substantives and accidentals, though it has its uses, is not crucial to the concept of copy-text that Greg calls "English," as the word "generally" in his sentence suggests. Editors following Greg's general line would in practice emend the copy-text with a later reading of any kind, a substantive or an accidental, that could convincingly be argued to be authorial; and in the cases where the variants seem evenly balanced, they would fall back on the copy-text reading. Thus what underlies this conception of copy-text is the idea of presumptive authority, a text to be relied on when one finds no basis for preferring one variant over another—an authority, it must be emphasized, that does not restrict one's freedom to choose variants from other texts when there is reason to do so. It may be that editors of modern writings will normally choose their copy-texts, as Greg was the first to point out explicitly, to serve primarily as the authority for accidentals; but it does not follow
that a different understanding of copy-text is required for editors of earlier materials, even when they are not concerned with reproducing documentary accidentals.[37] The fact that editors dealing with different periods may have to take somewhat different positions regarding accidentals is a superficial matter that does not alter the fundamental questions they all have to face. The real issue that should be raised about the "English" conception of copy-text is whether the idea of a text of presumptive authority is appropriate to all patterns of textual descent—an issue relevant to modern as well as earlier texts. If we are not distracted by the problem, undeniably troublesome, of how to treat spelling and punctuation, we can see that Greg's essay takes its place in the larger tradition of textual theory: like the seminal pieces on the editing of classical, biblical, and medieval works, its dual theme is textual authority and editorial freedom. To state a rationale of copy-text is inevitably to take a position on how much weight should be given to the editor's critical judgment in establishing a text—that is to say, how much alteration should be permitted in any given documentary form of the text, on the basis of the editor's assessment of its status, of the variants in other texts, and of further conjectures. The principal approaches to this question that have been advanced over the years are well known, and have often been surveyed.[38] I propose to do no more here than specify some
main lines, so that Greg's rationale can be seen in relation to them. They have not usually been taken up in this context, but doing so shows, I think, that editorial discussion might be sharpened by greater awareness of the entire tradition.[39]

For this purpose it is not necessary to go back beyond the approach usually associated with Karl Lachmann. Although scholars have shown that Lachmann's own contributions to the development of the "genealogical" approach have been greatly exaggerated,[40] his editions of the New Testament (1831) and of Lucretius (1850) stand as monuments linking his name with this method. Historically the importance of this movement is that it represented a reaction against the unprincipled eclecticism that had prevailed in the previous century (of which Richard Bentley was the most important, and most notorious, exemplar) and marked a recognition of what a scholarly approach must entail, at a time when ancient documents were beginning to be more accessible. There can be no question that the general drift of the genealogical approach is correct: that scholars must examine all the extant documents, learn as much about them as possible, and attempt to establish the relationships among the texts they contain. This much we would now take for granted as part of what it means to be scholarly. The difficulty comes in choosing a means for working out those relationships and in deciding what use to make of the data thus postulated; and when people refer to "the genealogical method" they normally mean the particular recommendations on these matters associated with Lachmann and his followers. Taken in this sense, the genealogical method can certainly be criticized, and its defects have by now been enumerated many times.[41] The essence of the method is to classify texts into families by
examining "common errors," on the assumption that texts showing common errors have a common ancestor.[42] Despite the obvious fallacies of such an approach, it had an influential life of more than a century and is regarded as the classic method of textual criticism. Two landmarks in its history added to its stature but at the same time can be seen to have made its weaknesses evident. One is B. F. Westcott and F. J. A. Hort's great Introduction to their edition of the New Testament (1881), which brilliantly stated the rationale for the approach and improved it methodologically (e.g., by focusing on agreements in correct or possibly correct readings rather than agreements in errors); they conclusively showed the illogic of relying on the textus receptus. However, as Ernest Colwell has carefully explained, Westcott and Hort in practice did not strictly adhere to the method, recognizing that editorial judgment in assessing the general credibility of individual manuscripts and the intrinsic merits of individual readings must remain central, even in an approach that emphasizes objectivity.[43] The second classic statement of the genealogical method is Paul Maas's famous essay, "Textkritik" (1927), best known to English readers in Barbara Flower's translation (not published until 1958).[44] It is a highly abstract distillation of the basic principles, showing their logic and soundness under certain conditions; but unfortunately those stated conditions (p. 3)—that each scribe copied from a single exemplar, not "contaminating" the tradition by drawing readings from two or more exemplars, and that each scribe also
made distinctive departures, consciously or unconsciously, from that exemplar—are unlikely to have obtained in real situations.[45]
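
Since the logic of the common-error step is easy to misconstrue, a small illustration may help. The following sketch (in Python, with invented sigla, readings, and error judgments, not data from any actual tradition) shows how the method pairs witnesses by shared errors; note that the list of "errors" must itself be supplied by the editor's judgment, which is precisely where the method's claim to objectivity gives way.

```python
from itertools import combinations

# Hypothetical collation: variation point -> {witness siglum: reading}.
collation = {
    1: {"A": "arma", "B": "arma", "C": "alma", "D": "alma"},
    2: {"A": "cano", "B": "canto", "C": "canto", "D": "cano"},
    3: {"A": "virum", "B": "virum", "C": "virum", "D": "uirum"},
}

# Readings the editor has judged to be errors, a subjective input,
# not a mechanical one.
errors = {1: "alma", 2: "canto"}

def shared_errors(w1, w2):
    """Count the points at which two witnesses share a judged error."""
    return sum(
        1
        for point, err in errors.items()
        if collation[point][w1] == err and collation[point][w2] == err
    )

witnesses = sorted({w for readings in collation.values() for w in readings})
for w1, w2 in combinations(witnesses, 2):
    n = shared_errors(w1, w2)
    if n:
        # The method's central assumption: a shared error implies a
        # shared ancestor in which that error first arose.
        print(f"{w1} and {w2} share {n} error(s): presumed common ancestor")
```

Everything mechanical here follows from the `errors` dictionary; change the editor's verdicts and the resulting families change with them.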

The force of these weaknesses is obvious, as is their relevance to the textual analysis of later material. Another of the often-discussed limitations of the method deserves to be underscored here: the fact that it does not make allowance for authorial revisions, for the possibility that variant readings result from the author's second thoughts as well as from scribes' errors and alterations. This oversight is not unique to the genealogical method but in fact exists, in greater or less degree, in all the approaches to textual criticism, regardless of the date of the works being considered. It springs from wishful thinking, for however difficult it is to choose among variants, it is easier to proceed on the basis that one is right and the others wrong than to recognize that several may be "right" or at least represent the author's preference at different times. Even among editors of modern works, where many authorial revisions can be documented, there is a reluctance to conceive of a text as containing multiple possibilities; and though an editor's goal is indeed to "establish" a text, editors—of works from all periods—should not forget that a "work" comprehends all the authorial readings within its several texts.

Another common criticism of the genealogical method—that one must revert to one's own judgment when the choice is, to quote Maas, "between different traditions of equal 'stemmatical' value" (p. 1)—calls attention to what may be a more serious problem: the tendency to think that the method generally minimizes the role of subjective judgment. The Lachmannian system is responsible for the standard division of editorial activity into recension and emendation and is therefore conducive to an attitude, as I suggested earlier, that takes the first of these procedures to be more objective than it is (or can be). There is superficially an appropriateness in distinguishing readings thought of by the
editor from those present in at least one of the surviving documents; indeed, from the point of view of documentary evidence, one is bound to regard any proposed reading not in the documents as falling into a distinctly separate category. But from the point of view of what are likely to be the authorial readings, this distinction is of no significance, for an editorial "conjecture" may be more certainly what the author wrote than any of the alternative readings at a point of variation. The very term "conjecture," or "conjectural emendation," prejudices the case; readings in the manuscripts are less conjectural only in the sense that they actually appear in documents, but they are not necessarily for that reason more certain. One is conjecturing in deciding that one of them is more likely to be authorial than another, just as one conjectures in rejecting all the variants at a given point in favor of still another reading. The process of conjecture begins as soon as one combines readings from two documents,[46] and every decision about what is an "error" in a document rests on the editor's judgment. Unquestionably the attempt to establish first a transmitted text is a more responsible procedure than to engage at once in speculation, before surveying the range of documentary evidence; but one must then resist the temptation to regard that text as an objective fact. Colwell states this point well in a comment on Hort:
His prudent rejection of almost all readings which have no manuscript support has given the words "conjectural emendation" a meaning too narrow to be realistic. In the last generation we have depreciated external evidence of documents and have appreciated the internal evidence of readings; but we have blithely assumed that we were rejecting "conjectural emendation" if our conjectures were supported by some manuscripts. We need to recognize that the editing of an eclectic text rests upon conjectures. (p. 107)
This problem is equally of concern to editors of modern works. Although their tendency to use "emendation" to mean any editorial change in the copy-text, including readings drawn from other documents, is more realistic, they are inclined to think that they are being cautious if they choose a documentary reading over one newly proposed by an editor. Such is not necessarily true, of course: the quality of the reading is everything, finally, and the editorial tact necessary to recognize that quality is at the heart of the whole process. The system associated with Lachmann's name cannot be held entirely responsible for editors' misunderstanding of this point, but it does seem to make the point harder to see by imputing to certain kinds of editorial decisions a greater objectivity than can usually exist.

Some of the people who have criticized the "Lachmann method" have set forth alternative approaches that have themselves become the subject of considerable discussion. One such person is Joseph Bédier, whose work, particularly influential in the medieval field, can serve to represent another general approach to editing. The introduction to his second edition (1913) of Jean Renart's Le Lai de l'Ombre, which has become the point of departure for the twentieth-century criticism of Lachmann,[47] concentrates on the two-branched stemma as evidence of the weakness of the genealogical method. The fact that most stemmata turn out to be dichotomous is regarded suspiciously as indicating more about the operation of the system than about the actual relationships among the manuscripts. What Bédier recommends instead is to choose a single good manuscript and to reprint it exactly except for any alterations that the editor finds imperative. This approach has been called "a return to the method of the humanists of the Renaissance";[48] certainly it is a move in the opposite direction from Housman's criticism of Lachmann at nearly the same time. When Giorgio Pasquali, ridiculing this best-manuscript approach, linked the English Shakespeare scholars with the medievalists in following it,[49] he was essentially correct in regard to the period before Greg's "Rationale." There is no question that, in spite of Housman's incontrovertible logic, the best-text theory—whether or not directly influenced by Bédier in every case—held sway over a great deal of editing in the first half of the twentieth century. An instructive paradox of the commentary on Bédier is that his position has been regarded both as extremely conservative, restricting the role of editorial judgment, and as extremely subjective, emphasizing the editor's own critical decisions. The strict adherence to a single text does suggest an attempt to minimize subjectivity; but the leeway then allowed the editor in deciding what readings are not possible and must be replaced sets very few restrictions on subjectivity. The point in the editorial
procedure where subjectivity enters may seem to have been shifted, but its extent has not been reduced. And in fact it is present from the beginning in both approaches—both in the selection of a "best" text and in the decisions involved in recensio.

Followers of Bédier and of Lachmann have been adept at suppressing recognition of the role of critical judgment at certain stages of the processes they favor, and they have failed to see that their apparently quite different approaches have much in common. The narrowness and confusion exhibited by such partisans can be illustrated in the work of a distinguished medievalist, Eugène Vinaver.[50] Admiring Bédier's criticism of Lachmann, he makes sweeping claims for the newer system:

Recent studies in textual criticism mark the end of an age-long tradition. The ingenious technique of editing evolved by the great masters of the nineteenth century has become as obsolete as Newton's physics, and the work of generations of critics has lost a good deal of its value. It is no longer possible to classify manuscripts on the basis of "common errors"; genealogical "stemmata" have fallen into discredit, and with them has vanished our faith in composite critical texts. (p. 351)
The real issue of course is whether objective rules or individual judgment will bring us closer to the author's text, and this fact is nowhere better shown than in the conclusion Vinaver draws from these observations (that "composite critical texts" are discredited) or in the statement he proceeds to make: "nothing has done more to raise textual criticism to the position of a science than the realisation of the inadequacy of the old methods of editing." Housman, for instance, would have agreed in general with most of Vinaver's paragraph but would have come to the opposite conclusion: that we must put more faith in critical texts and not aim to place editing in "the position of a science."[51] Vinaver realizes that Bédier's position, which he essentially approves, does not eliminate subjectivity, and his own effort toward injecting more objectivity into it is to explain six kinds of errors that arise from scribal transcription.[52]
Knowledge of them, he believes, will "widen the scope of 'mechanical' emendation" and "narrow the limits of 'rational' editing" (p. 365).[53] Vinaver is one of those editors who, in their eagerness to find objective criteria for editorial decisions, exaggerate the distinction between correcting an error and making a conjectural emendation. Vinaver's attention to scribal error stems from his belief that an editor should aim at "lessening the damage done by the copyists," not at reconstructing the original. To do the latter, he thinks, would be to "indulge in a disguised collaboration with the author" (p. 368). He does not seem to see that attempting to restore what the author wrote is different from altering the text to what, in one's own opinion, the author should have written. Like Bédier and other advocates of the best-text approach, he is not willing to say that the former is important enough to be worth risking along the way a few instances of the latter. Yet in defining the editor's role as that of "a referee in the strictly mechanical conflict between the author and the scribe" (p. 368), he does not eliminate the problem; he is not, after all, ruling out every editorial departure from the chosen text, and he leaves unsolved the question how one can satisfactorily distinguish safe and unsafe categories of critical activity. His effort to assist Bédier proves to be of no assistance in the end, for he overestimates, along with Bédier, the difference between their approach and Lachmann's. It is interesting to learn that the predominance of the dichotomous stemma is mathematically not such an oddity as Bédier thought;[54] but that fact does not make Lachmann right and Bédier wrong. The approaches associated with both their names are in fact subject to the same criticisms, for they both cover up much of the uncertainty and subjectivity in the detection of error and therefore entail a misunderstanding of the nature and scope of conjectural emendation.

It was inevitable that the desire for objectivity in textual analysis would lead to the use of quasi-mathematical or quasi-statistical approaches.
Nine years after Bédier's famous introduction, Henri Quentin, in his Mémoire sur l'établissement du texte de la Vulgate (1922), announced a system that proved to be the first of a long line of twentieth-century attempts to make textual analysis something akin to formal logic.[55] The heart of Quentin's system is the rule that, in any group of three manuscripts, the intermediary between the other two will sometimes agree with one or both of them, but they will never agree against it. Quentin's system is thus to build a stemma by taking up manuscripts (and their families) in groups of three, following this rule. In the process of comparison no attempt is made to recognize "errors"; variants are simply variants, without a direction of descent implied. The concept of the intermediary therefore encompasses three possibilities: the intermediary could be (a) the archetype from which the other two manuscripts are independently descended, (b) the descendant of one of them and the ancestor of the other, or (c) the descendant, through conflation, of both of them. In order to determine which of these possibilities is actually true, Quentin resorts to so-called "internal" evidence—that is, to subjective judgments about the nature of the variants. He envisions his system as an attempt to reconstruct the archetype—the latest ancestor of all the surviving texts—rather than the author's original; in Lachmannian terms, he is concerned only with recensio. And certain central difficulties in Lachmann are present in Quentin also: the definiteness Quentin imputes to his method does not seem fully to recognize the amount of subjectivity that is finally relied upon; nor does the suggestion that there is something more objective in attempting to reconstruct the archetype than in trying to approach the author's original acknowledge adequately how indistinct the line is between the two, at least from the point of view of the nature and certainty of the conjectures involved.[56] Although the same cannot be said of W. W. Greg's effort five years later (The Calculus of Variants, 1927)—for Greg more openly admits the limitations of his "calculus"—the problems with his work are essentially the same. The details of his procedure are of course different (in a quasi-algebraic operation, he factors his formulaic representations of complex variants so that he can focus on two variants at a time), but it reaches an impasse, as Quentin's does, beyond which one cannot proceed without the introduction of subjective judgments regarding genetic
relationships. As a mental exercise (and as a demonstration of the keenness of Greg's analytic mind), the Calculus is a fascinating work; but as a contribution to editorial theory it does not have the significance of his "The Rationale of Copy-Text" a generation later.[57] Not long after the publication of Quentin's and Greg's proposals, William P. Shepard performed the interesting experiment of applying both to a number of medieval works, some of which had previously been studied by other textual scholars. Invariably the two methods produced different stemmata, both from each other and from those proposed by earlier editors. Shepard's experiments, as he stressed, are not conclusive, but they lend weight to his doubt whether the human activity of copying can be given a "mechanistic explanation."[58]
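
Quentin's rule of the intermediary is concrete enough to be stated mechanically. The sketch below (in Python, with invented manuscripts and readings) tests each member of a triad against his criterion that the other two must never agree against the intermediary; as the comments observe, passing the test still leaves the direction of descent to the editor's judgment.

```python
from itertools import permutations

# Invented collation of three manuscripts at three points of variation.
collation = {
    1: {"P": "a", "Q": "a", "R": "b"},
    2: {"P": "c", "Q": "d", "R": "d"},
    3: {"P": "e", "Q": "e", "R": "e"},
}

def never_agree_against(x, y, z):
    """Quentin's criterion: y and z never agree with each other
    against x; if so, x can stand as the intermediary."""
    return all(
        not (readings[y] == readings[z] != readings[x])
        for readings in collation.values()
    )

for x, y, z in permutations("PQR"):
    if y < z and never_agree_against(x, y, z):
        print(f"{x} can stand between {y} and {z}")
        # The test does not orient the stemma: x may be the common
        # ancestor of y and z, a link in a chain from one to the other,
        # or a conflation of both. Deciding which requires the kind of
        # "internal" judgment Quentin himself falls back on.
```

Run on these data, the test admits only Q as an intermediary between P and R, but it says nothing about which of Quentin's three possibilities obtains.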

He recognized, however, that "we are bound to seek such an explanation if we can"; and the dream that "some day a law or a formula will be discovered which we can apply to the reconstruction of a text as easily and as safely as the chemists now apply laws of analysis or synthesis" (p. 141) continues to intrigue us, as evidenced by the scholars—such as Archibald Hill, Antonín Hrubý, and Vinton A. Dearing—who have followed in the tradition of Quentin and Greg.[59] Hill, Hrubý, and Dearing all attempt to work out problems left unsettled by Greg, and all recognize the importance, first seen clearly by Quentin, of examining distributional before genealogical evidence (i.e., studying the record of
variant readings for evidence of relationships before attempting to assess which descended from which). Hill proposes a principle of "simplicity" as a mechanical means for choosing among alternative stemmata: one scores two points for each line connecting a hypothetical intermediary and one point for the other lines and then selects the diagram yielding the smallest total. Hrubý tries to use probability calculus applied to individual readings in texts in order to solve what Greg called "the ambiguity of three texts"—to distinguish, in other words, between states of a text resulting from independent descent and those resulting from successive descent. Dearing's work is an extension of Greg, taking into account and adapting Quentin's idea of intermediaries and Hill's of simplicity; like Greg, he offers a "calculus" that involves the rewriting of variations, and he sets forth in detail the formal logic that underlies it. Because a primary deficiency of earlier approaches was their inability to deal with situations in which a scribe conflated the texts of two or more manuscripts,[60] Dearing's handling of this problem is of particular interest. For him, a logical consequence of his distinction between bibliographical and textual analysis is that in the latter conflation simply does not exist. A scribe using two manuscripts, he says, would not think of himself as conflating them but as attempting to produce a more accurate text (a text nearer the archetype) than either of them; to say that he had "manufactured one state of the message out of two others" would be "to confuse means and ends" (p. 17). The bibliographer, who is concerned with the physical means of textual transmission, can say that a record has been produced out of two others; but the textual analyst will see it simply as a message that may at times have affinities with other texts. Although this observation is presented as a remarkable revelation ("The light of truth blinded Saint Paul. New insights are not always easy to understand, much less to accept when understood"), one may wonder whether it is not in fact commonly understood and taken for granted. Clarity of thought does demand that some such distinction be recognized, and one cannot quarrel with Dearing for attempting to make it explicit; but whether it materially affects one's dealing with "conflation" is another matter. If, from the textual point of view, there can be no "conflation," one has eliminated the word as an appropriate way of describing the situation; but one has not eliminated the situation itself or the problem it poses for textual analysis. Dearing speaks instead of "rings" in genealogical trees and devotes considerable space to techniques
for rewriting trees so as to eliminate rings, either by inferring states or by breaking the weakest connection in a ring. One breaks the weakest, rather than some other, link in deference to the "principle of parsimony": "The fewest possible readings are treated as something different from what they really are" (p. 88). As with the other systems, the nature of the concessions required to make the system work causes one to question the validity of the results.[61] Dearing's effort to encompass conflation within his system is laudable, but his confidence that his book "for the first time formulates the axioms of textual analysis and demonstrates their inevitability" (p. x) would seem to be excessive.[62] In the half century since Shepard discussed Quentin and Greg a great deal of effort has been expended on statistical approaches to textual analysis, but there seems little reason for a more optimistic verdict than his.
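
Hill's "simplicity" criterion, at least, can be reduced to arithmetic. The following sketch (with invented candidate stemmata; the scoring follows the paraphrase above, two points for a line touching a hypothetical lost state, one point otherwise) shows how the cheapest diagram would be selected; what it cannot show is whether the cheapest diagram is the true one.

```python
def simplicity_score(edges, hypothetical):
    """Hill's scoring, as paraphrased above: a line touching a
    hypothetical (inferred, non-extant) state costs two points,
    any other line costs one."""
    return sum(2 if a in hypothetical or b in hypothetical else 1
               for a, b in edges)

# Invented candidate stemmata over extant witnesses A, B, C;
# "x" and "y" stand for hypothetical lost intermediaries.
candidates = {
    "two lost states": ([("x", "A"), ("x", "y"), ("y", "B"), ("y", "C")],
                        {"x", "y"}),
    "one lost state": ([("x", "A"), ("x", "B"), ("x", "C")], {"x"}),
    "descent through B": ([("x", "A"), ("x", "B"), ("B", "C")], {"x"}),
}

for name, (edges, lost) in candidates.items():
    print(name, "scores", simplicity_score(edges, lost))

best = min(candidates, key=lambda k: simplicity_score(*candidates[k]))
print("preferred on Hill's criterion:", best)  # -> "descent through B"
```

The arithmetic is unimpeachable; the difficulty, as with the other systems, lies in the unargued premise that the diagram requiring the fewest inferred states is the one that history actually produced.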

Different as these various methods—from Lachmann to Dearing—are, they all have the same problem: the questions of conflation and the direction of descent prove to be the stumbling block for systems that attempt to achieve objectivity, and those systems either rely on subjective decisions, covertly or openly, or else set up conditions that limit their relevance to actual situations. This is not to say that one or another of the procedures developed in these systems will not be helpful to editors—of modern as well as earlier material—on certain occasions,[63] and editors can profit from the discussion of theoretical issues that the exposition of these systems has produced. But the impulse to minimize the role of human judgment (the view, in Dearing's words, that "textual analysis, having absolute rules, is not an art" [p. 83]) has not led to any satisfactory comprehensive system. In this context, it is useful to look again at the approach suggested by Greg in "The Rationale of Copy-Text," for it places no restrictions on individual judgment—that is, informed judgment, taking all relevant evidence into account and directed toward the scholarly goal of establishing the text as the author wished it. The idea that all alterations made by an editor in the selected copy-text are emendations—whether they come from other documentary texts or from the editor's (or some editor's) inspiration—gives rise to a fundamentally different outlook from that which often has prevailed in the
textual criticism of earlier material. It leads to a franker acceptance of the centrality of critical judgment because it calls attention to the similarity, rather than the difference, between adopting a reading from another text and adopting a reading that is one's own conjecture. Both result in a form of the text unlike that in any known document and therefore represent editorial judgment in departing from documentary evidence. Some documentary readings are—or seem—obviously wrong, but obviousness is itself subjective, and correcting even the most obvious error is an act of judgment; and attempting to work out the relationships among variant documentary readings involves judgment, or at least, as we have seen, evaluation of the varying results of different systems for establishing those relationships. This approach recognizes that what is transmitted is a series of texts and that to think of a single text, made up of readings from the documentary texts, as "what is transmitted" is to confuse a product of judgment based on the documentary evidence with the documentary evidence itself. But the choice of one of the extant texts as a copy-text in the sense that emerges from Greg's rationale is not at all the same as taking a "best-text" approach (whether Bédier's or some other variety), for one has no obligation to favor the copy-text whenever one has reason to believe that another reading is nearer to what the author intended. Indeed, if one has a rational basis for selecting one reading over another at all points of variation, there is no need for one text to be designated as "copy-text" at all. In this conception, therefore, copy-text is a text more likely than any other—insofar as one can judge from all available evidence—to contain authorial readings at points where one has no other basis for deciding. The usual deterioration of a text as it is recopied suggests that normally the text nearest the author's manuscript is the best choice for copy-text—except, of course, when the circumstances of a particular case point to a different text as the more appropriate choice.
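
The decision rule that emerges from this extension of Greg's rationale can be stated compactly. The sketch below (with hypothetical texts, variants, and verdicts) makes the division of labor explicit: critical judgment decides wherever it can, and the copy-text's presumptive authority operates only at the points left genuinely indifferent.

```python
# Variation point -> {text: reading}; "CT" is the chosen copy-text,
# here assumed to be the text fewest steps removed from the original.
variants = {
    1: {"CT": "honour", "B": "honor"},
    2: {"CT": "wich",   "B": "which"},
    3: {"CT": "grey",   "B": "gray"},
}

# The editor's critical verdicts: the reading judged authorial, or
# None where no basis for a choice exists. Judgment enters here, and
# nothing restricts it to readings found in any one document.
verdicts = {1: None, 2: "which", 3: None}

def critical_text(variants, verdicts, copy_text="CT"):
    established = {}
    for point, readings in variants.items():
        judged = verdicts.get(point)
        # Emend wherever judgment points elsewhere; otherwise the
        # copy-text's presumptive authority decides.
        established[point] = (judged if judged is not None
                              else readings[copy_text])
    return established

print(critical_text(variants, verdicts))
# -> {1: 'honour', 2: 'which', 3: 'grey'}
```

Nothing in the procedure favors the copy-text where a verdict exists; its authority is exhausted precisely at the points where the editor has reached one.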

All available evidence should be considered by the editor in making these decisions—evidence from the physical analysis of the documents and from the textual analysis of their contents as well as from the editor's own judgment as to what, under the circumstances, the author is likely to have written. Although Greg's proposal is specific, dealing with the printed dramas of the English Renaissance, the spirit of his rationale can, I think, be legitimately extended in this way, providing a comprehensive approach that encompasses other more limited approaches. It allows one to go wherever one's judgment leads, armed with the knowledge of what evidence is available and what systems of analysis have been proposed; and it provides one with a mechanical means of deciding among variants only when all else fails, a means that is still rationally
based. One must postulate a relationship among the texts, of course, before one can select and emend a copy-text, and Greg does not suggest in his essay on copy-text how to work out that relationship. His emphasis is different from that of most of the writers on the textual criticism of earlier materials, and in this sense his work is not directly comparable to theirs. But many of them have also talked about the construction of a critical text and have revealed in the process that the two activities cannot always be kept entirely separate. Since the analysis of textual relationships involves judgment at some point, the examination of variants for that purpose is intimately linked with the consideration of variants for emendation. It is not arguing in a circle to decide (having used subjective judgment to some extent) on a particular tree as representing the relationship among the texts, and then to cite that relationship as one factor in the choice among variant readings; the latter is simply a concomitant of the former, for the process of evaluation employed in working out the relationship between the two readings overlaps that used in making a choice between them. Ideally the relationship among the texts should be a matter of fact, which can then be taken as a given in the critical process of deciding what the author wrote. But historical "facts" vary in their degree of certainty; and the more judgment is involved in establishing the "fact" of textual relationship the more such a process will coincide with that of evaluating readings to produce a critical text. The traditional division between recension and emendation is an illustration of this point, though it often has served as a way of concealing it. The open reliance on critical judgment in Greg's rationale and the lack of dogmatism manifested there can appropriately be extended to the prior task of dealing with genealogical relationships. It would seem reasonable to maintain an openness to all approaches that might be of assistance both in evaluating variants and in pointing to relationships. A statistical analysis might prove suggestive, for example, but should be used in conjunction with other data, such as physical evidence. Bibliographical and textual evidence, though undeniably distinct, must be weighed together, since physical details sometimes explain textual variants.

Because Greg spoke specifically of copy-texts that were chosen for the relative authority of their accidentals, editors of earlier works—of which the preserved documents are not likely to contain authoritative accidentals—have concluded that his approach is relevant only for works preserved in authorial manuscripts or in printed editions based on them. Such a view does not take into account the natural extension of Greg's position that I have mentioned: the idea of copy-text as presumptive authority, which one accepts (for both accidentals and substantives)
whenever there is no other basis for choosing among the variants. This concept of copy-text is relevant for materials of any period, for it is not tied to the retention of accidentals: any feature of the copy-text that one has good reason for emending can be emended without affecting the status of the copy-text as the text one falls back on at points where no such reason exists to dictate the choice among variants. Dearing takes too narrow a view of the matter, therefore, when he says that one chooses a particular text as copy-text if one concludes that the scribes "tended to follow copy even in accidentals" (p. 154). Furthermore, the point is not whether they followed copy; it is simply that the text located at the smallest number of steps from the original is likely to be the best choice to use where the variants are otherwise indifferent, because that text can be presumed, in the absence of contrary evidence, to have deteriorated least, even if the scribes were not careful in following copy.[64] When there are two or more lines of descent, an editor may conclude in a given case that a text in one line, though it is probably more steps removed from the original than a text in another line, is nevertheless more careful and more representative of the original; one would then select it as copy-text, for the point of this approach is that one turns to the text nearest the original only when there is no other evidence for deciding.

This procedure, derived from Greg, would seem to be appropriate for all instances in which—if the choice of copy-text is not clear on other grounds—one can decide that a particular text is fewer steps removed from the original than any other known text. It is not helpful, however, in those instances in which two or more texts are an equal, or possibly equal, number of steps from the original. These situations are taken up by Fredson Bowers in an important essay on "Multiple Authority,"[65] which is the logical complement to Greg's "Rationale." What is particularly
interesting about Bowers's essay is that, although it deals with a problem especially relevant to earlier material, it is occasioned by work on modern literature, specifically Stephen Crane's stories that were published through a newspaper syndicate.[66] In the absence of any of the presumably duplicate copies of the text sent out by the syndicate office, what the editor has are the appearances of the text in the various newspapers that belonged to the syndicate. These are all apparently removed from the syndicate's master proof, and from the author's original, to exactly the same extent; and unless one has other evidence to suggest that one of the newspapers is likely to be more accurate than the others, there is no way to choose one of these texts as carrying presumptive authority. In such cases, therefore, Bowers recognizes that "critical tests (guided by bibliographical probabilities) must be substituted for the test of genealogical relationship" (p. 467). Statistical analysis is important, but, as Bowers says, "quantitative evidence is not always enough" and "qualitative evidence, the real nature of the variant, needs to be considered" (p. 468). What Bowers implies, but does not quite say, is that in such cases there is no copy-text at all, since no text can be elevated over the others and assigned presumptive authority; the critical text is constructed by choosing among readings, at all points of variation, on critical and bibliographical grounds.[67] If one finds two readings evenly matched, there is no copy-text authority to fall back on, and one must settle the dilemma some other way (such as by a statistical analysis to determine which text has apparently been correct most often). This approach to radiating texts, taken in conjunction with the idea of a copy-text of presumptive authority, when the situation warrants, provides a comprehensive plan for dealing with variants. The point that should be stressed is that neither part of this plan is limited to material of a certain type or period.[68]
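
The contrast with the copy-text procedure sketched earlier can be made explicit. In the following illustration (with hypothetical newspaper texts N1 through N3, all equidistant from a lost common source), no text carries presumptive authority: each reading is chosen on critical grounds, and ties are broken quantitatively by each text's record of apparently correct readings, in the spirit of Bowers's suggestion.

```python
# Invented radiating texts: three newspaper printings, each the same
# number of steps from the lost syndicate proof.
variants = {
    1: {"N1": "red",    "N2": "red",    "N3": "read"},
    2: {"N1": "battle", "N2": "bottle", "N3": "battle"},
    3: {"N1": "sky",    "N2": "skies",  "N3": "skies"},
}

# Critical verdicts where the quality of the readings decides;
# None marks a point where the variants seem evenly matched.
verdicts = {1: "red", 2: "battle", 3: None}

# Quantitative evidence: how often each text carries the reading
# judged correct on critical grounds.
reliability = {text: 0 for text in ("N1", "N2", "N3")}
for point, judged in verdicts.items():
    if judged is not None:
        for text, reading in variants[point].items():
            if reading == judged:
                reliability[text] += 1

established = {}
for point, readings in variants.items():
    judged = verdicts.get(point)
    if judged is None:
        # No copy-text to fall back on: break the tie by taking the
        # reading of the text with the best record so far.
        most_reliable = max(readings, key=lambda t: reliability[t])
        judged = readings[most_reliable]
    established[point] = judged

print(reliability)   # -> {'N1': 2, 'N2': 1, 'N3': 1}
print(established)   # -> {1: 'red', 2: 'battle', 3: 'sky'}
```

Here the quantitative tally serves only as a last resort, which accords with Bowers's caution that qualitative evidence, the real nature of the variant, must be considered first.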

My comments in the preceding pages aim to be nothing more than a series of reflections arising from an effort to think about what connections there are between the textual criticism of ancient writings and the editorial scholarship devoted to modern works. I do not claim to have proposed a new "method"; but I do hope that I have exhibited a coherent line of thinking applicable to all editorial scholarship. The issues will always be debated, and there will always be champions of various approaches. But no approach can survive in the long run that does not recognize the basic role of human judgment, accept it as something positive, and build on it. Welcoming critical judgment is not incompatible with insisting on the use of all possible means for establishing demonstrable facts. Scholarly editors are, after all, historians as well as literary critics, and they must understand the subjective element in the reconstruction of any event from the past. Establishing texts from specific times in the past, including the texts intended by their authors, is a crucial part of this large enterprise of historical reconstruction and cultural understanding. It seems obvious that textual scholars dealing with modern works can benefit from examining the ways in which editors of earlier materials have dealt with complicated problems of transmission and from studying the theories underlying those treatments; I think it equally clear that editors of earlier writings will find relevant what students of later texts have said about authors' revisions and the choice and treatment of a copy-text. One of the textual scholars who have emphasized the importance of cooperation among specialists in different areas is Bruce Metzger. He has urged New Testament scholars, through his own impressive example, to explore textual work in the Septuagint and the Homeric and Indian epics and to "break through the provincialism . . . of restricting one's attention only or chiefly to what has been published in German, French, and English." As he says, "An ever present danger besets the specialist in any field; it is the temptation to neglect taking into account trends of research in other fields. Confining one's attention to a limited area of investigation may result in the impoverishment rather than the enrichment of scholarship."[69] It is to be hoped that many more textual scholars will pursue their work with this same breadth of vision and will welcome the "cross-fertilization of ideas and
methods" that results. Editing ancient texts and editing modern ones are not simply related fields; they are essentially the same field. The differences between them are in details; the similarities are in fundamentals.