Metacritic Journal for Comparative Studies and Theory
Literature between Canon and Archive. New Distant Reading 6.2 (December 2020)
ISSN 2457-8827

Martin Paul EVE, Close Reading with Computers. Textual Scholarship, Computational Formalism and David Mitchell’s Cloud Atlas, Stanford University Press, 2019, DOI: 10.21627/9781503609372, 271 p.

Salma Al Refaei




"Distant reading" is an approach to literary studies that applies reductive, labor-saving computational methods to literary data. It exploits the tireless repeatability of computational tasks to amass "statistically informed deductions about novels or other works that one has not read" (3). Given that more books are now published than anyone has the physical capacity to read in a lifetime, distant reading helps us cope with the sheer volume of published work; in this sense it is an "antinecrotic practice" (3), as it staves off the limiting effects of death. It can also be read, however, as an "antireading practice" (4), because it substitutes for direct human engagement with literature.

Close reading, on the other hand, is careful, sustained interpretation: a hermeneutical approach that relies heavily on small details encountered throughout a work to support one's arguments. Yet close reading has come under fire in certain digital humanities circles for becoming "a form of theology that invests too heavily in the sacrosanct nature of a few texts" (5). What Martin Paul Eve proposes is a combination of the two, a "close reading with computers": an approach that pivots between telescopic and microscopic perspectives. He achieves this by conducting a series of close reading exercises through computational methods, alienating neither the reader from the text nor the findings from mainstream literary criticism.

That being said, the title of Eve's study includes two more concepts that need to be defined: textual scholarship and computational formalism. The former is an umbrella term for the disciplines concerned with describing, transcribing, editing or annotating texts and physical documents. Eve first trains his "digital microscope" (21) on textual scholarship, more specifically on the publishing history and variants of Cloud Atlas. He makes a clear distinction between two versions of the text, the US electronic edition and the UK paperback edition, both of which differ at the level of the text itself. The most notable divergence appears in the chapter "An Orison of Sonmi~451", where the two texts are almost totally desynchronized at the level of linguistic expression, while a number of episodes appear in only one of the two versions.

The second structuring aspect of Eve's book is computational formalism, a term derived from the long history of formalist literary studies, but also from the Stanford Literary Lab's first pamphlet, "Quantitative Formalism". The reason for swapping "quantitative" for "computational" is clear: since Eve employs a computational approach, he needed a term that indicated "the repetitious task-based nature of the work here conducted using computers, as opposed to the purely numerical approach signaled in the Stanford Lab's publication" (156-157).

Close Reading with Computers comprises four chapters, each further subdivided into smaller parts. The first chapter explores the two versions and how significant the contrast between them is; the second concentrates on classifying Mitchell's work while focusing on the process of writing rather than the end result; in the third, Eve asks what makes a literary work historical by analyzing the language used; and in the last, he examines the reader's position in relation to the nature of interpretation. His aim in this study is to recover "those overspills that may be anomalies in terms of broad-scale history but that lend literary works their singularity" (11).

He chose to analyze Mitchell's Cloud Atlas because it is a "genre-bending contemporary novel" (11): it is divided into six distinct registers with a "pyramid-style cascade toward the future" (12), in which each chapter breaks off halfway through only to move on to the next, providing an innovative formal mechanism. Cloud Atlas is, in a sense, a novel that "performs a distant reading of world history and its future projection" (12). Eve's techniques do not belong entirely to the distant reading method; he calls them microscopic because, no matter how much attention one pays to a text, some aspects simply slip away, and this is where computational methods come in handy: they reveal small particles of meaning that we could not have seen before. The book is organized around several questions meant to decipher Cloud Atlas, and to answer them Eve conducts a series of experiments to test his theories, with surprising results.

The first two chapters are perhaps the most important and bring astounding results. The first, "The Contemporary History of the Book", focuses on the two aforementioned variants of the text (the US electronic version and the UK paperback version), and Eve uses a series of methods that reveal how different they are at the linguistic and syntactic levels. Before analyzing the two versions, Eve asks another question: how far can a reader rely on the consistency of a text in an ever-changing digital world, where a version may vary or even disappear? In 2009 Amazon came under fire when Orwell's novel Nineteen Eighty-Four disappeared from its users' Kindles. The episode made people extremely uncomfortable, as it made them realize that such a corporate giant might control what they read and even modify texts. It is ironic that the removed novel was Orwell's, a book that tackles and criticizes censorship and totalitarian regimes that interfere with the rights of the individual.

Mitchell himself points out that the reason the electronic and paperback versions are inconsistent is simply a desynchronization born of chance and inexperience (at that point Cloud Atlas was only Mitchell's third novel). The US version was "orphaned" for three months when his publisher took a job elsewhere, and by that time the UK version had already been published. When Eve asked him about this, Mitchell answered: "I interacted with my UK editor and copy-editor on the manuscript, but there was no-one in New York synch-ing up the changes I made with the US side to form a definite master-manuscript, as it happened to my subsequent novels" (46).

The main variations between the printed and electronic versions occur in the chapter "An Orison of Sonmi~451", which is structured as an interview. Eve uses a piece of software that analyzes syuzhet, which he openly releases for others to use as well. Syuzhet comes from early twentieth-century Russian formalism (Propp and Šklovskij) and marks the distinction between the chronological content of the narrative (the fabula) and the way a particular text organizes its presentation of that narrative (the syuzhet); it ultimately helps readers understand a text's narrative flow. Eve first applies the methodology at a thematic level: "I acknowledge that much of this parallel reading is hermeneutic in its data derivation and may be contested" (34). One example is that the paperback version contains a discussion about the fabricants and whether they can dream, which is absent from the electronic version. This small detail matters when "thinking about the diegetic layering of the novel, with respect to dreams" (37). Another approach works at the level of language: during Sonmi~451's interview, one of her answers differs across variants. In one version her answer is "no other version of the truth has ever mattered to me", while in the other it is "Truth is singular. The other versions are mistruths"; as Eve notes, "the former contains a social-constructivist view of the truth while the latter renounces such a stance". One of Eve's conclusions after performing this experiment is that "with the extent of modifications between these variants - at the level of syuzhet, theme and language - the different editions of Cloud Atlas are so distinct as to render close readings between editions almost incomparable" (44). But can we really believe that these differences are the result of mere technical error? People are often lulled into believing so, but literary productions are social and "coproductive", not solely technical.
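
To give a concrete, if crude, sense of what a syuzhet-style comparison involves, the sketch below scores successive sentences of two edition variants with a tiny, made-up sentiment lexicon and compares the resulting arcs. It is purely illustrative: the lexicon, the sample lines and the function names are placeholders, not Eve's released software or his data.

```python
# A minimal, illustrative syuzhet-style comparison of two edition variants.
# The tiny lexicon and the sample lines are placeholders, not Eve's tool or data.
import re

LEXICON = {"truth": 1, "dream": 1, "free": 1,
           "mistruth": -1, "mistruths": -1, "fear": -1, "execution": -2}

def sentence_arc(text):
    """Score each sentence by summing the lexicon values of its words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    arc = []
    for sentence in sentences:
        words = re.findall(r"[a-z]+", sentence.lower())
        arc.append(sum(LEXICON.get(w, 0) for w in words))
    return arc

def compare_editions(us_text, uk_text):
    """Return both 'emotional arcs' and their sentence-by-sentence differences."""
    us_arc, uk_arc = sentence_arc(us_text), sentence_arc(uk_text)
    shared = min(len(us_arc), len(uk_arc))
    return us_arc, uk_arc, [us_arc[i] - uk_arc[i] for i in range(shared)]

# Hypothetical usage on the two variant answers quoted above:
us = "Truth is singular. The other versions are mistruths."
uk = "No other version of the truth has ever mattered to me."
print(compare_editions(us, uk))
```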

The second chapter, "Reading Genre Computationally", addresses genre and authorship. Eve's aim is not to define genre but to answer a smaller-scale question: "what does it mean to write as David Mitchell does in Cloud Atlas?" (62). Could there be features that cannot be perceived by the naked eye? Here, computational methods prove useful, and Eve first turns to "computational stylometry", the use of computers to measure (-metry) the stylistic properties of a text (stylo-). Stylometry, as a quantifying activity, has a long and varied history, from legal cases in which the accused was acquitted on the basis of stylometric evidence through to authorship attribution, which computationally analyzes the most frequently used words in a text because, as it turns out, "the subconscious ways in which authors use seemingly insignificant words is an extremely effective marker for authorship attribution" (62).

To do this, Eve uses Burrows' delta, a mathematical algorithm consisting of two steps that leads to "a multivariate statistical authorship attribution" (64). First, one measures the most frequent words that occur in a text and then relativizes these using a "z-score" measure, which basically asks: "by how much does a word's frequency differ from the average deviation of the other words?" (65). The smaller the total sum of these differences, the more likely it is that two texts were written by the same author. In this study, however, Eve is not concerned with identifying authorship; instead he seeks to "examine the different linguistic properties of texts written in a variety of linguistic genres by the same author" (66); he is more interested in the process of classification than in the result. Moreover, people have a characteristic pattern of language use, a so-called "authorial fingerprint", and this leads Eve to his assumption that "authorship is the underlying textual feature that can be ascertained by the study of quantified formal aesthetics" (67).
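
As a rough reconstruction of the two steps just described (and not Eve's own implementation), a Burrows-style delta can be sketched as follows: take the most frequent words across a small corpus, convert each text's relative frequencies into z-scores, and average the absolute differences between two texts' z-score profiles; the smaller the result, the closer the two styles. The corpus names and sample texts below are placeholders.

```python
# A simplified, illustrative sketch of Burrows' delta, not Eve's code:
# z-score the relative frequencies of the corpus's most frequent words,
# then average the absolute differences between two texts' profiles.
from collections import Counter
import re
import statistics

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def delta(texts, a, b, top_n=30):
    """texts maps a name to its raw text; a and b name the two texts compared."""
    tokens = {name: tokenize(t) for name, t in texts.items()}
    # Step 1: the most frequent words across the whole corpus become features.
    corpus_counts = Counter(w for toks in tokens.values() for w in toks)
    features = [w for w, _ in corpus_counts.most_common(top_n)]
    # Relative frequency of each feature word in each text.
    freqs = {}
    for name, toks in tokens.items():
        counts = Counter(toks)
        freqs[name] = {w: counts[w] / len(toks) for w in features}
    # Step 2: z-score each feature against its spread across the corpus,
    # then average the absolute differences between the two profiles.
    def z(name, w):
        values = [freqs[n][w] for n in texts]
        mean = statistics.mean(values)
        spread = statistics.pstdev(values) or 1e-9  # avoid division by zero
        return (freqs[name][w] - mean) / spread
    return statistics.mean(abs(z(a, w) - z(b, w)) for w in features)

# Hypothetical usage with placeholder texts:
corpus = {
    "chapter_one": "the sea was calm and I wrote in my journal of the voyage",
    "chapter_two": "to the composer a letter of thanks and a plea for work",
    "chapter_three": "in the archive the interview of the fabricant was replayed",
}
print(delta(corpus, "chapter_one", "chapter_two"))
```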

He subjects Cloud Atlas to this experiment because he wants to see whether Burrows' delta will identify Mitchell as the author of each chapter (each chapter spans a different place and historical moment, so the language had to be tailored accordingly), and he does so by analyzing the frequency of the most common words: "the, a, I, to, of and in". Although every word is, in one sense, a conscious stylistic choice, the use of such function words is thought to lie largely outside our control. Such features are "conceived of as subconsciously inscribed elements of a text that is difficult for an author to modify, even if he or she knows that stylometric profiling will be conducted on that text" (69). Yet Eve goes on to show that, when only these six common words are analyzed, the results indicate that each chapter was written by a different author, meaning that Mitchell was able to manipulate the usage of these words consciously, even though at the time he had no idea his novel would one day be subjected to such an experiment: a most remarkable feat.
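
The per-chapter test can be pictured along similar lines: build a frequency profile of the six function words for each chapter and feed those profiles into a delta-style comparison such as the one sketched above. The snippet below shows only the profiling step, with short placeholder texts standing in for Mitchell's chapters.

```python
# Illustrative per-chapter profiles of the six function words Eve analyses;
# such profiles are what a delta-style comparison would consume.
# The chapter texts here are short placeholders, not Mitchell's actual prose.
import re
from collections import Counter

FUNCTION_WORDS = ["the", "a", "i", "to", "of", "in"]

chapters = {
    "Pacific Journal": "i write in the margin of a journal bound for the islands",
    "Letters from Zedelghem": "to a friend i send a letter of music and of debt",
    "Sloosha's Crossin'": "in the valley the old ways of the tribe live on",
}

for name, text in chapters.items():
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    profile = {w: round(counts[w] / len(tokens), 3) for w in FUNCTION_WORDS}
    print(name, profile)
```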

Eve conducts another experiment to see whether a different novel, Bram Stoker's Dracula, which is structured in a similar manner to Cloud Atlas, could fool Burrows' delta into thinking that each chapter was written by a different author. This is far from a perfect experiment, but Dracula is made up of various diaries written by different characters (Harker's diary from chapters 1 and 2, the Murray diary portions from chapters 4 and 8, and Seward's diary from chapters 12 and 13) and therefore has an intradiegetic voicing. Eve begins by choosing the thirty most common terms (a generous head start) and notices that at this level there is a strong similarity between Harker's diary segments: "these are clearly written in a similar, and to some extent distinctive register. Chapters 1 and 2 are identified as written by the same author […] but this is where the similarities end" (80). The Murray segments are misclassified, the results claiming that they were written by two different writers. The experiments in this chapter further demonstrate the uniqueness of the language in Cloud Atlas and, at the same time, a deep sense of connection.

Reading statistically and reading as a human still remain very different from one another. The tried-and-tested methods proposed by Eve open up new avenues of interdisciplinarity, but a question remains: if close reading becomes an activity that works across disciplinary boundaries, what domain of expertise will remain within literary studies? Digital reading, as insightful and helpful as it might be, strips away our human touch by making reading rather formulaic at times. However, a study such as Eve's reveals how much hidden meaning is left to be discovered beneath the text, and that a combination of telescopic and microscopic approaches proves most effective, for it unravels crumbs of substance that cannot be perceived by the unaided eye while still maintaining a human imprint.