
How the Text Lost its Body

by Peter Vickers



In the 1980s, when the internet was in the protozoic stage of forums known as Usenet, Mark V Shaney was a prolific and eclectic user. His posts on net.singles ranged from reminiscence on university: ‘I got a BA in computer science instead of Finnegans Wake,’ to reflections on intimacy: ‘The longer one "waits" to experience sex, the more important ones [sic] virginity becomes.’ Some, believing him to be an obsessive eccentric (and thus somewhat among friends), responded animatedly to the cranky geek in long and involved discussions. Eventually the ploy was revealed: Shaney was a program built on a statistical model of language known as a ‘Markov chain’. Such a model emulates human communication by representing a current state and a list of possible subsequent states: if the word ‘Ronald’ were input, the model would offer successors like ‘Reagan’, ‘McDonald’, ‘Weasley’ or ‘Corbett’, each with an associated probability. Shaney’s creator Rob Pike had used human posts on the forum to calibrate the probabilities of the state transitions and then, in an early form of trolling, let the system backwash the boards’ collective unconscious.
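The trick behind Shaney is small enough to sketch in a few lines. The Python below is an illustrative reconstruction, not Pike’s original code: it builds a table mapping each word in a source text to the words that have followed it, then walks that table to generate new sentences.

import random
from collections import defaultdict

def build_chain(text):
    # Map each word to every word that has followed it in the source;
    # repeated successors act as weights when we sample.
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=25):
    # Walk the chain: each choice depends only on the current word.
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# e.g. generate(build_chain(open("net_singles_posts.txt").read()), "the")
# (the corpus filename is hypothetical)

Feed it a newsgroup’s worth of posts and the chain will happily backwash them, one plausible word at a time.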


Rob Pike’s emulation of human communication through machines had been anticipated for at least a century. The Victorian author Samuel Butler conjectured in the neutral-topian Erewhon that the way of all flesh was the replacement of man with machine: ‘There is no security against the ultimate development of mechanical consciousness,’ warned the prophetic ‘Book of the Machines’ embedded in Erewhon. An ancient attendant concern haunted this anticipation: ‘Where does consciousness begin, and where end? Who can draw the line?’ Butler’s refutation of vitalism — the assumption that living organisms are irreducible to machines — belongs in that peculiar constellation of late Victorian works which sensed the foreshocks of the seismic transformations coming in the twentieth century. As part of those transformations the philosopher Gilles Deleuze proposed that machines and organisms were destined, in the ineluctable absence of a ‘security’, to become interchangeable cogs. He praised Butler’s speculative novel for ‘calling in question the personal unity of the organism’ by presenting a futuristic New Zealand as an intermixing of machines and humans. Deleuze argued that if Mark V Shaney isn’t a human we ought not to hold that against him, just as machines, with their trillions of calculations a second, ought not to hold our humanity against us.


Which is not to say that machines are ready to replace us just yet: contemporary AI still needs direction from humans. A YouTube search for ‘neural network generated’ finds humans providing such direction: generated speech, J S Bach-style fugues, rap, self-playing Mario games and deep-dreamed images of Donald Trump are all part of the rich harvest. These generations are based on newer algorithms than Shaney’s Markov chain, with greater potential and more powerful learning capacities. The Markov chain limits itself with its amnesiac hope that only what has just happened matters: it can look only at the current state and make a decision on that basis. The neural network, meanwhile, is emblematic of a fire-and-forget modernity which can observe, learn, remember and analyse data, sometimes without human supervision. So powerful are Google’s latest neural networks that they can translate between pairs of languages they were never explicitly trained on. The young algorithm can make inroads into some of the most recalcitrant problems of computer science because it is defined in a way that allows the code to optimise itself against data rather than being programmed rule by rule. The structure of this system does bear some analogy with biological neurons: it is composed of interconnected nodes, and signals are passed on when an activation threshold is reached. Like biological brains, non-biological neural networks are fast learners and can solve a wide range of problems. Experts in the field stop the comparison there, stressing that neural networks are at most a ‘cartoon’ of the brain: as far as we know, human cognition does not proceed by calculus-driven optimisation of the connections between nodes on every iteration. Yet it is precisely this inspired application of high-school calculus — a process known as ‘backpropagation’ — that enables the networks to program themselves in a way which appears spookily biological. Trained sufficiently, they will attempt any kind of data transformation. Harking back to Shaney, they can even be used to create text in the style of any given source. Training for this is simple: set up a small neural network, feed it a sequence of an author’s words, ask it to predict the next one, and reward successful guesses (a sketch of such a training loop follows the sample and commentary below). After just a couple of hundred cycles the network can try to be a metaphysical poet:


Tho lustrous my features have sinned

All of us fame him, cough his praise,

but if we break and ever God show his wrath:

I want nor death nor prison from detesting,

and will not blow nor cane, from my natural

Lord whose light can probe so.


Once a mountain for all: there his fall

of old team'st in his court of love.

In me basks the worm, his way wast

of nature none to spout by cruelty:

where we all, the wise elders

by that comparison sailed.


His sinne, dying lives, some falls shall hither be sweet.

My Soule, shall, as master did tease,

with pace, we know, love beget,

bare and feather itself: dived deeply.

Our cross did tear me which each way,

before her men change, and hence go thus.


Execrably Donne. ‘Cough his praise’ is hardly the divine sublimation of ‘batter my heart’, and is not, without much contextual work, allowable as an irony. ‘Some falls shall hither be sweet’ is more promising in suggesting the sexual Donne swaddled in conceits like lovers, racing through the Miltonic hop-scotch of his transgressions where the fall is a sort of group-think. It is uncomfortable to read the worm eating away Donne’s heart, and both of them as ‘wise elders’: our neural network is over-egging the postlapsarian pudding like an undergraduate at a poetry group. That line about sweet falls is delicious, but after Joyce can only feel looted. But then who would know if Donne had written such a line?
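For the curious, the training recipe described above (feed the network an author’s words, ask for the next one, reward successful guesses) can be sketched in a few dozen lines. The version below assumes PyTorch and a small word-level LSTM; it is a minimal illustration of the general method, not the network that produced the pastiche above, and the toy corpus stands in for the collected Donne.

import torch
import torch.nn as nn

# Toy corpus standing in for the collected Donne.
corpus = "batter my heart three person'd god for you as yet but knock breathe shine and seek to mend".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

class WordModel(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        hidden, _ = self.lstm(self.embed(x))
        return self.head(hidden)

model = WordModel(len(vocab))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Each word is asked to predict its successor.
ids = torch.tensor([stoi[w] for w in corpus])
inputs, targets = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)

# 'Reward successful guesses': minimise the error of every next-word
# prediction, letting backpropagation adjust the weights each cycle.
for epoch in range(200):
    optimiser.zero_grad()
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
    loss.backward()
    optimiser.step()

Sampling from the trained model one word at a time, feeding each guess back in as the next input, yields pastiches of the kind quoted above.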


As yet the neural network does not stir pages and rattle quills like its masters. Tritely, it allows for manipulations of style: we could tweak the training algorithm to reward alexandrines over pentameter, squeezing a bonus iamb of meaning from our favourite poets. Or we could mess with the memory of authors, and have T S Eliot begin his Pentecostal despair not with a dove descending, but with G K Chesterton’s melancholic jellyfish.


I wish I were a jellyfish

That cannot fall downstairs;

Of all the things I wish to wish

I wish I were a jellyfish,

Within. The window and the stairs,

And the silences of the stairs,

Where strife, man and merman, goes,

Intimating the green stars.


Can you see where the ‘Eliot’ neural network takes up the pen from Chesterton’s ‘I Wish I Were a Jellyfish’?


Deleuze suggested we view historical persons not as isolated biographical subjects but as intensities, which, mutatis mutandis, would mean neural networks can render a person’s work as a series of functions. This cybernetic conception encourages us to try our hand at establishing our own history of literature, restaging all the influences and intermixings and robberies. What if Eliot had preferred Herbert to Donne, or Pound had never read the troubadours? Train a neural network on an author’s influences, another on their work, and compare the two to measure the difference. Texts gain an extra dimension, neither vitalist nor mechanistic, of production: every book as a recipe for itself. This sort of playing with agency messes with our established beliefs, which, where Literature is concerned, retain some of the Victorian anxiety about preserving a spirit of man in good writing. Transforming text into functions makes it quantifiable, unmysterious, reproducible. Rendering these voices as piles of linear algebra does not do great things for their status. Authors are no longer ‘conceived of in terms of representation’, but instead as ‘effects’ on text.


As of now the results are unlikely to be mistaken for the real thing. The short film ‘Sunspring’ was written by a neural network trained on hundreds of canonical sci-fi screenplays. The plot is a melange of sci-fi tropes (colonisation, space travel, personal betrayal), and watching it imparts a slightly drunken feeling, like half-remembering parts of several plots in sequence. This disorientation is a ‘soft’ limit: shorter expressions like jokes or names are rendered realistically, but longer passages often descend into a hazy confusion. One company attempting to surpass this boundary is the Chicago startup Narrative Science. According to a company report, their software ‘has a set of ideas it wants to convey’ and ‘an improved understanding of context’ compared with traditional natural language processing software. This allows it to produce readable accounts of baseball games and stock portfolios; their CTO predicts that journalism will be transformed by the technology, with 90 per cent of it machine-written within fifteen years.


This is concerning for those of us who want to read the work of human authors. But if writers must accommodate the transforming field, it need not be destructive: they can incorporate the transferability of data into their creative process. Clickhole’s head writer Jamie Brew has written a predictive Markov program, ‘pt-voicebox’, which can feed the writer prompts based on source texts they have uploaded. More personally, a writer could train a neural network to emulate their own text, perhaps with the kind of ‘contextual’ understanding promised by software like Narrative Science’s. They could then compare their work with a neural imagining and combine the two, selecting the best pieces for inclusion. The Symbolist author Mallarmé saw the process of writing as the fragmentary approximation of one perfect and inaccessible ‘Book’: every work we write approaches it, and it completes and atones for the books we have never written by charting ‘all existing relations between everything.’ The Symbolist agenda of charting an ‘unseen reality’ is recast for modernity by these new developments. Take enough source material, and the neural network offers the possibility of writing even the unwritten books. Bouvard et Pécuchet or Kubla Khan could be completed, but so could De Quincey's unwritten masterpiece on botany, or Ford Madox Ford's non-extant but terrifying story of failed lunar exploration, a missing link between M R James and H P Lovecraft.
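The mechanics of such a prompt-feeding tool are easy to imagine. What follows is a hypothetical sketch in the same spirit, not Brew’s actual code: count which words follow which in the uploaded text, then offer the writer a ranked menu of candidates for whatever they have typed so far.

from collections import Counter, defaultdict

def suggest(source_text, phrase, n=5):
    # Rank the words that most often follow the writer's last word in the source.
    words = source_text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    last = phrase.lower().split()[-1]
    return [word for word, _ in followers[last].most_common(n)]

# e.g. suggest(open("four_quartets.txt").read(), "the silence of the")
# (the filename is illustrative)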


The neural network, like everything inhuman, will not kill the author. What it ought to do is make us think about agency. Consider the transformational moment between the printed ink and the word: is what resides there only fully explicable to human thought? The literary theorist Roman Ingarden saw literature as an object which could exist properly only within human consciousness. At the same time he discerned that correlating the literary work with a category of traditional metaphysics would produce absurdities — thinking of it as purely ideal, for example, would make it uncreatable and immutable. To counter this he proposed a four-fold categorisation of textual meaning: the arrangement of sounds and words, which is sequential and quasi-musical; the pattern of thoughts; sensory information; and the imagined world of the book, where its characters and places most fully exist. This final category – a communication between the mind of the author and that of the reader – is necessarily imperfect, but it presupposes two thinking subjects. The forms are obviously no issue; the idea of thoughts being patterned by machines might seem alien until we remember that algorithms run the world. But there is no transcendental idealist home computer that can sign off as a subject capable of ideational thought in the way we understand it. Ingarden would have the neural network’s productions as sub-texts, capable of some of the same effects as Literature but lacking the necessarily human capstone, like a MIDI version of Beethoven's Sixth.


Others have more Manichean fears. Cybernetics is the study of information flow in machines and humans. Its founding father Norbert Wiener advanced the Butlerian idea (by now a utopia) of machines and humans working in harmony in a future society. But his vision contained a repressed fear. The Human Use of Human Beings (1950) sees humans and machines sharing a common goal: the control of information. Unfortunately, Wiener found too much of the upstart courtier in our robotic assistants, which threatened to replace the human organisation of society with their own. This has been read, correctly, as a fear for the integrity of our form of consciousness, the one which arose in the Enlightenment and has been defined by having a handle on the flow of information. Ceding control of information to machines changes what human beings are. Without this control, one example of which is the act of authoring, all human actions would become imbricated in the external processes of a society guided by machines — and would remain forever structurally subordinate to their control. Wiener’s conclusion was that to prevent this the arts and sciences ought to remain diametric pairs. The arts would resist the rise of the machine by keeping people in control of political and aesthetic domains where, through creation, they could remind humanity of what was essential to it. This would permit scientists to continue optimising their machines for the good of society within carefully delimited organs of the body politic. The founder of cybernetics therefore reached a Victorian humanist conclusion in which the arts were roped in as the last line of defence against the foundering of society.


This paranoid sensibility is not a million miles from a modern technocratic platform. Many NGOs and think-tanks are devoted to developing sensible regulation of AI, including a major section of the University of Oxford’s Future of Humanity Institute. For the smaller remit of this article I would suggest that Literature accept these emergent technologies, which offer possibilities for research and creation in ways we cannot yet easily quantify. Literature can be defined as an opening into new worlds: as these new possible worlds appear on our horizons we should be excited by their potential, not nostalgic for the time when only humans could write. A first step would be to accept the legitimacy of computational research and generation, and then to become familiar enough with it to defend its use against more conventional computing fields. The first computer theorist was Ada Lovelace, the daughter of one Lord Byron. We ought to celebrate that link.

PETER VICKERS reads English at Magdalen. He has a long-running feud with a one-eyed seal.

Art by Sophie Nathan-King
