Welcome to the Word Factory
Why are university managers excited about AI? Detailing the industrial and intellectual conditions of AI’s uptake, Andrew Dean argues that the disavowal of aesthetic judgement has helped pave the way for the acceptance of natural language as data.
In an all-staff Zoom call in 2021, the Vice Chancellor of my institution introduced a bold new vision for the university, ‘Deakin Reimagined’. Innovative technology and investment in automation would mean more output with less input; academic productivity would rise and we would all be better off. We just had to wait – and believe. He ended the call by announcing there would be significant job losses.
A SharePoint site containing further information about the future of our positions was opened the moment the call finished. After scrolling through multiple organisational charts and diagrams made up of little bubbles, we eventually discovered just how widespread the job losses would be. Few areas had escaped the axe; some disciplines were slated to become ‘teaching-focused’. In a follow-up call within our school, staff were encouraged to consider voluntary redundancy. A calculator had been put on the SharePoint to this end.
Part of the fatigue of that moment came from the sense of déjà vu. Every change of administration in universities means a new system and set of priorities. Sometimes, it can signal a return to the status quo ante, complete with official amnesia about what happened in the interim. The previous revolution proves time and again to have been insufficiently radical, or founded on the wrong principles, or otherwise lacking in some crucial dimension. The wheel had turned and would do so again. Meanwhile, those who held onto their jobs often faced increased workloads, especially administrative and clerical staff – the new Stakhanovs.
The negotiation of the new Enterprise Agreement in 2023 led to the first university-wide strike in a decade. In what proved to be a year of renewed union activism, there were concerted cross-campus campaigns and a ‘week of action’ as a number of the state’s universities negotiated their agreements. Running in the background was the longstanding wage theft dispute. In June 2024, the NTEU released data showing that confirmed instances of wage theft totalled $203m across Australian universities. Universities have meanwhile put aside a further $168m to repay other suspected cases. Both the acknowledged and suspected incidents follow years of official denial from university managers.
How we use our time and how we are paid for it have been, in their different ways, at the heart of the redundancies, Enterprise Agreement negotiations, and disputes about wage theft. Workers must make more each year, so the logic goes, improving productivity across the ‘business’. This is in the nature of capital accumulation: more each year, every year, for less. Of course, that rising production in turn helps to justify inflated executive salaries and bonuses.
It is common to hear university leaders repeat the truism that ‘universities are businesses.’ Crucially, in Australia, public universities are not. There are no shareholders. Universities turn a surplus, not a profit, and seek to return any surplus to improving their operations. Governments fund the nation’s thirty-eight public universities in order to return public benefit in some form. The legislation that sets out the arrangements for my institution, for example, lists nine objectives that it aims to fulfil. All but one of these are phrased in terms of service and public benefit. The other describes business only in terms of making the best of what it has: the institution should ‘utilise or exploit its expertise and resources, whether commercially or otherwise.’
University managers, believing themselves to be running a business, view the resistance of university labour to wider changes in the economy as a problem to be solved. It once took a tutor two hours to teach a twenty-person seminar – and it still does. Estimates of marking time have proven woefully inaccurate (at least if we want to do the job properly). Accelerating research has the unfortunate downside of making it worse. ‘Investing in automation’ in this context is an attempt to speed up the labour-intensive activities that comprise our jobs: it makes good business sense, finally.
This explains why, officially at least, generative AI has been met with excitement by university managers. At the first all-staff meeting about generative AI at my institution, staff were treated to an interaction with a chatbot that could speak. We were encouraged to ask it questions, which it then answered in natural language. This was intended to be edifying for us, connected to the wider ‘opportunities’ that AI presents. Staff, though, seemed less convinced. Numerous questions about the human jobs that the speaking chatbot was now inadequately performing went unanswered on the online system.
For most university staff, the most obvious outcome in our working lives of the public release of AI language generation has been that our jobs have become worse. We have been confronted with immensely enervating issues relating to student integrity. In literary studies, machine writing capability has elided the slow push and pull that takes place between writing and thinking, and, with it, the importance of learning to think clearly in language. At the same time, for all the talk about automation, the lack of people on the ground in the months following the redundancies meant that already overworked staff were now also dropping off milk and post at the departmental office.
AI is less an existential crisis about the nature of human intelligence than it is another stage in the extraction of surplus value from those who produce it. Tech and managerial cultures, including in universities, justify large investments in speculative technology in the hope that, either now or in the future, fewer humans will have to be paid salaries, because machines will speak, listen, and write just like people were once paid to. These machines need not function particularly well; they merely need to do the work to some minimal standard. We may laugh at GPT attempting to emulate Elizabeth Bishop, but achieving the heights of human expression was never the point – the point was to automate and extract value from intellectual labour. The dream for our managers is that we might move from the era of the handicraft to that of the factory, from artistry to the ready-made.
In this sense, AI merely accelerates existing tendencies. The mass casualisation in higher education has relied – inaccurately and therefore illegally, as it turns out – on granular accounting for every kind of activity that one can undertake as an academic. There has been a system-wide effort to make academic labour fungible, such that every piece of work can be undertaken by someone else in a production line. Managers now dream of using AI to accelerate parts of this process, to substitute expensive human labour with synthetic intelligence systems.
For Marx in chapter 15 of Capital (vol 1), transformations in the mode of production in one sphere tend to lead to changes elsewhere in the same system. ‘Machine spinning made machine weaving necessary’, he writes, ‘and both together made a mechanical and chemical revolution compulsory in bleaching, printing, and dyeing’. When tasks that require judgment, deliberation, and instruction are capable of being sped up – or at least, when we stop caring about the gap between computer simulation and its human version – the remaining systems for academic production of all kinds will also transform. Machine language, in what the writer Laura Preston has called an age of ‘hyperabundance’, also requires machine reading, which in turn will likely require a revolution in research, teaching, and assessment.
None of this need improve the quality of work that is actually done, of course, nor indeed save time. Instead, it is a system of standardization and intensification. Machine spinning and weaving of cloth have led to the ecological and human catastrophe that is fast fashion – not to garments that are better suited to our bodies, or, of course, to a liberation from work. The production line leads to deskilling rather than new capabilities, uniformity rather than specification.
Welcome to the word factory.
Since the public release of widely accessible forms of language generation in 2022, scholars in literary studies have confronted with new urgency a number of fundamental questions about assessment, curriculum design, pedagogy, and research. I offered my own thoughts in this forum in early 2023. As I said then, student essays seem less and less capable of demonstrating student learning in our discipline. What do we think we are actually measuring when we assess via the essay? Likewise, the idea that we are teaching our students to write seems less convincing in an environment where one can create endless plausible synthetic prose with the click of a button. What is the relationship between writing and thought (a question which is clearly both important and unresolved)? And what is the place of university literary studies in all this?
The terms of the response to generative AI in Australian English departments will be profoundly shaped by the history of the discipline here. The cultural and media studies revolutions of the 1980s and 1990s transformed the study of literature on these shores more than in any other Anglophone literary context. As I have written in Australian Literary Studies, these insurgent disciplines became mainstream in Australian university English in a way they did not in the United States and the United Kingdom (which by and large maintained more historical requirements and canonical syllabi). Here, meanwhile, the teaching of literature came to be dominated by new ‘critical’ approaches to culture. For its advocates, studying literature in this way, among its many other benefits, turned against the overwhelming elitism that the discipline had both inherited and transmitted. ‘Cultural studies has become popular,’ Australian literary studies scholar Simon During wrote in 1997, ‘in large part because students’ preferences have a growing influence on the curriculum’.
As During’s comments suggest, two seemingly unrelated though coeval forces proved decisive for the direction that the discipline took. The first was the consumer choice revolution in higher education in the 1980s and 1990s, following the Dawkins Revolution in Australia and the reforms in New Zealand. The overall effort was to link students, institutions, and employers, such that public aims would be fulfilled by students choosing institutions and courses that would lead them into employment. The second was the development of syllabi designed to appeal to students. Assuming that privileging consumer choice would best direct public funds toward areas of economic opportunity, these theories of public management both dramatically overestimated the economic rationality of eighteen-year-olds and dramatically underestimated the ability of universities to respond to policy settings in their own ways. Populism in that sense is structurally embedded in our institutions: the most popular courses grow departments and disciplines. This transformed environment was ultimately agnostic about matters of content but highly interested in financial returns.
I was at the tail end of the generation of students in literary studies whose course of study had been transformed by the Cultural Studies revolution. I recall as an undergraduate in the mid-2000s first hearing about the bad work that ‘culture’ does. The force of this account, though, was limited by the fact that the lecturer had to explain what people once meant when they talked about the civilising effects of this ‘culture’ – symphonies, theatre, and the like – because few of us knew what he was talking about. We were mostly there because we wanted to watch horror films.
At the heart of these new ‘critical’ ways of studying English was a particular view of the nature of literary language, namely that the aesthetic qualities we discover in literary expression reflect social practices of classification – ‘taste classifies, and it classifies the classifier’, as Pierre Bourdieu put it. It is in large part our societies, rather than anything inherent in the texts themselves, that make us believe Jane Austen’s fiction to be of higher value than J. K. Rowling’s. As such, the reasoning goes, efforts to describe the value of Austen often end up showing us more about how elites reproduce themselves than they do about the writing itself.
Curiously, the insurgent disciplinary formation ultimately came to share fundamental assumptions about language with the bureaucracies that housed it. The attitude embedded in our universities, as in all modern workplaces, is that language is akin to the memo, the form that John Guillory argues is fundamental to organisational structures of all kinds. ‘All of the writing we consider to be the most intrinsically interesting – literary or journalistic, scholarly or scientific – amounts only to a small percentage of the writing of modernity’, he notes. ‘Large numbers of people write, are even compelled to write, but they do not for the most part write poems or scientific papers; they fill out forms, compose memos or reports, send interoffice emails.’
On this view, language is not special – instead, it merely tells us things. As Guillory writes, bureaucracies dream of being able to use language in the same way that thermostats use temperature. Writing could be ‘thoroughly impersonal’, capable of ‘emit[ting] its commands in direct response to administration’. We might contrast this idea with the intensified language of poetry or certain kinds of fiction, for example, which tend to be more ‘intrinsically interesting’, to borrow Guillory’s words, but would not be suitable for telling someone that they need to submit their year-in-review document. While those who wish to reveal something about how power is distributed and societies regulated through readings of, say, Allen Curnow, may disagree about the purposes of administrative language, they are often as agnostic as any bureaucracy about the ‘intrinsic’ qualities of aesthetic objects.
The problem, as anyone who has ever filed a tax return knows, is that language tends to resist our attempts to say what we mean. Forms are never quite as clear as they would like to be, even as they organise language into sections and responses. The administrative age has gifted us not clarity but the argot of the senior manager, whose directives about strategy we now read with the same kind of attention to hidden meaning that some of us dedicate to poems. What does ‘sustainable environment’ really mean? Which of us is getting fired? The communicative function of language, we discover again and again, only goes so far before we find other possibilities. At its best, we can find a renewed capability to please, delight, and understand. ‘This is just to say’, we might think we are saying, until, suddenly, we are saying much more than that.
The approach to human language embedded in generative AI puts pressure on longstanding debates about literary language and aesthetic hierarchies. As actors in canon debates, large language models could be said to have a view of their own – and it is not the aesthetes who are winning. The computation of language relies on the idea that speech is communication: all language is data, all data can be processed, and all of this can be retranslated back into human language. Language is like chess which is like computer programming which is like image generation which is like music; the Enron emails are like Tintern Abbey. The key is the relationships between data, and, in turn, what kind of outcome the end user values. If the purpose of a game of chess is to lose every pawn, the machine will learn to do this and in fact be the best at it. If the purpose is to make every sentence of this article relate to fish, then the machine will do that too, if not flawlessly then at least quickly. (‘Much like a fisherman casting a wide net’, as ChatGPT told me.)
It may be obvious, but one of the seemingly intractable problems that has faced research into AI is conceptualising how computation can simulate natural language. To make machines speak, and to be able to speak to machines, we need to understand to what extent language is a series of codes and rules. The more that language is like a code, the easier it is to make a machine learn the rules of language and hence to speak like a person. If this is the case, we can code language processing as a series of rules and operations, even though this might take a lot of time and require a large team. But if language is not like a code, then it might resist computation, and natural language processing would remain forever just out of reach, no matter how much money and labour is committed to the task.
In a 2009 book, The Cultural Logic of Computation, David Golumbia endorsed the latter view:

What have proven to be of especially limited consequence […] are theories that try to apply the universal representational capability of computation to the human world. […] Languages are not codes; programming ‘languages’ like Basic and FORTRAN and scripting ‘languages’ like HTML and JavaScript do not serve the functions that human languages do.
On Golumbia’s account, natural language processing would not happen via computer programming because there is a fundamental distinction between what ‘language’ means in each case. Whereas one is a world of multiplicity, the other is an operational assemblage abstracted from machine code. It was in this spirit that Guillory once wrote that the ‘mysterious intransigence of language, which goes very deep, means that it will probably never be possible to reduce writing to rules, schemata, or the algorithms that run computer programs.’
There have been two important developments since Guillory’s and Golumbia’s comments: a revolution in the availability of data and the invention of new technologies for linking them. As is well known, large language models are based on the neural network, a technology that loosely models the human brain in its attempt to create connections between pieces of data – connections which may number in the billions. Rather than hard-coding human language practices, the machine draws statistical links between the data and then optimises the process.
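To make that contrast concrete, here is a minimal sketch in Python – a toy bigram counter, not a neural network, and nothing like how GPT is actually built – of the underlying statistical move: rather than writing down rules of grammar, tally which word follows which in a corpus and generate new text by sampling from those observed frequencies. The corpus and the names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A toy illustration of the statistical wager, not a neural network: rather
# than writing grammatical rules, count which word follows which in a corpus
# and generate by sampling from those observed frequencies.

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat slept on the rug"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # record a statistical link between adjacent words

def generate(start="the", length=10):
    """Walk the chain of learned word-to-word frequencies to produce new text."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, weights = zip(*options.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate())
```

A large language model replaces this counting table with billions of learned weights and a far longer context window, but the wager is the same: structure extracted from data rather than rules written down by hand.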
It turns out that literature is one of the more important sources for GPT. The training data is mostly derived from Common Crawl web archiving, but there is a significant volume of copyrighted materials such as published books. The top twenty sources include Jane Austen’s Emma and J. K. Rowling’s Harry Potter and the Philosopher’s Stone.
Until GPT, the best-known example of the neural network approach to computation was not in the realm of human language, but rather in the enormously complex boardgame, Go. Google’s AlphaGo was trained on a dataset of thirty million moves from games between human experts. The machine in turn followed endless decision-making loops to determine which move was optimal in given board positions. In this approach, the machine does not actually need to know anything about the game beyond a certain very basic starting point – human-derived theories are irrelevant. On move 37 of the second game against Lee Sedol, one of the world’s strongest players, AlphaGo played a move so unusual that human players would almost certainly never play it, on a line that experts almost universally agreed was poor – yet it was this move that won the game. It had found a pattern that we cannot see, buried in the deep structure of Go.
The question of how to make computers learn games has been resoundingly resolved: instead of hand-coding rules, what is needed is more data and more reinforcement. Within hours, we can create better game players than have ever existed in human history. Despite centuries of chess theory, all we needed, it turns out, was to feed in ever more human games and make the machine follow decision trees and play against itself ad infinitum. The gap between the best human players of chess and the top engines is now large enough to require a significant handicap on the computer player for competitive matches. That gap will never get smaller.
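As an illustration of the self-play principle only – at nothing like the scale or sophistication of AlphaGo or a chess engine – the following Python sketch lets a value table learn the toy game of Nim purely by playing against itself and backing up wins and losses. The game, the pile size, the learning rate, and the episode count are all arbitrary choices made for the example.

```python
import random
from collections import defaultdict

# A toy illustration of self-play: a value table for the game of Nim
# (take 1-3 sticks from a pile; whoever takes the last stick wins) improves
# purely by playing against itself, with nothing hand-coded beyond the rules.

ACTIONS = (1, 2, 3)
V = defaultdict(float)     # V[pile] ~ value of the position for the player to move
V[0] = -1.0                # facing an empty pile means the other player just won
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate

def choose(pile):
    """Pick a move: mostly greedy (leave the opponent the worst position), sometimes random."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < EPSILON:
        return random.choice(legal)
    return min(legal, key=lambda a: V[pile - a])

def self_play_episode(start=21):
    """Play one game against itself and back up the result through every position visited."""
    pile, visited = start, []
    while pile > 0:
        visited.append(pile)
        pile -= choose(pile)
    reward = 1.0  # the player who moved last took the final stick and won
    for position in reversed(visited):
        V[position] += ALPHA * (reward - V[position])
        reward = -reward  # the previous mover got the opposite outcome

for _ in range(50_000):
    self_play_episode()

# With enough episodes the table tends toward the known theory of the game:
# piles divisible by 4 are bad for the player to move.
print({pile: round(V[pile], 2) for pile in range(1, 9)})
```

Nothing in the script encodes the game’s well-known winning strategy, yet with enough self-play the values drift toward it – a miniature version of the point about reinforcement doing the work that hand-coded theory once did.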
The same logic of data growth and reinforcement applies to natural language processing. As James Bridle records, ‘Frederick Jelinek, “the researcher who led IBM’s language efforts”, famously said that whenever he “fire[s] a linguist, the performance of the speech recogniser goes up.”’ There has been a vast improvement in machine translation in recent years, led by advances in machine learning. The new generation of computational translators is capable of working between languages that have no existing translation norms, or even extant translations. One could now create a passable translation of a newspaper article between Tibetan and Te Reo Māori without going through a third language first – at least if there is enough data supporting each language in the machine. There are likely no human speakers of both Corsican and Oromo, but the translation machine is theoretically as capable of performing well between these languages as between any other pair.
There is a perverse kind of equality in this approach to information – it is the equality of total arbitrariness. Language is data, not meaning; it is communication, not intensity. It treats ‘April is the cruellest month’ the same as ‘Deakin Reimagined will design an effective academic model to provide sustainable outcomes, informed by strategic plans and data’. To believe in the distinct nature of literary expression would in fact make AI worse at generating natural language. It is the memo, not the poem, that underlies how generative AI handles natural language.
I am sketching two related kinds of indifference, both of which can be, and have been, described in terms of ‘communication’. The first is the indifference of language models to the particularities of language itself. The second is the indifference within university English to aesthetic discrimination, and the related turn toward theories of communications and media – a turn which coincided with the establishment of new bases for funding university study. Generative AI is arriving in an institutional context in Australia already primed for its reception. Part of why it is difficult to argue against the incorporation of generative language into our pedagogy is that we are struggling to articulate core parts of our discipline in terms of aesthetic distinction.
In his 2021 book, A Defence of Judgment, Michael Clune writes that ‘for our political imagination, animated by the master value of equality, aesthetic hierarchy has become indefensible.’ That commitment is built into the history of the discipline here in Australia, which has in effect replaced vaporous concepts of literary greatness with syllabi that are meant to appeal to the student-consumer. Faced with our own redundancy, however, both literal and figurative, we are now encountering the inadequacy of populism as a justification for literary pedagogy. The language machine endlessly produces text from the cultural unconscious to satisfy the aesthetic median – and we do not know what we are meant to do about it.
As Clune argues, we have never really believed what we have been saying about the equality of aesthetic objects anyway. ‘The idea that the value of artworks is entirely a matter of subjective opinion, without any public standard, is so counterintuitive that even after a century it has succeeded not in repressing aesthetic education but merely in forcing it to wear the mask of hypocrisy’. That ‘hypocrisy’ is using political criteria as the mechanisms for evaluating works of art, but without acknowledging that we are doing so.
What I am saying here shares some assumptions with what critics such as Rita Felski have recently been arguing for under the banner of ‘post-critique’, a scholarly attempt to return the aesthetic to the front and centre of the discipline. Central to this body of work is a strategy of modelling the critic on the lay reader and their affective experiences of reading. Works of popular culture often have a special place in her writing due to the affective intensities they enable – recalling the likes of Susan Sontag’s writing about music and art in the late 1960s. In the words of Helen Thaventhiran, there is a pattern in Hooked (2020), Felski’s most recent book, whereby ‘trained interpreters surprise themselves with their attachment to some aspect of popular culture.’ Wither the history of aesthetics; all hail the newly receptive critical lay reader.
I want to be clear, however: post-critique too is part of the problem. The thinness of its conception of literary history, along with its elaboration of reader-centred philosophies of value, means that it offers no more satisfying a view of literary art than those founded on other kinds of relativism. It has occupied literary scholars not because of the strength of its prescriptions but because of the power of its diagnosis: critique, as I have been suggesting, is indeed not enough. In The Limits of Critique (2015), Felski writes that her ‘aim is to de-essentialize the practice of suspicious reading by disinvesting it of presumptions of inherent rigor or intrinsic radicalism – thereby freeing up literary studies to embrace a wider range of affective styles and modes of argument.’ The first part of that sentence is necessary; the second adopts yet another populism that serves us no better than what it seeks to depose.
I remain convinced that we and our students want something more than what our bureaucracies and cultural markets have provided, more than what a reactive and prior commitment to aesthetic equality has ever been able to account for, and more than a massively elaborated theory of vibes can produce. We are not just computers, language is not just data, and expression is not just communication; we do more than intuit. As such, we need literary canons, aesthetic judgment, and close reading.
The opposite, it now turns out, is also true: we need to accept that if we believe in the equality of all aesthetic expression, then we are reinforcing the forms of valuation practiced by the machines developed by OpenAI, rather than contributing to a radically anti-elitist politics.
It is with all this in mind that I am making an argument that may feel unusual. The stronger one’s belief in the equivalence of all expression, the fewer qualms one might have about how literary studies internalises generative language. Though it was from the left that the challenge to establishment literary studies was mounted, the new orthodoxy aligns with the form that power has taken in the university. By contrast, the stronger one’s belief in the distinctive nature of aesthetic capability, the more capable one is of mounting a defence of the labour-intensive activities that comprise university English.
If mine seems an arrière-garde case, it is one gained through many years of struggle in the university system. While battling to secure my colleagues’ jobs, our students’ futures, and the discipline as a whole across three countries and four universities, I have seen that though we are clear about the problems we face, we are less capable of making a case for why our work matters. Critique is well and good, but we need something to fight for. We also need to be able to sustain that fight through changes of government and wider political circumstances.
After all, according to a line of reasoning now at large on the political right, the state should not support much of the research that takes place in the humanities. This was shown in 2021 when the acting education minister, Stuart Robert, infamously blocked six grants from receiving Australian Research Council funding, including three from literary studies. He said at the time that the projects ‘do not demonstrate value for taxpayers’ money nor contribute to the national interest’. Even if it was apparent to all concerned that the ultimate motivation for this decision was the Coalition government’s reactionary anti-elitism, his use of the language of market provision and growth gave plausible deniability. We, in response, need strong cases for the value of what we do.
None of what I have said is opposed to the idea that we should be able to work adeptly with digital materials and methods. A new age of literary expression and reading requires that our students have the tools to understand language generation and natural language processing. I have little doubt that literary studies courses will increasingly incorporate digital methods and that this will be good both for our students and our own research. While we should be clear that close reading is the method that we have for establishing value, digital approaches can help to situate our interpretations and engage better with literary meaning.
It is clear, though, that if we want to have space for literary pedagogy after generative AI, we will need to turn against the cultural populist line of thought embedded in literary studies. Without viable aesthetic bases, we will find it difficult to argue against computationalism as it pertains to, and ultimately seeks to absorb, our discipline. At the same time, it is in revisiting older questions about what draws us to literature that we also hold open the promise of reanimating the discipline. What is aesthetic capability? What is an aesthetic education? To what extent does literary studies really offer a criticism of society? What are we doing when we teach literature? What do our students need to know? And what separates human writing from that of machines? The answers to these questions will be guides for how we teach and write about literature in the years to come. They will also determine how we face the current – and next – crisis.