In his short story ‘The Library of Babel’, Jorge Luis Borges describes a world that consists solely of a vast, perhaps endless, library in which all possible books are stored on uncountable shelves in tessellating hexagonal galleries. Most of the books contain random combinations of letters and streams of gibberish, but among the nonsense the library contains every literary masterpiece ever written, as well as all possible literary futures. There may also be, at an unknown location, a Book of Books, the key to the whole library and thus to the universe:
In some shelf of some hexagon, men reasoned, there must exist a book which is the cipher and perfect compendium of all the rest: some librarian has perused it, and is analogous to a god.
In Christian theology, omniscience is one of the essential characteristics of the divine. It is not a state to which we humans can reasonably aspire. In the Summa Theologiae, Thomas Aquinas quotes Augustine of Hippo: ‘God does not see all things in their particularity or separately, as if He saw alternately here and there; but He sees all things together at once.’
All things together at once. There is something terrifying, something profoundly alien about this supreme power of knowledge: like the composite vision of an insect’s compound eye scaled up to make a panopticon of the whole universe, with no shadow or corner in which to hide. In Genesis, the serpent’s temptation – to eat of the tree of knowledge, and thus become godlike – and the resultant banishment of the first couple from paradise represent a taboo: the folly of attempting to transcend the finitude of human knowledge.
This ethos of accepting limitation as intrinsic to human knowledge underlies the influential current of epistemological modesty in politics and economics, from Edmund Burke to Friedrich Hayek. The latter argued in his 1974 Nobel Prize acceptance speech that the human mind is equipped only to devise strategies to advance one’s own interests as an economic actor; attempts to put a whole national economy under intentional control, whether as a fully planned economy or through well-meaning state intervention in a market economy, are doomed to failure. According to Hayek, because the economy is so complex, any attempt to set prices or wages will result in unintended consequences and distorting effects, so they should be allowed to find their optimal level via competition.
Today, however, Hayek’s premise of irreducibly limited knowledge faces an unprecedented challenge. Huge quantities of information on all aspects of society are collected and not only stored, but algorithmically processed, in real time. The so-called ‘cloud’ of geographically distributed data centres, combined with exponential increases in computational power, enables modelling and prediction of real-world events, economic and otherwise, that were previously unthinkable. In the realm of fintech (financial technology), enormous fortunes have been made by accurately predicting changes in market value through highly complex economic simulations. As technology transcends the limits of the ‘single brain’, something approaching omniscience seems to be heaving into view, and the race is on between corporations to get there first. The more data, the better the model; no wonder, then, that four of the top five publicly traded companies by market valuation are tech giants, with Facebook running a close sixth.
For the social psychologist Shoshana Zuboff, the meteoric rise of the tech industry brings with it developments that should worry us – the general public – far more than they do; that most people are blasé about them is itself a matter for concern. The purpose of her book The Age of Surveillance Capitalism is to awaken the reader to a sense of ‘astonishment and outrage’ at Big Tech’s power grab and its effects on society. Zuboff likens the tech giants’ incursions into private life to the conquistadors’ colonisation of the New World by imperial fiat: by the time the Indigenous peoples realised what was occurring, it was already too late.
Digital surveillance has rapidly invaded domains that were previously regarded as private. When we carry a smartphone with us, we constantly transmit our location; if we wear fitness trackers, they announce every beat of our hearts; the various sensors and thermostats that enable the ‘smart home’ report back to their makers the routines of our daily lives, including the time we get up and the time we go to sleep; and every message – joking, tender or irritable – that we send to a friend, lover or family member is recorded in giant data stores that silently listen and never forget. In the case of virtual assistants like Siri and Amazon’s Alexa, the devices are literally listening, all the time, to every word spoken in range of their microphones.
These digital spies fulfil two purposes: the service to the individual for which they were purchased (or, in the case of software, often used for free), and the service rendered to the corporation that designed them. The latter is the harvesting of ‘behavioural data’ in order to construct an intimately detailed profile of a person’s lifestyle, attitudes, daily routine, financial situation, purchasing habits, ethnic identity, aesthetic sensibilities, social relationships, gender and sexuality, and even the state of their physical and emotional health. This data is not actively submitted by the user but is inferred by the corporation’s analytic systems from a wide range of behavioural markers. These markers can reveal not just the person’s present situation but can also predict some future action. Google and Facebook make money out of this knowledge by offering the vendors of products and services the opportunity to place an advertisement in front of their ideal customers at exactly the moment when they are most likely to buy what those vendors are selling. Breadth matters as well as depth: the greater the number of people caught in a technology company’s analytic web, the more reliable its analyses become. Algorithmic systems can compare the results for one set of phenomena with those for another set, and then another, hundreds and thousands and millions of times over, until the most reliable behavioural predictors are found.
This all sounds laborious and painstaking, but none of it is performed by human analysts. Algorithms do not tire or get bored. One of the greatest achievements of machine learning is to identify patterns that humans have not been able to detect. Neither the suits in charge nor the data scientists who devise the algorithms need to fully understand the causal link between one detected behaviour and some other behaviour that can be algorithmically predicted as likely to follow it; all they need to know is that it works. The actual knowledge is too vast in its abstraction to be housed in a single brain, or even a collection of brains. It is impenetrable, ineffable; all things together at once.
According to its mission statement, Google’s purpose ‘is to organize the world’s information and make it universally accessible and useful’. But on closer inspection it turns out that this claim is only half-true. It certainly aims to ‘organise the world’s information’, but only a small portion of the data it collects is made ‘universally accessible and useful’. Zuboff distinguishes two ‘texts’ generated by Google: a ‘public text’ and a ‘shadow text’. The former is the data visible to an ordinary person searching the web. The act of searching, however, generates behavioural data about the person, a chronological profile of every word or phrase the user has ever entered on Google Search, linked with every other bit of data relating to that person that has passed through Gmail, Google Maps and the rest of Google’s apps; this is the ‘shadow text’. It is very useful to Google, as it is the basis of its enormous profits, but it is far from ‘universally accessible’. It is hidden from the individuals to whom it refers, accessible only to Google and its corporate partners.
Google was the pioneer in this regard, but the other tech giants have adopted the same extractive approach to data. In a recent court case over the sharing of personal data with the notorious (and now defunct) consulting firm Cambridge Analytica, the lawyer defending Facebook argued that ‘the social act of broadcasting your private information to 100 people negates, as a matter of law, any reasonable expectation of privacy.’ In the rush to ‘bring the world closer together’ (Mark Zuckerberg’s gushing formulation), we have unwittingly invited a vast cohort of multinational corporations into what used to be private spaces: our homes, our relationships, our inner lives.
Public response to this massive invasion of privacy, however, has been rather muted. People are aware, to varying degrees, that the tech companies are not the champions of openness that they pretend to be, and that the trade-off for using free apps like Facebook and Gmail is putting up with adverts that show up with creepily good timing; but the predominant response is a weary shrug. We watch Black Mirror, we tut a little, and then we forget about it and go back to browsing Instagram. Is it really such a big deal?
One of the reasons to take seriously the accumulation of personal data is its tendency to leak. Information that might be harmless enough in one context can become compromising in another. If a health insurance company were able to access biometric data from a fitness tracker, it could use a ‘risky’ profile to deny coverage or jack up premiums. Zuboff points out that it is prohibitively difficult to determine exactly which third parties will end up legally entitled to access your data, because of the sweeping rights granted with a check-box click in ‘terms and conditions’ documents. Data security on the internet is an arms race between software developers and various types of attackers – from state-backed cyber-warfare to organised crime – and breaches are common.
In his book Radical Technologies: The Design of Everyday Life, Adam Greenfield cites a case in which data recorded for innocuous purposes ended up having lethal consequences. In 1936, the Dutch Bureau of Statistics mandated the recording of demographic data for all citizens, including a field for ethnic origin. Four years later, the Germans invaded, and this data fell into the hands of the Gestapo, where it was used to identify and locate over a hundred thousand Dutch Jews, who were imprisoned and transported to concentration camps; only five per cent of them survived the war. An extreme example, but the use of data to target ethnic groups is not, unfortunately, confined to the past. In April, the New York Times reported that startup companies in China have developed facial-recognition technology, designed for use by the police, that detects members of specific ethnic groups such as Uighurs and Tibetans. If the software identifies a certain number of Uighurs congregating in a certain area, the system ‘immediately sends alarms’ to the police.
Big Data can be an instrument of power in other, more subtle ways, too. Zuboff argues that a technology company whose value depends on accurately predicting behaviour is incentivised to use all means at its disposal to ensure that prediction correlates with reality: ‘closing the gap between prediction and observation in order to approximate certainty.’ If the next step from measuring behaviour is predicting it, then the logical third step is intervention. Zuboff points to an article by Facebook researchers that describes the deliberate manipulation of the moods of two arbitrarily selected segments of Facebook users by filtering the kinds of posts from their friends and acquaintances that showed up in their social feeds: people who saw happiness in their feed, on average, tended to respond with happy posts of their own, whereas those who were shown only sad and angry posts reflected those negative feelings back.
Zuboff traces an intellectual thread from the behavioural psychology of John B. Watson and B. F. Skinner – who, in his 1948 novel Walden Two, imagined a utopia achieved via ‘behavioural engineering’ – to what Zuboff calls the ‘instrumentarian power’ of the technology giants’ behavioural modification. ‘Instrumentarianism’ is a neologism that Zuboff employs to draw a contrast with the totalitarian regimes of the twentieth century. Whereas totalitarian states use violence and ideological programming to achieve a body-and-soul domination of the human subject, ‘instrumentarianism’ is indifferent to the content of the soul, and controls behaviour via the ‘ubiquitous digital apparatus’ that she names ‘Big Other’ – another neologism, coined to emphasise the contrast with Big Brother in George Orwell’s 1984, though, oddly, she does not acknowledge the use of the phrase ‘big Other’ for entirely different purposes by the psychoanalyst Jacques Lacan.
A more significant lacuna in Zuboff’s hefty tome is the work of the philosopher and sociologist Jürgen Habermas, whose book The Theory of Communicative Action uses a similar metaphor to Zuboff’s conquistadors to describe the incursions of instrumental rationality: the ‘colonisation of the lifeworld’. Yet Habermas, writing nearly two decades before the establishment of Google, more accurately describes the outcome of such incursions. Where Zuboff depicts a deadened, zombified consciousness that loses the ‘will to will’, Habermas argues that instrumental rationality’s domination of society results in not only alienation and demoralisation but also fragmentation, disintegration, and social instability. For Habermas, these social pathologies are caused by too much guidance from above: the bureaucratic state socialism of the twentieth-century Eastern Bloc and the market logic of western economies are two sides of the same instrumentalist coin.
Zuboff, however, is determined to draw a hard distinction between ‘surveillance capitalism’ and capitalism in general. She sketches a three-part periodisation of modernity: a ‘first modernity’, epitomised by the Ford Motor Company, which more or less meets people’s needs in a virtuous cycle of hard work and consumption that leads to rising standards of living; a neoliberal ‘second modernity’ in which this equilibrium breaks down in wage stagnation, consumer debt and an ideology of market fundamentalism; and our present moment, a ‘third modernity’. The story Zuboff tells of this third modernity is of missed opportunities and unhappy confluences of historical forces. There was a moment when technological advances seemed to offer an ‘implicit social contract’ of a new ‘alignment of commercial operations with consumers’ genuine interests’. Apple’s innovations in design made digital technology much more intuitive to use, while Google’s search engine put information at our fingertips. Zuboff mourns the passing of this moment, which she believes could have been emancipatory had it not been stifled by the choice to pursue surveillance rather than consumer empowerment.
But what if the glimmer of a new social contract was never a real possibility, but merely the glossy sheen of clever marketing bullshit? What if the problem is not ‘surveillance capitalism’, but capitalism as such? That is the case that the new media entrepreneur Aaron Bastani sets out to make in his book Fully Automated Luxury Communism.
Bastani’s book fits more easily in the pocket than Zuboff’s, but has more grandiose ambitions. Like Zuboff, Bastani hangs his narrative on a three-part historical schema, though on a much longer time-scale, marked by ruptures rather than eras: the first ‘disruption’ is the prehistoric shift from a hunter-gatherer society to agriculture; the second is the industrial revolution, beginning in the eighteenth century; and the third – following relatively hard on the heels of the second – is the advent of information technology, which Bastani characterises as inaugurating a replacement of scarcity with abundance as an organising principle for economics.
With rhetorical velocity and an optimism verging on the Panglossian, Bastani launches the reader on a whirlwind tour of emerging abundances. Advances in battery storage open the door to limitless solar energy; mineral resources that are becoming scarce here on earth are in plentiful supply in the asteroid belt – or, in Bastani’s characteristically airy phrasing, ‘the limits of the earth won’t matter anymore—because we’ll mine the sky instead’; the ecological problems and food shortages caused by livestock farming will be solved with synthetic meat; genome sequencing and gene therapies hold out the promise of vastly improved health; 3D printing will bring manufacturing out of the factory and into the home; and the abundance of information and processing power supplied by advances in computational technology appears set to automate not just manual, repetitive jobs but skilled work in many sectors, white-collar as well as blue.
Bastani claims that these new abundances have the potential to liberate humanity, but cannot be reconciled with the way capitalism works. From the perspective of the current economic order, abundance presents not an opportunity but a threat, and so, Bastani tells us, ‘under conditions of abundance capitalism pursues a form of rationing in order to ensure profits’. As Paul Mason describes in Clear Bright Future: A Radical Defence of the Human Being, ‘faced with network effects, tech monopolies have designed their business models to capture…positive spillovers in the form of economic rents.’ From this perspective, the ‘implicit social contract’ that Zuboff saw in the new model of music distribution pioneered by Apple was merely the last gasp of a recording industry originally based on ownership of an expensive means of production, the vinyl pressing plant. Apple’s $0.99 price-point for buying a song was an arbitrary figure that could not, and did not, last. The retail model for purchasing individual tracks has been overtaken by a subscription model with unlimited streaming. For each play of a track, subscription services such as Spotify pay the record label around $0.004 – at that rate, a track must be streamed a quarter of a million times to earn the label a thousand dollars, before the artist takes their cut. That is clearly not a viable way for any but a tiny fraction of the most popular musicians to make money. As a result, the music industry has been forced to reorient towards live performance and merchandise.
It is certainly the case that technological advances such as automation are ‘disrupting’ existing companies and old business models. But will they ‘disrupt’ capitalism itself? Or will capitalism’s capacity for self-reinvention save it from collapsing under the weight of its own contradictions? Zuboff and Bastani approach this question from opposite angles: Zuboff is anxious to save capitalism from its own excesses and deformations, whereas Bastani wants to free the emancipatory potential of technology from the limitations imposed on it by the imperative of accumulation. As it happens, I read the two books in overlapping periods. Bastani’s book arrived in the post when I was part-way through Zuboff’s, and I alternated chapters for a while, using each as a stylistic antidote to the other. As a result, I found myself bringing the books into a kind of dialogue – if not with each other, at least with countervailing intellectual currents, to produce a friction that I hope will prove productive.
In that spirit, I note that some of the central concepts in Surveillance Capitalism are clearly reminiscent of Karl Marx: the ‘means of behavioural modification’, the extraction of ‘behavioural surplus’. She uses this latter term to differentiate between data used for the purpose of improving the user experience, and the ‘surplus’ data that is used for surveillance and behavioural profiling. The question, however – to take Zuboff’s Marxian turn of phrase more literally than she does – is this: what, exactly, is the source of that value? Zuboff, in using the term ‘extraction’, treats the process as analogous to the mining of natural resources. For Marx, however, the source of value in the capitalist system is labour: a quantity of ‘socially necessary abstract labour time’ required to produce a commodity is split into a necessary portion (returned to the worker as wages) and a surplus (the profit taken by the owner of the company). Applying this logic to the technology companies is complicated by the fact that the amount of wage labour involved is small compared to the vast profits accrued. Relatively few software developers and data scientists are involved in building a system to capture behavioural data. One ungenerous response to Zuboff’s term ‘behavioural surplus’ is to reject it as incoherent, and to portray profits from behavioural data analysis as ultimately parasitic on the ‘real economy’, with the source of value deriving indirectly from the production of goods that are bought and sold as a result of the targeted adverts.
But this risks missing what is new in ‘surveillance capitalism’. Labour, for Marx, is not a mere synonym for work but rather describes a specific social relation between worker and property owner that enables accumulation via exploitation. What if the technology companies have discovered a new way of exploiting a different kind of social relation by introducing themselves as a mediator in a wide range of intimacies and interpersonal communications? On such a view, we find ourselves caught between two kinds of exploitative processes: alienated labour and alienated leisure. The latter, according to Hamid Ekbia and Bonnie Nardi in Heteromation and Other Stories of Computing and Capitalism, is really a kind of hidden labour ‘naturalised as part of what it means to be a “user” of digital technology’, constituting ‘a new logic of capitalist accumulation’. Whatever the objective reality of these economic speculations, this framing seems to tally with the subjective experience of twenty-first century life, for which the phrase ‘work-life balance’ is symptomatic, not as a lived reality but an aspiration always just out of reach. We have long been used to the idea that ‘work’ is separate from, and antithetical to, our real lives. But the former increasingly encroaches on the latter, in ways ranging from the insidious to bare necessity: short-term contracts; on-call rosters; Uber shifts; and the staccato, peremptory triple-tap of Slack notifications. The distinction between work and non-work is eroding.
At the same time, our leisure time is permeated by the digital. Anyone who frequently uses Google Maps to find their way around has probably seen the message ‘we need some human help!’ accompanied by a smiling face, asking for opinions or ratings of this or that location. It is unusual only in its frankness. Most of the intrusions feel more like obligations than distractions, because they are hooked in to our real lives of relationships and social bonds: notifications from friends bleep and buzz; profiles must be fed and nourished with photos, opinions, and reactions. Our relationships, from the casual and peripheral to the most intimate, are increasingly mediated by communications technologies and an idiom of interactions that was entirely unknown a decade ago: like, mute, friend-request, crying-with-laughter-emoji, unfriend, block.
And now this mediated leisure encroaches on official work time, too. What office worker does not spend a portion of the work-day tending to their social feeds, like a mother bird regurgitating a little experiential data to feed her nest of tweeting hatchlings? There are now two sets of corporations vying for our time: the ones that profit from our labour and pay us a wage, and the ones that pay us nothing, and sell our leisure as behavioural data. The ‘colonisation of the lifeworld’ has reached unprecedented depths of penetration. Eat, love, work; we are always prey.
If the technology companies have indeed discovered a way to generate value from social relations other than wage labour, then the implications for theories of post-work emancipation from capitalism are profound. Though Bastani disavows technological determinism, there is a sense in Fully Automated Luxury Communism that the advances he describes as being just around the corner – transitions in various domains from conditions of scarcity to those of abundance – represent a fruit ripe for the plucking by judicious political interventions. As to whether asteroid mining and algorithmic legal advice are feasible near-future prospects, I must defer to the judgement of those with relevant expertise. But I would like to dwell for a moment on the idea of automation.
Something can be called automatic when it operates without intervention. The word brings to mind machines, but might it not also apply to the operation of the market economy? There are, of course, many wills at work in capitalism – from those straining to increase profits, crush competitors, and find cheaper supplies, to those fighting for better pay and working conditions – yet the system as a whole is not determined by the will of any single actor within it. Hayek comments that Adam Smith’s metaphor of the ‘invisible hand’ would be better phrased as an ‘invisible or unsurveyable pattern’, a ‘spontaneous order’ beyond intentionality and ‘the limits of our knowledge and perception’. An essential characteristic of the market economy is that it is simultaneously totalising (a single numerical value – the price of a commodity – incorporates all the social factors that facilitate and impede its production: scarcity of raw materials, the intensity of consumers’ demand, weather conditions impeding distribution, availability of labour, import tariffs, the ability of workers to organise, and so on) and automatic (a company can set the price for its products, but if it miscalculates, it will be punished: too high and it will lose sales to competitors, too low and it will fail to make ends meet, so that the price is ushered inexorably to a provisional sweet spot until one or more of the factors change). This quality of totalising automation is both capitalism’s virtue and its vice. Its logic is necessarily not only inhuman, beyond the cognitive power of the human mind, but inhumane: it knows nothing of justice, human flourishing, duty, sacredness or mutual obligation, except inasmuch as any of these presents an opportunity or a cost to the production of commodities.
It may seem paradoxical to suggest that Big Tech represents a challenge to automation. But the amassing of behavioural data, and the predictive powers that the big technology companies are thus arrogating to themselves, threatens to short-circuit Hayek’s ‘spontaneous order’. Viktor Mayer-Schönberger and Thomas Ramge suggest in Reinventing Capitalism in the Age of Big Data that in today’s economy, the data available is so rich and complex that it offers significant efficiency advantages over the price system in conveying economic information (Hayek, no doubt, is turning in his grave). As Zuboff notes, the whole purpose of ‘surveillance capitalism’ is to reduce uncertainty. We might repurpose Ekbia and Nardi’s neologism and call the logical end-point of this process a totalising heteromation, in which not only is supply intentionally and perfectly adjusted, but demand is also manipulated via behavioural modification, and all humankind dances to the tune of whichever of the technology companies is able, like Borges’ librarian, to construct a data centre that acts as ‘cipher and perfect compendium’ to the whole world.
Zuboff is justified in portraying this prospect in dystopian terms. But her prescription – outrage, resistance, state regulation – is inadequate, as are the recent calls to ‘break up’ the big technology companies. The dangers presented by ‘surveillance capitalism’ derive from capitalism itself. If some emancipatory political project is to repurpose new technology to serve humanity’s needs, it will need a decisive break from both the market economy and the top-down socialist projects of the twentieth century. On this point, Fully Automated Luxury Communism is disappointingly vague: it presents a utopian horizon of technologically-enabled liberation and plenty, yet advocates a relatively tepid programme of social democratic state intervention in the short term, without a credible path linking the two.
One reason for this might be Bastani’s political commitments as a media outrider for the Corbyn-McDonnell ‘socialism with an iPad’ project. The problem with state-run initiatives is that governments are driven by their own imperatives that are, in their own way, as divorced from actual human needs as corporations are. I had some first-hand experience of this as a contractor for Malcolm Turnbull’s pet project, the Digital Transformation Agency, launched with great fanfare as a way of reorienting government services to ‘user needs’. The rapidity with which the needs of departmental fiefdoms and politicians’ public relations came to eclipse those of citizens was dispiriting to watch up close.
Bastani claims that electoral politics can ‘catalyse a shift’ to popular involvement in shaping the structures of society, but the historic achievements of the Labour Party are testament to the opposite dynamic: people organising to intervene directly in their lives and working conditions led to effects in the political sphere. Evgeny Morozov comes closer to identifying the domains of struggle in the age of data technology, and potential ways of democratising both tech and the economy: decentralised economic planning, the design of non-market mechanisms of production and distribution, and digital ‘feedback infrastructure’ to facilitate a democratic, rather than market-oriented, process for discovering and solving problems. The kind of epistemological modesty that will not only ward off the tyranny of the central committee, but also liberate us from the yoke of the market and the data centre, rejects both the hubris of omniscience and mystic reverence for ‘spontaneous order’. Such modesty would recognise that no one is better placed to determine people’s needs than they themselves, as citizens rather than as consumers. Technology may be useful on the path to such a situation, and it would be a waste of opportunity only to see it as a threat or an obstacle; but nor is it a panacea. There are no easy short cuts to radical social and political change. The revolution will not be automated.
Jorge Luis Borges, ‘The Library of Babel’, trans. Anthony Kerrigan, in Ficciones (New York: Grove Press, 1962).
Adam Greenfield, Radical Technologies: The Design of Everyday Life (London: Verso, 2017).
Hamid R. Ekbia and Bonnie A. Nardi, Heteromation, and Other Stories of Computing and Capitalism (Cambridge: MIT Press, 2017).
Jürgen Habermas, The Theory of Communicative Action, trans. Thomas McCarthy (Boston: Beacon Press, 1984-7).
Friedrich Hayek, ‘Prize Lecture’, in Nobel Lectures, Economics, 1969-1980 (Singapore: World Scientific Publishing, 1992).
Karl Marx, Capital: A Critique of Political Economy, Volume 1, trans. Ben Fowkes (London: Penguin Classics, 1992).
Paul Mason, Clear Bright Future (London: Allen Lane, 2019).
Viktor Mayer-Schönberger and Thomas Ramge, Reinventing Capitalism in the Age of Big Data (New York: Basic Books, 2018).
Evgeny Morozov, ‘Digital Socialism? The Calculation Debate in the Age of Big Data’, New Left Review 116 (March–June 2019), pp. 33–67.
Paul Mozur, ‘One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority’, The New York Times (14 April 2019).