The Californian Ideology
In the closing months of 2022, OpenAI, a research laboratory based in California, United States, publicly released ChatGPT, a prototype chatbot interface that uses large language models to generate answers to prompts supplied by users. Enthusiasm over ChatGPT’s human-like responses in niche online communities turned the software into an internet and media sensation: in less than a week the tool surpassed 1 million users. Use cases multiplied, and commercial prospectors began to think of ways to harness the software to reduce overheads, drive sales or create new kinds of businesses.
While researchers of artificial intelligence (AI) and society have cautioned against the proverbial public road-testing of immature technologies, the cascade of proclaimed easy breakthroughs pressures people to adopt the product lest they lose out on first-mover advantages. Proponents offer the standard promise that refinement will come later, although experience tells us that “if an unreliable linguistic mash-up is freely accessible, while original research is costly and laborious”, future innovation will be driven by the “bad money”.
This story about the power of generative AI models shares features with other computational developments in the prevailing global algorithmic culture. Yet where blockchain-based technologies like crypto and non-fungible tokens (NFTs) appear to be primarily about creating a new class of assets, ChatGPT and other generators are promoted as poised to “diversify and intensify the creative powers of human labour”, perhaps even to the extent that we are “on the verge of qualitatively transforming the way in which we work, play, and live together”, to invoke Richard Barbrook and Andy Cameron’s essay, The Californian Ideology.
Although writing in the mid-1990s, prior to the dot-com crash, Barbrook and Cameron were prescient observers of how extraordinary capital investment into information and communication technology – guided by optimistic beliefs about private gains and public benefit – created conditions ripe for rapid, iterative product development. For example, Microsoft is notorious for shipping unfinished software packaged with its Windows operating systems, with the result that early users effectively act as unpaid software testers for the multi-billion-dollar company.
Captured in adages like “fail fast” or Mark Zuckerberg’s directive to “move fast and break things”, rapid iterative development in and through the public was deemed an acceptable business risk – even a necessity in lean manufacturing thought. The first benchmark was “good enough”; fixes would follow if the technology proved viable. Returning to ChatGPT, Sam Altman, a co-founder of OpenAI, is forthcoming about the limitations of the technology: “It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness”. But this narrow focus on the practical shortcomings of generative models may distract from more foundational critiques that have immediate implications for the prospects of collective political life.
Writing in the Age of Computational Reproduction
Given the close association between the US software industry and US universities, there is keen interest in how ChatGPT might shape knowledge production and immaterial work. Because the release of ChatGPT coincided with the end of the US fall semester, there was much discussion about how generative language models could alter essay assignments in the humanities and social sciences by lowering the threshold for fabricated writing and cheating more generally. More dramatically, Stephen Marche declared that “the college essay is dead”, a cry reminiscent of claims about the potential negative impact of Wikipedia decades prior. Academic publishers are also thinking about the potential ramifications for their routine practices.
Within the wider software development community, Stack Overflow – a website where programmers ask and answer software-related questions – had, at the time of writing, suspended ChatGPT-generated submissions, largely because ChatGPT’s code is not particularly good: “the average rate of getting correct answers from ChatGPT is too low”. Like members of the academic community, the site’s moderators harbour a twin worry: that junk answers will devalue the service if they flood the site, and that generative language models threaten human communication by drowning it in nonsense. Given the centrality of Stack Overflow to software production around the world, junk answers have knock-on effects for safe, effective production. There is wisdom to the moderators’ decision.
Admittedly, much of the exaggerated reaction in higher education is closely bound up in a nested conflict over the appropriate role of faculties that are frequently and readily dismissed for not being sufficiently vocationally oriented. For some, ChatGPT illustrates how science, technology, engineering and mathematics (STEM) fields have made the humanities obsolete. Yet as the case of Stack Overflow underscores, acquired judgement remains a keystone of trusted professional communities, the very kind of disposition humanities disciplines seek to cultivate throughout their curricula.
Even people pointing out the best examples of use cases are creating a selection effect – encouraging a skewed perception of the baseline utility of generative AI. This effect is magnified when these selections are shared on social media platforms designed to cater to popular engagement, not quality. Due to this social influence, mediocre ideas dominate in the short term. This is simple reproduction in which aesthetic instruments are reified as pure epistemological inquiry.
That the text outputs of ChatGPT seem plausible is another matter, albeit one of a different kind that speaks to qualitative aspects of social change. As researchers point out, generative models are prone to elementary errors in fact, logic, and reasoning. Indeed, their output “sounds right even when it is making things up”. Perhaps this is because our expectations around writing have changed greatly. A serious, impartial style is thought to be substantive, even if there is little substance to the text itself. Whether because the rise of the administrative state has left people disenchanted, or because of the proliferation of legalistic hedged disclaimers, there is little room left for poetics in formulaic writing. When did you last read a policy brief that left you impressed with the author’s style?
Eclipse of Reason
One core problem with large generative language models is that they are flawed in design and compromised during data collection. Beyond the inherent inaccuracies introduced by sampling effects, the costs of working at this scale mean that typically only economically powerful countries and multinational corporations can fund the development of these models. Well-documented inequalities associated with institutional prejudice and social filters shape who can work with these products, as well as the tacit assumptions that guide decision-making. The datasets are compromised in many other opaque ways too, especially when it comes to presumed ideological impartiality. Many of these kinds of issues have been identified in the various proprietary algorithmic risk-assessment tools deployed in US courts.
Presuming that most of these issues can be adequately addressed with inputs curated, ranked and vetted with justice foregrounded, generative large language models are still little more than sequence predictors. As Gary Marcus has cautioned, generative AI platforms are “unreliable and could create an avalanche of misinformation” because in a real sense these systems are a “pastiche” of prior human linguistic actions. “It merely predicts the statistically most probable next word in a sentence based on its training data, which may be incorrect”, writes Edward Ongweso Jr.
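To make the “sequence predictor” point concrete, consider the following minimal sketch. It is a hypothetical toy, not OpenAI’s actual system: it builds a table of word pairs from a tiny “training corpus” and then generates text by always emitting the statistically most frequent next word it has seen. Even this crude mechanism produces fluent-looking strings, yet it can only recombine its own past.

```python
# A minimal, hypothetical illustration of next-word prediction: a toy bigram
# model that always emits the most frequent continuation observed in its
# "training data". Real large language models use neural networks trained on
# vast corpora, but the underlying move -- predict the statistically most
# probable next token -- is the same in spirit.
from collections import Counter, defaultdict

corpus = "the model predicts the next word the model repeats the past".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, regardless of truth."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly choosing the most probable next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # fluent-looking, but only a pastiche of the corpus
```

Run on this toy corpus, the generator quickly loops back over its own training text – a pastiche in miniature, with no access to anything outside the data it was fed.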
Recombinatory sequences of words may have trace elements of what Jacques Derrida called a human signature in his book, Of Grammatology – there is, after all, a labour process that created these words – but they lack the full intent, context, and meaning of the texts that were subsequently incorporated into the datasets used in the modelling process. For that matter, there are limits to sympathetic critique when language models are so disconnected from speech acts and situations. Ultimately, “AI reinforces a different definition of what reality is and how it can be understood”.
Generative models are confined to the interplay of text, parameters and associations in their datasets. The premise is that current data about the world is sufficient to understand the world in the future. This is a grave error that leads to an unhelpful mixture of mimesis, regurgitation and the uncanny valley of meaning. It also leaves little room to account for the role of the not-yet-known or the unknowable. In that respect, the technology could be said to favour conservative temperaments as generative models encode the past assumptions of (and preferred readings by) the powerful as a kernel for textual outputs.
Let me conclude with this point: historical inequalities are already an evident problem in the training datasets used for AI, including generative language models like ChatGPT. This is another reason why it is an error to reduce patterned prejudice to matters of implicit bias and other kinds of cognitive effects. Doing so allows people to claim that changing their language will lead to equality. But the primary cause of contemporary social inequalities is the hierarchy established by a private property regime which legally justifies the domination of the many by a few. Injustices cannot be remedied through utilitarian inclusion. If ChatGPT really is a “preview of progress”, as Altman claims, then it is important to evaluate the subtle ideological properties of the generated text, properties that work against the possibility of establishing a post-capitalist society.