Models are cultures

Traditional software is culturally unaware, but deep learning models can grasp more of human meaning

When we train a machine learning model on a corpus of human-produced media, fine-tune it on specific examples, and give it feedback on its interpretation of these things, we are doing something similar to the production of a culture.

Historical human cultures were formed by a mixture of exchange, innovation and divergence. They gave rise to different languages, philosophies, artistic styles, economic systems, and forms of social organisation. Each new generation learns about the world from a mixture of personal experience and the stories and explanations given to them by the culture they grow up in.

Pre-literate cultures had very narrow bandwidth for communicating their ideas between generations, and used myth and storytelling to encode the information that they valued most highly. These stories were densely interwoven mixtures of factual history, conceptual world-view, and ethical guidance, maintained and evolved by poets, priests and prophets. Everything else had to be learned and passed on by practice and observation.

With the rise of literacy, it became possible to pass on some kinds of knowledge simply by writing it down. Even here, we rely on narrative, artful composition, and a good turn of phrase in order to make the information seem relevant to the people who must read it. As with myths, we mix the plain facts with the normative meanings that we give to these things. Literature doesn’t just tell us facts of history or science, but how people like us feel about things, interact with each other, and decide how to live and act. It’s still the case that literature leaves large gaps, where tacit knowledge is required for real understanding.

Large Language Models are, in a sense, hyper-literate: the only things they know are the things they learned from written sources (or, in the case of image or multi-modal models, from visual images). Compared to humans, they lack direct knowledge of “what it feels like” to do something, the tacit knowledge that can only come from personal participation.

But compared to regular software applications, LLMs have a vastly greater general knowledge of human concepts. A typical software application is conceptually narrow and abstract—it only knows about things like files, windows, fonts and so on. Perhaps a complex application like Photoshop knows about brushes and fills and layers, but this knowledge is painstakingly encoded by programmers, and is strictly limited to the narrow tasks that Photoshop is used for.

ML models are something different. The process of training produces a system that can manipulate symbols representing a broad range of concepts in a reasonably accurate way. Accuracy is sometimes about getting the right answer, such as the response to “what is 25 multiplied by 82”, but more often it’s about giving the appropriate response to a prompt. Appropriateness often isn’t objective, but depends on our culture. If I ask ChatGPT for advice on impressing my new work colleagues, that advice might be good if I’m in France, but bad if I’m in Japan, or Paraguay, or Tanzania.

Software developers aren’t used to thinking about things in this way—we like to think of our software as a universal model of the problem domain, with a thin layer of localisation on top, to use the appropriate language, date format, or currency symbols. In truth, almost all software is written in pseudo-English, and even if an individual programmer uses pseudo-Swahili or pseudo-Japanese to name their variables, they are still working with libraries, APIs, and operating systems whose core concepts are canonically named and described in English. This is only rarely commented on, but it’s a strange quirk of history that could easily be otherwise.
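
This “thin layer” is easy to see in code. Here is a minimal sketch, assuming the third-party Babel library (`pip install Babel`): the program’s concepts and identifiers stay in pseudo-English, and only the surface formatting of dates and currency changes with the requested locale.

```python
from datetime import date

from babel.dates import format_date        # third-party: Babel
from babel.numbers import format_currency  # third-party: Babel

def render_invoice_line(amount, currency, issued_on, locale):
    """The localisation layer: only the surface formatting varies by locale,
    while the underlying concepts (invoice, amount, date) are fixed and
    named in English, like the rest of the code."""
    price = format_currency(amount, currency, locale=locale)
    when = format_date(issued_on, locale=locale)
    return f"{price} ({when})"

issued = date(2023, 5, 1)
print(render_invoice_line(1250, "EUR", issued, "fr_FR"))  # French number and date formatting
print(render_invoice_line(1250, "JPY", issued, "ja_JP"))  # Japanese number and date formatting
```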


ChatGPT is fluent in multiple languages. Does that mean that it’s also fluent in multiple cultures? Jill Walker Rettberg looked at this question and concluded that although the model is multi-lingual—it can understand questions and give answers in a wide range of languages—it is monocultural, and that culture is Anglo-American. This is partly because the training data is predominantly American, partly because the “cleaning” and curation of the data set is more likely to filter out non-mainstream-American material, and partly because the “value-alignment” process was conducted by US-based contractors who instilled American value judgements. Her main conclusions:

I was surprised at how good ChatGPT is at answering questions in Norwegian. Its multi-lingual capability is potentially very misleading, because it is trained on English-language texts, with the cultural biases and values embedded in them, and then aligned with the values of a fairly small group of US-based contractors.

This means:

  • ChatGPT doesn’t know much about Norwegian culture. Or rather, whatever it knows about Norwegian culture is presumably mostly learned from English language sources. It translates that into Norwegian on the fly.
  • ChatGPT is explicitly aligned with US values and laws. In many cases these are close to Norwegian and European values, but presumably this will not always be the case.
  • ChatGPT frequently uses US genres and templates to answer questions, like the three paragraph essay or standard self-help strategies.

By weighting the training data toward different cultures, it would be possible to shift the cultural bias. Because training data is scarce, this is unlikely to be achieved by creating non-English data sets that are larger than the English ones. However, non-English sources of high quality could be given higher weighting, and this could be reinforced by culture-specific value alignment. Foundation models will probably still rely on English-language and Anglo-culture data sets for scale, but fine-tuning and reinforcement learning can teach them to use different cultural concepts.
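
As a rough sketch of what “higher weighting” could mean in practice (the corpus names, document counts, and weights below are invented for illustration), the probability of drawing the next pre-training document from a corpus can be made proportional to its size multiplied by a hand-chosen quality or culture weight:

```python
import random

# Hypothetical corpora; document counts and up-weighting factors are invented.
corpora = {
    "english_web":     {"docs": 1_000_000_000, "weight": 1.0},
    "norwegian_books": {"docs":     5_000_000, "weight": 20.0},
    "norwegian_news":  {"docs":    50_000_000, "weight": 5.0},
}

def sampling_distribution(corpora):
    """Probability of drawing the next training document from each corpus,
    proportional to corpus size multiplied by its up-weighting factor."""
    scores = {name: c["docs"] * c["weight"] for name, c in corpora.items()}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

dist = sampling_distribution(corpora)
print(dist)  # english_web still accounts for roughly three quarters of samples

# Pick the corpus that the next training document is drawn from.
next_source = random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]
```

Even with aggressive up-weighting, the English web corpus dominates this toy example by sheer size, which is why fine-tuning and culture-specific value alignment would likely have to do most of the work.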


Aligning generative models to national or regional cultures may come to seem like a natural state-building project. Just as nation states in the early 20th century wanted their own state broadcasters in order to promote and enrich their national cultures, they may want their own LLMs for broadly similar reasons. The web mostly didn’t work like this: much of the web’s content is available only in English, and many interactive web applications are English-only. Even where translations are available, the conceptual scheme is Anglo-American, or even more specifically Californian in many cases. But LLMs are much more polymorphic: they can change shape to match cultural norms far more easily than rigidly-programmed traditional software can.

This might sound fantastical at first, but the analogy is a pretty good way of grasping what the UK government is trying to do:

The government acknowledged the recent breakthroughs in large language models, the technology behind chatbots such as OpenAI’s ChatGPT—a sensation since its launch last year—and Google’s Bard, which has yet to be released to the public. It said it would establish a taskforce “to advance UK sovereign capability in foundation models, including large language models”.

Nation states without their own culturally-aligned models will find that their citizens are depending on the models of others, in much the same way that nations without strong native cultural institutions find that their citizens are increasingly influenced by other cultures. Of course, this is often no bad thing—cultural mixing is part of how culture evolves in the first place. But cultures which cease to produce their own works can gradually slide into obsolescence, a threat which seems to demand a response.

We could also imagine subcultural models, which reflect the particular virtues and values of a community of practice, a religious or spiritual community, or an artistic or aesthetic movement. This tweet thread contains a thought-provoking example:

Mikael Brockman 🥸 (@meekaale):

In this scenario, a Vatican-led LLM that is deeply aligned with Catholic teachings, particularly scholastic and neo-Aristotelian thought, could offer certain advantages over conventional postmodern morality, which is often adopted by corporations such as OpenAI and Google.

At an even more general level, such models might give real depth to the idea of “corporate culture”. A model trained on the particular values of, say, Bridgewater, might behave very differently to a model trained on the values of some other firm. Whether we want this kind of thing to happen may be an empirical question. Perhaps we prefer our value-neutral computing platforms—I can think of many reasons why we might. But, so long as models are pluralistic and user-controlled—far from a reliable assumption!—we should be happy with the prospect of computers that are able to express themselves in terms that we find more meaningful, and that can grasp the meaning of our requests more easily.
