Can we train moral senses with video games?

Aug 04, 2016

A few days ago I pondered this question, in response to the claim that people don’t make moral choices very often and so are confused when required to do so.

Following Betteridge’s law, the answer to the title of this post is probably ‘no’, but I am hoping to persuade you that the answer might at least be ‘maybe, in the right circumstances, subject to the desires of the participant and the structure of the game, and depending on how you measure moral sensitivity anyway’.

For years, we have had video games that allow people to do things that would be considered harmful, rude, criminal or distasteful in real life - just as we also have games that allow people to do things that would be heroic, admirable, funny or altruistic. A lot of games sit in an ambiguous zone where we’re permitted to do horrible things (e.g. shooting people, blowing up planets, lying, stealing) in the service of some higher cause that makes it OK (saving the world, saving the princess, stopping the villains from whatever villainy they’re up to now).

It’s not possible to play Doom without shooting aliens, at least not if you want to win or even survive for more than a few minutes. Likewise, a principled opposition to football will make you a pretty dreadful FIFA 16 player. Playing these games involves suspending whatever behaviours you might normally practise in order to do extraordinary things, and this is precisely the point of playing.

What these games do is incentivise certain behaviour. If you don’t shoot the aliens, you die. If you don’t kick the ball, the other team’s players will kick the ball and eventually they will kick it in the goal and then they will keep kicking it in the goal until you lose 36-0 and the newspapers write about how you’re really bad at football.

Still, these aren’t really complex moral choices. The aliens aren’t real, and they’d kill you and your friends without a moment’s thought, so killing them seems perfectly justified. Try as I might, I can’t really construct a moral dimension to football simulators either. We respond pretty mechanically to the incentives in these games, or we choose not to play them, but we’re very rarely conflicted about any of it.

There are some games which - intentionally or otherwise - create moral conflicts as part of the game. You might be given an incentive to do something that you consider to be immoral, or conflicting incentives that require you to balance harms against each other. Many of these are similar to the (in)famous trolley problem, a family of philosophical thought experiments in which you can save some number of lives at the cost of another person’s death, brought about either by something you do or by something you fail to do.

These thought experiments are interesting because they can reveal our unconscious thought processes - we discover that there are circumstances where we think it’s OK to let someone die so long as we didn’t take any action to cause it, or that if we have to choose between one person dying and five people dying, we might choose to kill the one person to save the five, or that whether we can bring ourselves to do so depends on the method (pushing a person off a bridge vs. redirecting a train, for example). What we get out of this is self-knowledge. I often find that I’m surprised by my own reaction to trolley problems - “I did not know that I would think that!”

Role-playing games often use moral decisions to add “flavour” to the player’s character - you’re explicitly being asked to decide if your character is the kind of person who would kill innocents just because they’re in the way, or would take the more difficult path simply to avoid harming someone. These can work well, but many game worlds are sufficiently suffused with random violence that these choices can seem unimportant (sure, I spared this particular innocent but 5 minutes later got attacked by a group of bandits who I hacked down before beheading their leader… Am I still the good guy?).

I suspect most people think of moral choices in games as being about consequences. The choice - as in the trolley problem - is presented in stark terms: do this and beloved character X dies, do something else and some other less-important people die, or you risk failure in your mission, or you break an oath sworn to the President or something. You can go into “moral choice mode”, have a bit of a think, then choose your path and go back to playing the game, in a world which is slightly different as a result of your earlier choice.

What I find most interesting are games where the moral choice is made interesting by incentives. Instead of the trolley problem, I’m thinking more of the Stanford Prison Experiment.

Let’s take a game like Rome: Total War. In this game, you’re the leader of a faction (if you’re playing as the Romans, your objective is to match or exceed the achievements of the real Roman Empire). So, you start out in Rome, doing as the Romans did, capturing territory and subduing the local populace, first with garrisoned troops and then with bread and circuses. You can build temples and arenas to improve the happiness of the population, and adjust the tax rate to what they will bear. If the populace becomes unhappy they may riot, and after a couple of turns of rioting the province will revolt; the only ways to put down a riot are to garrison more troops or to cut the tax rate. There’s a formula for happiness in which higher populations and higher taxes make people less happy, and better temples/arenas and bigger garrisons have the opposite effect.
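To make that incentive structure concrete, here’s a minimal sketch of what such a happiness calculation might look like. To be clear, this is not the game’s actual formula - the function, coefficients and riot threshold are all invented for illustration:

```python
def happiness(population, tax_rate, temples, arenas, garrison):
    """Toy per-province happiness score (invented for illustration;
    not Rome: Total War's real formula)."""
    unrest = 0.02 * population + 50 * tax_rate        # scales with people and taxes
    order = 10 * temples + 8 * arenas + 2 * garrison  # capped by what you can build
    return 50 + order - unrest

# A prosperous late-game province: lots of people, high taxes, maxed-out buildings.
score = happiness(population=12_000, tax_rate=0.8, temples=3, arenas=2, garrison=10)
if score < 30:  # assumed riot threshold
    print(f"happiness {score:.0f}: the province riots")
```

The only point of the sketch is that the negative term scales with population while the positive terms are capped - which is exactly the trap described next.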

At first, all is well. You build temples and arenas and the people love you. The troops march on - to the Rhine, or Persia, or some frontier where they can be usefully expanding the empire, that being what empires need their troops to do. But peace and prosperity bring population growth, especially if you expand your farms for all that lovely tax revenue. While population growth is uncapped, the size and number of your temples and arenas are limited, so you end up cutting taxes to buy off the dissent. This works for a few years. Then you need to garrison troops. Then more. Before long, you’re having to weaken the frontier just to keep order in some backwater province miles from anywhere strategically important. This is no way to run an empire!

It turns out that the only good strategy here is to wait until the province passes some critical population threshold, then whack taxes up as high as they’ll go and withdraw the garrison. Riots ensue. Then more riots. Then a full-scale revolt, as some jumped-up pleb declares himself governor. Then, well, you move the army in and easily crush the rebels. And then, you kill about half the population. It takes twenty years for the numbers to recover enough for anyone to think about rebelling again.
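With a toy model like the one above, the arithmetic behind this strategy is easy to see: unrest from uncapped population growth eventually swamps every capped bonus, and the one lever without a ceiling is the population itself. Here is a rough turn-by-turn sketch of the cycle, again with every number invented for illustration rather than taken from the game:

```python
# Toy simulation of the boom-riot-cull cycle; not the game's actual model.
population = 4_000
growth_rate = 0.03   # population compounds every turn, with no cap
max_order = 120      # temples, arenas and garrison bonuses are all capped

for turn in range(1, 81):
    unrest = 0.02 * population
    if unrest > max_order:
        # The incentivised "solution": let the revolt happen, retake the city,
        # and cull the population so the cycle can start again.
        population = int(population * 0.5)
        print(f"turn {turn:2d}: revolt crushed, population culled to {population}")
    else:
        population = int(population * (1 + growth_rate))
```

Run it and the same pattern drops out: a crisis every twenty-odd turns, each one ‘solved’ by halving the population.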

None of this is in the game manual or tutorials. The “escalating disorder” problem was clearly a game mechanic designed to make imperial expansion more difficult, and deliberately provoking rebellions so that you can massacre your own people was probably not the solution that the game designers imagined. But it’s the one that they incentivised, and it turns out to match how at least some real-world emperors and dictators have behaved throughout history. One can read about this in a book and believe that these people must have been unspeakably evil, possibly insane or possessed by some bloodlust. The notion that it might have been a rational strategy only comes to light by playing the scenarios oneself.

So, does that mean it’s OK to kill civilian populations? Well, no - obviously! What it shows is that evil can come from incentives, and that video games are one of the few ways to expose ourselves safely to many of these kinds of incentives. Games provide us with a superstimulus of incentives - the rewards and punishments for behaviour are immediate and exaggerated, so effective that the term gamification has been coined to describe the practice of using game-like mechanisms to influence real-world behaviour.

What one gets from an experience like this is the following sequence of thoughts:

  1. I want to win the game
  2. If I do this, I’ll be more likely to win
  3. Hmm, this is pretty distasteful
  4. But if I don’t do it, I’ll lose! And, like, I’m meant to be a Roman Emperor here, c'mon!
  5. Hey, this is why the Roman Emperors were terrible people - they had the same problems I do!
  6. OK, I won. But I did kill an awful lot of innocent peasants, and I feel a bit morally conflicted about this
  7. Wow, this game made me feel morally conflicted. This is a good game!

It’s clearly a stretch to say that, if I were ever faced with a comparable real-world situation, my game-playing background would let me see through an incentive structure that wants me to do bad things. But I think it might make me a bit more aware of the possibility, and of the way incentives and situations can be structured to make the bad choice seem inevitable or justifiable when it really isn’t. And, as I said at the beginning, that’s really all I’m setting out to argue.