Talking About Games: On Moral Criticism
This was supposed to be a short piece.
“Art is anything you can get away with.”
—Marshall McLuhan, Understanding Media: The Extensions of Man
I. We Deserve Better
The problem with such a goal is that it begs its own question. Short? What counts as short? Need a shortcut to understanding something’s importance? Simply listen to how people talk about it. Discussions of politics and religion may fracture relationships because their stakes run so deep. Medical professionals and attorneys switch from lethargic to animated depending on how many lives or livelihoods are on the line. But while these are tangible matters with tangible consequences, they aren’t the only things that drag fury from their advocates. The catty duels waged by literary critics often outlast real wars, leaving battlefields that yield fewer prisoners. Their marshals estimate that the ground gained or lost is critical, perhaps more so than the temporary inches traded by the formalized workings of state.
Why? Because they reckon they’re fighting for the soul itself. For the culture. For the shape of our collective imaginings. They see how their medium, their art, can persist for centuries or millennia, can liberate or indoctrinate, can speak forth reality more assuredly than the levers of the real.
Yet the board game, our oldest excavated form of art, predated only by the developmental prerequisites of illustration, language, and song, remains trapped in such an infancy that the idea of working up a froth over some cardboard and plastic is almost entirely alien. And why not? Can you imagine competing philosophers composing bitter epistles over Jenga? Numerologists huddled in cold monasteries to calculate the value of Jumanji? Any reasonable person reduced to ascribing bitter, sordid motivations to the creators of these artifacts, anywhere outside the distorted funhouse mirrors of Kickstarter comment sections?
No. And why not? Because board games have failed the only definition of art that matters — that we talk about them as though they are actually, truly, honestly art. But to evaluate them as such, as artifacts that pierce flesh to inject their substance into the very veins of the soul, we must view at least some of them as bearing moral import. Guiding hands, wayward revels, correctives, misplaced footings. As long as they are only permitted to be playthings, that is all they will remain.
“The more one understands one’s own reactions the less one is at their mercy.”
—William Empson, Seven Types of Ambiguity
II. Three Synonyms
There is no such thing as a true synonym. Broad synonyms, yes. General synonyms. Ballpark synonyms. But when it comes to words, the slenderer the distinction between two terms, the more essential the gulf of meaning between them becomes.
Take three words: morality, ethics, and law. To some, these are synonyms. In particular circumstances they might be used interchangeably, as in the sentence, “This is why everyone hates moral/ethical/legal philosophy professors.” My own education began in bioethics, a field that spends half its time worrying about future technologies being utilized improperly and the other half worrying about the slivers of space that exist between a medical professional’s personal morals, the ethics of their profession, and the law.
I’ll clarify. When polled, people define these terms in a lot of ways. Morality might have something to do with religion, ethics is less hereditary, law is, well, law, and so forth. Telling answers, but still stuck in the vernacular rather than in precise language. By contrast, when someone like a bioethicist talks about these three concepts, it’s useful to place them on a line between individual and negotiated. There’s some temptation to use a Venn diagram, with overlapping bubbles for each. But a Venn diagram simplifies by overlap, whereas our imaginary spectrum simplifies in a different way: here the concepts are about tendencies rather than hard rules. Morality tends to manifest at the individual level. Perhaps it’s hereditary or based on religion or culture. But these aren’t its most important traits. Rather, its defining attribute is its personhood: morality is how a person figures they should act, and because of that, how they figure other people should act.
Ethics, meanwhile, takes us a step toward the far end of the spectrum. It’s still individual, even to the point that it’s driven by individuals. But it’s also more negotiated, often between groups of people in a shared profession or other organization. Law, of course, takes this further: it’s almost entirely negotiated by collective groups, and only represents ethics and especially morality by approximation.
This isn’t to say that laws can’t be individual. Judges, monarchs, and other legislators may impact laws in ways that are deeply rooted in individual feelings. Nor is this to say that morality can’t be negotiated. A person’s morality, like their identity, is always being negotiated with their environment. Rather, the point is illustrative. A medical doctor might obey the law to the letter, swear an oath that includes particulars not included in law, and have private feelings about specific procedures, patients, or best practices. These don’t overlap simply, as in a Venn diagram. They exist in tension with one another. Sometimes this is confusing. Other times it’s clarifying. At absolute best, this tension can function as iron on iron, sharpening our appreciation for each category.
“Okay,” you’re saying, “but what does this have to do with board game criticism?” Nothing yet. Because first we need to talk about authorial intent.
“What we term our character is based, indeed, on the memory-traces of our impressions, and it is precisely those impressions that have affected us most strongly, those of our early youth, which hardly ever become conscious.”
—Sigmund Freud, The Interpretation of Dreams
III. Intentionalism & Objects
Authorial intent is sticky. At one level, it presents a tremendous temptation. “Why don’t we just ask the author?” Very tidy, that question. Who would know the meaning of a work better than the author who spent countless hours laboring over it? Never mind that the author might be dead or deranged. That isn’t a problem we see too often in our modern golden age of board games anyway; let mortality be a problem for literary analysts, not board game critics.
Further, the past two centuries have produced plenty of methods for investigating an author’s intentions. As a historian, some of these appeal to me while others seem closer to crackpottery. “Context” is one of those ideas that appears with reliable frequency, although good luck finding many who agree on what counts as sufficient. The New Critics argued that a poet’s “tradition,” their body of work examined in publication order, was as close to context as one could desire — if indeed context mattered at all. Quentin Skinner spun hermeneutics into an entire catalog of methodologies, often varying by the subject’s religion, geography, or era; a sedimentary construct of one context built painstakingly upon other contexts. Marxists saw in every work of art the encoded sins of their creator’s day. Sigmund Freud wrestled long-deceased authors onto his couch to decipher their dreams and diagnose their innermost turbulences and attractions. Of all these, the most reliable would seem to be the author’s explicit statement. In board games, we have access to a tremendous array of material. Design notes and design diaries. Footnotes and bibliographies. Interviews and anecdotes.
Except here’s the about-face: I’m not a strong intentionalist, nor do I think it behooves most board game critics to rely too heavily on context and authorial statements when evaluating a work. This isn’t to say such things don’t matter. They do! But they shouldn’t be the final word on what a game means, especially morally.
I’ll explain. A formative moment in my approach to history and criticism arrived as an undergraduate when we were offered extra credit for sitting through an hour of Nazi propaganda, followed by a second hour of discussion. The film, an anti-Semitic tirade, was deeply distressing, both in terms of the images it portrayed and the language it employed: starving children in the Łódź Ghetto, Kosher slaughterhouses slick with black blood, caricatures of Jews as a thousand-year rat infestation. Some were unable to stomach it. There was no ambiguity that what we were viewing was evil.
Yet the entire experience was profoundly moral. The professor spoke of her family’s experiences in Germany and Poland during the Shoah. How they went from respected among their neighbors to pariahs. How they lost friends and family. How they were betrayed, how some of them fled, how others were consigned to the camps. How all of these atrocities were done by people who had come to regard them as vermin. How the same thing could happen here, in the United States. Could happen anywhere, because fear is as primordial as the shiver that runs along your spine when something rustles in the dark.
Before we continue, let me propose three questions that should be asked about any authorial statement. First, is the author telling the truth? Second, would the author even know if they weren’t? And third, can that author’s work be used contrary to their intentions?
In the case of that propaganda film, the answers to these questions prove illuminating. First, the director would likely have claimed that his account was truthful, because the purity of his nation demanded that a threat to its existence be rooted out at all costs. Second, he likely would have excused his embellishments and even outright lies as essential to arriving at the core truth of his documentary — and that’s only if he were sufficiently self-aware to understand that he was embellishing or partaking in a lie in the first place. And third, although the director’s work was used in accordance with his intentions for a time, it has since come to be employed as a direct repudiation of his life’s work. In all cases, the context mattered. The authorial intent was clear. But the moral function it now serves is an inversion of its original purpose.
In other words, a designer can announce what they mean their game to say. But it is the audience, in the process of interacting with the game, who determines whether or not it succeeds — or if it says something entirely different.
“Probably more people have thought Hamlet a work of art because they found it interesting, than have found it interesting because it is a work of art.”
—T.S. Eliot, “Hamlet and His Problems”
IV. Hopeless Subjectivity
When I wrote about The Cost as a moral game, one repeated response caught me off guard. Wasn’t this all subjective?
Certainly! And it’s easy to see how that concern arises. We live in an age that flirts with relativism while still striving to determine equitable rules and norms. I often joke about solipsism, especially in a communications context — that we as individuals are isolas of meaning who yearn to come together in impossible understanding. Moral critique must be subjective because everything is subjective.
To a degree. I’ve spent plenty of time writing about subjectivity as a virtue of critique. Now let me take the opposite stance.
I’ve heard a few developers make the argument that every game is somebody’s favorite game. It’s a nice thought, very sincere, but also completely false. It’s undeniable that individual taste plays a significant role in determining which games a person will like; de gustibus non est disputandum and so forth. But such extreme relativism would mean that there are no crowd favorites, no useful advice for new designers, no titles for which the prevalent criticism is “this isn’t for me.” This isn’t the same thing as deferring to conventional wisdom, which should often be kicked against, or as saying that oddball games are inherently bad because they don’t conform to norms. Rather, we’re talking again about how some techniques are more useful than others in the craftsmanship of design. Negotiation will be appropriate in some situations and not in others. Chance can prove thrilling in some games while fostering powerlessness in others. Certainty can prove competitively enlivening or stifling. Context matters. The thing’s fashioning betrays at least a portion of its function.
In other mediums, we call this an “objective correlative,” a term invented by Washington Allston and popularized by T.S. Eliot. The idea is that certain things, whether objects or characters or even narrative chains of events, serve to spark particular emotions. A cherished possession that goes missing; a loved one assumed lost who returns at the last moment; an infant’s discarded shoe; the low rumble of thunder on the horizon, growing louder and closer. These evoke loss, relief, worry, apprehension. Not universally, but close enough that they become reliable tools for poets and narrators. Why do the same stories repeat? Because they work.
The same goes for board games, both as playthings and as moral objects. Interacting with them may spark feelings and impressions that are entirely subjective, like ships washed up on the shores of our isolas. But these subjective inklings often point toward more objective ideals — not wholly objective, but cumulative, broadly applicable, general. Carefully expressed, these can result in best practices, mechanisms that work well in some genres but not others, even moral goals that ambitious designers may elect to pursue. Subjective thoughts, but not isolated thoughts. Thoughts that are able to communicate something meaningful to minds beyond the self.
And perhaps even reach out and touch them.
“The artist usually sets out — or used to — to point a moral and adorn a tale. The tale, however, points the other way, as a rule. Two blankly opposing morals, the artist’s and the tale’s. Never trust the artist. Trust the tale. The proper function of a critic is to save the tale from the artist who created it.”
—D.H. Lawrence, “The Spirit of Place”
V. Bridging the Spectrum
There are two principal goals, then, to moral criticism. The first is simple: to examine whether a game’s means are worthy of its ends, and in the process show you the contents of my soul. To say, “This made me feel something,” whether clarity or outrage or anything else.
A warning: such speech always leaves one bare. Someone always replies, “But that same feeling didn’t occur in me.” Firstly, there’s no guarantee it should; these are subjective matters. Secondly, there is nothing noteworthy about nothing happening. When I tell you my car was struck by a meteor, you might as well blurt that yours was not. We know your car remains intact, because intact is the default state of cars, just as unmoved is the default state of emotions. It’s in the process of revealing one’s cause for emotion that perhaps you too will see the reason and be stirred.
Which brings us to the second goal of such a critique: to move along the spectrum we proposed earlier. This is the process by which the individual becomes negotiated. When we talk about what makes a game good, whether as a plaything or a moral object, we cannot help but form objective correlatives between us. Here is what works and what doesn’t. Here is what makes a game accessible, or playable, or able to communicate something important. Some generalities are already becoming widespread: which colors aid the color-blind, that appropriate diversity can benefit a game, that historical games should be cautious of erasure, and so forth. These all arise from moral arguments.
In other words, moral criticism is about constructing a bridge of persuasion. Here is why something matters to me; here is why it ought to matter to you, too. The more persuasive the argument, the sturdier the bridge becomes. And although we live alone on our isolas of meaning, those bridges can stretch close enough that we can shout to each other.
In the next installment of Talking About Games, we’ll be talking about scope and relevance in historical game design and critique. But no need to wait — supporters can already find it on Patreon.