Talking About Games: Critique Criteria
Words are weird. I still remember in the second grade when my father insisted I refer to a particular bodily function as “urination,” while my friends called it “going pee.” When I inverted those terms, both groups became upset at me for being gross. The meaning was unaltered. Neither word was particularly crass. But there I stood, excoriated for my choice of vocabulary. My lifelong terror of linguistic solipsism had begun.
Following up on our previous conversations about the meanings and importance of negativity and criticism, today we’re looking at three more concepts. This time, however, these are the broadest possible traits that should be found in any critique — the bare minimums, you could say. Although as you’d expect, we’re trading in ambiguities.
I. Expertise

Nothing allows more people more communicative power than the internet. Too bad nothing allows more dunderheads to pretend they’re experts than, you guessed it, the internet. Gone are the days when reading somebody’s opinion meant they were at least smart enough to con their way into a position at the local paper. Or inherit it. Look, the common thread here is that the internet may be new, but dunderheads are forever. It’s just that the former happens to elevate the visibility of the latter.
Because it’s often easier to define a thing by what it isn’t, expertise should not be taken to mean experience. Don’t get me wrong, experience is often part of the package. But there will always be value in the newcomer, the person who stands up to declare that not everything new is worse than everything old, there’s a reason all the kids are playing this Fortnite thing, and actually Citizen Kane looks like butt. Fight me, gramps.
So if expertise isn’t experience, what is it? Put simply, it means the critic knows what they’re talking about. In board game criticism, they’ve played the game in question, they understand how it functions (or fails to function), and they can express clear insights and experiences about it. If this sounds obvious, it should. How many reviews are actually first impressions pieces again?
Expertise should manifest in multiple ways. To borrow an outdated dichotomy, we can think of it as thematic and mechanical. In the first case, a critic should understand the subject of their investigation. What is the game doing? Does it succeed in its aims, or fail, or even work at cross-purposes? How does it feel? Are there irregularities or idiosyncrasies that require explanation? And so forth. The second case is more nitpicky but no less important: a critic should understand their chosen format and strive to improve their craft. Sufficient lighting and sound editing are mandatory for video and audio. Spelling and grammar are mandatory for the written word. Is that elitist? Since we’re literally trying to establish that certain people are experts, I guess so. The good news is that there are grammar editing programs for scrubbing your mistakes and helping you sound like a grammar editing robot. You too a good talker can be, human.
II. Objectivity and Subjectivity
I didn’t share my childhood tale of woe for no reason. The moral of that story was that sometimes words have different meanings, sometimes different words have the same meaning, and sometimes those same meanings are actually very different meanings. To an Ivy League-educated surgeon, “go pee” sounds crass because it deviates from medical precision. To a group of kids, “urinate” sounds wackadoo because those are the words of someone who intends to strap you to a mortuary slab. Same meaning. Different meaning.
By the same token, we often stumble when we talk about a little thing called “objectivity.” There’s this idea that certain people should be objective: impartial, fair, unbiased, disinterested, and so forth. And while none of these is completely achievable, because everybody lives in reality and therefore has their own upbringing and values and friendships and perspectives, they’re worthwhile goals. You don’t want your doctor trading out diagnoses based on your political beliefs, your news station deciding that particular groups can’t handle the facts, or your schoolteachers deciding that portions of the curriculum are only for the kids who pray to the proper deity. At some level, anyone who works in a position of authority ought to strive for some measure of objectivity.
The same goes for critics. While it isn’t possible to be entirely unbiased, a certain degree of disinterestedness makes for good critique. A critic who’s taken money from a publisher has an obvious conflict of interest. If they’ve influenced a design, they should probably disclose that. If their involvement was significant, they should recuse themselves. There are degrees and fuzzy lines — again, we’re talking in ambiguities, and there are even discussions to be had about whether review copies or purchased copies generate more bias — but it’s reasonable to expect objectivity.
Journalistic objectivity, anyway. Philosophical objectivity is an entirely different thing.
Oh, they’re related. If we could entrap these two forms of objectivity on a daytime television show and subject them to a live DNA test, everyone would gasp and jeer to discover that Bertrand Russell was the accidental love-child of René Descartes and Immanuel Kant. At a broad level, it works like this: whenever you have two things interacting with each other, the subject is the one observing while the object is the thing being observed. Sound familiar? Of course it does; it’s the foundation of how we talk about sentence structure. The boy looked at the rock. Subject: boy, object: rock. Verb: who’s to say?
Here’s where our two objectivities intersect. Within that linguistic pattern, most people identify with the subject. After all, the subject is doing something, while the object is, well, an object. The subject is the doer, the object is having something done to it. The subject brings perspective, the object just sits there.
But the subject is also a doubter. This is where those dead philosophers come in. Way back in the 17th century, René Descartes was an anxious kid who suspected the entire world around him was a theme park designed by demons to force him into awkward interactions with people he’d much rather not speak to, a feeling that was undoubtedly mutual. His only certainty was that he was a thinking being: even if everything he perceived was fake, the very act of doubting proved that a doubter had to exist, even if that made him the only real thing in the universe. “I think, therefore I am,” he said. Except he said it in Latin, which sounds cooler.
Jump forward a century to Immanuel Kant, who had a much easier time making friends than Descartes, and who tweaked his predecessor’s theory accordingly. Kant decided that while it was true that everybody lived a subjective experience, certain things could be known beyond the prison of one’s own skull. Like, say, tautological truths (“bachelors aren’t married”), mathematical proofs (2+2=4), or deductions. These a priori details, unlike their a posteriori counterparts, could be shared between thinking minds with great accuracy, even though those thinking minds’ personal experiences were totally different.
Go another century and we meet Bertrand Russell, who took this concept even further. Where once the subject was the realm of knowable reality and objects were the embodiment of the maybe-false world, Russell proposed an inversion: objects are real, but our perception of them is mutable, either true or false, and nobody knows but Bertrand Russell. The crucial linkage between them was what Russell called a “fact,” when one’s subjective belief struck upon an accurate reflection of the object. In this worldview, the desirable quality wasn’t subjective, but objective — unchangeable and unbiased. In other words, subjectivity and objectivity had undergone a switcheroo. When it came to statements of value, subjectivity and objectivity had effectively become contronyms, words that mean their own opposites. Like how “cleave” means both to cling to something and to chop it apart, or “critical” can be a necessary statement or a rude one. Of course truth is objective; of course truth is subjective.
What does this mean to a board game critic? Two very different things. First, as we discussed earlier, it behooves a critic to strive for objectivity in the journalistic sense with basic good practices like disclosure, not making up their mind about a game until they’ve played it, striving for accuracy when it comes to rules, and being careful about giving positive coverage to their pals.
At the same time, it’s also crucial to be subjective, because we aren’t talking about tautological truths, mathematical proofs, or ontology. We have no laws, in the scientific sense, to appeal to. Don’t believe me? Okay, smarty-pants, name a single objective virtue in board game design. Games should be short? Don’t agree. Player elimination is bad? Not always. Roll-and-move is terrible? Okay, besides that. I’ve had approximately one billion conversations about how I should be more objective, and every single one of them has gone like this:
Basement Dweller, Stained with Old Cheese: “You misrepresented this game! You should be more objective!”
Me, a Reasonable Human Being: “Prithee, good sir, what would you prefer me to have said in mine critique?”
Basement Dweller, Growing Corpusculent with Indignation: “You missed out completely on [some random very subjective thing he happened to like]!”
Me, Turning to the Audience in the Fashion of a Greek Monologue: “So you see, anyone who tells you that philosophical objectivity is actually possible in criticism is in fact too foolish to recognize that their own tastes and biases exist and are influencing their thought processes.”
Of course, all this talk about subjectivity isn’t to say that there are no good games, no bad games, or that particular indicators can’t help us draw a line between the two. It’s just that answering the question of quality isn’t as easy as labeling something “fun” or “good.” Really, a critic should strike such simplifications from their vocabulary entirely. After all, their task requires deep examination, not filler words.
III. Examination
When discussing historiography, one of the first questions prospective historians are tasked with answering is, “If our best histories are really just stories we’re telling to help illuminate the present, how can they be true?” The answer generally lies in proximity. No historian will ever wholly recreate what happened however many centuries ago. But hopefully, by being honest and thorough, one can approach truth. This takes work. Sometimes it requires new interpretive lenses that weren’t available previously. Sometimes it requires invalidating our assumptions, or questioning previous conclusions, or asking basic questions all over again. Nobody said it would be easy.
A similar ethos is helpful in criticism. When it comes to matters of taste, there is no universal metric of quality. Instead, truthiness is found through description, dissection, and discussion. Recall last month’s discussion of how a critic functions as a surgeon, picking through the details of a thing in order to seek out its heart. In that light, there are two possibilities for failure: failing to be comprehensive and failing to be defiantly subjective. By being roundly opinionated and as expressive as possible, one can impart an accurate reflection of the critic’s experiences with the game in question, even when the critic’s feelings don’t perfectly mirror those of their audience. In my own experience, there is no higher compliment than when I write something positive or negative with enough clarity that a reader recognizes the game in question would prompt the opposite reaction from them. The grand irony of such cases is that our gut reaction might be to shout about how badly we disagree, when in fact that disagreement is proof that the critic has done their job well.
And there are any number of valid lenses to examine a board game through. Even something as simple as asking, “Did this game do what it set out to accomplish?” is a mode of examination. Another common one is to think about games through a decolonial mindset. Others include asking what emotions the game elicits through play, whether it accurately reflects history or adapts a source work, or whether it engages with or comments upon our broader culture.
These three traits — expertise, subjectivity/objectivity, and examination — aren’t meant to be exhaustive. This is the starting point. And by way of example, sometime in the next month I’ll try my hand at looking at one of my favorite games through an entirely new lens.
In part eight, which is already available for Patreon supporters, we’re talking about Tom Felber’s farewell note to board gaming.