Game Design as the Last Refuge of Art, Part I
If you value human expression, ignore modern video gaming at your peril.
BREAKING: COMPUTER NERD COMPLAINS ABOUT AI DISCOURSE
A spectre is haunting our culture, my friends. No, not communism — I’m sure that’ll take a few more months at the least. I’m talking about video gaming and its rapidly evolving sociocultural impact.
I need not remind you that the world is changing at a greater pace than ever before, and in a commensurately unpredictable manner. The one-two punch of economic downturn and generative AI has already exposed dramatic shifts in humanity’s cultural awareness, again compelling us to reevaluate our direction in the present and our aspirations for the future. Like many of you reading, I find that taking refuge in great artwork is the healthiest and most effective coping mechanism during eras when popular aesthetic sensibilities are so thoroughly debased. This time, however, those sensibilities are both debased and ludicrously politicized. Terrific.
I must admit that, as a long-time student and practitioner of the information sciences, I find the discourse surrounding technological advancement in general and generative AI in particular almost overwhelmingly tedious. I try to stay techno-optimistic, but that’s a tricky balance to maintain when the tenor of one’s field is set by influencers who transparently lack even a foundational knowledge of their own supposedly world-altering innovations. I’ve therefore settled into my natural role as a kind of friendly skeptic who espouses belief in a positive future for AI technologies even though I have essentially zero faith in the current industry’s ability to get us there — far from piquing my scientific curiosity, the vast majority of industry press releases brazenly insult my intelligence. I find myself in the frankly unbelievable position of being a hobbyist programmer for whom actually existing AI does not improve my tooling or workflows.
But the worst part isn’t the breathlessly hyperbolic press releases or the credulous fossils writing op-eds thereabout. It’s the helpless-bordering-on-pathetic acquiescence of my peers. It’s the dudes in fintech declaring the end of human programmers as they vibe-code hideous, bug-ridden Web apps. It’s the project managers crowbarring worthless chatbots into every digital product. And above all, it’s the mind-boggling insistence that human creativity is being made obsolete or, worse yet, that this is somehow a positive development.
Aggravatingly, that speculation isn’t entirely baseless. I am consistently startled by the density of folk who claim recreational, intellectual, or even spiritual fulfillment from consuming AI-generated facsimiles of artwork. See, I locate the appeal of art in its human aura, which I consider necessarily absent from machine-generated content (we’ll talk about this in detail next week, by the way). It’s one thing to lament the fact that most people have a strictly transactional relationship with artwork and don’t think critically about it, but those are the same tired refrains that Nietzsche and Schopenhauer were trotting out 150 years ago. I’m more interested in what comes next. Whether or not machines can one day imagine and create in a truly humanlike manner, we’re staring down a future — at least for a while — in which machines are broadly assumed to already be so capable.
VIDEO GAMES AND THE EVOLVING MEANING OF ART
What does “art” even mean in 2025? There has surely never been a wider gulf in how people interpret that question. For folks like you and me, who read and write provocatively titled essays on Substack, the provisional answer is probably close to what Sparshott had in mind. Allow me to quote him directly in order to preserve his delightful alliterative flourish.
There is really no doubt about what [great works of art] are for. If they were not made for [our delectation]… at least that is what they are produced and promulgated, preserved and prized for; they are expected to provide worthwhile experiences merely in being listened to, looked at, or read.[1]
In other words: who the hell cares about objective standards? Like Justice Stewart’s recognition of obscenity as that which could titillate his Episcopalian libido, the aesthetically inclined Westerner tends to recognize art as that which can stimulate his nebulous fantasies of bohemianism. Said archetype has so thought for at least as long as MGM has invoked the phrase ars gratia artis to justify the expense of filming actual goddamn lions for the sake of their five-second studio ident.
But today, in our age of consumer-facing content generation algorithms, the old intuitions no longer suffice. Does the man who generates an image of a weeping, Ghiblified fentanyl dealer for social media expect it to provide worthwhile experiences in its being looked upon? If so, then worthwhile to whom, exactly? And precisely how so? Are we to preserve and prize it? Are we meant even to remember it the following day? If so, toward what end? If not, why are we bothering?
The technological reproduction of artwork will continue to proliferate whether or not the Western AI industry manages to yank its head from its own cavernous ass. Even in the most bearish case, in which the bubble bursts and causes another winter for AI research, we’ve already had our tantalizing first taste of the AI narcotic. Like any dragon-chasing fiend, we’ll come crawling back to our enablers before long. And even if the bank forecloses on the trap house altogether, unregulated alternatives will still be available on the Chinese grey market.
It’s become almost clichéd to say, but the sociocultural institutions that drove the past half-century are now aflame beyond all hope of firefighting, and their ashen ruins are unlikely to slow the relentless march of technology. The modal consumer will not or cannot draw any thoughtful distinction between algorithmic generation and his own imagination. What will remain of authentic human expression when our collective artistic heritage is digested and regurgitated for his dubious benefit?
Why, video games! That’s what. Even in this dissolute age of Minecraft-inspired theater terrorism, high-minded and intellectually fulfilling video games have not gone anywhere. On the contrary, they are released in greater numbers each year, and to greater acclaim. I’ll grant you that brilliant works of ludonarrative art like Baldur’s Gate 3 and Disco Elysium have not seized the popular imagination to the same degree as Fortnite or Minecraft, but then, Le Guin and Pynchon have never caught the popular imagination like J.K. Rowling or Stephenie Meyer either. To this day, I hear otherwise insightful pundits dismissing the cultural importance of video games on the basis that the most commercially successful ones are not intellectually stimulating. This is like dismissing the cultural importance of cinema on the basis that “everything’s a Marvel movie now,” or dismissing that of literature because Twilight sold 160 million copies.
I mean, for God’s sake, folks — Disco Elysium won three fucking BAFTAs. And why shouldn’t it have? The conceit of customizing a protagonist’s psychological profile before the story begins is, as far as I’m concerned, inherently compelling. More importantly, it could only possibly work in an interactive medium that actively engages its audience. And most pertinently of all, it’s a completely novel dramatic concept whose successful implementation represents the synthesis of many hard-working creatives’ passionate ideation over several years. Given all that, perhaps you can understand why I don’t think o3 constitutes AGI.

At this precarious moment in history, humankind needs more than contrarian punditry if we intend to preserve any meaningful aspect of our creative heritage. Over our next couple of installments, I’ll argue that interactive media in general and video games in particular represent a final, impregnable bastion against the despoiling horde of machinic grift that would embrace bed-rotting slop culture as the promised destination of transhumanism. But before we get into all of that, we need to set the record straight on the near-term future of commercial video gaming.
TWILIGHT OF THE STUDIO SYSTEM
I’ve thought about Ryan Rigney’s reportage from this year’s Game Developers Conference almost every day since late February.
“The doors are locked, and the interior lights are off,” begins a harrowing description of Ubisoft Entertainment’s hastily shuttered San Francisco office. “[B]ut outside there are three lamps angled in to illuminate posters for three Ubisoft games. How long will these lamps remain on? Probably longer than XDefiant [the subject of one such poster], a game that was killed off barely six months after release.”[2]
Ubisoft’s tribulations are not remotely unique in today’s high-level game development landscape, but they are noteworthy for escaping containment and courting significant attention from the non-gaming mainstream. Game production has been almost universally acknowledged as a commercially viable enterprise throughout the new millennium, but its meteoric growth over that timespan has predictably exposed the industry to a glut of bad faith. As with most commercial media industries of today, big gaming’s C-suites now select for cynicism and avarice rather than any sincere commitment to artistic integrity, or even to customer satisfaction.
This was inevitable. The data don’t lie: for well over a decade now, the most commercially successful games have consistently privileged expensive spectacle over engaging design, except insofar as that design can be made psychologically habituating to children and other vulnerable consumers. The problem for the industry is that the dominant strategy is clearly failing: Ubisoft released four separate nine-figure flops in less than a year, and the decent performance of Assassin’s Creed Shadows seems to be the sole reason why Ubi retains any investor confidence whatsoever. Additionally, regulators throughout the world are clamping down on many of the industry’s most reliable money-printers, like microtransactions and gambling mechanics. It’s only reasonable that many commentators would point to these developments and predict the forthcoming collapse of video gaming.
But I remain optimistic, because I see fertile ground for the industry’s disenshittification amid the effluvial remains of its decaying status quo. We’ve seen this sort of thing happen before.
THE NEW GOLDEN AGE
When Ben-Hur saw its theatrical release in November of 1959, I imagine the suits at 20th Century Fox must have laid their fists to the drywall as they prepared to begin principal photography on Cleopatra.
“Damn it to hell,” burbled a Scotch-drunk studio executive through flapping jowls, probably. “They must have cast ten-thousand extras in this bastard. How on Earth will we get those drooling tractor-pilots back in their theater seats with a goddamn Cleopatra flick?”
“Ooh, what if we cast twenty-thousand extras?” offered a soon-to-be-promoted colleague at the expense of his mortal soul. And it was so. How’d that pan out?
“Never for an instant does it whirl along wings of epic élan; generally it just bumps from scene to ponderous scene on the square wheels of exposition,” wrote Time magazine in a hilarious 1963 review of Cleopatra that could easily describe Ubisoft’s last dozen single-player releases. The American public was generally delighted by the expensive spectacle, but not vociferously enough to justify repeating the profligate expense. Meanwhile, European cinemagoers snootily turned up their noses at rates seldom before seen. Then as now, existential questions had to be asked about the future of the medium at hand.
I believe that video games are soon to undergo a sea change like the one cinema underwent in the seventies. I refer specifically to the thermostatic shift in cultural perception that spurred a renaissance of auteur-driven cinema, answering the sixties’ studio epics with ars gratia artis like Taxi Driver and Eraserhead. Even comparably bombastic productions like The Godfather pivoted toward the high arts of direction, cinematography, and score to produce their spectacles in a manner foreign to the moribund studio system.
In today’s commercial gaming sphere, daring, low-budget indie titles now turn huge profits far more reliably than their AAA counterparts. The high-profile games one sees advertised in public may bring in hundreds of millions in revenue, but their similarly inflated budgets mean they barely break even, or indeed lose money outright. Meanwhile, the means of video game production grow cheaper and more accessible each year, and the independent scene regularly produces noteworthy commercial successes with dramatic profit margins.
The AAA video gaming industry is soon to collapse under the weight of its farcical largesse. Like many dedicated patrons of the craft, I interpret recent decisions to again hike the standard price of new game releases as a craven attempt to rebrand games as luxury products by pricing out the non-wealthy. It’ll probably work in the short term, and we may soon bear witness to a commercial gaming juggernaut whose audience is composed mostly of the moneyed LinkedIn class and its addled, illiterate spawn. And before long, they too will lose interest and move along. The rest of us will look on for a while in bemused disgust, and then return to whichever affordable independent release best captures our interest. We will be spoiled for choice.
And when the masses no longer acquiesce to the runaway corporatism of high-level game design — when the wool is pulled from our eyes and we see video gaming for what it is and what it may still become — the existing order will be cast by the wayside like so many of modern history’s other failed institutions. This is the unfolding opportunity that awaits the young Scorseses and Coppolas of the video gaming medium.
Next week, we’ll talk about how I envision the practical realities of this future, specifically as they pertain to the inevitable changes in mass perception that will characterize it. We’ll conclude with my theses on why video games represent their own category of artistic achievement and why that leaves them in the best position to resist the ensloppification of popular culture.
’Til next time <3
[1] Francis E. Sparshott, The Theory of the Arts, Princeton University Press, 1982, p. 3.
[2] I.e., it was such a pathetic commercial failure that it didn’t even make budgetary sense to keep running the servers.
My tepid take on the AI alarmism, particularly re. art, is that commercial art has always served as a form of conspicuous consumption, i.e. as a way for wealthy individuals and institutions to signal status. This was very much the case in the Renaissance: many of the greatest and most costly works of art ever were commissioned by the church as an elaborate flex, and meanwhile wealthy and powerful bankers etc. were funding artists and scholars not just because they loved art and learning, but also in order to one-up each other and show that they could. Being a patron of the arts was high-status because art was costly, and it was costly because it required enormous amounts of human labour and talent.
On the other hand, AI art in any kind of commercial project is already seen as a mark of extremely low status; it signals that your product (whatever it may be) is of very low quality, because if it were high quality, wouldn't you go to the trouble of paying for real art? Even if people hadn't already formed a very hostile anti-AI consensus on basically visceral grounds (which pretty much guarantees that commercial use of AI art will hurt sales), AI art would still end up being seen as low status simply because it's so cheap.
It's like lab-grown diamonds vs. mined diamonds: consumers actively prefer the mined diamonds not in spite of them being more expensive, but BECAUSE they are more expensive, and they're more expensive because they required backbreaking labour to extract. On some level, your average western diamond-buying normie likes the fact that children die digging these things up.
Anyway, a very good post all round. I like the cinema comparison, and share the feeling of cautious optimism for the future. Smaller, younger studios are doing incredible things, and it's great to see. You could not pay me to play a Ubisoft game, but on the other hand you couldn't tear me away from KCD2.
I'd been hoping the AI tech bros would be slapped down as quickly and as humiliatingly as the NFT peddlers (and yes, there is a massive overlap between these two groups). Sadly that hasn't come to pass.
There are plenty of legitimate use cases for AI in many fields but the relentless focus on deploying it in the creative arts is, IMO, a giant bummer.
I saw a Reddit post yesterday with some jerk promoting his 'AI-Powered Newsletter Generator for Creators', which promised to write your newsletter for you to 'save time'.
Handy, no?