For AI in Game Production, Transparency is Paramount
Going into 2026, progress is inevitable. Slop is very much not.
MACHINE DREAMS
Just about every creative industry is in a weird-ass place when it comes to the role of generative AI lately. If you ask me, the weirdest ass of them all belongs to wherever the games industry has found itself in 2025. In between great games marred by avoidable communications failures and forgettable games languishing in the slop pits, it’s been a banner year for controversy. I’m not the least bit surprised by that, of course — since video game production is a fundamentally interdisciplinary art form and a huge driver of commercial profit, it was never going to escape the morass of quarreling that now follows large language models and automated content generation wherever they tread.
And now that the S&P’s most dominant corporations are all using datacenter capex to settle history’s most expensive dick-measuring contest with TPUs instead of rulers, it also comes as no surprise that industry figureheads are trying their damnedest to convince the public that resistance is futile. Prominently, Epic Games CEO Tim Sweeney made headlines last week for decrying Steam’s AI disclosure policy on the supposed basis that AI “will be involved in nearly all future [game] production,” a take that seems tailor-made to piss people off. Relatedly, it’s the nature of AI disclosures and the importance of transparency more generally that I want to focus on this week.
As I’ve mentioned before, I can hardly bear the competitive shit-flinging that characterizes what passes for discourse around generative AI, so I try to keep my distance and hardly ever talk about it in my writing. That’s not for lack of opinions, though. Given my informatics education, my proximity to the software industry, and my general distaste for the cultural footprint of corporate managerialism, I can’t help but overthink it all. I try to keep this publication’s focus on celebrating the artistic achievements and horizons of game design, but I can hardly deny that there’s a pretty significant intersection between that subject and the unignorable specter of AI in the games-adjacent creative industries. So, let’s talk a little about it so that I can get it off my chest as 2025 nears its end, and then we’ll return to something more culture-positive next week.
ENHANCEMENT VS. REPLACEMENT
We’re living through an era of extraordinary tension between consumer, creative, and managerial incentives, and that tension is particularly stark in the video games industry. AI might as well be the contemporary figurehead of that tension which, by my reckoning, arises from the none-too-subtle distinction between enhancing the creative process and using it to replace human creativity altogether. Consumers in general, and gamers not least of all, clearly recognize this distinction even if they don’t always articulate it.
It’s well understood by now that AI-adjacent tooling is commonplace in game production because it can have a demonstrably positive impact on developer experience and, by extension, creative productivity. Indignation didn’t really set in until the rise of wholesale asset generation that was sold as a replacement for that process. Thing is, there’s a substantial and relatively intuitive difference between technology that enhances the creative workflow and technology that regurgitates an inference over training data to produce a facsimile of that creative workflow’s product. To better illustrate it, I come prepared with a recent example.
Embark Studios’ ARC Raiders was released at the end of October to considerable acclaim, and it’s notable for innovating with one incarnation of AI tech while stoking controversy with another. As a quick summary, a machine-learning approach was used to train the game’s many-legged robot enemies to move naturalistically across unpredictable terrain, and the game also includes a trove of text-to-speech voicelines that were based on the work of real actors but implemented with generative AI.
I read the former as a clever and valid application of machine learning descended from approaches that game designers have used for ages. It would’ve been absolutely miserable to manually calculate and implement all that skeletal animation (if such a thing were even possible), and the effect usually looks pretty great in action. Besides, a machine learning-based approach like this can’t just be lazily generated. It involves the same process of ideation, iteration, and refinement required of any software development project. And unless I’ve drastically misunderstood the nature of Embark’s specific approach, it didn’t give upper management an excuse to reduce headcount and it didn’t play fast and loose with anyone’s intellectual property. It’s AI as it was understood before GPT-3.5, the kind that enhances the creative vision behind the product, and I see no reason not to get behind it.
Meanwhile, the AI-generated voicelines tend toward the uncanny and preposterous, and they add essentially nothing to the experience. One wonders why on earth Embark made the decision in the first place while surely aware in advance that it would piss people off and presumably cause a non-trivial reduction in sales from the sort of gamers who quite rationally refuse to support this kind of thing. The voice actors on whose talent it was trained apparently consented to and were compensated for this, but I don’t suppose for a moment that they were given practical latitude to refuse. And above all else… I mean, for the love of God, at least half of these generated voicelines sound like shit. The intonation is off, the timbre isn’t believable, and I’d expect less wooden delivery from a middle-school theater production. Despite three years of ceaseless hype and a trillion dollars invested, the machines remain decisively incapable of performing human emotion at anywhere near the level of a voice actor. I’ve seen more believable acting from Microsoft’s social media feeds.
Writers elsewhere have made compelling points about how this off-putting implementation of GenAI is conspicuously at odds with the premise and spirit of the game itself, i.e., of humankind resisting displacement by unfeeling machines. So too is the game’s pithy AI usage disclosure on Steam, which evasively claims that the game “may” use AI-based tools and that such cases reflect “the creativity and expression of our own development team.” Yeah, I’m not buying it, chief. It strikes me as a barely concealed effort to minimize consumer displeasure at a decision that management knew in advance would be unpopular. But at least the game is pretty great in general terms, and at least the team made a pretense of supporting the flesh-based creative process.
Oh, well. Guess we’d better talk about Activision’s latest nonsense.
AGAINST A SLOPPIFIED GAMES INDUSTRY
So, Call of Duty: Black Ops 7 isn’t looking too hot. As an avowed supporter of human creativity, gameplay innovation, and reasonable production budgets, I’m certainly not going to pretend that I was expecting much from the latest installment of gaming’s most creatively sandblasted IP. BLOPS 7’s pivot to AI-driven development wasn’t a big surprise either, given the practically existential investment that its primary stakeholders have made in the technology. But, good God, I didn’t expect it to be quite so shameless. I won’t relitigate the whole story, but suffice it to say that some official assets in this $70 video game are frightfully reminiscent of the laziest generated garbage one sees on social media, and it’s not a good look. A cursory search through YouTube turns up reams of evidence that textures, animations, and perhaps even whole cinematics were entirely prompt-generated. Now, let’s take a deep breath and have a look at the game’s AI disclosure:
“Our team uses generative AI tools to help develop some in game assets.”
Good God, that must be the most loaded sentence I’ve read in all of 2025. The word “some” alone is working more overtime than the average triple-A game developer.
Here’s a little food for thought: given that Activision Blizzard and its parent company Microsoft are so unrelentingly GenAI-positive in their earnings calls and press releases, why weren’t they bragging about the specifics of BLOPS 7’s AI-driven approach to development and singing the praises of their supposed innovation in the field of enterprise game design? Well, if Occam’s razor is any indication, they’re very deeply proud of the profit margins it’ll supposedly enable but not particularly proud of the product that they’ve ultimately produced. And I must say, the game’s mediocre sales and increasingly anemic player count suggest that they were absolutely right to be disappointed in the product itself. Footage of BLOPS 7 is a gruesome, Orwellian window into the harrowingly dull future that industry executives envision for high-level game development, so it’s encouraging to see that it’s provoking measurable displeasure even among the core audience.
Here’s the thing: for all the pronouncements that GenAI is inevitable or that it enhances productivity, major studios and publishers clearly recognize that most of their customers don’t fuck with machine-generated slop, and slop is the inevitable result of replacing human-based creative labor with machine generation. This would certainly explain why BLOPS 7’s AI disclosure was so desperately malnourished. It feels skin-crawlingly duplicitous, and there’s already at least one prominent example of someone getting a whole-ass refund on account of the disclosure.
I’m not here to argue that modern AI is an entirely worthless technology. It’s already demonstrated some net-beneficial use cases under narrow, well-defined conditions. But that’s precisely why transparency about its use is so important: it gives us as consumers the information necessary for distinguishing between tech that makes better games and tech that cynically increases profit margins at our expense and at the expense of those games’ creators. Recent history has repeatedly shown how a lack of transparency is indicative of a studio or publisher with something to hide, and it’s absurd to expect the general public to embrace a technology they’re being openly lied to about.
Conversely, meaningful efforts at transparency reflect a genuine interest in using that technology for mutual benefit. Progress requires experimentation, after all. A lot of good-faith experiments into AI usage will inevitably fail, and it’s entirely rational to question whether or not a given result was worth the trade-offs. So, before we wrap up, let’s look at a game that’s doing almost everything right.
A CASE STUDY IN EXCELLENT TRANSPARENCY
Good news, nerds! I’m talking about Shadow Empire for a second consecutive week. Bear with me, I promise this will be interesting even if you’re not into geopolitical roleplaying.
To briefly recap, Shadow Empire is a strategy game built atop a complex simulation that I like very, very much. There’s no need to painstakingly reexplain why for our purposes here, so suffice it to say that it’s an ambitious work of creative and innovative game design largely made by a single person. It was released in late 2020, well before GPT-3.5 and the corresponding avalanche of generative tech that assaulted the popular culture like so many Visigoths at Rome’s gates. It’s also an undeniably niche product targeted at a limited audience, and it had a commensurately limited budget. So, when it experimentally introduced machine-generated character portraits a few years post-release and provided a conspicuous option to toggle them off, I was intrigued far more than I was disappointed.
Why? Well, for one, because I could tell right away that I liked the human-made portraits better and was glad that I could swap back to them with zero friction. It also demonstrated an interest in using the technology to meet the practical barriers of solo development halfway rather than as an exercise in pleasing shareholders. But above all else, VR Designs was uncommonly straight about the specifics of its AI strategy and comprehensively documented where and how it was used. As usual, you can read it on the game’s Steam page if you’re so inclined — it’s a bit too long to reproduce here.
Now, would I have preferred that VR Designs and Matrix Games contracted one or more human artists to put together the requisite assets from scratch? Yes, of course. I want to see more creatives getting paychecks, period. Also, I find the original human-made artwork from 2020 to be substantially more grounded and thematically appropriate. Besides, it mercifully lacks the uncanniness that pervades machine-generated portrayals of human beings no matter how many teraflops we cram into the AI industry’s cavernous gullet. At least I know that no personnel were replaced to fill in these holes, that it’s not making calls to GenAI tools in-game or harvesting my data, and that I can shut it off or mod it out at my leisure. For those reasons, I’m sure as hell going to keep playing Shadow Empire, and I’ll look forward to supporting the dev by picking up the forthcoming DLC whenever it arrives. If and when somebody releases a scratch-made mod for the other artwork, I’ll throw them a few bucks, too.
The schism between investors and skeptics in the AI debate remains plainly visible, but I increasingly believe that the defenders of human creativity are winning the battle of ideas. For every GenAI-evangelizing executive, one can point to an oppositional figure with a demonstrable connection to the boots on the ground. Obsidian didn’t use GenAI “at all” in developing The Outer Worlds 2, despite the all-in approach of its parent company. PUBG creator Brendan Greene is an avowed supporter of the revolt against it despite the similarly totalizing approach of major investor Krafton, Inc. Statements like Tim Sweeney’s are hard to square with observable reality, and I have no reason to believe that the human-driven creative process is to be replaced in the foreseeable future.
As far as I can tell, the likes of machine learning, AI-assisted coding, and the partial automation of repetitive workflows are all here to stay, and I’m okay with that as long as it keeps driving innovation, benefiting the medium of game design, and otherwise enhancing the creative efforts of human artists. I’m not entirely opposed to AI as it’s variously understood in the software industry, but I’m categorically opposed to the devaluation of human creativity, and the two very often go hand-in-hand these days. I’ll not put my support behind projects that flagrantly reject the sanctity of human creativity, but I’ll stay open to compromise where it can reasonably be had.
Ultimately, I’d rather have folks with disparate ideas working together in good faith toward mutually beneficial resolutions. But unless and until the hype recedes to background levels and forces the discourse’s loudest voices back down to earth, those of us who just want to make and play good games will have to do our best to exercise the agency we’ve got. Toward that end, we should continue to reward earnest transparency and continue to speak out against efforts to quietly normalize development approaches that disrespect the medium. Money talks — eventually, the predictable backlash to such disrespect will no longer be worth whatever nebulous gains are to be had from propagating it.
Phew. Gosh, thanks so much for reading through all of that. We’ll be back to a happier subject next week. Nuclear annihilation, perhaps. Didja know folks are still making Fallout games on Black Isle’s classic engine? See you soon.



Wasn't aware of the ARC Raiders thing, as it's not really my genre. I have to say, it does look like a pretty cool game art-style wise, and I don't disagree re. using AI to optimise robot leg animations, that would be a miserable task for a human. The use of AI voice lines though... why? Of all the things AI can do quite well, voicing still isn't among them.
What bothers me more is the likelihood that AI is going to be used at every level without proper disclosure. Invariably, when you talk to people who use AI for work (particularly for creative work), they have kind of fooled themselves into thinking they're not REALLY using AI, and that it's actually still totally their own work; therefore, they'll find ways of convincing themselves that they have nothing to disclose. Not least because there are obviously career incentives to pretend you are more capable than you actually are, and the market is hostile to AI, meaning you will make more money if you downplay AI contributions.
This bothers me particularly in a medium like gaming, because as you know I tend to assess games as art, and art is about expression and communication, and part of that is the sense of personal connection you often feel with an artist.
E.g. I have never met the people who made Darkwood, but nevertheless I feel a sense of connection with them, because they have shared something unique which meant something to me. If I learned that Darkwood was a product of AI (obviously impossible, given its age), the magic would be gone, and I'd probably find myself hating it.
In the longer term though, as I've said before, I think that human art will be valued at a premium, not in spite of the fact that it's difficult, expensive and requires sacrifices, but because of it.
Great job, this is a breath of fresh air. We desperately need nuanced conversation on this topic. I’m both a creative and someone who’s worked on systems that incorporate AI, so I feel I can fairly clearly see the promise and the pitfalls of this technology. And I, personally, would never begrudge a bootstrapped solo or indie developer for using AI to realize a vision that wouldn’t have come to pass otherwise (AAA studios are another matter). Nor do I assign some huge moral weight to using (or not using) these tools, not even in the name of pure efficiency. I do think there’s a problem when quality is sacrificed on the altar of efficiency, however. This is, indeed, the fundamental issue upstream of the sea of slop—mindless generation, often for its own sake, without any human input, without regard for the final output, without understanding that AI is not a substitute for talent, and all at the expense of the audience that has to wade through it and those who create more intentionally (whether or not they use AI as part of their process).
I continue to hold out hope for the day in which non-technical people without access to startup capital (myself included) can leverage these tools to finally make their dream game and bring it to market without submitting to the whim of a publisher or having to dance for investors. But I worry that by now, the well has been so thoroughly poisoned by the slop spammers and the state of the current discourse that audiences will categorically dismiss such games right off the bat because the expectation that it’ll be crap has been set in stone. At that point, what’s the point? And it’s a shame, really; imagine what a Sid Meier or John Romero could do if they were just starting off and had access to such tools.
Nor do I see much of a solution, because this democratization of brainpower and creativity (however flawed and ultimately inferior to the real thing), of which slop is an inevitable side effect, is kind of the point of these tools to begin with. We may well all be at an impasse. But I do know this: whatever solution we try, if there’s even one to begin with, will come from these types of measured, grownup conversations. Thanks for doing your part and contributing to improving the discourse!