The industry’s discussions on the adoption of AI in development processes appear to have quietly but firmly shifted into a new stage.
The question is no longer whether this technology will be used, but rather where the line will be drawn. AI tools exist, and won’t cease to exist; we need to figure out what uses are appropriate, and acceptable to consumers.
At least, that’s what I’ve been told repeatedly enough in the past few weeks that, to be honest, I’m starting to wonder if this apparent fait accompli is actually just a strategic communication line being trotted out by one of the expensive PR firms who are doing very nicely from the crusade to buff up AI’s public image.
It’s a smart line because there’s a ring of truth to it – these tools are real, they’re useful in some situations, and the technology won’t get “un-invented” now that it exists.
It relies, however, on throwing a rug over a crucial distinction. “AI” is a catch-all term that’s here being used to encompass just about any computer system based on a learning process: not just the wide range of use cases for large language models (LLMs), agents, and other Transformer-type generative tools, but also all manner of far more established and proven use cases for other, older machine learning algorithms.
So yes, it’s a fairly clever rhetorical trick. Smart upscaling algorithms for images? AI. Voice recognition? AI. Code autocompletion in your programming integrated development environment (IDE)? AI. Spellcheckers, voice assistants, smart lasso tools in image editors? AI, AI, AI.
The argument goes: how can you be “against” AI? What kind of Luddite would you have to be to deny developers the ability to use all those tools?
I’ve even heard people – who absolutely do know better – try out the outlandish claim that the games industry has always embraced AI, because enemies and NPCs in games have had “AI” since the dawn of the medium, as if the imp from Doom is to be found tossing its low-pixel fireballs somewhere up in ChatGPT’s family tree.
The problem is that while this rhetoric may muddy the waters of internal debate and online discourse, consumers actually seem to be pretty clear about what they do and don’t like when it comes to AI.
Nobody really cares if programmers turn on LLM-driven autocompletion in their IDE. Nobody is having a meltdown about your deep learning upscaling algorithm. What they care about is, to use the word of the moment, slop.
AI slop – assets, be they art or audio, churned out using model prompts, rather than being created by a human being – is very much an issue for a lot of consumers.
There are differing levels of sensitivity to it, of course. Some consumers hate it on environmental grounds, or on moral and ethical grounds, since we’ve never actually arrived at any kind of resolution to the whole “these tools were created through the single largest act of brazen IP theft in the history of humanity” issue.
Others just don’t like how it looks. AI slop is often pretty recognisable, at least to some subset of consumers. On social media, that’s regularly covered up by deliberately uploading images and videos in low resolution to mask the over-glossy AI sheen. Games don’t really allow for that kind of obfuscation; AI-generated assets are usually there to be seen in all their high-res glory.
For other consumers, of
