Major investor is 'shocked and sad' that the games industry is 'demonizing' generative AI

https://lemmy.world/post/44340551

cross-posted from: https://lemmy.world/post/44340504

> Our actions and voices do make a difference! Keep AI out of games and reward original creative work.

I mean, AI in games can be neat.

As a specific example, consider Rimworld mods that generate conversations for characters, flesh out bios, make portraits based on their in-game traits. For free, on lightweight community finetunes that run on your PC.

…I like that. I like how it’s tightly integrated and a good fit, yet also “optional flavor,” not the foundation of a game.

What no one wants is AI Bro bullshit like:

…A group discussion about how the games industry can "capitalize on shifting trends in customer engagement.”

No thanks, I don’t want all of the descriptions and dialogues to be low-quality semi-plagiarized nonsense blabber just to fill space. I don’t want modders spending their precious time massaging slop to be fairly relevant when they could just make bespoke content instead.

If a part of a game isn’t worthy of human attention, let it be boring or non-existent or an afterthought.

Right?

AI companies stole the collective knowledge, creative juices, and artistic endeavors of the internet, which was shared freely to expand human knowledge and artistry.

Now they wonder why we aren’t willing to pay for their repackaged and pillaged slop…

I see a world where knowledge slowly gets hidden behind the choke hold of AI answers and paywalled sources. How do we hold the line?

No thanks, I don’t want all of the descriptions and dialogues to be low-quality semi-plagiarized nonsense blabber just to fill space. I don’t want modders spending their precious time massaging slop to be fairly relevant when they could just make bespoke content instead.

That’s the thing. It can’t be bespoke content, unless it’s a quest mod. Rimworld situations are so dynamic they rarely fit the “mould” of something written ahead of time. Hence the placeholder dialogue you often see in base Rimworld is already autogenerated “nonsense blabber just to fill space.”

That… and have you ever used small LLMs finetuned for writing? While not perfect, it’s nothing like the slop you’d get out of, say, an OpenAI model. The finetuning datasets are open, and some of the base model datasets are open, too.

I could see that becoming the standard. Games ship with a switch in Settings that turns on/off the LLM features, with a place to either put in your API key or point it to a local model.

I mean, a paid API key shouldn’t be the default. It shouldn’t even be an option, if you ask me. It should default to a community “horde” of folks playing the game, and prompt you to host an LLM and/or generate some responses for other users if you wish.

Kinda like the Fediverse. Or the AI Horde, but for a specific game: aihorde.net
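For what it’s worth, the plumbing for that kind of switch is pretty simple. Here’s a minimal Python sketch; every name in it (the settings fields, the two query functions) is invented for illustration, and the real AI Horde API works differently than these stubs suggest:

```python
from dataclasses import dataclass


@dataclass
class LLMSettings:
    """Hypothetical in-game LLM settings; all field names are made up."""
    enabled: bool = False                  # the master on/off switch in Settings
    backend: str = "horde"                 # "horde" (community pool) or "local"
    local_endpoint: str = "http://localhost:8080/completion"


def query_local_model(endpoint: str, prompt: str) -> str:
    # Stub: a real implementation would POST the prompt to the user's own model.
    return f"[local:{endpoint}] reply"


def query_community_horde(prompt: str) -> str:
    # Stub: a real implementation would submit a job to a volunteer worker pool.
    return "[horde] reply"


def generate_flavor_text(settings: LLMSettings, prompt: str):
    """Return generated flavor text, or None so the game uses its canned lines."""
    if not settings.enabled:
        return None
    if settings.backend == "local":
        return query_local_model(settings.local_endpoint, prompt)
    return query_community_horde(prompt)
```

The key design point is the `None` fallback: when the feature is switched off, the game quietly uses its ordinary scripted lines, so the LLM stays “optional flavor” rather than a dependency.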

I really don’t want one more drop of traffic redirected to OpenAI. They’re like a cancer in the machine learning community.

AI Horde

Tumors can get cancer?

have you ever used small LLMs finetuned for writing?

No. I don’t outsource things I like to do.

My experience with the ‘look how amazing it is at writing’ is being exceedingly bored by the prattling on without substance.

Like sure the style and structure can be less obviously bad, but it is still ultimately senselessly padding out a short prompt into a mountain of words that say no more than what the short prompt conveyed in the first place.

If I want to dwell on some imagery, I can and have set down a book and just contemplated what I read and let it fill my mind. I don’t need a ton of words to force me to linger.

It wouldn’t be long monologues. It’s short bits of conversation, or maybe 1 sentence descriptions.

Again, throw everything you think you know about chat models out of your head. Throw everything related to multi-turn conversation and prompt engineering out.

The prompt would look like a mess of programming variables: Rimworld skill levels and passions, traits, injuries, clothes and their state, logs of events, maybe a plot of entities around them. It would condense a bunch of information down (to, say, a reasonable quip of dialogue this character would say), which is what text modeling was supposed to do before these stupid chatbots came in and spammed everything up.
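To make that concrete, here’s a rough Python sketch of the kind of prompt assembly I mean. All the field names are made up for illustration, not actual Rimworld internals:

```python
def build_dialogue_prompt(pawn: dict, events: list) -> str:
    """Condense structured game state into a single completion prompt.
    The pawn/event shapes here are invented; a real mod would map
    actual game data into something like this."""
    traits = ", ".join(pawn["traits"])
    skills = ", ".join(f"{name} {level}" for name, level in pawn["skills"].items())
    recent = "; ".join(events[-3:])  # small local models have small contexts: keep only the tail
    return (
        f"Character: {pawn['name']}. Traits: {traits}. Skills: {skills}. "
        f"Recent events: {recent}. "
        "Write one short line this character would say right now:"
    )


pawn = {
    "name": "Sana",
    "traits": ["bloodlust", "night owl"],
    "skills": {"shooting": 9, "cooking": 2},
}
events = ["raid repelled", "ate without table", "bedroom flooded"]
prompt = build_dialogue_prompt(pawn, events)
```

The point is that this is compression, not chat: structured state goes in, one line of flavor comes out, and there is no multi-turn conversation anywhere.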

I get the sentiment that, sometimes, imagination is better. I like to read, or write out stories stuck in my head.

…But sometimes I’d rather play a game.

I just meant, in response to the “models for writing”, that they are verbosity engines, and even in a reading scenario where one might want to pointlessly dwell on something, we don’t need a wall of text to do so.

But sure, you can have a short dialog for background characters, but I’m not sure I would care about the flavor text being ever so slightly bespoke versus the usual short throwaway lines. By the time you’ve fed all those stats, factoids, and events into the model, it feels like you’ve already done way more than writing a couple lines of throwaway background dialog.

I think it could be cool for background NPC dialogue in big open RPGs like Skyrim. Imagine if townsfolk could have realistic conversations and interactions like bartering over goods, etc. Nothing major or plot-dependent, obviously. Just something more natural than a handful of repeated, scripted and prerecorded phrases.

I would compare it to ray-tracing. Ray tracing means the artists don’t have to plan out every single light beam, and the result is actually more realistic than if they had. A tactfully used LLM could mean they don’t have to script out every line of background dialogue and also achieve a more realistic result.

Imagine if townsfolk could have realistic conversations and interactions like bartering over goods, etc. Nothing major or plot-dependent, obviously. Just something more natural than a handful of repeated, scripted and prerecorded phrases.

There’s already an in-development Skyrim mod for that. Many sandbox games have mods for exactly this.

I haven’t tried the Skyrim one though; I haven’t been in the Skyrim scene for a while. And TBH, some of the mods use pretty sad or sloppy LLMs by default.

Ah cool, I’m not surprised. The Skyrim modding community is insane.

Think the critical thing would be to identify “background content”, so that you don’t spend forever trying to tease actionable info out of a background character.

That’s the biggest thing: while an LLM can do “flavor text”, it’s not very good at making sure characters convey specific relevant details reliably to the player.

I don’t know about ‘more realistic’, though; LLM game demos can often go pretty far out of character. Like a medieval-setting NPC discussing coding. Or, in one, the character talked about how they had just come in from an outside walk, but they were chained in a dungeon cell. Another character talked about how the developer wrote them this way. Keeping an LLM “on the rails” of a scenario can break down.

Major investor is ‘shocked and sad’ they’re not seeing returns on AI.

Shocked and sad he is going to be broke.

That is not very likely, I guess.

They are really disappointed in us.

Yeah, I’d say that’s one of the reasons they don’t like it! Others include the use of artists’ work without consent, environmental issues, the quality of AI output, and the feeling that automating culture production can only result in what is now commonly called “AI slop”.

Summed up perfectly why people hate AI in culture. AI can be very useful in science, medicine, engineering, and similar professions, when it is built upon a very specific data set. But there is no conscious reasoning behind what the AI does when it makes art.

Generative AI is just slop. It takes previous works and repackages them according to what the code says. When people make art, there are hundreds of micro-decisions that people make. Those micro-decisions are gone when AI makes it. Gabi Belle did a great video on why they hate AI art. youtu.be/QtZDkgzjmQI

AI has already ruined music

AI is generally only considered useful in professions people aren’t actually familiar with. I.e., in its current form, it isn’t useful to actual experts in anything.

“Generative AI is great at doing everything I suck at, but it’s completely terrible at the things I actually know!”

Too many people think this and do not seem to understand that it is pretty shitty at everything. Well, except getting people to kill themselves, I guess. It’s pretty good at that.

The silver lining for the AI companies is that there are a lot of real humans getting real money who are also really shitty at what they are paid to do.

Cue the serial killer telling me that I don’t know what I’m talking about and that they could get people to kill themselves so much better and easier.

With the way AI companies seem to avoid liability for everything, it’s a fantastic way of becoming a serial killer. Can we workshop some serial killer names? ShotGPT? Anthraxic?

I was watching Ryan Hall and his little AI bot the other day. It occasionally goes off the rails. Weird how he keeps trying, though. Sometimes a bit entertaining, but if something I was using was malfunctioning that much, I would not consider it a useful tool.

Part of the problem is how broad the term AI is, and how narrowly it is used. People just mean autoregressive LLMs and maybe diffusion models, while AI is much broader than even machine learning (it includes formal reasoning, for instance), which is in turn broader than backpropagation with gradient descent (boosted trees fall outside it, for instance), which is in turn broader than generative AI (classifiers and other discriminative deep learning, for instance). All of these are definitely useful in science and engineering and have been for decades, and LLMs are now beginning to find uses as well.

Coincidentally, Hollywood is pretty good at portraying every profession except the one I know!

I don’t think they were talking about GenAI with that, and AI (i.e. ML models) built on specific data sets for a specific purpose can be quite useful. Expecting an LLM to do anything other than language processing well, on the other hand, is insanity.

Good. They can go have a pity party circle jerk at Satya’s house lol

keep AI out of games

Good luck, it’s here to stay, get used to it lol.

Anyone who thinks the average developer isn’t using AI heavily in their code is delusional; it’s been baked into every major IDE for like 2 years now.

It’s in there, it’s permeated every layer of game dev, it works when you use it right, and the only time people care is when you make it obvious (i.e. including it in the final art of the game).

But no one even blinks an eye at all the other layers AI is used in unless you announce it.

You should just assume every game you play made after 2024 has chunks that are AI generated. The plot, writing, code… it’s in there, and you prolly haven’t even noticed.

Good luck, it’s here to stay, get used to it lol.

So are we. Get used to it.

You should just assume every game you play made after 2024 has chunks that are AI generated. The plot, writing, code… it’s in there, and you prolly haven’t even noticed.

Oh, we’ve noticed that AAA game quality is shittier than ever, trust me.

Yeah, it’s permeated way more than AAA.

But trying to convince game devs to not use AI is about as likely to succeed as convincing them to stop using their IDEs.

What will actually happen is everyone is going to just stop announcing they are using it, and every month that goes by it’ll get harder and harder to tell.

Why are you people so obsessed with your “AI is inevitable” party line?

Because it’s the truth; we are already well over a year past the point of it becoming embedded in normal life.

It’s not even inevitable anymore; we are past that point.

It’s already here and actively in use, and there’s literally nothing that can stop that.

The real thing to hate on is using it poorly or wrong and wasting resources.

It’s 100% viable to run this stuff in an eco-friendly and sustainable way. We have the technology.

Datacenters have been around for decades now; AI isn’t special.

But sustainable energy practices, recycling coolant, and the impact on the local populace: that’s the problem, and it’s a solvable problem right now.

It’s already here and actively in use, and there’s literally nothing that can stop that.

And this means people can’t complain about it because…?

The real thing to hate on is using it poorly or wrong and wasting resources.

Okay, so exactly the thing we were already doing.

In this thread? No.

In general? Sure.

This thread is about people hating on AI in general, which is stupid.

But I’m all for hating on wasteful, non-eco-friendly data centres.

While people may be opposed even in theory to more tame things like a little code completion, there’s plenty of room to very obviously notice GenAI slop.

If people use an LLM to generate text, they tend to make too much text, and it shows in how off-putting it is. An LLM may be able to generate a modest text without notice, but people will put in a two-liner, get pages of garbage back, and use that.

And of course, famously, the GenAI textures are generally off-putting. Maybe you can have a ‘generic metal texture’ and no one will notice, but try for specific details and it generally gets caught.

It is possible that human output that is similarly crappy gets mistaken for GenAI output, but oh well, slop is slop either way. It’s just that GenAI extends the slop to unbelievable magnitude.

While people may be opposed even in theory to more tame things like a little code completion, there’s plenty of room to very obviously notice GenAI slop.

I mean, there’s the regular “can you really sell code you don’t own” kind of thing going for it. The companies have stolen all sorts of data: voices, music, raster and vector art, video, books, film. It’d be shocking if they hadn’t also scraped all the code that’s out there on the web.

Some of that is perfectly fine to alter and sell. A lot of it isn’t. There are plenty of FOSS licenses that are restrictive in the sense that you’re free to use and change the code, but you can’t alter its license, and in many cases you can’t sell it.

So when an LLM produces code based on that, what applies?

Then there’s the obviously broader problem of ex-developers-turned-vibe-coders coming out of the woodwork talking about how they can’t code anymore. I’ve heard people at my company joking about this, and the notion scares me. The idea that they’ve outsourced their thinking and problem-solving skills to the point that they’re now incapable of doing it themselves is terrifying.

I don’t know why anyone would willingly do that.

Well, unless you declare AI consumption fair use, only public domain is fair game, since every single license requires at least attribution. The courts regrettably seem to be buying the line that they are merely “learning” like a human and therefore exempt from the rules. All this ignoring that if a human reproduces something they “learned” close enough, they are on the hook for infringement, and in the AI scenario the codegen user has no sane way to know if the output is substantive and close enough to training material to count, since the origins are so muddled.

I just don’t understand the “real” developer to vibe coding scenario. Like, it really sucks, even Opus 4.6, at being completely off the leash. I don’t understand how anyone can take what it yields as-is if they ever knew how to get specifically what they want. I know people who might be considered “coding adjacent” who are enthusiastic at seeing a utility brought to life, though usually what they get is not quite what they wanted, and they get frustrated when it doesn’t work right and no amount of “prompting” seems to get the thing fixed. They were long intimidated by “coding”, but an LLM is approachable. Many of these folks have “scripted” far more convoluted stuff than many “coders”, yet they are intimidated by coding.

I just don’t understand the “real” developer to vibe coding scenario.

Software developer of 17 years here.

Any given project, even one that’s tremendously optimized and easy to maintain, is about 90–95% easy boilerplate code anyone can understand.

With an existing project that already has hundreds of examples of how to write that boilerplate, I can point even just Sonnet 4.5 at it, give it the business rules required, and tell it “go do that, use the code base as an example”, and it’ll pretty much always get it correct on the first try, with the occasional small thing wrong that’s an easy fix.

Once the LLM has an entire codebase to build off of as an example, its efficacy skyrockets.

Add in stuff like LSP feedback, linting rules, a .editorconfig, an AGENTS.md, and the like, and it will be very effective.

Then I can handle the last 5% of actually important code, putting way more of my time and energy into the parts that really matter (security hardening, business rules, auth, etc.).
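For anyone unfamiliar: an AGENTS.md is just a plain Markdown instructions file that agentic coding tools read before working in a repo. A minimal hypothetical example (the specific rules here are invented, not from any real project):

```markdown
# AGENTS.md
- Follow the existing patterns in the codebase when generating boilerplate.
- Run the linter and the test suite before presenting a diff.
- Do not touch auth or other security-sensitive code without asking first.
```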

I still spend 8 hours on a task, but before it’d be:

  • 4 hrs boilerplate
  • 4 hrs important part

Now it’s:

  • 30min boilerplate
  • 5 hrs important part
  • 2.5 hrs adding in rigorous integration tests for corner cases too

It’s about removing all the mental overhead of the annoying, easy boilerplate stuff; like having a lil junior dev I can hand all the simple tasks to, so I can focus on the “real” work.

That is where real productivity shines with these tools.

I’ve seen and managed to avoid so many boilerplate-heavy projects, I suppose my perspective is skewed.

But yes, I find it good at boilerplate, though I consider that short of “vibe” coding, as even when prompting I’m doing it in a specific context, to avoid having to dig back into its sea of codegen to get at the important parts. I might have it spin up a whole specific file at a time, but I’m not going to let it roll a whole project at once.

I agree, people tend to use it very poorly.

That largely stems from it still being a fairly new tool and, to be honest, quite unintuitive how to use it well.

There’s a lot of fundamentally bad ways to use AI that feel natural due to the way an LLM creates the illusion of thinking.

For example, one of the first things you learn in prompt engineering is: don’t correct the mistakes of an LLM in-conversation. This is unintuitive, but keeping the mistake in context inherently reinforces the LLM to make more mistakes.

Instead, you have to go back in the history and edit your prior message to “pre-correct” it before the mistake was made, then regenerate.

It’s a subtle thing, but it makes a huge difference between the model producing stupid, useless garbage and actually-not-half-bad output.
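In chat-API terms, the difference looks like this. A hedged Python sketch, assuming the common role/content message format; `correct_in_place` is a made-up helper, and the actual completion call is left out:

```python
def correct_in_place(messages: list, fixed_prompt: str) -> list:
    """Rather than appending a correction (which leaves the mistake in
    context and reinforces it), rewind to the last user turn, replace it
    with a pre-corrected prompt, and drop everything after it."""
    last_user = max(i for i, m in enumerate(messages) if m["role"] == "user")
    return messages[:last_user] + [{"role": "user", "content": fixed_prompt}]


history = [
    {"role": "user", "content": "Write the guard's greeting."},
    {"role": "assistant", "content": "Howdy! Nice pickup truck."},  # out of character
]
# Pre-correct the original prompt instead of arguing with the model:
history = correct_in_place(history, "Write the guard's greeting. Medieval setting: no modern objects.")
# ...then call whatever completion API you use on `history` to regenerate.
```

The regenerated reply is produced from a history that never contained the mistake at all, which is the whole point of the trick.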

Pretty much every “trick” to it is unintuitive like this, which is why so much of the AI output you see from people in the industry is garbage. I’d estimate 95%+ of people are just straight up using it very wrong, becoming frustrated, and producing slop-tier output.

Which is a big waste of resources atm. More work has to go into education on how to use this stuff efficiently, so it’s not wasting resources and slop levels go down.

Man you guys really don’t like consent do you?

Only now in the year 2026 do people suddenly give a shit about a specific tech used to make a product.

The thing is, y’all have been consuming stuff that ruins the environment for decades.

Every movie you watch with hyper-realistic animation and VFX churned through enormously more water and power to put a mustache on Dr. Strange’s face than you might realize.

The concept of server farms using up large amounts of power and water isn’t new, and on the scale of tech that uses them, AI isn’t even the largest offender.

You should go look up the sorts of data centers that power the Google search engine

Again, you people have zero concern for consent. No one wants your AI bullshit. Literally no one. Move on to the next tech scam already. The bubble is popping.

The only time people care is when alerted to it.

I guarantee you 80%+ of the games you play that were developed in 2024 and on are “AI assisted”.

And you almost definitely have, by now, consumed content that had AI involved in its creation, and you enjoyed it, and you didn’t even know it used AI.

The anti-AI shit is going to be viewed 20 years from now as cringe millennial boomers who were scared of AI and simultaneously claimed they hated it while also consuming its content unknowingly.

It’s gonna have the same energy as self-proclaimed vegans who enjoy parmesan on their salad and are shocked to find out parmesan is a cheese.

You have already been actively consuming and enjoying AI-generated content for many months, unknowingly.

Lol, sure dude. Balatro and Valheim are totally using AI to make their games.

You guys simp over something that is destroying the environment and people’s lives, and you don’t give a single solitary fuck.

When this bubble pops, you AI fanboys are going to look like the same people who thought crypto would replace the dollar, and who thought NFTs would be a revolutionary new technology.

If localthunk used VS Code (very likely, since Balatro is written in Lua), then it’s very likely they have autocomplete turned on.

And if they have autocomplete turned on, then yes, a non-trivial amount of Balatro is written by an AI.

It’s been like this for almost 2 years now; that’s what I mean when I say it’s deeply integrated into developer tools.

If you use default settings and don’t manually go in and disable a bunch of stuff, you 100% have AI-written code now. Simply pressing tab to accept an autocomplete is all it takes; it’s built into basically every major IDE.

Now… if localthunk is based and uses Neovim, then it’s very, very unlikely they have AI-generated code in their game (you have to manually find and install an extension to enable such stuff).

But afaik, Neovim is the only “mainstream” IDE that has AI autocomplete as “opt-in” instead of “opt-out”.

And most people don’t even know the fancy autocomplete in most IDEs is AI; they just think “wow, my IDE is so good at autocomplete suggestions”, because it’s really fast. So people don’t clock that a whole-ass LLM query ran under the hood to figure out the autocomplete code.

Source: I conduct coding interviews at my company often, and almost every dev I have interviewed has had AI autocomplete turned on. I had to ask them to toggle it off at the start, and many were shocked to learn “wait, that’s AI?!”

That’s the basis for my statement that this shit is in everything; devs are using AI-generated code without even knowing it half the time.

Only now in the year 2026 do people suddenly give a shit about a specific tech used to make a product.

Tell us you aren’t in FOSS spaces without telling us.

You should go look up the sorts of data centers that power the Google search engine

You’re on fucking Lemmy, you think anyone here is using Google?

Hahahahahahahahahahahahaha
“Won’t someone please think of the poor shareholders!?”

Investors don’t care about games as art; they care about games as a vehicle for making money.

If they are pushing for AI in games, it’s because they think it will make them money, not because they think it will be good for games.

Mm, few things get me as excited as investors being sad. Cry harder, baby.

GenAI sucks.

And no matter how they gaslight us, it continues to suck.

Greedy fucker “investors” selling their book is literally one of the greatest informational problems of the modern age: they’ll do everything in their power to mislead others, from plain old lying and appeals to emotion, to buying traditional news media and turning them into propaganda outlets, to funding projects and even institutions to spread misinformation, purely to push up the profits of their “investments”.