Thought of this today - sh.itjust.works

OK, it would still be immensely stupid, a waste of resources and land, and probably wouldn't even work properly - but for a moment, imagine if devs/companies used AI to optimize their games/software/websites. That seems like something it should be used for, if we were in a Star Trek universe. Anyway, back to reality, here's your ad for SlopCoke!

“Optimization” isn’t a simple problem an LLM could solve (even if LLMs didn’t create more problems than they solve). What’s the #1 thing making software inefficient today? It’s that we don’t have a good system for creating cross-platform apps except through the web, so half of all desktop and mobile apps just load a Chrome window. We already have wildly efficient video compression codecs with hardware acceleration. Idk what task you’d even set the “AI” to.
Optimization is the kind of thankless annoying task that it would be great to put a robot on. But it’s also usually as much an art as a science, so an LLM would struggle to do it right, or at all. You could use an LLM to comb through a bunch of data and give suggestions of common bottlenecks or performance spikes, if you had a good way to generate and collect that data at scale.
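The "generate and collect that data" half is actually the solved part; most languages ship a profiler. A minimal sketch of the idea using Python's built-in cProfile (the `slow_lookup` function here is a made-up stand-in for a real bottleneck):

```python
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    # Deliberately O(n*m): scans the whole list once per target.
    return [t for t in targets if t in items]

def main():
    items = list(range(5000))
    targets = list(range(0, 10000, 2))
    slow_lookup(items, targets)

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Dump the hottest functions by cumulative time -- this is the kind of
# raw data you could hand to a human (or, optimistically, an LLM).
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The hard part is interpreting that report in the context of the whole system, which is exactly where the "art" comes in.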

thankless annoying task

I’m probably just a masochist, but I’ve always been the kind of dev who loves working on optimization problems. In that sense, I suppose the AI bubble has been a real boon for me, since now there’s far more messy, inefficient code out there than even before…

Unfortunately, LLMs tend to be really bad at this. They spit out the kind of code you’d expect from a beginner programmer who searches Stack Overflow a lot.

In one example I saw, it did some very expensive processing before checking whether that processing was even applicable, and this was in a vibe-coded project intended to be an “accelerator”. To the vibe coder’s dismay, even when it “worked”, it was noticeably slower than the thing it was supposed to make faster.
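For the curious, the anti-pattern described above looks something like this (all names invented for illustration; `expensive_transform` stands in for whatever the heavy processing was):

```python
import time

def expensive_transform(data):
    # Stand-in for heavy processing (decompression, parsing, etc.).
    time.sleep(0.01)
    return [x * 2 for x in data]

def process_bad(record):
    # Anti-pattern: pays for the heavy work first, then checks
    # whether that work was even applicable.
    transformed = expensive_transform(record["data"])
    if not record["enabled"]:
        return record["data"]  # all that work thrown away
    return transformed

def process_good(record):
    # Check the cheap condition before paying for the expensive one.
    if not record["enabled"]:
        return record["data"]
    return expensive_transform(record["data"])
```

Any human reviewer spots this instantly; the LLM shipped it anyway.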

In pursuit of autonomous development, they tend to stop as soon as the thing barely passes the tests. After doing the work to write tests specific enough to let it retry until passing, you spend thousands of dollars on retry after retry, and you’re lucky to get even one barely working pass in the end. Having it iterate for optimization is going to be way more expensive, especially since it thoughtlessly tries things without any theory of why a change would be faster or not.

Not that I know a thing about programming or game dev. But from my ignorant position, I wonder if it would be possible to use LLMs to improve character dialogue in games. Imagine being able to type what your avatar wants to say, instead of picking a pre-written dialogue option, and then having the NPCs reply accordingly.

Of course that poses some unsavory questions regarding voice acting, but I wonder if that would be a good application for it. I wouldn’t want any VA to get screwed by something like this.

Anyway this is the part where you can reply and let me know how ridiculous this idea sounds and why

Some studios have already tried this. See Where Winds Meet for one of the most high-profile recent examples. I haven’t played the game myself, but my understanding is that the results are… weird. Not surprisingly, users pretty much immediately figured out how to coax unintended, game-breaking behaviors out of the AI NPCs.

But silly bugs aside, I think the main issue here is cost. So far we’re only seeing features like this in games with aggressive monetization, and that makes sense. LLMs are expensive to run. Getting good voice actors isn’t cheap either, but that is usually a fixed cost; you pay them once and that’s it. With AI, you’re paying for every single line of dialog uttered for as long as your game exists.

There are also non-zero setup and maintenance costs, where you have to design specific guardrails to keep the AI from acting out of bounds: “Don’t give the player free loot, don’t use profanity or slurs, don’t discuss politics or sensitive topics with the player,” et cetera. Of course players will always find ways around that, so now you’re playing a constant game of whack-a-mole trying to get this thing to behave the way you want. You’ve created a situation where you’re constantly paying for costly AI compute and you have to keep an “AI whisperer” on payroll. Suddenly paying a VA doesn’t seem so bad.

I see. Heh, I’m not surprised people are trying to push the boundaries of what it’s supposed to do… Time will tell; ideally you’d want to keep voice acting as much as possible. Maybe in the future there’s a way to properly blend both, or to run the AI locally. But yeah, I can see your point.

Yeah, using it for NPCs seems like an OK idea… But I love the fact that a human is acting the character too. There’s just so much lost removing humans from everything. Pale, blank existence.

To mirror what others have already said, LLMs are actually really fucking bad at this.

For starters, all of the most common optimization problems are already silently fixed for you by the compiler. Modern compilers are crazy smart, and can optimize suboptimal code without the developer even having to think about it. That means that the remaining optimizations are already the ones that machines are bad at handling, and LLMs just don’t know how to close that gap.

I confess, I’m the kind of dev who lives for a good optimization challenge, and since I’m required to use an LLM at work, I’ve set them loose on a few of these problems myself. The results are not good. More often than not, optimization tends to be a big-picture endeavor where you’re thinking about how things fit together and whether the project can be restructured to avoid bottlenecks. AI seems to be weakest at big-picture stuff, so it tends to home in on small details that aren’t going to give you much improvement. What it will do, however, is add lots of verbose code in pursuit of a few extra milliseconds here and there, which is sub-optimal in its own way.
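A toy illustration of the big-picture point (function names are mine, not from any real codebase): restructuring the approach entirely beats any amount of micro-tweaking inside a fundamentally slow loop.

```python
def has_duplicate_micro(values):
    # Micro-"optimized" nested scan: still O(n^2) no matter how many
    # small tweaks you make inside the loop body.
    n = len(values)
    for i in range(n):
        v = values[i]  # hoisted lookup -- a typical small-detail tweak
        for j in range(i + 1, n):
            if values[j] == v:
                return True
    return False

def has_duplicate_restructured(values):
    # Restructured: a set turns the whole thing into one O(n) pass.
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False
```

In my experience the LLM reliably produces the first kind of change and almost never the second.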

Readability and maintainability are both their own separate dimensions of optimization because they speed up the process of debugging and future development. AI does not optimize for these AT ALL. It often tries to compensate for this by filling the code with comments, but this actually makes the problem worse when your codebase becomes cluttered with hundreds of useless annotations like this:

// Initialize the app
InitializeApp();