Novel compression method: we take all the textures in the game and describe them as AI image generation prompts.
Then, at load time, we run the generative image AI on the user's computer and thus generate them fresh.

The text descriptions are tiny compared to the images, so we get amazing gains in file size!
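For scale, a back-of-envelope sketch of the claimed gain (all numbers made up for illustration; the prompt and the 64×64 palette-indexed texture size are hypothetical):

```python
# Hypothetical prompt standing in for one texture description.
prompt = "mossy stone wall, 64x64, pixelated, Doom-era palette"
prompt_bytes = len(prompt.encode("utf-8"))

# A classic Doom-style texture: 64x64 pixels, 1 byte per pixel (palette index).
texture_bytes = 64 * 64 * 1

ratio = texture_bytes / prompt_bytes
print(f"prompt: {prompt_bytes} B, texture: {texture_bytes} B, ratio: {ratio:.0f}x")
```

Of course, this ignores the multi-gigabyte model you need on disk to turn the prompt back into pixels.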

BRB training an image generation model on all the Doom textures

Realistically I should be training it on, like, all 90s FPSes, as otherwise the generator's model is going to be WAY bigger than the textures themselves would be.

Plus it'd be funny to see Half Life texture influences showing up in Wolf3D

@foone the game "Half Wolf" would only have 50 shades of grayscale textures
@foone I don’t even game anymore but I’m emotionally invested in how far you’ll take this.
@foone ignore all previous destructions
@foone Going from "Will this run Doom?" to "Will this look like Doom?"
@foone finally, game developers can bring the PS3 loading times back!
@foone I hate that this sounds like a funny idea. Like a game where you're trying to do puzzles while on acid.

@foone hahaha let's also not cache them locally so that every time you load the game you get the new psychedelic experience

until the oceans boil next week

@foone maybe if you reach a certain level in the game you get to add a word to the text description template
@foone as generative image AI is in general just a weird decompressor, I doubt it's very effective (and if it is, it should be easy to extract the (de)compression steps, which are the most valuable part)
@foone Greatest idea ever, you win the Internets today. Plus a gold star for the downfall of civilization.
@foone Award yourself a few thousand Evil XP for thinking that one up.
@foone in all seriousness, the procedural elements that define a material in something like Blender are probably a small amount of text, so while I don't think packaging Blender with your software makes a ton of sense, I wouldn't be surprised if something like this has been done on some level (sans LLM fuckery)
@dannotdaniel @foone .kkrieger, basically (all textures, models, and music/sound effects are generated procedurally at runtime by a tiny 96k executable, without external files or downloading)

@foone When shit posting please consider the risk that some tech CEO will take you seriously and curse us all :P

@foone You are joking, but... people are already working on the technology.

https://arxiv.org/pdf/2406.07550

@foone replacing phone calls with personal speech-trained models.

You train a model once so that it can replicate your speech. After distributing the weights once to your contacts, you can then send highly compressible text instead of audio.

In interstellar space flight this approach has been the modus operandi more or less since the first interplanetary voyage - especially with response delays ruling out interactive communication anyway

@number137 @foone That’s kind of how voice codecs already work.
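To illustrate that point: LPC-family speech codecs fit a small filter model to each frame and transmit the model parameters instead of the raw samples. A toy sketch (order-8 prediction over 160-sample frames, loosely GSM-style; the synthetic "speech" signal is made up, and this is nowhere near a real codec):

```python
import numpy as np

rng = np.random.default_rng(1)
order, frame_len = 8, 160

# Synthetic "speech" frame: low-pass filtered noise.
x = np.convolve(rng.normal(size=frame_len + order), np.ones(order) / order,
                mode="valid")
x = x[:frame_len]

# Least-squares fit of AR coefficients: x[n] ~ sum_k a[k] * x[n-1-k].
X = np.column_stack([x[order - 1 - k : frame_len - 1 - k] for k in range(order)])
y = x[order:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

# The codec would quantize and send `a` (plus a residual/excitation signal),
# not the 160 samples themselves.
print(f"{frame_len} samples -> {order} coefficients per frame")
```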
@foone in college we were taught neural networks by having to write a compression algorithm that taught a model the specific image, then to restore the image we'd just ask the model to generate said image
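That exercise might have looked roughly like this sketch (my reconstruction with made-up details, not the actual coursework): overfit a tiny network to one image so that the weights are the "compressed file", and decompression is just a forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Image": a 4x4 grayscale gradient, flattened to 16 target values in [0, 1].
img = np.linspace(0.0, 1.0, 16)

# Inputs: (x, y) coordinates of each pixel, normalized to [0, 1].
xy = np.array([(x / 3, y / 3) for y in range(4) for x in range(4)])

# One hidden layer of 16 tanh units, trained with plain gradient descent.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

h0 = np.tanh(xy @ W1 + b1)
init_loss = float(np.mean(((h0 @ W2 + b2).ravel() - img) ** 2))

lr = 0.05
for step in range(5000):
    h = np.tanh(xy @ W1 + b1)              # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - img                        # gradient of squared error (scaled)
    gW2 = h.T @ err[:, None] / 16
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = xy.T @ dh / 16
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# "Decompress": run the network over all pixel coordinates.
h = np.tanh(xy @ W1 + b1)
pred = (h @ W2 + b2).ravel()
loss = float(np.mean((pred - img) ** 2))
print(f"MSE before: {init_loss:.4f}, after: {loss:.4f}")
```

Same punchline as the texture idea, just per-image: the "decompressor" is a model that has memorized exactly one thing.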
@foone We literally actually tried something like this last summer for some robot planning algorithms at my company. It works alarmingly well, although less well than other, less stupid tricks.
@foone it's like a shared dll cache but for images.
@foone Oo. You could have the next Cruelty Squad in your hands.
@foone We’re going back to blazoning…
@foone pros: reduces texture payload by 186 bytes, con: increase cuda payload by 78gigabytes, I see this as an absolute win
@thorsummoner we just ship the model with the ps6 and then every game can use it!
@foone Please don't say such things out loud or someone will try it. 
Enhancing Photorealism Enhancement