Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks on which models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decision-making and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell). This means in particular that applying LLMs to large world problems directly contradicts the mathematical foundations on which their efficacy is (supposedly) grounded.

Therefore, if we were having a technical conversation about large language models and their use, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational ones. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The end result can be enormously complex models and concomitant extreme surveillance to acquire data to feed them. I look at FORPLAN or ChatGPT, and this is what I see.
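A toy sketch of the "less is more" point, with made-up data (nothing here is a real benchmark): the underlying process is a noisy straight line, the simple model is a least-squares line, and the "true to life" model is a degree-6 polynomial that fits the training points perfectly. The complex model wins on the training data and loses badly on new points.

```python
import random

# Hypothetical data: a noisy line, y = 2x + 1 + noise.
random.seed(0)
truth = lambda x: 2 * x + 1
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
train_y = [truth(x) + random.gauss(0, 1) for x in train_x]

# Simple model: ordinary least-squares line (closed form).
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx
simple = lambda x: slope * x + intercept

# "True to life" model: degree-6 Lagrange interpolant,
# zero error on the training data.
def interp(x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(train_x, train_y)):
        basis = 1.0
        for j, xj in enumerate(train_x):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Evaluate both on points the models never saw.
test_x = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.0]
mse = lambda model: sum((model(x) - truth(x)) ** 2 for x in test_x) / len(test_x)
print(f"simple line MSE: {mse(simple):.2f}")
print(f"interpolant MSE: {mse(interp):.2f}")
```

The oscillations the interpolant picks up from the noise are exactly the extra "variance" the bias bias trades for: adding complexity removed all the visible error and made the generalization error worse.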

#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
Black Latents | Latent Diffusion comes with 7 diffusion models and various settings for audio generation. Here's a short video of the UI. #latentdiffusion #audiodiffusion #generativemusic

Black Latents | Latent Diffusion is a gradio application that allows you to spawn audio items from Black Latents, a RAVE V2 VAE trained on the Black Plastics series using RAVE-Latent Diffusion models.
Play around with the demo here: https://huggingface.co/spaces/martstilde/black-latents-latent-diffusion-demo

#latentdiffusion #generativemusic

Black Latents | Latent Diffusion (Demo) - a Hugging Face Space by martstilde

This app lets you create unique audio clips by adjusting various parameters like seeds and sliders. You get a custom-generated audio file as a result.

LaDiR (Latent Diffusion Reasoner) combines a VAE with a latent diffusion model to improve LLM reasoning. Thanks to a structured latent space and iterative refinement, LaDiR improves accuracy, diversity, and interpretability on math and planning benchmarks. #AI #LLM #MachineLearning #NLP #LatentDiffusion #TríTuệNhânTạo #MôHìnhNgônNgữ

https://www.reddit.com/r/singularity/comments/1o2vc7x/ladir_latent_diffusion_enhances_llms_for_text/

#Bolt3D claims to revolutionize 3D scene generation by directly creating renderable 3D representations from one or more images. It achieves unprecedented speed and quality without requiring computationally expensive optimization or augmentation steps.

https://arxiv.org/abs/2503.14445v1

#ComputerVision #VirtualReality #3DModeling #GoogleResearch #LatentDiffusion #FeedForwardModels

Bolt3D: Generating 3D Scenes in Seconds

We present a latent diffusion model for fast feed-forward 3D scene generation. Given one or more images, our model Bolt3D directly samples a 3D scene representation in less than seven seconds on a single GPU. We achieve this by leveraging powerful and scalable existing 2D diffusion network architectures to produce consistent high-fidelity 3D scene representations. To train this model, we create a large-scale multiview-consistent dataset of 3D geometry and appearance by applying state-of-the-art dense 3D reconstruction techniques to existing multiview image datasets. Compared to prior multiview generative models that require per-scene optimization for 3D reconstruction, Bolt3D reduces the inference cost by a factor of up to 300.

Regarding my last post about why we should not be adding new large energy burdens, I think Nate Hagens' notion of "energy blindness" is important to bear in mind. Energy blindness is the idea that a lot of people don't understand a basic physical reality of using energy, and thus are not fully equipped to assess the impacts of proposed #climatechange remediations. So let me try to spell it out a bit.

Let's say we manage to convert all #fossil #fuel #energy generation into a fully-sustainable, non-polluting, harmless form. After this hypothetical conversion we have all the energy we could ever want, with no pollution, no resource depletion, no direct ecosystem destruction. That would be good, right? That would solve the #climatecrisis, right?

No, it would not solve the climate crisis. If you don't immediately see why, please read on because you might have a bit of energy blindness.

Energy usage, by its nature, releases heat. Basically, by using energy you are converting "structured" energy (low entropy) into "unstructured" energy (high entropy == waste heat); the conversion process lets you do something useful like cook, heat your dwelling, drive a car, make an artifact, etc.

If we had clean, worry-free energy sources available, we might stop the emission of carbon but we would keep generating heat. The Jevons paradox suggests we'd generate even more heat than we do now, since generally whenever a technology increases the efficiency of a resource's usage we end up using more of the resource than we did before we made the technology. The Jevons paradox aside, exponential growth of any economy, national or global, requires exponential growth in energy usage, which in turn entails exponential growth in heat generated.
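To make the scale concrete, here's a back-of-the-envelope sketch. The round figures are my assumptions, not measurements: roughly 18 TW of current global energy use, a steady 2.3%/yr growth rate (about 10x per century), and the sunlight Earth actually absorbs after albedo. The question is how long exponential growth takes to make our direct waste heat rival incoming sunlight itself.

```python
import math

# Assumed round figures (a sketch, not measured data):
world_power_w = 1.8e13      # ~18 TW of current global energy use
solar_absorbed_w = 1.22e17  # sunlight absorbed by Earth after ~30% albedo
growth_rate = 0.023         # ~2.3%/yr, i.e. roughly 10x per century

# Years until world_power_w * (1 + r)^t reaches solar_absorbed_w.
years = math.log(solar_absorbed_w / world_power_w) / math.log(1 + growth_rate)
print(f"Waste heat matches absorbed sunlight in ~{years:.0f} years")
```

Under those assumptions the answer comes out to roughly four centuries; long before that point, direct waste heat alone would be cooking the planet no matter what the atmosphere is made of. The exact numbers don't matter much; any exponential hits any fixed ceiling on this kind of timescale.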

Carbon pollution makes the blanket thicker, but energy usage generates the heat held in by that blanket (some of it anyway--the sun sends in a bunch!). If our heat generation continues to increase exponentially year after year after year, we will induce our own heat death regardless of how thin we make the carbon blanket. That's a basic physical fact. Carbon capture will not help with that; it just thins the blanket. A technology that vents heat directly into space without warming the atmosphere would be required.

In lieu of such a planet-wide, perfectly-insulated exhaust pipe, the exponential growth in usage must be stopped, or physics will stop it for us. Just as we prefer walking down a staircase to falling a story and being stopped abruptly by the floor below, we ought to prefer stopping the exponential growth in energy usage ourselves to waiting for physics to stop it on our behalf, because physics doesn't care if it hurts.

I think rejecting energy-heavy technology like #ChatGPT and other #LLM and #LatentDiffusion based text and image generators is a small sacrifice to help ensure we don't collectively faceplant into our own heat death. This is a major reason I come out so strongly against these technologies as they are constituted today, and I think you should, too. I personally believe only the energy blind can embrace the widespread deployment of this kind of energy-heavy technology given where we stand with respect to the #climatecrisis.

Going to try to reactivate my Mastodon account. Let's see if I can remember to toot stuff.
Have a cool-looking, #AI generated #rainbow #llama for the start.

#aiart #LatentDiffusion #machinelearning

@vgan @maltimore I agree. I think you can see that when you prompt these networks to output faces. I couldn't get vqgan+clip to output a face that didn't look very distorted, but the latent diffusion model I'm currently using gives decent results, at least for some celebrity names in the prompt.
#vqgan #clip #LatentDiffusion #text2art