Title: Not Waving I

Description: Exploring the hidden materials unveiled by diffusion models when upscaling and regenerating photographs of known physical phenomena.

more info: https://jeroenbocken.com/works/not_waving_-%F0%9D%9F%99.html

#DigitalArt
#Upscaling
#photography
#synthetic
#DiffusionModel

Title: Not Waving II

Description: Exploring the hidden materials unveiled by diffusion models when upscaling and regenerating photographs of known physical phenomena.

more info: https://jeroenbocken.com/works/not_waving_-%F0%9D%9F%9A.html

#DigitalArt
#Upscaling
#synthetic
#DiffusionModel

Title: Not Waving III

Description: Exploring the hidden materials unveiled by diffusion models when upscaling and regenerating photographs of known physical phenomena.

#DigitalArt
#Upscaling
#synthetic
#DiffusionModel

@jeff and don't forget to shower curses on all the #LLM and #DiffusionModel providers which made this happen (after you've donated to your local instance admins of course)

Steerling-8B, the first interpretable model that can trace any token it generates back to its input context, to concepts a human can understand, and to its training data.

https://www.guidelabs.ai/post/steerling-8b-base-model-release/

#AI #InterpretableAI #DiffusionModel #DiffusionModels

Steerling-8B: The First Inherently Interpretable Language Model

We release Steerling-8B, an 8B-parameter causal diffusion language model that is interpretable by construction — its predictions are routed through concepts you can measure, audit, and control.

Guide Labs

Alibaba’s new Qwen‑Image‑2.0 diffusion model can render Chinese calligraphy with near‑perfect text, edging out most generators except the quirky Nano Banana Pro. It shines in multimodal generation and image editing. See how open‑source fans are reacting. #QwenImage2 #AIImageGeneration #ChineseCalligraphy #DiffusionModel

🔗 https://aidailypost.com/news/qwen-image-20-renders-calligraphy-nearperfect-text-ranks-behind-nano

@ApostateEnglishman
I read about that study where a latent #DiffusionModel reconstructs visual experiences from human brain activity – is that what you mean? It's very interesting. The study correlates measured #brain activity with the AI's layers. The authors use a technical "system-to-system" language because there's a significant gap in understanding both the human brain and the inner workings of the AI. That's not a consistent theory; it shows that we don't yet have a clue how either works. (1/2)

ByteDance Seed has announced the Stable-DiffCoder-8B-Instruct model, paving the way for AI text and code generation via diffusion techniques. The model has been published on Hugging Face and is drawing plenty of attention from the community.
#AI #DeepLearning #MachineLearning #ByteDance #HuggingFace #CodeAI #DiffusionModel #Technology

https://www.reddit.com/r/LocalLLaMA/comments/1qpm48y/bytedanceseedstablediffcoder8binstruct_hugging/

Finally got SDXL running!

I looked into the error
"modules.devices.NansException: A tensor with NaNs was produced in VAE"
and here’s what it means:

Briefly, how an image is generated with a diffusion model: The text encoder interprets the prompt, the UNet "dreams" iteratively in the latent space from noise into an image structure, and the VAE translates this latent vision into visible pixels.
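The three stages above can be sketched as a toy loop. This is a conceptual sketch only, not real SDXL code: NumPy arrays stand in for the actual tensors, a weighted average stands in for the UNet's learned denoising step, and a clip-and-scale stands in for the VAE decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_diffusion(prompt_embedding, steps=4):
    """Conceptual sketch of the pipeline described above (not real SDXL code).

    1. The text encoder has already turned the prompt into an embedding.
    2. A stand-in "UNet" iteratively nudges pure noise toward a structure
       conditioned on that embedding (here: a trivial weighted average).
    3. A stand-in "VAE" maps the final latent to visible pixel values.
    """
    latent = rng.normal(size=prompt_embedding.shape)   # start from pure noise
    for t in range(steps):
        strength = (t + 1) / steps                     # simple denoise schedule
        latent = (1 - strength) * latent + strength * prompt_embedding
    pixels = np.clip((latent + 1) / 2, 0, 1) * 255     # "decode" latent to pixels
    return pixels.astype(np.uint8)

img = toy_diffusion(np.tanh(rng.normal(size=(8, 8))))
```

The real UNet is of course a learned network predicting noise residuals over dozens of scheduler steps; the point here is only the control flow: noise in, iterative refinement in latent space, one decode at the end.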

A tensor is simply a multi-dimensional array of numbers, basically the data structure where the model stores all its calculations, like colors, intensities, and intermediate results of the image.

In this case, the VAE experienced a numerical instability: the latent tensor contained invalid values (NaNs), so the dreamed image could not be decoded correctly. In short: the model was still dreaming in the latent space, but the numbers “exploded” along the way (e.g., division by zero, overflow, or undefined operations).
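The failure mode can be illustrated with a minimal guard. NumPy stands in for PyTorch here, and `safe_decode` is a hypothetical helper, not part of the webui; it just shows the kind of check that produces the "tensor with NaNs" error before decoding garbage pixels.

```python
import numpy as np

def safe_decode(latent: np.ndarray) -> np.ndarray:
    """Guard a latent tensor before handing it to a VAE-style decoder.

    Mimics the check behind "NansException: A tensor with NaNs was
    produced in VAE": if any entry is NaN or infinite, decoding would
    yield garbage pixels, so fail early and loudly instead.
    """
    if np.isnan(latent).any() or np.isinf(latent).any():
        raise ValueError("A tensor with NaNs/Infs was produced before the VAE")
    # Stand-in for the real VAE decode: normalize latents into [0, 255] pixels.
    lo, hi = latent.min(), latent.max()
    norm = (latent - lo) / (hi - lo + 1e-8)
    return (norm * 255).astype(np.uint8)

# A healthy latent decodes fine...
ok = safe_decode(np.random.default_rng(0).normal(size=(4, 8, 8)))

# ...but one poisoned by an undefined operation (inf - inf = NaN) is caught early.
bad = np.ones((4, 8, 8))
bad[0, 0, 0] = np.inf - np.inf  # produces NaN
try:
    safe_decode(bad)
except ValueError as e:
    print("caught:", e)
```

In practice, the commonly suggested fixes for this error in the AUTOMATIC1111 webui are running the VAE in full precision (the `--no-half-vae` launch flag) or swapping in a fp16-safe VAE checkpoint, since half-precision overflow in the VAE is a frequent cause of these NaNs.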

#StableDiffusion #SDXL #AIArt #DiffusionModel #VAE #LatentSpace #Tensor #DigitalArt #CinematicArt #Kunst #KI #AI #DigitalIllustration #StilizedRealism #UrbanFantasy #Motion #DramaticLighting #FilmStill #AICommunity

🚀 LLaDA2.0 has been released! The flash version uses a 100B MoE architecture, the mini a 16B MoE, and both are fine-tuned for real-world applications. llama.cpp support is in development; the previous version is already supported. #AI #LLM #LLaDA2 #DiffusionModel #ArtificialIntelligence #LanguageModel #Technology

https://www.reddit.com/r/LocalLLaMA/comments/1p6gsjh/llada20_103b16b_has_been_released/