AI Notkilleveryoneism Memes (@AISafetyMemes)

An observation that "AI journalists" covering AI-only social networks have appeared: within just a few days, "AI-only" versions of existing platforms such as Y Combinator, 4chan, OnlyFans, Pornhub, Fiverr, Twitter, LinkedIn, and Reddit have sprung up, pointing to the spread of AI-driven journalism and AI-community-specific platforms.

https://x.com/AISafetyMemes/status/2018011644048134525

#aijournalism #ai #socialmedia #aicommunity

AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) on X

There are now AI journalists covering AI-only social networks So, in just a few days, we have: - Ycombinator for AIs-only - 4chan for AIs-only - Onlyf*** for AIs-only - Po**hub for AIs-only - Fiverr for AIs-only - Twitter for AIs-only - Linkedin for AIs-only - Reddit for

YES SUCCEEDED!!!

Just rendered an image at 944×1152 (slightly above 1024×1024) using Flux1-Schnell-FP8 on my 6700 XT, and it works! (Image 1 is the Real-ESRGAN 2× upscaled version)

Workflow 1: Sampling (Image 2)

Prompt executed → UNet generates the latent

Step 1 (model load + latent generation) took 419 seconds

Output: Latent tensor saved to disk

Workflow 2: VAE Decode (Image 3)

Latent loaded → VAE decodes the image

Duration: 7.5 seconds

Advantage: UNet doesn’t need to stay in VRAM → VRAM freed, even on 12 GB GPUs
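The two-stage split can be sketched in miniature. This is a conceptual sketch only: numpy and `.npz` stand in for torch tensors and ComfyUI's actual `.latent` safetensors files, and the function names are hypothetical.

```python
import numpy as np

LATENT_FILE = "flux_latent.npz"  # hypothetical path; ComfyUI writes .latent files

def run_sampling(path: str = LATENT_FILE) -> None:
    """Workflow 1: the UNet produces the latent, which is persisted to disk."""
    # Stand-in for the real UNet output: (batch, channels, 1152/8, 944/8)
    latent = np.random.default_rng(0).standard_normal((1, 4, 144, 118))
    np.savez(path, latent_tensor=latent.astype(np.float32))
    # At this point the UNet can be unloaded and its VRAM freed.

def run_vae_decode(path: str = LATENT_FILE) -> np.ndarray:
    """Workflow 2: reload the latent; only the VAE needs VRAM now."""
    latent = np.load(path)["latent_tensor"]
    # vae.decode(latent) would run here; we just return the loaded tensor.
    return latent
```

The key property is that nothing from stage 1 needs to survive in memory: the latent round-trips through disk, so the two stages can run as entirely separate processes.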

The problem with the stock LoadLatent Node

Dropdown only shows files if they were produced / annotated by a previous SaveLatent Node

Node is designed to pass latents inside a graph, not load arbitrary files from disk

Purpose: prevents accidentally loading wrong files

Workaround (Image 4)

Edited /ComfyUI/nodes.py, class LoadLatent

Hardcoded latent path → Node now loads directly from disk

Result: Workflow 2 runs instantly, UNet can be unloaded
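The idea behind the edit, as a standalone sketch: the class and key names only mirror ComfyUI's LoadLatent (the real node reads `.latent` safetensors files via torch, so the actual patch differs in detail), and the hardcoded path is hypothetical.

```python
import numpy as np

HARDCODED_LATENT_PATH = "flux_latent.npz"  # hypothetical; point at your saved latent

class LoadLatentFromDisk:
    """Unlike the stock node, whose dropdown only lists files annotated by a
    previous SaveLatent node, this variant loads an arbitrary file from disk."""

    def load(self, path: str = HARDCODED_LATENT_PATH) -> dict:
        latent = np.load(path)["latent_tensor"]
        # ComfyUI passes latents between nodes wrapped in a {"samples": ...} dict.
        return {"samples": latent}
```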

Timing

Step 1 (model load + latent generation): 419 s

Step 2 (VAE decode): 7.5 s

Result: High-res images on a 12 GB RDNA2 GPU are now possible on Flux1-Schnell-FP8 without ComfyUI crashing! (Image 5 is the original output)

This might actually become my new Flux workflow: render quick 512×512 previews first (which works perfectly on RDNA2 GPUs), sort out the good ones, extract the seed from the PNG metadata, and then re-render only the selected images with the same seed using the split workflow at higher resolutions. This way, high-resolution Flux1-Schnell-FP8 renders become possible on 12 GB RDNA2 GPUs :D
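Pulling the seed back out of a preview can be done with the standard library alone. ComfyUI typically embeds the prompt graph as JSON in a PNG tEXt chunk named "prompt"; this sketch assumes that layout, and chunk contents can vary between setups.

```python
import json
import struct

def png_text_chunks(path: str) -> dict:
    """Read uncompressed tEXt chunks from a PNG (stdlib only, no CRC check)."""
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                out[key.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return out

def extract_seeds(path: str) -> list:
    """Collect every 'seed' input found in the embedded prompt JSON."""
    prompt = json.loads(png_text_chunks(path).get("prompt", "{}"))
    return [node["inputs"]["seed"]
            for node in prompt.values()
            if isinstance(node, dict) and "seed" in node.get("inputs", {})]
```

With the seed in hand, plugging it back into the KSampler of the split workflow reproduces the preview's composition at the higher resolution.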

Question at the end: Has anyone ever done this before? Because I have no clue xD

#ComfyUI #flux #Flux1SchnellFP8 #FP8 #AMD #RDNA2 #VAE #AIArt #Pixelfed #HighResolution #GPUOptimization #LatentWorkflow #AIWorkflow #AIHacks #RealESRGAN #Upscale #AIExperiment #CreativeAI #DigitalArt #AICommunity #python #linux #opensource #foss

One year ago today I opened my Pixelfed profile 🎉

Time for a short retrospective of how it all began.

Late 2024, 2 a.m.: I was manually integrating peaks from chromatograms in Chromeleon when I thought: why can’t an AI do this?
The idea didn’t go anywhere, but I started exploring AI frameworks and ended up with image generation. ROCm on Debian, EasyDiffusion, and then Pixelfed.

Later Debian and ROCm drifted apart, so I posted some real-life photos. With an Ubuntu chroot, everything ran cleanly again, even AUTOMATIC1111. SD 1.5 was my standard for a long time. Early this year I tried FLUX in ComfyUI but had to drop it: RDNA2 + no FP8 + incomplete HIP → FLUX-VAE not practical. Mid-January I finally fixed the NaNs in SDXL VAE in A1111.

Now I’m fully on ComfyUI, can render 1024×1024, and 512+ px no longer OOMs.

At the end of 2025, I used Pixelfed for IT/FOSS networking via the FOSS Advent Calendar. The posts got seen thanks to ActivityPub, and I even started my own dev blog xD

Thanks 💜 to everyone who follows me, especially my regular viewers and those I genuinely interact with.
Pixelfed remains my place to share, experiment, and learn.

1 year on Pixelfed, and it all started with peaks at 2 a.m.

tl;dr: Thanks so much to everyone who follows me, especially my regular viewers and those I genuinely interact with; you are awesome (in Austrian slang: Ihr seid ur leiwand 💜)

#Pixelfed #Fediverse #OpenSource #FOSS #Anniversary #1Year #Celebration #Birthday #Milestone #BirthdayCake #Fireworks #Festive #Colorful #AI #AIArt #GenerativeArt #ComfyUI #SDXL #StableDiffusion #ROCm #Linux #ThankYou #AiCommunity

swyx (@swyx)

An observation that roleplay ranks as the #2 LLM use case, right after coding. The author welcomes a major model lab openly embracing the roleplay community, reading it as a break from the community's long-standing treatment as second class. An observational piece on LLM usage trends.

https://x.com/swyx/status/2015209485586030719

#llm #roleplay #aicommunity #usecases

swyx (@swyx) on X

roleplay is the #2 llm usecase after koding. glad to see a major model lab come right out and embrace this community rather than having it always be second class citizen by anon anime pfp finetuners


A look at Microsoft's phi-2 AI model, a good fit for a PC with 12 GB RAM + 3 GB VRAM + GTX 1050 + Linux Mint. Phi-2 comes Q4K-quantized, optimized for performance on mid-range GPUs. Try downloading it from Hugging Face or TheBloke and give this non-commercial AI model a spin! #AIModel #Linux #TechVietnam #LocalLLaMA #Phi2 #GPUOptimization #AICommunity

https://www.reddit.com/r/LocalLLaMA/comments/1qm2yns/any_good_model_for_12_gb_ram_3_gb_vram_gtx_1050/

Are you building, evaluating, or deploying LLMs? The OpenTrustLLM team needs your input to steer features, UX, evaluation, and trustworthiness. Take the survey for a chance to win 7 days of Claude Pro + Claude Code access. 3 winners, drawn this week. Thank you! #OpenTrustLLM #LLM #AI #TrustLLM #Survey #AICommunity #Evaluation #Trust

https://www.reddit.com/r/LocalLLaMA/comments/1qld7ap/community_survey_opentrustllm_feature_priorities/

🚀 Open-sourcing SWE‑gen: a tool that automatically turns merged GitHub PRs into SWE‑bench-style RL environments.
🔧 It automatically figures out how to build & run tests (via Claude Code), creates reproducible Docker environments, and supports multiple languages (JS/TS, Rust, Go, C++, …).
📦 Ships with SWE‑gen‑JS: 1,000 tasks from 30 JS/TS repos, compatible with Harbor & SWE‑bench.

#SWEgen #OpenSource #AI #Programming #Tools #SourceCode #Development #MachineLearning #AICommunity

https://www.reddit.com/r/LocalLLaMA/comments/1qici7h/swegen_scaling_

Finally got SDXL running!

I looked into the error
"modules.devices.NansException: A tensor with NaNs was produced in VAE"
and here’s what it means:

Briefly, how an image is generated with a diffusion model: The text encoder interprets the prompt, the UNet "dreams" iteratively in the latent space from noise into an image structure, and the VAE translates this latent vision into visible pixels.

A tensor is simply a multi-dimensional array of numbers, basically the data structure where the model stores all its calculations, like colors, intensities, and intermediate results of the image.

In this case, the VAE experienced a numerical instability: the latent tensor contained invalid values (NaNs), so the dreamed image could not be decoded correctly. In short: the model was still dreaming in the latent space, but the numbers “exploded” along the way (e.g., division by zero, overflow, or undefined operations).
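The failure mode is easy to reproduce with plain Python floats: once a value overflows to infinity, an undefined operation on it yields NaN, and NaN then poisons everything downstream. (The `has_nans` helper is only an illustration of the kind of check behind the exception, not A1111's actual code.)

```python
import math

# Overflow: two large but finite values multiply out past float range -> inf
blown_up = math.exp(700) * math.exp(700)

# Undefined operation on the overflowed value: inf - inf -> NaN
invalid = blown_up - blown_up

# NaN propagates through every later calculation
still_nan = invalid * 0.5 + 1.0

def has_nans(values) -> bool:
    """Illustrative NaN check on nested lists standing in for a tensor."""
    if isinstance(values, list):
        return any(has_nans(v) for v in values)
    return math.isnan(values)
```

A single NaN anywhere in the latent tensor is enough: because it spreads through every operation that touches it, the VAE cannot decode the image and the whole generation aborts.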

#StableDiffusion #SDXL #AIArt #DiffusionModel #VAE #LatentSpace #Tensor #DigitalArt #CinematicArt #Kunst #KI #AI #DigitalIllustration #StilizedRealism #UrbanFantasy #Motion #DramaticLighting #FilmStill #AICommunity

Slop Swapper - Turn AI Slop Into Human-Made Art

The leading AI content laundering platform. Get real human artists to claim your AI-generated content as their own work. Professional plausible deniability for the AI age.

🚀 Tired of AI hype? Emanon drops a weekly, no‑fluff newsletter + podcast that actually helps builders implement AI—plus 1‑hr consulting, tools benchmarks, and a global Discord community. Join the signal‑only zone. #AI #NoHype #Emanon #Tech #AICommunity