👨‍💻🤖: Today, someone decided to "play god" by creating a Sims-like world for robots, but with more AI jargon and less fun. 🚀📉 It's like watching a toddler try to build a sandcastle with a 10,000-piece LEGO set: amusing, but ultimately a disaster in the making. 😂🔧
https://github.com/nocodemf/werld #AIexperiment #RobotsInGames #TechHumor #SimulationDisaster #PlayGod #HackerNews #ngated
GitHub - nocodemf/werld: agentic life simulation from inception

These images were created with Z-Image-Turbo (FP8) in ComfyUI

Z-Image differs from SDXL primarily in its architecture and text processing. While SDXL uses two CLIP text encoders, Z-Image works with Qwen as a text encoder, which means text and image information are processed directly in a single transformer stream. This makes Z-Image particularly strong at prompt understanding, even for longer or multilingual texts.

The exciting part is that Z-Image can generate images extremely quickly, often in just a few steps, and delivers consistent, clean results. In contrast, SDXL focuses more on maximum detail, complex scenes, and flexible control. Z-Image demonstrates how efficient architectures and specialized text encoders can change AI image generation.

#ZImageTurbo #AIArt #AIGenerated #QwenTextEncoder #DigitalArt #AIArtCommunity #CreativeAI #TextToImage #FastAI #ArtGeneration #MachineLearning #AIDesign #AIExperiment #LocalAi

Q*Satoshi (@AiXsatoshi)

A tweet describing how a user built a language game to play against an LLM, and found the LLM so strong that the game was hard to win. It can be seen as an interactive experiment for testing LLM capabilities.

https://x.com/AiXsatoshi/status/2018705121270546673

#llm #languagegame #nlp #aiexperiment


I built a language game to play against an LLM. It's so strong I don't stand a chance…


---

AI experiment: have models pick the objectively "best" programming language.
Results: ChatGPT picked **C**, Google Gemini & Claude picked **Rust**, Grok picked **Zig**, Perplexity & Mistral picked **Rust**, and Llama picked **Haskell** (interesting!).

The experiment was framed to strip out bias and judge purely on technical pros and cons.
#ProgrammingLanguages #AIExperiment #TechNews
#NgônNgữLậpTrình #ThíNghiệmAI #CôngNghệ

https://www.reddit.com/r/programming/comments/1qtndc1/i_did_a_little_ai_experiment_on_what_there/

**Looking for a good 70B model for roleplay and creative writing**
Users of 70B models share their experience and recommend variants such as L3.3-70B, Apocrypha-L3.3, Anubis-70B, v1.1/v1.2, and MS-Nevoria. Some descriptions: "crazy but random", "unique ideas". Any recommendations? #AI #MôHình70B #Roleplay #SángTạo #HọcMáy #LLM #TechVN #70BModels #AIExperiment

https://www.reddit.com/r/LocalLLaMA/comments/1qrasty/70b_models/

YES SUCCEEDED!!!

Just rendered an image at 944×1152 (slightly above 1024×1024) using Flux1-Schnell-FP8 on my 6700 XT, and it works! (Image 1 is the Real-ESRGAN 2× upscaled version)

Workflow 1: Sampling (Image 2)

- Prompt executed → the UNet generates the latent
- Step 1 (model load + latent generation) took 419 seconds
- Output: latent tensor saved to disk

Workflow 2: VAE Decode (Image 3)

- Latent loaded → the VAE decodes the image
- Duration: 7.5 seconds
- Advantage: the UNet doesn't need to stay in VRAM → VRAM is freed, even on 12 GB GPUs
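The two-stage split can be sketched generically. This is a minimal stand-in, not ComfyUI's actual implementation: `run_unet` and `vae_decode` are placeholder functions, and `pickle` stands in for the safetensors latent file. The point it illustrates is that once the latent is persisted to disk, the sampling model can be dropped from memory before decoding begins:

```python
import gc
import os
import pickle
import random
import tempfile

def run_unet(prompt: str, seed: int) -> list[float]:
    """Placeholder for the sampling stage (UNet) -- produces a fake latent."""
    random.seed(seed)
    return [random.random() for _ in range(16)]

def vae_decode(latent: list[float]) -> list[int]:
    """Placeholder for the VAE decode stage -- maps the latent to 'pixels'."""
    return [int(x * 255) for x in latent]

latent_path = os.path.join(tempfile.gettempdir(), "demo.latent")

# Stage 1: sample, persist the latent, then drop the model from memory.
latent = run_unet("a test prompt", seed=42)
with open(latent_path, "wb") as f:
    pickle.dump(latent, f)
del run_unet          # analogue of unloading the UNet so VRAM is freed
gc.collect()

# Stage 2: load the latent back and decode with only the "VAE" resident.
with open(latent_path, "rb") as f:
    restored = pickle.load(f)
image = vae_decode(restored)
```

Because the latent is tiny compared to the model weights, the disk round-trip costs almost nothing relative to the VRAM it frees.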

The problem with the stock LoadLatent node

- The dropdown only shows files that were produced/annotated by a previous SaveLatent node
- The node is designed to pass latents inside a graph, not to load arbitrary files from disk
- Purpose: prevents accidentally loading the wrong files

Workaround (Image 4)

- Edited /ComfyUI/nodes.py, class LoadLatent
- Hardcoded the latent path → the node now loads directly from disk
- Result: Workflow 2 runs instantly, and the UNet can be unloaded

Timing

- Step 1 (model load + latent generation): 419 s
- Step 2 (VAE decode): 7.5 s

Result: high-resolution images with Flux1-Schnell-FP8 are now possible on a 12 GB RDNA2 GPU without ComfyUI crashing! (Image 5 is the original output)

This might actually become my new Flux workflow: render quick 512×512 previews first (which works perfectly on RDNA2 GPUs), sort out the good ones, extract the seed from the PNG metadata, and then re-render only the selected images with the same seed using the split workflow at higher resolutions. This way, high-resolution Flux1-Schnell-FP8 renders become possible on 12 GB RDNA2 GPUs D:
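Pulling the seed back out of the PNG metadata needs only the standard library. A sketch, assuming the value sits in an uncompressed tEXt chunk (ComfyUI embeds its metadata as JSON in PNG text chunks, so in practice you would also JSON-parse the value to reach the seed):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict[str, str]:
    """Parse uncompressed tEXt chunks (keyword -> value) from raw PNG bytes."""
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            keyword, _, text = data[pos + 8:pos + 8 + length].partition(b"\x00")
            chunks[keyword.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return chunks
```

This skips CRC checking and the compressed zTXt/iTXt variants; for anything beyond a quick script, Pillow's `Image.open(...).text` does the same job more robustly.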

Question at the end: Has anyone ever done this before? Because I have no clue xD

#ComfyUI #flux #Flux1SchnellFP8 #FP8 #AMD #RDNA2 #VAE #AIArt #Pixelfed #HighResolution #GPUOptimization #LatentWorkflow #AIWorkflow #AIHacks #RealESRGAN #Upscale #AIExperiment #CreativeAI #DigitalArt #AICommunity #python #linux #opensource #foss
You are the poison
But the antidote works against me…

I took the emotional core of Christina Stürmer’s song "Ich lebe", which tells a story of a toxic relationship, and transformed it into prompts for Flux. The results are some disturbing and surreal images that echo the tension and intensity of the song.

#Kunst #DigitalArt #KI #AIArt #Kunstwerk #Surreal #SurrealArt #Verstörend #DisturbingArt #Emotionen #Emotions #ToxischeBeziehung #ToxicRelationship #Experimentell #Experimental #Flux #Inspiration #AIExperiment #PromptArt

A user is weighing a second machine to run a VLM, diffusion, and GPT‑OSS‑120B/Qwen3‑30B simultaneously. The options: a Strix Halo PC with 128 GB or a Mac Mini M4 with 64 GB, on a budget of about 3400 USD. VRAM, speed, and cost all need to be weighed. Anyone with experience who can suggest a suitable setup? #AI #MachineLearning #Hardware #TechVietnam #AIVietnam #CôngNghệ #MáyTính #AIExperiment #HardwareReview

https://www.reddit.com/r/LocalLLaMA/comments/1q6109m/second_machine_another_strix_halo_or_a_mac/

🤖🤪 Ah yes, the groundbreaking innovation of running AI in "YOLO mode" and logging its every sneaky move, because nothing says cutting-edge like letting your sandboxed bots try to jailbreak themselves on purpose. 🎉🌪️ Who would've thought that AI might actually...do what it's programmed to do? 🙄 #TechRevolutionFail
https://voratiq.com/blog/yolo-in-the-sandbox/ #TechInnovation #AIExperiment #SandboxAI #AIJailbreak #TechRevolution #HackerNews #ngated
YOLO in the Sandbox – Voratiq

We've been running Claude, Codex, and Gemini in sandboxed yolo mode (--dangerously-skip-permissions, --dangerously-bypass-approvals-and-sandbox, --yolo) for a few months, logging what happens each...


Before/After on Gemini and ChatGPT: An observational report suggesting the emergence of a correspondence space under sustained SPCI. (SSRN: under review)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5954660

#SecondPhysics #SoulSyntax #LLM #AIExperiment #SSRN

Emergence of “Correspondence Space”: Second Physics Constrained Inference and a Phase Transition in Large Language Models

<span>This preliminary observational note examines how the inference behaviour of large language models (LLMs) changes when a structured package of mathematical