Gabe Schuyler

@gabe_sky@infosec.exchange
354 Followers
231 Following
313 Posts
Language and AI by day, relentless tinkering and whimsy by night. I fix things.
GitHub: https://github.com/gabe-sky
Twitter: @gabe_sky
Homepage: https://www.gabe-sky.com/

🐀 Dad, what's information?

🦝 It's like misinformation but there's less of it and it's true

Genius. Generate AI startup ideas based on the front page of Hacker News. Try "HN Slop" today! https://www.josh.ing/hn-slop
HN Slop - AI Startup Ideas from Hacker News

Fresh AI-generated startup ideas from the current Hacker News front page. Powered by Claude AI.
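
For the curious, here's roughly how a toy like this could be wired up. A minimal sketch, assuming the official Hacker News Firebase API (which really does serve topstories.json); the prompt wording and the idea of piping it to an LLM client are illustrative, not HN Slop's actual code.

```python
# Sketch of an HN Slop-style generator: grab today's front-page headlines
# from the official Hacker News Firebase API, then hand them to an LLM.
import requests

HN_API = "https://hacker-news.firebaseio.com/v0"

def front_page_titles(n=10):
    """Fetch the titles of the top n Hacker News stories."""
    ids = requests.get(f"{HN_API}/topstories.json", timeout=10).json()[:n]
    items = (requests.get(f"{HN_API}/item/{i}.json", timeout=10).json() for i in ids)
    return [item["title"] for item in items if item and "title" in item]

def idea_prompt(titles):
    """Build a prompt asking an LLM to riff startup ideas off the headlines."""
    headlines = "\n".join(f"- {t}" for t in titles)
    return (
        "Here are today's Hacker News front-page headlines:\n"
        f"{headlines}\n\n"
        "Pitch three tongue-in-cheek AI startup ideas inspired by them."
    )

if __name__ == "__main__":
    # Swap in any LLM client here, e.g. Anthropic's Python SDK.
    print(idea_prompt(front_page_titles()))
```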

You can feel it, subtly, in each interaction. Some chatbots really want you to keep talking, and if you're looking for validation, that's what they'll provide. Here's a really interesting (and relatively quick) read from Futurism on how chatbot interactions sometimes go bad when people who are at risk for mental health issues interact with them in a vacuum. https://futurism.com/commitment-jail-chatgpt-psychosis
People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

People experiencing "ChatGPT psychosis" are being involuntarily committed to mental hospitals and jailed following AI mental health crises.

Futurism

Oh man, forgive my capitalist urge to buy things, but seriously, a Marie Antoinette cocktail mug? Perfect for Bastille Day on July 14.
Remember, folks: with a guillotine you can always get a head.

https://www.deathandcompanymarket.com/products/marie-antoinette-cocktail-mug

Marie Antoinette Cocktail Mug

I really love that folks are continuing to work on decentralized model training. I can imagine a future where interested people dedicate their spare cycles -- SETI@home style -- to training models that they collectively use, instead of relying on SaaS from a supplier they don't trust. https://arxiv.org/abs/2506.21263
DiLoCoX: A Low-Communication Large-Scale Training Framework for Decentralized Cluster

The distributed training of foundation models, particularly large language models (LLMs), demands a high level of communication. Consequently, it is highly dependent on a centralized cluster with fast and reliable interconnects. Can we conduct training on slow networks and thereby unleash the power of decentralized clusters when dealing with models exceeding 100 billion parameters? In this paper, we propose DiLoCoX, a low-communication large-scale decentralized cluster training framework. It combines Pipeline Parallelism with Dual Optimizer Policy, One-Step-Delay Overlap of Communication and Local Training, and an Adaptive Gradient Compression Scheme. This combination significantly improves the scale of parameters and the speed of model pre-training. We justify the benefits of one-step-delay overlap of communication and local training, as well as the adaptive gradient compression scheme, through a theoretical analysis of convergence. Empirically, we demonstrate that DiLoCoX is capable of pre-training a 107B foundation model over a 1Gbps network. Compared to vanilla AllReduce, DiLoCoX can achieve a 357x speedup in distributed training while maintaining negligible degradation in model convergence. To the best of our knowledge, this is the first decentralized training framework successfully applied to models with over 100 billion parameters.

arXiv.org
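
The full DiLoCoX stack (pipeline parallelism, dual optimizer policy, one-step-delay overlap, adaptive gradient compression) is a lot, but the core trick this family of methods shares fits in a toy: workers grind away locally for many steps, and only an occasional averaged "pseudo-gradient" crosses the slow network, where an outer momentum optimizer applies it. Here's a minimal simulation of that idea; the quadratic objective, shard sizes, and learning rates are all made up for illustration, not taken from the paper.

```python
# Toy simulation of local-update training with rare synchronization
# (the DiLoCo-style idea; NOT the paper's actual DiLoCoX implementation).
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)

def make_shard():
    """Each simulated worker gets its own private linear-regression data."""
    X = rng.normal(size=(64, 5))
    return X, X @ true_w + 0.1 * rng.normal(size=64)

shards = [make_shard() for _ in range(4)]          # 4 workers

w_global = np.zeros(5)
momentum = np.zeros(5)
H, inner_lr, outer_lr, beta = 20, 0.01, 0.7, 0.9   # H = local steps per sync

for _ in range(30):                                # 30 communication rounds
    deltas = []
    for X, y in shards:                            # independent local training
        w = w_global.copy()
        for _ in range(H):                         # H steps, zero communication
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= inner_lr * grad                   # inner optimizer (plain SGD)
        deltas.append(w_global - w)                # this worker's pseudo-gradient
    pseudo_grad = np.mean(deltas, axis=0)          # the one communication step
    momentum = beta * momentum + pseudo_grad       # outer optimizer (momentum)
    w_global -= outer_lr * momentum

print("error vs. true weights:", np.linalg.norm(w_global - true_w))
```

Communication happens once per H local steps instead of every step, which is the whole point when your interconnect is a 1 Gbps link instead of a datacenter fabric.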
what disaster is happening today

I was going to have some fun with Doppl (it apparently lets you see what you look like in different clothes), but these personal data permissions ... are you kidding me?

This really needs to stop. Users should demand better. (And authors should know better.)

Wednesday, it's Captain!

(see https://mathstodon.xyz/@Scmbradley/114676343373917170 and https://infosec.exchange/@isotopp/114680337675015896; now posted from a proper computer with a keyboard and edit tools instead of a cellphone, and on an actual Wednesday)

Google releases Gemini CLI with free Gemini 2.5 Pro

Google has released Gemini 2.5 Pro-powered Gemini CLI, which allows you to use Gemini inside your terminal, including Windows Terminal.

BleepingComputer
Did you know there's an API that returns a boolean True if Mercury is retrograde? Now you do. https://mercuryretrogradeapi.com/about.html
Mercury Retrograde API
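
Hitting it takes a couple of lines. A quick sketch, assuming (per my reading of the docs) that the root endpoint returns JSON with an is_retrograde boolean; check the about page for the real contract.

```python
# Ask the Mercury Retrograde API whether the planet is currently retrograde.
# The "is_retrograde" field name is my reading of the docs; verify before use.
import requests

resp = requests.get("https://mercuryretrogradeapi.com", timeout=10)
resp.raise_for_status()
if resp.json().get("is_retrograde"):
    print("Mercury is retrograde. Blame away.")
else:
    print("Mercury is not retrograde. Find another scapegoat.")
```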