Simplifying AI (@simplifyinAI)

Tencent and Tsinghua have released CALM (Continuous Autoregressive Language Models). It replaces the next-token prediction paradigm of existing LLMs, proposing a new language-modeling approach that cuts the massive compute wasted on predicting discrete, single tokens.

https://x.com/simplifyinAI/status/2035761943743520983

#llm #languagemodels #tencent #tsinghua #research

Simplifying AI (@simplifyinAI) on X

🚨 BREAKING: Tencent has killed the “next-token” paradigm. Tencent and Tsinghua have released CALM (Continuous Autoregressive Language Models), and it completely disrupts the next-token paradigm. LLMs currently waste massive amounts of compute predicting discrete, single tokens

X (formerly Twitter)
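As the post describes it, CALM's core move is to replace the discrete next-token softmax with autoregressive prediction of continuous vectors, trained by regression rather than cross-entropy. Here is a minimal numpy sketch of that idea only; the single linear map, the toy sequence, and all shapes are illustrative assumptions, not the actual CALM architecture or objective from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: instead of a softmax over a discrete vocabulary, the model
# autoregressively predicts the next *continuous* vector (e.g. a compressed
# embedding of a chunk of tokens), trained with a regression loss.
D = 8                                    # dimensionality of the continuous "tokens"
W = rng.normal(scale=0.1, size=(D, D))   # one linear map, standing in for a transformer

def predict_next(history):
    """Autoregressive step: predict the next continuous vector from the last one."""
    return history[-1] @ W

def mse_step(history, target, lr=0.1):
    """One gradient step on ||pred - target||^2, replacing cross-entropy."""
    global W
    x = history[-1]
    err = x @ W - target            # d(loss)/d(pred), up to a constant factor
    W -= lr * np.outer(x, err)      # gradient of the squared error w.r.t. W
    return float(np.mean(err ** 2))

# Train the toy model to continue a smooth continuous sequence.
seq = [np.sin(np.arange(D) + t * 0.3) for t in range(50)]
losses = [mse_step(seq[:t], seq[t]) for t in range(1, 50)]
continuation = predict_next(seq)    # one continuous "token" past the sequence
```

The point of the sketch is the loss function: predicting a vector makes training a regression problem, so one step can carry the information of several discrete tokens.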

Teaching Writing in the Age of AI: Assessment and “Cheating”

This is the fourth post in a series on Teaching Writing in the Age of AI. The first post provided an overview of some of the changes we're facing as the number of AI writing tools increases. Post two covered conversations about academic integrity, and the third post offered some practical advice on teaching students to be critical readers and writers. In this post, I'll be exploring the assessment of writing, and why AI is such an apparent threat to the way we currently teach and assess. In […]

https://leonfurze.com/2023/02/18/teaching-writing-in-the-age-of-ai-assessment-and-cheating/

In a groundbreaking revelation that nobody asked for, two authors claim language models are actually just like distributed systems... because, you know, both involve computers and math 🤯. They proceed to drown us in a sea of jargon and acronyms, leaving readers wondering if this paper is an elaborate AI-generated prank 😜.
https://arxiv.org/abs/2603.12229 #languagemodels #distributedsystems #AIresearch #jargonoverload #techhumor #HackerNews #ngated
Language Model Teams as Distributed Systems

Large language models (LLMs) are growing increasingly capable, prompting recent interest in LLM teams. Yet, despite increased deployment of LLM teams at scale, we lack a principled framework for addressing key questions such as when a team is helpful, how many agents to use, how structure impacts performance -- and whether a team is better than a single agent. Rather than designing and testing these possibilities through trial-and-error, we propose using distributed systems as a principled foundation for creating and evaluating LLM teams. We find that many of the fundamental advantages and challenges studied in distributed computing also arise in LLM teams, highlighting the rich practical insights that can come from the cross-talk of these two fields of study.

arXiv.org
Ah, the thrilling art of "writing" software by letting Large Language Models do all the heavy lifting 🤖💪. Because who needs the joy of coding when you can sit back, sip coffee, and let AI run wild while you take all the credit? 🚀☕
https://www.stavros.io/posts/how-i-write-software-with-llms/ #AIsoftware #CodingJoy #Automation #ThrillingTech #LanguageModels #HackerNews #ngated
How I write software with LLMs - Stavros' Stuff

🤯 Ah, the life of a tortured soul smothered by the oppressive weight of Large Language Models. Who knew that conversing with AI could be so *exhausting* that it saps the energy of our brave hero after a mere 4-5 hour "sprint"? 😴 Sounds like someone needs a nap or at least a better excuse for those "degrading" prompts. 🙄
https://tomjohnell.com/llms-can-be-absolutely-exhausting/ #torturedsoul #AIexhaustion #languagemodels #naptime #degradingprompts #HackerNews #ngated
LLMs can be absolutely exhausting

Some days I get in bed after a tortuous 4-5 hour session working with Claude or Codex wondering what the heck happened. It's easy to blame the model - there'...

Tom Johnell
Tree Search Distillation for Language Models using PPO

Personal website of Ayush Tambde

Researchers, including Benjamin Bogenberger, developed a robot that combines #LanguageModels with #3Dvision to locate misplaced objects by building a spatial map and estimating likely locations: http://go.tum.de/730486

#Robotics #AI

📷A. Schmitz

Search robot thinks for itself

A robot that can locate lost items on command – this is the latest development at the Technical University of Munich (TUM). It combines knowledge from…
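The mechanism described, a language model supplying semantic priors over candidate locations from the robot's 3D spatial map, can be sketched roughly as scoring and ranking (object, location) pairs. Everything below is a hypothetical stand-in: the location names, the prior scores, and the eliminate-and-re-rank step are not taken from the TUM system:

```python
# Stand-in for LLM-scored likelihoods of "where would this object plausibly be?".
# In the real system these would come from the language model; here they are made up.
SEMANTIC_PRIOR = {
    ("keys", "entryway table"): 0.50,
    ("keys", "kitchen counter"): 0.30,
    ("keys", "bathroom sink"): 0.05,
}

def rank_locations(obj, mapped_locations):
    """Rank locations from the robot's spatial map by the semantic prior."""
    scores = {loc: SEMANTIC_PRIOR.get((obj, loc), 0.01) for loc in mapped_locations}
    total = sum(scores.values())
    posterior = {loc: s / total for loc, s in scores.items()}   # normalize to probabilities
    return sorted(posterior.items(), key=lambda kv: -kv[1])     # search best-first

def mark_searched(obj, loc):
    """A failed search eliminates the location, so the next re-rank skips it."""
    SEMANTIC_PRIOR[(obj, loc)] = 0.0

ROOMS = ["entryway table", "kitchen counter", "bathroom sink"]
plan = rank_locations("keys", ROOMS)        # first place to look
mark_searched("keys", plan[0][0])           # suppose the keys weren't there
plan2 = rank_locations("keys", ROOMS)       # re-rank over the remaining locations
```

The 3D-vision side (building the map and verifying an object is actually at a location) is a separate perception problem this sketch does not touch.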

fly51fly (@fly51fly)

Joint work by D Lee, S Han, A Kumar, and P Agrawal (MIT) proposes a new approach to training language models with Neural Cellular Automata (NCA). The paper presents the NCA-based training procedure and experimental results, compares it with conventional training methods, and explores NCA's applicability to language modeling. (arXiv link included)

https://x.com/fly51fly/status/2032210577058382113

#neuralcellularautomata #languagemodels #training #arxiv

fly51fly (@fly51fly) on X

[LG] Training Language Models via Neural Cellular Automata D Lee, S Han, A Kumar, P Agrawal [MIT] (2026) https://t.co/sdcDTuBrZq

X (formerly Twitter)
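The post doesn't say how the paper couples NCA to language-model training, so no claim is made about the paper's actual procedure here. The NCA mechanic itself, a shared local rule applied synchronously to every cell, can be sketched as a generic 1-D automaton over a sequence of token "cells"; all shapes and the random rule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic 1-D neural cellular automaton: each position ("cell") holds a state
# vector and updates it with a rule shared across all positions, seeing only
# its immediate neighbours. This is the NCA mechanic, not the paper's method.
D = 4                                         # state channels per cell
W = rng.normal(scale=0.1, size=(3 * D, D))    # shared local update rule

def nca_step(state):
    """One synchronous update: each cell reads (left neighbour, self, right neighbour)."""
    left = np.roll(state, 1, axis=0)          # circular boundary for simplicity
    right = np.roll(state, -1, axis=0)
    neighborhood = np.concatenate([left, state, right], axis=1)   # (T, 3*D)
    return state + np.tanh(neighborhood @ W)  # residual update keeps states bounded-ish

state = rng.normal(size=(10, D))              # 10 cells, e.g. one per token position
for _ in range(5):
    state = nca_step(state)                   # iterate the local rule
```

The appeal of the setup is that one small rule, iterated, produces global behaviour, which is presumably what the paper exploits for training.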

RE: https://flipboard.com/@wsj/business-b1985f4jz/-/a-T6_YAnIFSTSuzZdNlac-EA%3Aa%3A248213600-%2F0

“Instead of paying humans to join focus groups and complete surveys, #Auru uses thousands of #AI agents, or bots, to simulate human responses. It feeds #demographic and psychographic information into its models to create human profiles that match clients’ needs, and the results those bots spit out are being used for product #development, #pricing, identifying new customers and political #polling.”

Researchers have warned about the inaccuracies of treating #LLMs as human proxies (https://doi.org/10.1007/s10462-025-11297-5), but I wouldn’t be surprised if #languageModels beat #qualitative interpretation of non-representative focus groups.

#marketing #psychometrics #statistics #quantMethods #philSci #metascience #business
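The pipeline the article describes, demographic/psychographic profiles in, simulated responses out, aggregated for product and polling decisions, has a simple generic shape. Auru's actual system is proprietary; this sketch uses made-up profiles and stubs the model call with a seeded random draw where the real system would query an LLM conditioned on the profile:

```python
import random

# Hypothetical respondent profiles; a real system would generate thousands
# to match a client's target demographics and psychographics.
PROFILES = [
    {"age": 34, "region": "Midwest", "trait": "price-sensitive"},
    {"age": 58, "region": "South", "trait": "brand-loyal"},
    {"age": 22, "region": "West", "trait": "early-adopter"},
]

def simulated_respondent(profile, question, rng):
    """Stand-in for an LLM call conditioned on the profile (stubbed with a
    trait-dependent base rate; returns 1 for "would buy", 0 otherwise)."""
    base = {"price-sensitive": 0.3, "brand-loyal": 0.7, "early-adopter": 0.8}
    return 1 if rng.random() < base[profile["trait"]] else 0

def run_survey(question, n_bots_per_profile=100, seed=0):
    """Ask every simulated respondent the question and aggregate per segment."""
    rng = random.Random(seed)
    results = {}
    for p in PROFILES:
        votes = [simulated_respondent(p, question, rng) for _ in range(n_bots_per_profile)]
        results[p["trait"]] = sum(votes) / len(votes)   # share answering "yes"
    return results

shares = run_survey("Would you buy this at $49?")
```

The warning in the cited paper applies exactly at the stubbed call: the aggregate is only as good as the model's fidelity to the humans the profiles are supposed to represent.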

🤔 Ah, the old "teaching is the best way to learn" cliché, now with a side of digital elitism! 🚫🤖 Who needs Large Language Models when you have the endless wisdom of an analog brain explaining things to itself in a circle of self-congratulation? 🙄
https://neilmadden.blog/2026/03/02/why-i-dont-use-llms-for-programming/ #teachingislearning #digitalelitism #analogwisdom #selfcongratulation #languagemodels #HackerNews #ngated
Why I don’t use LLMs for programming

I originally posted this on Mastodon, but I thought I’d add it here too: “What I mean is that if you really want to understand something, the best way is to try and explain it to someone else…

Neil Madden