Prem Kumar Aparanji πŸ‘ΆπŸ€–πŸ˜

@prem_k
166 Followers
188 Following
830 Posts

At the intersection of technology, early childhood care & education and wildlife conservation.

Dignity for all.

Interests: Montessori, Automation, AI, Wildlife Conservation
Languages: Telugu, Tamil, Hindi, English, Odiya, Bengali, Kannada
URL: https://www.naasat.in
wf: https://vritti.naasat.in

LLMs are designed to mimic the way people use language, first through pre-training on next-word prediction and then through additional rounds that redistribute probability mass, RLHF and the like.

They only seem mysterious when we do a lot of interpretive work (reflexive, sure) on the output and tell ourselves stories in which the machines are doing anything other than repeatedly calculating a likely next word.
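A minimal sketch of that loop, with a hypothetical nextTokenDistribution function standing in for the trained network; pre-training and RLHF change what that function returns, not the loop itself:

```typescript
// A toy version of "repeatedly calculating a likely next word".
// `nextTokenDistribution` is a hypothetical stand-in for the trained model:
// it maps the text so far to a probability for each candidate token.
type Distribution = Map<string, number>;

function generate(
  prompt: string,
  nextTokenDistribution: (text: string) => Distribution,
  maxTokens = 50
): string {
  let text = prompt;
  for (let i = 0; i < maxTokens; i++) {
    const dist = nextTokenDistribution(text);
    // Greedy decoding: take the single most probable token.
    // Real systems usually sample from the distribution instead.
    let best = "";
    let bestProb = -Infinity;
    for (const [token, prob] of dist) {
      if (prob > bestProb) {
        best = token;
        bestProb = prob;
      }
    }
    if (best === "" || best === "<eos>") break; // hypothetical end-of-sequence marker
    text += best;
  }
  return text;
}
```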

Lots of folks fell for the "DOM is slow" marketing of certain frameworks, but DOM isn't slow. *Uncontrolled style read-back* is. But what if that wasn't a thing?

Looking for feedback on a new proposal to control layout thrashing here:

https://github.com/MicrosoftEdge/MSEdgeExplainers/blob/main/EventPhases/explainer.md

MSEdgeExplainers/EventPhases/explainer.md at main Β· MicrosoftEdge/MSEdgeExplainers

Home for explainer documents originated by the Microsoft Edge team - MicrosoftEdge/MSEdgeExplainers

GitHub
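For context, a minimal sketch of the kind of uncontrolled style read-back the proposal is aimed at, next to the batched version people hand-roll today; the .row selector and the +10px sizing are made up for illustration:

```typescript
const rows = Array.from(document.querySelectorAll<HTMLElement>(".row"));

// Thrashing: each pass writes a style and then reads layout from the next
// element, forcing the browser to recalculate layout on every iteration.
function resizeThrashing(): void {
  for (const row of rows) {
    row.style.height = `${row.offsetHeight + 10}px`; // interleaved read + write
  }
}

// Batched: do all the layout reads first, then all the writes,
// so the browser only needs a single layout pass.
function resizeBatched(): void {
  const heights = rows.map((row) => row.offsetHeight); // reads
  rows.forEach((row, i) => {
    row.style.height = `${heights[i] + 10}px`; // writes
  });
}
```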
I had o4-mini write most of the plugin for me, based on an existing example plugin and this prompt. Transcript here: https://gist.github.com/simonw/4f545ecb347884d1d923dbc49550b8b0#response
plugin.md

GitHub Gist: instantly share code, notes, and snippets.

Gist

Has anybody here read or tried out debug-gym? πŸ€”πŸ§

I'm not a Python guy, so I need to make do with others' impressions of it. πŸ˜„

https://arxiv.org/abs/2503.21557

debug-gym: A Text-Based Environment for Interactive Debugging

Large Language Models (LLMs) are increasingly relied upon for coding tasks, yet in most scenarios it is assumed that all relevant information can be either accessed in context or matches their training data. We posit that LLMs can benefit from the ability to interactively explore a codebase to gather the information relevant to their task. To achieve this, we present a textual environment, namely debug-gym, for developing LLM-based agents in an interactive coding setting. Our environment is lightweight and provides a preset of useful tools, such as a Python debugger (pdb), designed to facilitate an LLM-based agent's interactive debugging. Beyond coding and debugging tasks, this approach can be generalized to other tasks that would benefit from information-seeking behavior by an LLM agent.

arXiv.org
You can hardly find a studio whose works are as thoughtful, kind, and intentional as Studio Ghibli's. To strip-mine that for its aesthetics, to take a piece of cardboard, paint it to look like food, and say "See, doesn't this taste just as good?" is more than missing the point; it's barbaric, dystopian. It's an insult to life itself. #OpenAI
GTC felt more bullish than ever, but Nvidia’s challenges are piling up https://t.co/7VMqInGWvS
GTC felt more bullish than ever, but Nvidia's challenges are piling up | TechCrunch

This year's GTC was Nvidia's attempt to assure investors β€” and customers β€” that it's in a position of strength, despite challenges.

TechCrunch
Anthropic quietly removes Biden-era AI policy commitments from its website https://t.co/wBohQ7YHX2
Anthropic quietly removes Biden-era AI policy commitments from its website | TechCrunch

Anthropic has quietly removed from its site several commitments the company made in conjunction with the Biden Administration in 2023.

TechCrunch
Here are my favorite snippets from my recent appearance on the Accessibility + Generative AI podcast, including notes on using LLMs to help with alt text and the ethics of building accessibility tools on top of inherently unreliable technology https://simonwillison.net/2025/Mar/2/accessibility-and-gen-ai/
Notes from my Accessibility and Gen AI podcast appearance

I was a guest on the most recent episode of the Accessibility + Gen AI Podcast, hosted by Eamon McErlean and Joe Devon. We had a really fun, wide-ranging conversation …

Simon Willison’s Weblog

#AI #Agents are nothing but your traditional software taking the textual output of #LLMs, usually structured as JSON (often conforming to a JSON schema), parsing it, and executing whatever it describes.

So, making your existing software "Agentic" is no *big* deal.
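A minimal sketch of that wrapping layer, assuming the model has already produced a JSON tool call; the ToolCall shape and the tool names here are made-up examples, not any particular framework's API:

```typescript
// Hypothetical shape of the structured output we ask the LLM for.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

// The "traditional software" part: a plain lookup table of functions.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `Weather in ${String(args.city)}: 31°C, sunny`,
  send_email: (args) => `Email queued to ${String(args.to)}`,
};

// Parse the model's text output and execute whatever it describes.
// In practice you would also validate the JSON against the schema.
function runAgentStep(llmOutput: string): string {
  const call = JSON.parse(llmOutput) as ToolCall;
  const fn = tools[call.tool];
  if (!fn) {
    return `Unknown tool: ${call.tool}`;
  }
  return fn(call.args);
}

// Example: the kind of text an LLM might have produced.
console.log(runAgentStep('{"tool":"get_weather","args":{"city":"Hyderabad"}}'));
```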

Meta may have announced #LlamaCon, but the real Llama con is them continuing to call it #OpenSource. https://opensource.org/blog/metas-llama-license-is-still-not-open-source
Meta’s LLaMa license is still not Open Source

At a time when Meta is trying to redefine Open Source for their own benefit and at the expense of our freedom, we call on the whole Open Source community to unite and call out Meta’s open washing.

Open Source Initiative