How AI coding agents work—and what to remember if you use them

From compression tricks to multi-agent teamwork, here's what makes them tick.

Ars Technica

New research shows engineers are walking a tightrope between ultra-short prompts and overloading LLM context windows. Striking that balance reduces hallucinations and improves recall for chat-based assistants and code generators. Curious how this changes prompt design? Dive into the details. #AI #LargeLanguageModels #PromptEngineering #ContextWindows

🔗 https://aidailypost.com/news/engineers-balance-concise-prompts-context-saturation-new-ai-approach

Who needs git when you have 1M context windows?

The RAG Obituary: Killed by Agents, Buried by Context Windows

Why Retrieval-Augmented Generation Won’t Survive the Context Revolution and the End of Chunking, Embeddings, and Rerankers as We Know Them.

Nicolas Bustamante
The huge potential implications of long-context inference - Epoch AI https://epochai.substack.com/p/the-huge-potential-implications-of #AI #ContextWindows (interesting)

It's incredible that people can feed up to one million tokens (1,000,000) to LLMs and yet, most of the time, still fail to take advantage of that enormous context window. No wonder people say the output generated by LLMs is always crap... They're not great, but they can manage to do a pretty good job: that is, only IF you teach them well. Beyond that, everyone has their own effort-and-time-to-results ratio.

"Engineers are finding out that writing, that long-shunned soft skill, is now key to their efforts. In Claude Code: Best Practices for Agentic Coding, one of the key steps is creating a CLAUDE.md file that contains instructions and guidelines on how to develop the project, like which commands to run. But that's only the beginning. Folks now suggest maintaining elaborate context folders.

A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone. Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.
(...)
Tech writers become context writers when they put on the art gallery curator hat, eager to show visitors the way and help them understand what they’re seeing. It’s yet another hat, but that’s both the curse and the blessing of our craft: like bards in DnD, we’re the jacks of all trades that save the day (and the campaign)."

https://passo.uno/from-tech-writers-to-ai-context-curators/

#AI #GenerativeAI #LLMs #Chatbots #PromptEngineering #ContextWindows #TechnicalWriting #Programming #SoftwareDevelopment #DocsAsDevelopment
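The CLAUDE.md file mentioned in the quote is just a Markdown file at the repository root. A hypothetical minimal sketch (the project, commands, and conventions here are invented for illustration, not taken from the source):

```markdown
# CLAUDE.md

## Project overview
A REST API for managing bookmarks (Python / FastAPI).

## Commands
- `make test`: run the test suite (pytest)
- `make lint`: run ruff and mypy
- `make dev`: start the local server on port 8000

## Conventions
- Every new endpoint needs a test in `tests/api/`.
- Never edit files under `migrations/` by hand.
- Prefer small, focused commits with imperative messages.
```

The point is exactly the one the post makes: this is writing, not coding. The agent reads it on every run, so the clearer the instructions, the fewer commands it guesses wrong.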

AI must RTFM: Why technical writers are becoming context curators

I’ve been noticing a trend among developers who use AI: they are increasingly writing and structuring docs in context folders so that the AI-powered tools they use can build solutions autonomously and with greater accuracy. They now strive to understand information architecture, semantic tagging, and docs markup. All of a sudden they’ve discovered docs, so they write more than they code. Because AI must RTFM now.

passo.uno
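A "context folder" in this sense is just a docs directory structured for machine consumption. One hypothetical layout (all file names invented for illustration; the source does not prescribe a structure):

```text
context/
├── overview.md        # what the system does, in one page
├── architecture.md    # components and how they talk to each other
├── glossary.md        # domain terms the AI should use consistently
├── conventions.md     # code style, branching, and review rules
└── decisions/
    ├── 001-db-choice.md   # ADR-style records of why things are as they are
    └── 002-auth-flow.md
```

Splitting by concern matters because agents retrieve files selectively: small, well-named documents are easier to pull into a limited context window than one sprawling README.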

Here is why it pays to learn the nuts and bolts of #AI parts and nomenclature:

Prompt: Can you provide an audit list of my prompts for this chat?

AI: No

Prompt: What about the #contextwindows ?

AI: Oh, hey, I can totally do that. Sorry, I was wrong.

Name your (#LLM) demons to own them.