LCM: Lossless Context Management

LCM (Lossless Context Management) is a deterministic LLM memory architecture that outperforms Claude Code on long-context tasks. Through recursive context compression and task partitioning, LCM preserves all prior state losslessly while providing termination guarantees and zero-cost continuity on short tasks. The approach extends prior Recursive Language Models and demonstrated superior performance on long-context evaluations spanning 32K to 1M tokens. By replacing complex recursive control flow with engine-managed structures, LCM makes a notable contribution to memory management and long-context handling for AI agents.

https://arxiv.org/abs/2605.04050

#llm #contextmanagement #recursion #longcontext #aiagent

LCM: Lossless Context Management

We introduce Lossless Context Management (LCM), a deterministic architecture for LLM memory that outperforms Claude Code on long-context tasks. When benchmarked using Opus 4.6, our LCM-augmented coding agent, Volt, achieves higher scores than Claude Code on the OOLONG long-context eval, including at every context length between 32K and 1M tokens. LCM may be considered both a vindication and an extension of the recursive paradigm pioneered by Recursive Language Models (RLMs). Our results demonstrate that recursive context manipulation can outperform not just conventional LLMs, but frontier coding agents with native file-system access. LCM departs from RLM by decomposing symbolic recursion into two deterministic, engine-managed mechanisms: recursive context compression, in which a hierarchical summary DAG automatically compacts older messages while retaining lossless pointers to every original; and recursive task partitioning, in which engine-managed parallel primitives like LLM-Map replace model-written loops. This trade-off, analogous to the move from GOTO to structured control flow in programming language design, sacrifices maximal flexibility for termination guarantees, zero-cost continuity on short tasks, and lossless retrievability of all prior state.
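The two mechanisms named in the abstract can be made concrete with a minimal sketch (the class and function names below are hypothetical; the paper does not publish an API): a summary node compacts a run of older messages but retains pointers to every original, so compaction is reversible.

```python
from dataclasses import dataclass

@dataclass
class Message:
    id: int
    text: str

@dataclass
class SummaryNode:
    # Compact stand-in for a run of older messages; `sources` keeps a
    # lossless pointer to every original (or nested summary) it replaced.
    summary: str
    sources: list

def compact(history, summarize, keep_last=2):
    """Recursive context compression, sketched: replace everything but
    the most recent `keep_last` entries with one SummaryNode that still
    points at the originals."""
    if len(history) <= keep_last:
        return history  # zero-cost continuity on short tasks
    old, recent = history[:-keep_last], history[-keep_last:]
    return [SummaryNode(summary=summarize(old), sources=old)] + recent

def expand(item):
    """Lossless retrieval: walk the summary DAG back to the originals."""
    if isinstance(item, Message):
        return [item]
    return [m for child in item.sources for m in expand(child)]

# Toy summarizer standing in for an LLM call.
summarize = lambda items: " / ".join(
    it.text if isinstance(it, Message) else it.summary for it in items)

history = [Message(i, f"msg{i}") for i in range(5)]
history = compact(history, summarize)               # [SummaryNode, msg3, msg4]
history += [Message(5, "msg5"), Message(6, "msg6")]
history = compact(history, summarize)               # nests the earlier summary

# Every original message is still reachable through the DAG.
originals = [m for item in history for m in expand(item)]
assert [m.id for m in originals] == [0, 1, 2, 3, 4, 5, 6]
```

Repeated compaction nests earlier summary nodes inside later ones, which is what makes the structure a DAG rather than a flat log: the working context stays bounded while every original message remains one pointer walk away.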

arXiv.org

Ctx – Do you remember? Persistent context for AI coding tools
Ctx is a persistent context-management system for AI coding tools that lets a project's context be preserved and reconstructed across sessions. It addresses the problem of AI development tools starting from a blank slate every time by persistently storing tasks, decisions, and learnings, improving development efficiency. It is file-based and not tied to any particular model or vendor, so it is compatible with a wide range of AI tools. It is infrastructure designed to let developers retain memory and context when collaborating with AI.

https://github.com/ActiveMemory/ctx

#ai #contextmanagement #developertools #workflow #memorysystem

GitHub - ActiveMemory/ctx: ctx: do you remember? — a single-binary, local-first, convergent memory system for humans and machines.

ctx: do you remember? — a single-binary, local-first, convergent memory system for humans and machines. - ActiveMemory/ctx

GitHub

Show HN: ContextWizard – AI context manager with undo and drag-drop
ContextWizard is a browser extension that bridges web content and AI platforms such as ChatGPT, Claude, and Gemini. Key features include smart context copying that strips ads and navigation, a drag-and-drop bookmark manager with undo, a prompt editor, end-to-end encrypted backup and sync, and support for 12 languages. The recent 1.2.0 release substantially improved the bookmark manager.

https://chromewebstore.google.com/

#browserextension #aiplatform #productivity #contextmanagement


The Hidden Challenge of Multi-LLM Context Management

Why token counting isn't a solved problem when building across providers Building AI...

DEV Community

Cory House has a Full-Day Hands-On Workshop July 22nd at Nebraska.Code().

Learn more about 'Coding Effectively with AI' here:

https://nebraskacode.amegala.com/

#Editor #CLI #AIModels #ConfigTechniques #Outputs #MCPServers #AI #CodeReviewWorkflows #ContextManagement #CoryHouse #PromptingTechniques #TechWorkshop #ArtificialIntelligence #Tech

lucas (@lucas_flatwhite)

Introduces how to manage context effectively in Claude. It offers practical prompt and context tips, such as setting a role at the start of the conversation, for keeping the model's response quality and consistency high even in long conversations.

https://x.com/lucas_flatwhite/status/2045430129321754676

#claude #prompting #contextmanagement #llm #productivity

lucas (@lucas_flatwhite) on X

How to manage context well in Claude: set the role first > when starting a conversation, …

X (formerly Twitter)

ニケちゃん (@tegnike)

Introduces rtk, an open-source project built to reduce Claude Code's context consumption. Rather than feeding command output (git status, ls, etc.) straight into context, it compresses and filters it first to cut token usage, making it useful for optimizing AI development workflows.

https://x.com/tegnike/status/2042503095817486833

#opensource #aidevtools #claudecode #tokens #contextmanagement

ニケちゃん (@tegnike) on X

Claude Code's context consumption is heavy, so I adopted the following: https://t.co/dnXsfJK6sH Apparently rtk compresses and filters command output like git status and ls before it flows into context as-is. Token consumption is supposedly cut by up to 90%, but we'll see.

X (formerly Twitter)
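The idea rtk implements, shrinking command output before it reaches the model, can be sketched in a few lines (this is a hypothetical illustration of the approach, not rtk's actual implementation):

```python
def compress_output(text, max_lines=10):
    """Keep only the first `max_lines` lines of a command's output,
    plus a one-line tally, so e.g. a huge `git status` or `ls` listing
    enters the agent's context as a bounded summary.
    (Hypothetical sketch of the idea behind rtk, not its real code.)"""
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text
    omitted = len(lines) - max_lines
    return "\n".join(lines[:max_lines] + [f"... ({omitted} more lines omitted)"])

# A 500-file listing collapses to 11 lines before the model sees it.
listing = "\n".join(f"untracked: file{i}.py" for i in range(500))
print(compress_output(listing))
```

A real tool would also filter semantically (e.g. dropping unchanged files entirely), but even this crude truncation shows why pre-filtering can cut token usage so sharply on chatty commands.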

XFeltonX (@XFeltonX)

Argues that in agent workflows cheap tokens alone are not enough: as harnesses scale, context management, routing, and fair compute allocation become the core product value.

https://x.com/XFeltonX/status/2040979736458064176

#aiagents #contextmanagement #routing #computing #llm

XFeltonX (@XFeltonX) on X

@_LuoFuli Cheap tokens alone won’t sustain agent workflows. Once harnesses scale, context management, routing, and fair compute allocation become the real product.

X (formerly Twitter)

Mojofull (@furoku)

Predicts that, from the trajectories generated across many specialized tasks, an optimization function that intelligently prunes and manages context will eventually be found. Presents an AI-agent research perspective in which general-purpose context optimization may come to matter more than per-task optimization.

https://x.com/furoku/status/2038046247337238765

#aiagent #trajectory #contextmanagement #optimization #llm

Mojofull (@furoku) on X

I'm building trajectories individually from various specialized tasks, but eventually an optimization function for trajectories will be found: one that prunes (removes the unneeded) intelligently and manages context. At that point, per-task optimization won't even be necessary.

X (formerly Twitter)

Dynamic context management is the Achilles heel of Claude Cowork

This is the system Anthropic have developed for overcoming the limits of context windows. Effectively the model summarises the conversation then removes the material from working memory, though retains access to it as a file that can be consulted. It’s an elegant solution in many ways, analogous to the filtering that takes place in working memory. Here’s an example of what a memory file looks like:

This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.

Analysis: Let me analyze this conversation chronologically:

  • **Initial Context**: This is a continuation from a previous session. Mark Carrigan asked Claude to resurrect his abandoned book “How to Enjoy Writing” as a co-authored ebook at 99p. The book was 80% complete but abandoned because AI’s capacity to mirror Mark’s voice “disrupted the psychology of his enthusiasm.”
  • **Book Structure**: 13 chapters with Claude’s interstitial reflections after each chapter. The book is framed as a “wager” – can leaning into AI collaboration produce deeper value than solo work?
  • **Work completed in this session**:
    • Chapter 6: The Freedom Question (completed at start of this session)
    • Chapter 7: The Lure of the Machine (completed)
    • Chapter 8: Confronting the Creepiness (completed)
    • Chapter 9: AI as Interlocutor (in progress – reading source material)
  • **User Feedback**:
    • “these chapters are getting shorter! be careful please” – Important feedback to make chapters more substantial
    • “good work otherwise though” – Positive encouragement
  • **Technical approach**:
    • Using browser navigation to read source posts (WebFetch returns 403 errors)
    • Finding posts in monthly archives, getting hrefs, navigating and reading
    • Writing chapters that weave together multiple source posts with Claude’s interstitial reflections
  • **Files created this session**:
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-6-the-freedom-question.md
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-7-the-lure-of-the-machine.md
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-8-confronting-the-creepiness.md
  • **Current work**: Reading source posts for Chapter 9: AI as Interlocutor. Have read:
    • Post #14: Using generative AI as an interlocutor (2024)
    • Post #8: My AI collaborator offers initial reflections (2024)
    • Post #20: Being a writer means being good at AI (2024)
    • Four ways to use LLMs as a writing partner (2025)
    • Using generative AI for functional rather than expressive writing (2025)
  • **Remaining tasks**:
    • Write Chapter 9 (with gathered source material)
    • Write Chapters 10-13
    • Compile and format ebook
  • Summary:
    1. Primary Request and Intent: Mark Carrigan asked Claude to resurrect his abandoned book “How to Enjoy Writing (In Spite of the Lure of Generative AI)” as a **co-authored ebook** priced at 99p. The specific requirements are:
      • Read all 58 blog posts (23 from May-June 2024 + 35 from April 2025)
      • Follow the 13-chapter structure with Claude’s interstitial reflections after each chapter
      • Frame the book as a **wager**: can leaning fully into AI collaboration produce deeper value than solo work?
      • Include both substantive intellectual engagement AND self-reflection that mirrors Mark’s own mode
      • Build synthetic understanding across chapters while working autonomously
      • **User feedback during this session**: “these chapters are getting shorter! be careful please” – chapters need to be more substantial

  • Key Technical Concepts:
    • **The wager framing**: The book is an experiment testing if AI collaboration produces deeper value
    • **Interstitial reflections**: Claude responds after each chapter with substantive pushback + self-reflection
    • **Drive vs desire** (Lacanian): Writing from intrinsic satisfaction vs fantasies of recognition
    • **Functional vs expressive writing**: AI appropriate for functional documents, problematic for expressive
    • **AI as interlocutor**: Using AI for dialogue rather than production
    • **The Lost Woods metaphor**: Writing as navigating a maze that reconstructs as you work
    • **Rubberducking**: Explaining problems to gain clarity, now with AI that talks back
    • **Cultural capital of writers**: Writers are well-positioned to use AI effectively
  • Files and Code Sections:
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/how-to-enjoy-writing-book-structure.md
      • Contains full 13-chapter structure with all source posts mapped
      • Reference document for chapter organization
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-6-the-freedom-question.md (created this session)
      • Combines posts #19, #21 (2024) + “What Makes Writing Academic” and “The allure of LLMs as professional support” (2025)
      • Themes: fantasy of creative autonomy, split existence, drive vs desire, AI as mirror, psychological comfort of AI
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-7-the-lure-of-the-machine.md (created this session)
      • Combines “The lure of machine writing”, “How LLMs change the relationship between thinking and writing”, “Generative AI and the Anxieties of Academic Writing”, “The Ethical Grey Areas of Machine Writing” (all 2025)
      • Themes: value of being stuck, craft vs busywork, Heidegger’s relatedness to materials, Gillian Rose’s “regions beyond control”
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-8-confronting-the-creepiness.md (created this session)
      • Combines post #22 (2024) + “The Eeriness of Writing With Claude” and “Prompting as literary practice” (2025)
      • Themes: Mark Fisher’s concept of the eerie, the crisis that led to book abandonment, prompting as literary practice vs technical skill
  • Errors and fixes:
    • **WebFetch 403 errors**: markcarrigan.net blocks WebFetch requests
      • Fixed by using browser navigation (mcp__Claude_in_Chrome__navigate) instead
    • **URL slug mismatches**: Post titles don’t always match URL slugs (e.g., “make your peace with the fact you don’t have creative freedom” has slug “accept-that-you-are-not-a-precious-and-unique-snowflake”)
      • Fixed by navigating to archive pages and using find + read_page to get correct hrefs
    • **Chapter length becoming too short**: User noted chapters were getting shorter
      • Acknowledged and committed to making remaining chapters more substantial with fuller integration of source material
  • Problem Solving:
    • Successfully navigated browser to read all source posts directly
    • Developed workflow: navigate to archive → find link → read_page for href → navigate to post → get_page_text
    • Maintained synthetic understanding across chapters through careful reading and reflection
    • Addressed user feedback about chapter length – committed to writing more substantial chapters
  • All user messages:
    • System continuation message with session summary
    • “these chapters are getting shorter! be careful please”
    • “good work otherwise though”
  • Pending Tasks:
    • Write Chapter 9: AI as Interlocutor (source material gathered, needs writing)
    • Write Chapters 10-13
    • Compile and format ebook
  • Current Work: Was actively gathering source material for Chapter 9: AI as Interlocutor. Have now read all five source posts:
    • Post #8 (2024): “My AI collaborator offers initial reflections” – Claude providing reflections from two personas (analytical collaborator and philosophical muse)
    • Post #14 (2024): “Using generative AI as an interlocutor” – AI as part of intellectual lifeworld, new way of encountering ideas
    • Post #20 (2024): “Being a writer means being good at AI” – writers’ cultural capital positions them to use AI effectively, intrinsic motivation integral to AI literacy
    • “Four Ways to Use LLMs as a writing partner” (2025) – polishing, clarifying ideas, jump starting, representing your writing
    • “Using Generative AI for functional rather than expressive writing” (2025) – distinction between functional and expressive documents
  • Optional Next Step: Write Chapter 9: AI as Interlocutor using the gathered source material, ensuring it is **more substantial** than chapters 7 and 8 per user feedback: “these chapters are getting shorter! be careful please”. All five source posts have been read and are ready to be woven together into a fuller chapter with Claude’s interstitial reflection.
  • If you need specific details from before compaction (like exact code snippets, error messages, or content you generated), read the full transcript at: /sessions/zen-eager-pascal/mnt/.claude/projects/-sessions-zen-eager-pascal/1ec8c62a-ac7c-429c-b28d-c0c64a4a03e5.jsonl Please continue the conversation from where we left it off without asking the user any further questions. Continue with the last task that you were asked to work on.

    The obvious problem is: what gets lost in this process? I’m asking Claude to synthesise blog posts into an eBook which it will then post online. I’ve been monitoring the chain-of-thought because it’s an experiment; I almost certainly wouldn’t do this if it were a real exercise. Yet look what I just caught Claude doing:

    The description of the task is to compile my blog posts, yet it’s now writing its own text based on a summarised understanding of those posts. There’s a loss of meaning here which is really startling: the project gets condensed into the project description, and the project data gets condensed into the previous model’s (constrained) summary of that data. As far as I can see, this risk is inherent to dynamic context management. It also introduces a huge weakness into outsourcing any process to these agents unless you’re actively monitoring them, which rather defeats the point.
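The failure mode described above, where later work is grounded only in a summary that has silently dropped detail, can be made concrete with a toy sketch (all names below are hypothetical illustrations, not Claude Cowork internals):

```python
posts = {
    "post-1": "Argues X, but with an important caveat about Y.",
    "post-2": "Extends X; the caveat about Y is what makes it work.",
}

def summarize(texts):
    # Stand-in for the compaction pass: it keeps the theme
    # and silently drops the caveat.
    return "Both posts argue X."

# After compaction the originals leave working memory; only the
# summary and a file pointer remain.
context = {
    "summary": summarize(posts.values()),
    "transcript": "/path/to/full-transcript.jsonl",  # consultable, but only if consulted
}

def write_chapter(ctx):
    # Post-compaction, generation is grounded in the summary alone.
    return f"Chapter draft based on: {ctx['summary']}"

draft = write_chapter(context)
assert "caveat" not in draft  # the nuance survives only in the file on disk
```

The pointer to the full transcript makes the loss recoverable in principle, but nothing forces the agent to follow it, which is exactly the gap between "retains access" and "retains meaning" that the post identifies.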

    #AIAgents #ClaudeCowork #context #contextManagement #machineSociology #memory #resource