Show GN: One Ego, Any Model – A Context Wallet for the Multi-AI Era

Context Wallet is a Chrome extension that lets users easily carry their context between AI models. It was built to remove the hassle of re-explaining your context every time you switch between models such as ChatGPT, Claude, Gemini, or a local LLM. It adopts a local-first architecture so it is not tied to any particular model, and an optional login lets you use it across different environments.
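
The post itself contains no code, but the local-first idea it describes can be pictured as a small store of reusable context entries kept in the browser's extension storage and pasted into whichever model is currently open. A minimal TypeScript sketch under that assumption (the types and functions below are illustrative, not Context Wallet's actual API):

```typescript
// Hypothetical sketch of a local-first context store for a Chrome extension.
// Entries live in chrome.storage.local on the user's machine rather than on
// any model vendor's server, and export as plain text so the same context
// can be pasted into ChatGPT, Claude, Gemini, or a local LLM.
// Assumes an extension context and the @types/chrome type definitions.

interface ContextEntry {
  id: string;        // stable key, e.g. "side-project-notes"
  title: string;
  body: string;      // the context text itself
  updatedAt: number; // epoch milliseconds
}

async function saveEntry(entry: ContextEntry): Promise<void> {
  await chrome.storage.local.set({ [entry.id]: entry });
}

async function loadEntry(id: string): Promise<ContextEntry | undefined> {
  const items = await chrome.storage.local.get(id);
  return items[id] as ContextEntry | undefined;
}

// Turn an entry into a model-agnostic preamble for any chat box.
function toPrompt(entry: ContextEntry): string {
  return `Background context ("${entry.title}"):\n${entry.body}\n\n`;
}
```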

https://news.hada.io/topic?id=26362

#ai #context #chrome #extension #multimodel

One Ego, Any Model – A Context Wallet for the Multi-AI Era

Sharing a project that started from a personal annoyance. I keep switching between ChatGPT, Claude, Gemini, and local LLMs depending on the task. ...

GeekNews

Show GN: ctxbin, a CLI tool for AI agent context handover

The developer built a CLI tool called ctxbin to solve the hassle of handing over context when switching between several AI coding agents. It can be run easily with npx in a Node.js environment and uses Upstash Redis as its storage. ctxbin provides features such as saving and loading context, restoring agent skill folders, and copying sub-agent rules.
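
The post doesn't show ctxbin's actual commands, so the sketch below only illustrates the storage pattern it describes: a handover blob saved to and loaded from Upstash Redis, here via the official @upstash/redis client. The key format and function names are made up for the example and are not ctxbin's interface.

```typescript
// Illustrative sketch of the described storage pattern, not ctxbin itself:
// a context handover blob written to and read back from Upstash Redis.
import { Redis } from "@upstash/redis";

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

interface Handover {
  summary: string;     // what the previous agent was working on
  openTasks: string[]; // what the next agent should pick up
  savedAt: string;     // ISO timestamp
}

// Save the handover under a named slot so the next agent can fetch it.
export async function saveHandover(slot: string, handover: Handover): Promise<void> {
  await redis.set(`handover:${slot}`, handover); // client JSON-serialises objects
}

// Load the handover when switching to another coding agent.
export async function loadHandover(slot: string): Promise<Handover | null> {
  return await redis.get<Handover>(`handover:${slot}`);
}
```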

https://news.hada.io/topic?id=26293

#ai #cli #context #agent #tool

ctxbin, a CLI tool for AI agent context handover

In my case, since I keep switching between several AI coding agents, copying the context over and explaining it again each time felt quite tedious...

GeekNews
Next week my new course is starting, and one of the things I want to do is show how important #context is when you manage people. One aspect is #culture. What works well in one culture is not the way to do it in another. This week I had a fun talk with a Lebanese interviewee for my #research, and he commented on the less formal dress code here in Denmark - we were both sitting there in the Teams meeting in our hoodies :) Do you have any examples of how culture influences the workplace? #HR

Dynamic context management is the Achilles heel of Claude Cowork

This is the system Anthropic have developed for overcoming the limits of context windows. Effectively, the model summarises the conversation and then removes the material from working memory, though it retains access to it as a file that can be consulted. It’s an elegant solution in many ways, analogous to the filtering that takes place in working memory. Here’s an example of what a memory file looks like (a rough sketch of the compaction loop itself follows the example):

This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.

Analysis: Let me analyze this conversation chronologically:

  • **Initial Context**: This is a continuation from a previous session. Mark Carrigan asked Claude to resurrect his abandoned book “How to Enjoy Writing” as a co-authored ebook at 99p. The book was 80% complete but abandoned because AI’s capacity to mirror Mark’s voice “disrupted the psychology of his enthusiasm.”
  • **Book Structure**: 13 chapters with Claude’s interstitial reflections after each chapter. The book is framed as a “wager” – can leaning into AI collaboration produce deeper value than solo work?
  • **Work completed in this session**:
    • Chapter 6: The Freedom Question (completed at start of this session)
    • Chapter 7: The Lure of the Machine (completed)
    • Chapter 8: Confronting the Creepiness (completed)
    • Chapter 9: AI as Interlocutor (in progress – reading source material)
  • **User Feedback**:
    • “these chapters are getting shorter! be careful please” – Important feedback to make chapters more substantial
    • “good work otherwise though” – Positive encouragement
  • **Technical approach**:
    • Using browser navigation to read source posts (WebFetch returns 403 errors)
    • Finding posts in monthly archives, getting hrefs, navigating and reading
    • Writing chapters that weave together multiple source posts with Claude’s interstitial reflections
  • **Files created this session**:
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-6-the-freedom-question.md
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-7-the-lure-of-the-machine.md
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-8-confronting-the-creepiness.md
  • **Current work**: Reading source posts for Chapter 9: AI as Interlocutor. Have read:
    • Post #14: Using generative AI as an interlocutor (2024)
    • Post #8: My AI collaborator offers initial reflections (2024)
    • Post #20: Being a writer means being good at AI (2024)
    • Four ways to use LLMs as a writing partner (2025)
    • Using generative AI for functional rather than expressive writing (2025)
  • **Remaining tasks**:
    • Write Chapter 9 (with gathered source material)
    • Write Chapters 10-13
    • Compile and format ebook
  • Summary: 1. Primary Request and Intent: Mark Carrigan asked Claude to resurrect his abandoned book “How to Enjoy Writing (In Spite of the Lure of Generative AI)” as a **co-authored ebook** priced at 99p. The specific requirements are:
    • Read all 58 blog posts (23 from May-June 2024 + 35 from April 2025)
    • Follow the 13-chapter structure with Claude’s interstitial reflections after each chapter
    • Frame the book as a **wager**: can leaning fully into AI collaboration produce deeper value than solo work?
    • Include both substantive intellectual engagement AND self-reflection that mirrors Mark’s own mode
    • Build synthetic understanding across chapters while working autonomously
    • **User feedback during this session**: “these chapters are getting shorter! be careful please” – chapters need to be more substantial

  • Key Technical Concepts:
    • **The wager framing**: The book is an experiment testing if AI collaboration produces deeper value
    • **Interstitial reflections**: Claude responds after each chapter with substantive pushback + self-reflection
    • **Drive vs desire** (Lacanian): Writing from intrinsic satisfaction vs fantasies of recognition
    • **Functional vs expressive writing**: AI appropriate for functional documents, problematic for expressive
    • **AI as interlocutor**: Using AI for dialogue rather than production
    • **The Lost Woods metaphor**: Writing as navigating a maze that reconstructs as you work
    • **Rubberducking**: Explaining problems to gain clarity, now with AI that talks back
    • **Cultural capital of writers**: Writers are well-positioned to use AI effectively
  • Files and Code Sections:
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/how-to-enjoy-writing-book-structure.md
      • Contains full 13-chapter structure with all source posts mapped
      • Reference document for chapter organization
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-6-the-freedom-question.md (created this session)
      • Combines posts #19, #21 (2024) + “What Makes Writing Academic” and “The allure of LLMs as professional support” (2025)
      • Themes: fantasy of creative autonomy, split existence, drive vs desire, AI as mirror, psychological comfort of AI
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-7-the-lure-of-the-machine.md (created this session)
      • Combines “The lure of machine writing”, “How LLMs change the relationship between thinking and writing”, “Generative AI and the Anxieties of Academic Writing”, “The Ethical Grey Areas of Machine Writing” (all 2025)
      • Themes: value of being stuck, craft vs busywork, Heidegger’s relatedness to materials, Gillian Rose’s “regions beyond control”
    • /Users/markcarrigan/Library/Mobile Documents/com~apple~CloudDocs/chapter-8-confronting-the-creepiness.md (created this session)
      • Combines post #22 (2024) + “The Eeriness of Writing With Claude” and “Prompting as literary practice” (2025)
      • Themes: Mark Fisher’s concept of the eerie, the crisis that led to book abandonment, prompting as literary practice vs technical skill
  • Errors and fixes:
    • **WebFetch 403 errors**: markcarrigan.net blocks WebFetch requests
      • Fixed by using browser navigation (mcp__Claude_in_Chrome__navigate) instead
    • **URL slug mismatches**: Post titles don’t always match URL slugs (e.g., “make your peace with the fact you don’t have creative freedom” has slug “accept-that-you-are-not-a-precious-and-unique-snowflake”)
      • Fixed by navigating to archive pages and using find + read_page to get correct hrefs
    • **Chapter length becoming too short**: User noted chapters were getting shorter
      • Acknowledged and committed to making remaining chapters more substantial with fuller integration of source material
  • Problem Solving:
    • Successfully navigated browser to read all source posts directly
    • Developed workflow: navigate to archive → find link → read_page for href → navigate to post → get_page_text
    • Maintained synthetic understanding across chapters through careful reading and reflection
    • Addressed user feedback about chapter length – committed to writing more substantial chapters
  • All user messages:
    • System continuation message with session summary
    • “these chapters are getting shorter! be careful please”
    • “good work otherwise though”
  • Pending Tasks:
    • Write Chapter 9: AI as Interlocutor (source material gathered, needs writing)
    • Write Chapters 10-13
    • Compile and format ebook
  • Current Work: Was actively gathering source material for Chapter 9: AI as Interlocutor. Have now read all five source posts:
    • Post #8 (2024): “My AI collaborator offers initial reflections” – Claude providing reflections from two personas (analytical collaborator and philosophical muse)
    • Post #14 (2024): “Using generative AI as an interlocutor” – AI as part of intellectual lifeworld, new way of encountering ideas
    • Post #20 (2024): “Being a writer means being good at AI” – writers’ cultural capital positions them to use AI effectively, intrinsic motivation integral to AI literacy
    • “Four Ways to Use LLMs as a writing partner” (2025) – polishing, clarifying ideas, jump starting, representing your writing
    • “Using Generative AI for functional rather than expressive writing” (2025) – distinction between functional and expressive documents
  • Optional Next Step: Write Chapter 9: AI as Interlocutor using the gathered source material, ensuring it is **more substantial** than chapters 7 and 8 per user feedback: “these chapters are getting shorter! be careful please”. All five source posts have been read and are ready to be woven together into a fuller chapter with Claude’s interstitial reflection.
  • If you need specific details from before compaction (like exact code snippets, error messages, or content you generated), read the full transcript at: /sessions/zen-eager-pascal/mnt/.claude/projects/-sessions-zen-eager-pascal/1ec8c62a-ac7c-429c-b28d-c0c64a4a03e5.jsonl Please continue the conversation from where we left it off without asking the user any further questions. Continue with the last task that you were asked to work on.
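
Mechanically, the pattern behind a file like this is easy to sketch, which is part of why the loss it introduces is easy to overlook: once the conversation approaches the context limit, the full transcript goes to a file and only a model-written summary stays in working memory. Here is a rough TypeScript sketch of that compaction loop, offered as an assumption about the general pattern rather than Anthropic's actual implementation:

```typescript
// Rough sketch of a dynamic-context-management ("compaction") loop.
// A guess at the general pattern, not Anthropic's implementation; the
// summarise and writeTranscript callbacks stand in for a model call
// and file I/O supplied by the agent harness.

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

const CONTEXT_LIMIT_TOKENS = 200_000; // illustrative budget

// Crude token estimate: roughly four characters per token.
function estimateTokens(messages: Message[]): number {
  return Math.ceil(messages.reduce((n, m) => n + m.content.length, 0) / 4);
}

async function maybeCompact(
  messages: Message[],
  transcriptPath: string,
  summarise: (msgs: Message[]) => Promise<string>,
  writeTranscript: (path: string, msgs: Message[]) => Promise<void>,
): Promise<Message[]> {
  if (estimateTokens(messages) < CONTEXT_LIMIT_TOKENS) return messages;

  // 1. The full record is kept on disk, so it can still be consulted.
  await writeTranscript(transcriptPath, messages);

  // 2. But only the model's own summary of it stays in working memory.
  const summary = await summarise(messages);

  // Anything the summary leaves out is now invisible unless the agent
  // chooses to go back and read the transcript file.
  return [
    {
      role: "system",
      content:
        "This session is being continued from a previous conversation that " +
        "ran out of context. Summary of the earlier portion:\n" +
        summary +
        "\nFull transcript available at: " +
        transcriptPath,
    },
  ];
}
```

Whatever the summary drops survives only as a file the agent may or may not decide to reread, which is exactly the loss discussed below.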

The obvious problem is: what gets lost in this process? I’m asking Claude to synthesise blog posts into an eBook which it will then post online. I’ve been monitoring the chain of thought because it’s an experiment; I almost certainly wouldn’t do this if it were a real exercise. Yet look what I just caught Claude doing:

The description of the task is to compile my blog posts, yet it’s now writing its own text based on a summarised understanding of those posts. There’s a loss of meaning here which is really startling: the project gets condensed into the project description, and the project data gets condensed into the past model’s (constrained) summary of that data. As far as I can see, this risk is inherent to dynamic context management. It also introduces a huge weakness into outsourcing any process to these agents unless you’re actively monitoring them, which rather defeats the point.

#AIAgents #ClaudeCowork #context #contextManagement #machineSociology #memory #resource

TestingCatalog News (@testingcatalog)

Anthropic announced that it will soon let users start Cowork tasks directly from Claude Projects. These tasks will run using project context such as instructions and uploaded files; the post notes the feature is not yet publicly available.

https://x.com/testingcatalog/status/2016468942185697761

#anthropic #claude #cowork #projects #context

TestingCatalog News 🗞 (@testingcatalog) on X

Anthropic will enable users to start Cowork tasks directly from Claude Projects soon. These tasks will utilise project context, like instructions and uploaded files, for task execution. * not available to the public yet

X (formerly Twitter)

⚠️ The upcoming ConTeXt update has changes WRT footnotes.

Better to install it beside your current one instead of just updating, and check whether everything works as intended.

Full announcement in https://mailman.ntg.nl/archives/list/ntg-context@ntg.nl/thread/UQ3RKCRNII5WR3JWEOPHIPBULWB6XB5J/

Maybe also have a look at the “beyond-inserts” manual.

#ConTeXt #TeXConTeXt #LMTX

The end of an era? This is why half the world is deleting TikTok in the US today 🚩😱

#tiktok #privacidad #actualidad #parati #tiktokshop #context #pictoline

#Ciberseguridad #Seguridad #Privacidad

I like the 40-minute games on NFL GamePass, but sometimes you miss out on the context of what the commentators are saying.

#nfl #context

Sampo (@___sampo___)

Key takeaway from Cursor Cafe Berlin: when using LLMs, the shorter you keep the context, the better the output, and “one-shot prompting” is a trap. The practical tip: start a new chat with a concise brief, write code fast, and then audit it thoroughly.

https://x.com/___sampo___/status/2015436142150136001

#prompting #llm #promptengineering #context #cursor

Sampo (@___sampo___) on X

At Cursor Cafe Berlin ☕️⚡️ Big takeaway: the shorter the context, the better the output. New chat > cleaner brief > better results. And “one-shot prompting” is basically a trap. Vibe code fast, then audit like your job depends on it. @benln @cursor_ai @slobkebap @naoufal_elh

X (formerly Twitter)