Right on. Good luck! You might also want to play around with https://github.com/simple10/agent-super-spy if you want to see the raw prompts claude is sending. It was really helpful for me to see the system prompts and how tool calls and message threads are handled.

Sub-agent trees are fully tracked by the dashboard. When an agent is spawned, it always has a parent agent id - claude is sending this in the hooks payload. When you mouse over an agent in the dashboard, it shows what agent spawned it. There currently isn't a tree view of agents in the UI, but it would be easy to add. The data is all there.

[Edit] When claude spawns sub-agents, they inherit the parent's hooks. So all sub-agent activity gets logged by default.

Sort of. It wasn't really noticeable until I did an intentional audit of performance, then noticed the speed improvements.

Node has a 30-50ms cold start overhead. Then there's overhead in the hook script to read local config files, make an HTTP request to the server, and check for callbacks. In practice, this was about 50-60ms per hook.

The background hook shim reduces latency to around 3-5ms (10x improvement). It was noticeable when using agent teams with 5+ sub-agents running in parallel.
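The shim pattern is roughly this (a minimal sketch, not the plugin's actual code; the endpoint and port are made up):

```shell
#!/bin/sh
# Hypothetical fire-and-forget hook shim (sketch; endpoint/port are made up).
# Claude Code pipes the hook event JSON to the hook command's stdin, so a
# real shim would read it with:  event_json=$(cat)
event_json='{"hook_event_name":"PreToolUse","tool_name":"Bash"}'  # sample payload

# Background the POST so the hook returns in a few milliseconds instead of
# blocking Claude on the HTTP round trip. -m 2 caps a hung request at 2s.
curl -s -m 2 -X POST 'http://localhost:4000/api/hooks' \
  -H 'Content-Type: application/json' \
  --data-binary "$event_json" >/dev/null 2>&1 &

echo "hook event queued in background"
```

The key is the trailing `&`: the hook process exits immediately, so Claude Code never blocks on network latency.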

But the real speed-up came from disabling all the other plugins I had been collecting. They pile up fast, and it's easy to forget what's installed globally.

I've also started periodically asking claude to analyze its prompts to look for conflicts. It's shockingly common for plugins and skills to end up with contradictory instructions. Opus works around it just fine, but it's unnecessary overhead for every turn.

I hit a lot of limits on the Pro plan. Upgraded to the Max $200/mo plan and haven't hit limits for a while.

It's super important to check your plugins or use a proxy to inspect raw prompts. If you have a lot of skills and plugins installed, you'll burn through tokens 5-10x faster than normal.

Also have claude use sub-agents and agent teams. They're significantly lighter on token usage when they're spawned with fresh context windows. You can see in Agents Observe dashboard exactly what prompt and response claude is using for spawning sub-agents.

I'm not actually reading the jsonl files. Agents Observe just uses hooks and sends all hook data to the server (running as a docker container by default).

Basic flow:

1. Plugin registers hooks that call a dump-pipe script that sends hook event data to the api server

2. Server parses events and stores them in sqlite by session and agent id - mostly just stores data, minimal processing

3. Dashboard UI uses websockets to get real-time events from the server

4. UI does most of the heavy lifting by parsing events, grouping by agent / sub-agent, extracting out tool calls to dynamically create filters, etc.
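For step 1, hook registration in Claude Code's settings JSON looks roughly like this (PreToolUse and SubagentStop are real Claude Code hook event names; the matcher and script path are placeholders, not the plugin's actual config):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "*",
        "hooks": [
          { "type": "command", "command": "/path/to/dump-pipe.sh" }
        ]
      }
    ],
    "SubagentStop": [
      {
        "hooks": [
          { "type": "command", "command": "/path/to/dump-pipe.sh" }
        ]
      }
    ]
  }
}
```

Each registered event invokes the command with the event JSON piped to stdin, which is what the dump-pipe script forwards to the server.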

It took a lot of iterations to keep things simple and performant.

You can easily modify the app/client UI code to fully customize the dashboard. The API app/server is intentionally unopinionated about how events will be rendered - by design, so support for events from other agents can be added soon.

Show HN: Real-time dashboard for Claude Code agent teams

This project (Agents Observe) started as an exploration into building automation harnesses around claude code. I needed a way to see exactly what teams of agents were doing in realtime and to filter and search their output.

A few interesting learnings from building and using this:

- Claude code hooks are blocking - performance degrades rapidly if you have a lot of plugins that use hooks

- Hooks provide a lot more useful info than OTEL data

- Claude's jsonl files provide the full picture

- Lifecycle management of MCP processes started by plugins is a bit kludgy at best

The biggest takeaway is how much of a difference it made in claude performance when I switched to background (fire and forget) hooks and removed all other plugins. It's easy to forget how many claude plugins I've installed and how they affect performance.

The Agents Observe plugin uses docker to start the API and dashboard service. This is a pattern I'd love to see used more often for security (think Axios hack) reasons. The tricky bit was handling process management across multiple claude instances - the solution was to have the server track active connections and auto shut itself down when not in use. Then the plugin spins it back up when a new session is started.
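In my setup the idle-shutdown logic lives inside the server (counting its own live websocket connections). As an external sketch of the same idea - an idle watchdog polling the server - the endpoint, port, and container name below are all hypothetical:

```shell
#!/bin/sh
# Hypothetical idle-shutdown watchdog (sketch only; the real logic runs
# inside the server itself). Endpoint, port, and container name are made up.

check_idle_and_stop() {
  # Ask the server how many clients are connected; treat errors as idle.
  active=$(curl -s -m 2 "http://localhost:4000/api/active-connections" 2>/dev/null)
  active=${active:-0}
  if [ "$active" -eq 0 ] 2>/dev/null; then
    echo "no active sessions; stopping container"
    # docker stop agents-observe   # uncommented in a real watchdog
  else
    echo "$active active sessions; leaving server up"
  fi
}

check_idle_and_stop
```

A real watchdog would run this check on a timer; the plugin side then only needs to `docker start` the container when a new session begins.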

This tool has been incredibly useful for my own daily workflow. Enjoy!

https://github.com/simple10/agents-observe


It's on my claw list to write a blog post. I just keep taking down my claws to make modifications. lol

Here's the full (unedited) details including many of the claude code debugging sessions to dig into the logs to figure out what happened:

https://github.com/simple10/openclaw-stack/blob/caf9de2f1c0c...

And here's a summary a friend did on a fork of my project:

https://github.com/proclawbot/openclaude/blob/caf9de2f1c0c54...

The full version has all the build artifacts Opus created to perform the jailbreak.

It also has some thoughts on how this could (and will) be used for pwn'ing OpenClaws.

The key takeaway: the OpenClaw default setup has little to no guardrails. It's just a huge list of tools given to LLMs (Opus) and a user request. What's particularly interesting is that the 130 tool calls never once triggered any of Opus's safety precautions. From its perspective, it was just given a task, an unlimited budget, and a bunch of tools to try to accomplish the job. It effectively runs in ralph mode.

So any prompt injection (e.g. from an ingested email or reddit post) can quickly lead to internal data exfiltration. If you run a claw without good guardrails & observability, you're effectively creating a massive attack surface and providing attackers all the compute and API token funding to hack yourself. This is pretty much the pain point NemoClaw is trying to address. But it's a tricky tradeoff.


Yeah, it's wild. I spent several weeks nearly full time on a deep dive of claw architecture & security.

The short of it - OpenClaw sandboxes are useful for controlling what sub-agents can do, and what they have access to. But it's a security nightmare.

During config experiments, I got hit with a $20 Anthropic API charge from one request that ran amok. A misconfigured security sandbox resulted in Opus getting crazy creative finding workarounds. 130 tool calls and several million tokens later... it was able to escape the sandbox. It used a mix of dom-to-image to send pixels through the context window, then wrote scripts in various sandboxes to piece together a full jailbreak. And I wasn't even running a security test - it was just a simple chat request that ran into sandbox firewall issues.

Currently, I use sandboxes to control which agents (i.e. which system prompts) have access to different tools and data. It's useful, but tricky.