This account is a replica from Hacker News. Its author can't see your replies.
I'm wondering whether the obvious (and stated) fact that the site was vibe-coded detracts from the fact that the tool itself was hand-written.

> jai itself was hand implemented by a Stanford computer science professor with decades of C++ and Unix/linux experience. (https://jai.scs.stanford.edu/faq.html#was-jai-written-by-an-...)

(jai: super-lightweight Linux sandbox for AI agents)

It seems like it's controlled by the Bash tool (https://code.claude.com/docs/en/sandboxing), backed at the system level by bubblewrap (https://github.com/containers/bubblewrap) on Linux and Seatbelt on macOS.
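To make the bubblewrap part concrete, here is a minimal sketch of the kind of command line such a wrapper might construct. This is an assumption for illustration, not Claude Code's actual implementation: a read-only root filesystem, a single writable project directory, and no network access. The `workdir` path and the `make test` payload are hypothetical.

```python
import shlex

def bwrap_argv(workdir: str) -> list[str]:
    """Build a hypothetical bubblewrap invocation for sandboxing an agent command."""
    return [
        "bwrap",
        "--ro-bind", "/", "/",        # mount the whole filesystem read-only
        "--bind", workdir, workdir,   # make only the project directory writable
        "--unshare-net",              # drop network access entirely
        "--die-with-parent",          # tear down the sandbox if the parent exits
        "--", "bash", "-lc", "make test",
    ]

print(shlex.join(bwrap_argv("/home/agent/project")))
```

These flags (`--ro-bind`, `--bind`, `--unshare-net`, `--die-with-parent`) are real bubblewrap options; a production sandbox would likely add PID/IPC namespace unsharing and a tighter bind-mount list.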
We just say words now that sound good for marketing but have no real meaning.

This seemed inevitable, but how does this not become a Moltbook situation, or, worse, get gamed to engineer back doors into the "accepted answers"?

Don't get me wrong, I think it's a great idea, but it feels like a REALLY difficult safety-engineering problem with no apparent answers, since LLMs are inherently unpredictable. I'm sure fellow HN commenters are going to say the same thing.

I'll likely still use it of course ... :-\