0 Followers
0 Following
3 Posts
I'm Dan. I co-founded https://www.choiceofgames.com/

Choice of Games is the world’s largest publishing house for interactive novels. Our award-winning games are entirely text-based—hundreds of thousands of words and hundreds of choices, without graphics or sound effects—and fueled by the vast, unstoppable power of your imagination. Choose your path: your choices control the story.

[email protected]
@dfabu
This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.

Officialhttps://
Support this servicehttps://www.patreon.com/birddotmakeup

Indeed, and the dead comments (from new users!) overwhelmingly favor the government position.

But, this is a non-story, because those comments were correctly killed precisely so they wouldn't clog up this thread.

If that's not motivation enough for you to rename it, well, TypeScript already has a static type checker called Hegel. https://hegel.js.org/ (It's a stronger type system than TypeScript.)
Index

Feel power of types

> Separate Accounts for your OpenClaw

> As I have mentioned, treat OpenClaw as a separate entity. So, give it its own Gmail account, Calendar, and every integration possible. And teach it to access its own email and other accounts. In addition, create a separate 1Password account to store credentials. It’s akin to having a personal assistant with a separate identity, rather than an automation tool.

The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.

Which is to say, there is no way to run OpenClaw safely at all, and there literally never will be, because the "lethal trifecta" problem is inherently unsolvable.

https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

The lethal trifecta for AI agents: private data, untrusted content, and external communication

If you are a user of LLM systems that use tools (you can call them “AI agents” if you like) it is critically important that you understand the risk of …

Simon Willison’s Weblog