Show HN: A deterministic middleware to compress LLM prompts by 50-80%

Hi HN,

I’m working on Skillware, an open-source framework that treats AI capabilities as installable, self-contained modules.

I just added a "Prompt Token Rewriter" skill. It's an offline, heuristic middleware that strips conversational filler and redundant context from long agentic loops before the prompts hit the LLM. It saves significant token costs and inference time, and it's 100% deterministic (no extra model calls).
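To make the idea concrete, here is a minimal sketch of what a deterministic, offline rewriter can look like: regex-based filler removal plus whitespace collapsing, with no model calls. The function names and filler patterns below are illustrative assumptions, not the actual Skillware API.

```python
import re

# Hypothetical filler patterns; a real skill would ship a much larger,
# curated list. All matching is pure regex, so output is deterministic.
FILLER_PATTERNS = [
    r"\b(?:please|kindly)\b",
    r"\b(?:could you possibly|would you mind|I was wondering if you could)\b",
    r"\b(?:as previously mentioned|as an AI language model)[,.]?",
]

def compress_prompt(prompt: str) -> str:
    """Strip conversational filler and collapse whitespace, deterministically."""
    out = prompt
    for pat in FILLER_PATTERNS:
        out = re.sub(pat, "", out, flags=re.IGNORECASE)
    # Collapse runs of spaces and excess blank lines left by the deletions.
    out = re.sub(r"[ \t]+", " ", out)
    out = re.sub(r"\n{3,}", "\n\n", out)
    return out.strip()

def savings(before: str, after: str) -> float:
    """Rough savings estimate using whitespace tokens as a proxy for LLM tokens."""
    b, a = len(before.split()), len(after.split())
    return 1 - a / b if b else 0.0
```

Because the transform is a fixed pipeline of regex substitutions, the same input always yields the same output, which makes the savings auditable and cacheable.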

We're building a registry of "Agentic Know-How" (Logic + Cognition + Governance). If you have a specialized tool for LLMs or want to see what a "standard" skill looks like, I'd love your feedback or a PR:

https://github.com/ARPAHLS/skillware
