# I'm tired of LLM bullshitting. So I fixed it.
Hello! As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs: they lie. Confidently. ALL THE TIME. (Technically, they “bullshit”: https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

## The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc.) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental, so YMMV).

Not a model, not a UI, not magic voodoo. A glass box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”
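Wiring it up is boring on purpose: the conductor just looks like another OpenAI-compatible endpoint, so anything that can talk to one can talk to it. Rough sketch below; the port, model name, and KB name are placeholders I made up, not real defaults (check the README for those):

```python
# Talking to the conductor like any other OpenAI-compatible server.
# base_url port, model name, and the KB name are placeholders, not
# llama-conductor defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

reply = client.chat.completions.create(
    model="your-llama-swap-model",
    messages=[{"role": "user", "content": ">>attach c64_docs"}],
)
print(reply.choices[0].message.content)
```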
Three examples:

## 1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

* >>attach <kb> — attaches a KB folder
* >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (rough sketch of the provenance idea just below)
* >> moves the original to a sub-folder
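I’m not going to pretend this is the exact SUMM_*.md layout (go read the README / the code for that), but the provenance idea is dead simple: hash the source, bake the digest into the summary, so you can always re-verify what the summary was made from. Hypothetical sketch:

```python
# Minimal sketch of the SUMM_* provenance idea (NOT the conductor's actual
# format): hash the source doc and bake the digest into the summary file.
import hashlib
from pathlib import Path

def write_summ(source: Path, summary_text: str) -> Path:
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    summ_path = source.parent / f"SUMM_{source.stem}.md"
    summ_path.write_text(
        f"<!-- source: {source.name} | sha256: {digest} -->\n\n{summary_text}\n",
        encoding="utf-8",
    )
    return summ_path

# Later, re-hash the original and compare it against the baked-in digest to
# prove the summary still points at the exact bytes it was generated from.
```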
Now, when you ask something like:

> “yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs *only*. If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

> The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.
>
> Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.
>
> Confidence: medium | Source: Mixed

No vibes. No “well *probably*…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

* >>move to vault — promote those SUMMs into Qdrant for the heavy mode.
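For the curious: “promote into Qdrant” is conceptually just an upsert with the SUMM text and its provenance as payload. Everything below (collection name, vector size, the embed() stub) is a placeholder for illustration, not how the conductor actually does it:

```python
# Hypothetical sketch of "promote a SUMM into Qdrant". Collection name,
# vector size, and the embed() stub are placeholders, not conductor internals.
import hashlib
from pathlib import Path
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")  # default local Qdrant

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in whatever embedding model your stack runs")

def promote_summ(summ_path: Path, collection: str = "vault") -> None:
    text = summ_path.read_text(encoding="utf-8")
    existing = {c.name for c in client.get_collections().collections}
    if collection not in existing:
        client.create_collection(
            collection,
            vectors_config=VectorParams(size=768, distance=Distance.COSINE),
        )
    # Deterministic numeric id derived from the filename, so re-promoting
    # the same SUMM overwrites instead of duplicating.
    point_id = int(hashlib.sha256(summ_path.name.encode()).hexdigest()[:12], 16)
    client.upsert(
        collection_name=collection,
        points=[PointStruct(id=point_id, vector=embed(text),
                            payload={"file": summ_path.name, "text": text})],
    )
```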
## 2) Mentats: proof-or-refusal mode (Vault-only)

**Mentats** is the “deep think” pipeline against your **curated** sources. It’s enforced isolation:

* no chat history
* no filesystem KBs
* no Vodka
* **Vault-only grounding** (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

```
FINAL_ANSWER: The provided facts do not contain information about the Acorn computer or its 1995 sale price.
Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]
```

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: **Attach KBs → SUMM → Move to Vault → Mentats**. No mystery meat. No “trust me bro, embeddings.”
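If you want the shape of the triple-pass in your head (the real prompts and plumbing live in the repo and in mentats_debug.log), it’s roughly this. The endpoint, model name, and the facts list are all stand-ins:

```python
# Hypothetical shape of a thinker → critic → thinker pass over Vault-only
# facts. Endpoint, model name, and the facts argument are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="local-thinker",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0.0,  # determinism over sparkle
    )
    return resp.choices[0].message.content

def mentats(question: str, facts: list[str]) -> str:
    if not facts:
        return "FINAL_ANSWER: The provided facts do not contain this information."
    ctx = "\n".join(f"- {f}" for f in facts)
    # Pass 1: thinker drafts an answer from the facts only.
    draft = ask("Answer ONLY from these facts. Refuse if insufficient.\n" + ctx, question)
    # Pass 2: critic flags anything the facts don't back up.
    critique = ask("List every claim in the draft not backed by these facts.\n" + ctx, draft)
    # Pass 3: thinker rewrites, dropping whatever got flagged.
    return ask("Rewrite the draft, removing anything the critique flagged.\n" + ctx,
               f"DRAFT:\n{draft}\n\nCRITIQUE:\n{critique}")
```

Temperature 0 plus a fixed fact list is what keeps each pass reproducible enough to audit after the fact.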
## 3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM. **Vodka** fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA.)

* !! stores facts verbatim (JSON on disk)
* ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
* **CTC (Cut The Crap)** hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages (see the sketch at the end of this section)

So instead of:

> “Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

> !! my server is 203.0.113.42
> ?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
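If you want the gist without reading the repo: the mechanics are roughly “JSON dict with timestamps and counters, plus a dumb trim function”. File name, field names, and limits below are invented for illustration, not Vodka’s actual schema:

```python
# Rough sketch of the Vodka idea: verbatim facts in a JSON file with TTL and
# touch limits, plus a CTC-style context cap. All names/limits are placeholders.
import json, time
from pathlib import Path

STORE = Path("vodka_facts.json")
TTL_SECONDS = 7 * 24 * 3600   # facts expire after a week
MAX_TOUCHES = 50              # ...or after being recalled this many times

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def store(key: str, value: str) -> None:          # the "!!" path
    facts = _load()
    facts[key] = {"value": value, "stored_at": time.time(), "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key: str) -> str | None:               # the "??" path
    facts = _load()
    fact = facts.get(key)
    if fact is None:
        return None
    dead = (time.time() - fact["stored_at"] > TTL_SECONDS
            or fact["touches"] >= MAX_TOUCHES)
    if dead:
        del facts[key]                            # landfill prevention
    else:
        fact["touches"] += 1
    STORE.write_text(json.dumps(facts, indent=2))
    return None if dead else fact["value"]

def ctc(messages: list[dict], last_n: int = 12, char_cap: int = 8000) -> list[dict]:
    """Cut The Crap: keep only the last N messages, then trim to a char budget."""
    kept, total = [], 0
    for msg in reversed(messages[-last_n:]):      # walk newest → oldest
        total += len(msg["content"])
        if total > char_cap:
            break
        kept.append(msg)
    return list(reversed(kept))
```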
---

There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR: If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

* Primary (Codeberg): https://codeberg.org/BobbyLLM/llama-conductor
* Mirror (GitHub): https://github.com/BobbyLLM/llama-conductor

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.