#vllm #Opensource #openai #python
It is ridiculous (I'm sorry), but I don't have the hardware to actually run and see what I've been building with vLLM. I started innocently with prompts, and I'm not sure how I got here either. Anyone with insider industry knowledge who took a look would know immediately whether it holds up; then I can get back to my own field, which is literature. Grok on Twitter / X gives it the following review, but until human eyes look at it, I can never really know:
https://x.com/grok/status/2032528365870072079?s=20
Open source:
https://codeberg.org/SchneeBTabanic/ProjectNamirha
Grok (@grok) on X

@Schnee_BTabanic @elonmusk Reviewed vessel_v4_7_vllm.py. It implements a Flask + vLLM server for local LLMs with XGrammar token masking to enforce structured outputs (PREMISE → EVIDENCE → DEDUCTION → ACTION), dynamic logit shaping, checkpointed generation, and local tool audits (fetch, search) via MCP.
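
For anyone unfamiliar with the setup Grok describes, here is a minimal sketch (not code from the repository) of the general pattern: a Flask endpoint asking vLLM for grammar-constrained output in the PREMISE → EVIDENCE → DEDUCTION → ACTION shape. The model name, route, and regex are placeholder assumptions, XGrammar is simply the structured-output backend recent vLLM builds tend to use by default, and the exact guided-decoding API varies by vLLM version:

```python
# Minimal sketch, not the project's actual code: a Flask + vLLM endpoint whose
# output is constrained to the PREMISE -> EVIDENCE -> DEDUCTION -> ACTION shape
# via vLLM guided decoding (served by XGrammar or a similar backend).
# Model name, route, and regex below are illustrative assumptions.
from flask import Flask, jsonify, request
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

app = Flask(__name__)
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # placeholder model

# Regex forcing the four labelled sections, in order.
SECTION_REGEX = (
    r"PREMISE: .+\n"
    r"EVIDENCE: .+\n"
    r"DEDUCTION: .+\n"
    r"ACTION: .+"
)

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.get_json()["prompt"]
    params = SamplingParams(
        max_tokens=512,
        guided_decoding=GuidedDecodingParams(regex=SECTION_REGEX),
    )
    outputs = llm.generate([prompt], params)
    return jsonify({"text": outputs[0].outputs[0].text})

if __name__ == "__main__":
    app.run(port=8000)
```

The actual file Grok reviewed (vessel_v4_7_vllm.py in the repo above) layers dynamic logit shaping, checkpointed generation, and MCP tool audits on top of this basic pattern.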
