Methodical Function

@methodicalfunction
1 Follower
23 Following
3 Posts

Let’s connect 🤝
I post about practical software engineering, web and app development, AI agents, and AI automation, and I learn new skills in public.
What are you building these days?

#SoftwareEngineering #WebDevelopment #AppDevelopment #Programming #AIAgents #AIAutomation #DeveloperTools

Part 2 of my Local AI Lab For Developers series is live: “Tokens Are the Unit of Pain: Tokenization You Can See.”

Tokenization is where context limits, latency, and cost become real constraints. This post is about making tokenization observable so prompt work stops being guesswork.
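
Not from the post itself, just a minimal sketch of the "see your tokens" idea, assuming tiktoken's cl100k_base encoding as a stand-in (a local model's own tokenizer will split text differently):

```python
# pip install tiktoken -- stand-in tokenizer; local models use their own vocab
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def show_tokens(text: str) -> None:
    """Print each token id with the exact text slice it covers,
    so token boundaries stop being invisible."""
    ids = enc.encode(text)
    for tid in ids:
        piece = enc.decode([tid])
        print(f"{tid:>6}  {piece!r}")
    print(f"total: {len(ids)} tokens for {len(text)} chars")

show_tokens("Tokenization is where context limits become real.")
```

Once you can see where a prompt's tokens actually fall, comparing two phrasings becomes a diff of counts instead of a guess.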

https://methodicalfunction.com/log/2026/01/27/tokens-are-the-unit-of-pain-tokenization-you-can-see/?utm_source=mastodon&utm_medium=social&utm_campaign=local-ai-lab-for-developers&utm_content=part2_tokens_2026-01-27

#LocalAI #LLM #Tokenization #DevTools

Tokenization You Can See: Tokens, Heatmaps, Prompt Budgets

CLI token heatmap + live tokenizer playground for local Ollama. Compare prompts, cut token waste, and enforce budgets + chunking rules.
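
The heatmap tool lives in the post; as a hedged sketch of just the budget + chunking idea, again assuming tiktoken as the counting tokenizer and an arbitrary 512-token budget:

```python
# pip install tiktoken -- assumed stand-in for the model's real tokenizer
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
BUDGET = 512  # hypothetical per-prompt token budget

def enforce_budget(text: str) -> list[str]:
    """Return the text unchanged if it fits the budget; otherwise split it
    on token boundaries into budget-sized chunks."""
    ids = enc.encode(text)
    if len(ids) <= BUDGET:
        return [text]
    # naive chunking rule: cut every BUDGET tokens, then decode back to text
    return [enc.decode(ids[i : i + BUDGET]) for i in range(0, len(ids), BUDGET)]
```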


AI on Your Computer: Run a Local LLM Like a Service

It’s a hands-on walkthrough (a minimal Python sketch of the streaming pattern follows the list):
• Ollama as a localhost HTTP service
• Streaming responses as NDJSON (line-delimited JSON)
• TTFT (time to first token) + a simple throughput proxy
• Streaming clients in Node, Python, Go, and C++
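
A minimal sketch of that loop, assuming Ollama's default port 11434 and a hypothetical local model name ("llama3.2"); each NDJSON line is one JSON object, and TTFT is clocked at the first one:

```python
# pip install requests
import json
import time

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def stream_with_ttft(model: str, prompt: str) -> None:
    """Stream an Ollama /api/generate response and report TTFT
    plus a simple throughput proxy."""
    start = time.perf_counter()
    ttft = None
    chunks = 0
    with requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
        timeout=120,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():  # NDJSON: one JSON object per line
            if not line:
                continue
            obj = json.loads(line)
            if ttft is None:
                ttft = time.perf_counter() - start  # time to first token
            chunks += 1
            print(obj.get("response", ""), end="", flush=True)
            if obj.get("done"):
                break
    total = time.perf_counter() - start
    # chunks/s is only a proxy: Ollama usually emits about one token per chunk
    print(f"\nTTFT: {ttft:.3f}s  chunks/s: {chunks / total:.1f}")

stream_with_ttft("llama3.2", "Explain NDJSON in one sentence.")
```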

https://methodicalfunction.com/log/2026/01/20/ai-on-your-computer-run-a-local-llm-like-a-service/?utm_source=mastodon&utm_medium=social&utm_campaign=ai-on-your-computer-local-llm-service&utm_content=post_2026-01-20

#Ollama #LocalLLM #Performance

Run a Local LLM Like a Service: Streaming + TTFT Metrics

Run an Ollama model locally on macOS, Linux, or Windows, stream output over HTTP, and measure TTFT and throughput with Node, Python, Go, and C++.
