Over the weekend I built a little gizmo that fetches raw weather forecast data from Open-Meteo and runs it through a local Llama instance to generate a plain-text forecast. It's been a neat experiment, and far, far easier to put together than I expected.
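The whole pipeline is roughly this shape — a hypothetical sketch, assuming Ollama is serving the model locally; the coordinates, model name, weather variables, and prompt wording are all my illustrative choices, not anything specific to the gizmo:

```python
# Sketch: pull raw forecast numbers from the Open-Meteo API, then ask a
# locally hosted Llama (here via Ollama's HTTP API) to turn them into prose.
# Location, model name, and prompt are illustrative assumptions.
import json
import urllib.request

OPEN_METEO = ("https://api.open-meteo.com/v1/forecast"
              "?latitude=52.52&longitude=13.41"
              "&hourly=temperature_2m,precipitation")
OLLAMA = "http://localhost:11434/api/generate"


def build_prompt(raw: dict) -> str:
    """Flatten Open-Meteo's hourly arrays into one prompt string."""
    hourly = raw["hourly"]
    lines = [
        f"{t}: {c}°C, {p}mm rain"
        for t, c, p in zip(hourly["time"],
                           hourly["temperature_2m"],
                           hourly["precipitation"])
    ]
    return ("Write a short, friendly weather forecast from this data:\n"
            + "\n".join(lines))


def text_forecast() -> str:
    # Fetch the raw JSON forecast, then hand it to the local model.
    raw = json.load(urllib.request.urlopen(OPEN_METEO))
    req = urllib.request.Request(
        OLLAMA,
        data=json.dumps({
            "model": "llama3.2",          # whichever model Ollama has pulled
            "prompt": build_prompt(raw),
            "stream": False,              # one complete response, not chunks
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(req))["response"]
```

That's genuinely most of it, which is why it came together so fast.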

I will mangle you if you come here to push generative ai "art" slop tho

It's been quite interesting seeing vastly different outputs from Llama and Phi, and even between runs of the same model.

I have some confidence in special-purpose applications of locally hosted LLMs, but many of them feel like utter overkill. Might be good for TLDRs of overly wordy local newspaper articles though 😆