@S0AndS0

45 Followers
66 Following
804 Posts

I've been doing questionable things with technology since the days modems squealed.

FOSS projects I maintain may be found on GitHub;
https://github.com/S0AndS0/

Available for contract/freelance work via Matrix;
https://matrix.to/#/@s0ands0:matrix.org

Tech blog;
https://s0ands0.github.io/

Finally finished the 2Bit invitation story arc in Updaam during last weekend's stream, which be uploading now

And once updates finish updating I should be soon™ playing a bit more Deathloop today

At The Complex, in the evening, obtained the Ghost upgrade that prolongs cloaking so long as one does not move, then returned in the morning to divert power to two locations during last weekend's play of Deathloop

Which now be uploading!

... And in a bit may stream more

Happened across someone suffering from the AI Delusion Syndrome

... they was saying AI somehow lowers difficulty of injecting payloads

... the difficulty be already so low as to be unworthy of measure :-|

Was talking with a recruiter, as one does on a Monday, and they was grasping
... sorta struggling...
with wat sort of box to put me in

Didn't have the heart to tell 'em that, with the sort of things I've done, the odds of such box growing legs and a personality disorder by the next time we chat is non-zero

Got the password to RAK after slaying all but one enemy without being spotted in last weekend's Deathloop stream, which is uploading now

And in a few moments I may be streaming some more today

No lie, it's becoming one of my favorite demos of LLMs not being good

`qwen3-coder:30b` got closest, so close it almost didn't taint performance, but close ain't gonna cut it

Plus it still took 5 proompts to get this far, far slower than doing it the old-fashioned way with hands

Update; `starcoder2:instruct` does not play nice with my hardware/Ollama version(s)

Also found an incantation which does use the GPU, for working models, successfully!

Also also, found and submitted _opportunity_ to enhance documentation;

https://github.com/ollama/ollama/pull/14696

docs: Add GTX 960M to supported Nvidia GPU table by S0AndS0 · Pull Request #14696 · ollama/ollama

As well as a note on how to obtain the compute version locally via CLI, because the table saying the 980 should use 5.2 vs the CLI asserting 5.0 consumed a few hours of me chasing my own tail x-}


Gonna maybe try/regret local LLMs again, now that I'm using NixOS BTW™

Anyone have some good, and tested, recommendations for coding models that be 30b or less in params?

Woke today, Spidey Senses tingling, when I saw this code in a certain online place;

```python
for key, values in dict.items():
for value in values:
if values.count(value) > 1:
```

... am I being too sensitive?
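For what it's worth, a minimal sketch of what my Spidey Senses wanted instead, assuming the goal be collecting duplicated values per key (the data and names here are hypothetical, and note the original shadowed the `dict` builtin);

```python
from collections import Counter

# Hypothetical example data; the original snippet named its mapping `dict`,
# which shadows the builtin
data = {
    "alpha": ["a", "b", "a", "c"],
    "beta": ["x", "y"],
}

# Count each value once per key via Counter, instead of calling
# list.count() inside a loop over the same list -- which is O(n**2)
# and flags each duplicate once per occurrence
duplicates = {
    key: [value for value, count in Counter(values).items() if count > 1]
    for key, values in data.items()
}

print(duplicates)  # {'alpha': ['a'], 'beta': []}
```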