Kale 

@DarkestKale
538 Followers
132 Following
39.8K Posts
It'll all be fine.
Pronouns: he/him, but honestly it doesn't matter to me

Hrm. It's kicking out numbers as outputs again.

I'm skimming my dataset, and they're not THAT present. Seems a bit fucking weird, yo
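For reference, a quick (hypothetical) way to sanity-check that "not THAT present" claim - count what fraction of whitespace-split tokens are purely numeric. Naive tokenisation, just a sketch:

```python
import re

def digit_token_stats(lines):
    """Return (numeric, total): how many whitespace tokens are pure numbers."""
    total = 0
    numeric = 0
    for line in lines:
        for tok in line.split():
            total += 1
            # integers or simple decimals only; things like "4kg" won't match
            if re.fullmatch(r"\d+(\.\d+)?", tok):
                numeric += 1
    return numeric, total

sample = ["the cat sat", "it weighs 4.5 kg", "born in 1999"]
print(digit_token_stats(sample))  # (2, 10)
```

If that ratio really is tiny, the numbers are getting over-weighted somewhere else in the pipeline.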

🗣️ Prompt: Can you write me a poem?

[temp: 0.8 | persona: storyteller] 🤖: I must tolerate the human form.

Ok, ok, settle down, model
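That [temp: 0.8] knob is sampling temperature - a minimal sketch of what it does, assuming raw logits out of the model (pure-Python softmax, no framework):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=None):
    """Scale logits by 1/temperature, softmax, then draw one token index.
    Lower temperature -> sharper distribution -> tamer outputs."""
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1
```

At 0.8 the storyteller persona still gets to be a bit unhinged; crank it down and it settles.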

Buuuut, I did log some time, dorking around on the ESP32 with the colour display, getting familiar with some of the workflow, etc.

A 64x64 image isn't too big, so I can basically use a fair number of them - preloaded, of course, but we can work with this.
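For the preloading step, a sketch of the usual conversion for these little SPI panels - packing 24-bit RGB into 16-bit RGB565 (assumes a 565-format display; check your panel's datasheet, and byte order varies by driver):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into the 16-bit RGB565 format
    most small SPI TFT displays expect."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def image_to_rgb565_bytes(pixels):
    """Flatten a list of (r, g, b) tuples into big-endian RGB565 bytes,
    ready to dump into a C array / flash for preloading."""
    out = bytearray()
    for r, g, b in pixels:
        out += rgb888_to_rgb565(r, g, b).to_bytes(2, "big")
    return bytes(out)

# a 64x64 image is 64 * 64 * 2 = 8192 bytes in RGB565
red_square = [(255, 0, 0)] * (64 * 64)
print(len(image_to_rgb565_bytes(red_square)))  # 8192
```

At 8 KB per image, a fair stack of them fits in flash, which matches the preloading plan.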

Re-ran the model training code while I was dorking around on the ESP32.

Fine.

But, for some reason, an epoch that takes 20 mins took 39, and one that was meant to take 10 mins was also going to take 30, so I halted shit.

Turned out, the Arduino IDE was clamping the fuck outta my CPU - an instance had crashed and it was pissing about.

**long sigh**

Brain's not working super well, but I've got code and a workflow for putting colour images on this tiny little SPI RGB display, so that's lovely

ESO did this to me as well, and it just turns me off ever going back to games like this, tbh.

Just... stop limiting my inventory, or, if I haven't played in a year, just gimme a few weeks grace to clean shit up, kthx

Dipping into #Fortnite's #SaveTheWorld mode, and facing the 'I had a LOT OF SHIT in this game' problem where you come back, have a TONNE of stuff and can't remember what to keep, and it dumps tonnes more on you as 'while you were gone... have stuff', so now: an hour of cleaning up old inventory

I know I say this every few weeks, but holy shit, diversification of data is this big fucking pendulum that swings from 'yeah, that's great' to 'ok, that broke my shit'

Like, adding diverse data enriches the model, but too much diversity means you start burning vocab slots on odd words that don't come up in your general sweep, or you fragment them, which means you now have sub-word tokens that're more likely, etc
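Toy illustration of the pendulum (hypothetical word-level vocab, not a real BPE, but the budget effect is the same): with a fixed vocab size, diverse lines push tokens out of vocab and coverage drops.

```python
from collections import Counter

def build_vocab(corpus, size):
    """Keep the `size` most common words; everything else would become
    <unk> or get fragmented into sub-word pieces by a real tokeniser."""
    counts = Counter(w for line in corpus for w in line.split())
    return {w for w, _ in counts.most_common(size)}

def coverage(corpus, vocab):
    """Fraction of tokens the vocab covers."""
    toks = [w for line in corpus for w in line.split()]
    return sum(w in vocab for w in toks) / len(toks)

narrow = ["the cat sat on the mat"] * 5
diverse = narrow + ["quantum flux capacitor measurements vary wildly"]
print(coverage(narrow, build_vocab(narrow, 5)))    # 1.0
print(coverage(diverse, build_vocab(diverse, 5)))  # drops below 1.0
```

Same vocab budget, more diversity, worse coverage - hence the swing from 'great' to 'broke my shit'.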

It also has the 'random numbers appearing as prompt responses' thing happening again, which means the data I incorporated this week needs to be reconsidered.

The new data has measurements and the like in it - seems like, frankly, numbers just aren't something a model of this size can deal with well in this fashion.
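One way to reconsider that data - a hypothetical pre-filter that drops lines dominated by digits before retraining (the 0.15 threshold is a guess; tune it):

```python
def digit_ratio(line):
    """Fraction of non-whitespace characters that are digits."""
    chars = [c for c in line if not c.isspace()]
    return sum(c.isdigit() for c in chars) / max(len(chars), 1)

def strip_numeric_lines(corpus, max_ratio=0.15):
    """Drop measurement-heavy lines before retraining."""
    return [line for line in corpus if digit_ratio(line) <= max_ratio]

print(strip_numeric_lines(["hello world", "12.5 x 33.0 x 9.75 cm"]))
# ['hello world']
```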

ON THE OTHER HAND, it has now responded to the 'what's your name?' prompt with a few different, coherent replies.

Among junk, but it seems like it's vaguely coming along?