299 Followers
359 Following
7K Posts

They/them online pseudonym. Anti-capitalist, anti-religious, anti-spiritual, anti-AI, anti-fascist. I was apparently raised in one of the more relaxed branches of a cult.

I don't have a fleshed-out political theory yet. I'm not sure I can attain to one, but I belong somewhere among the anarchists and socialists.

I make music, pottery, food, repairs, software, and hardware. I boost much more than I post, but I don't boost images without alt-text.

Fascists fuck off.

#nobridge #nobot

Digital Garden: http://lyk.so
Bandcamp: https://mumbleandsigh.bandcamp.com
Mirlo: https://mirlo.space/mumbleandsigh
Codeberg: https://codeberg.org/lykso

If you are thinking about running your blogpost through an AI editor, don't! It almost always makes it more boring.

Whatever you have to say is what you had to say anyway. Just say that, you don't need more. And the mistakes are perfectly fine.

I spell check once, proofread once, then publish. When people point out errors, it makes me feel good, because it means people are reading what I write, and I correct it then.

I'd rather have your charming acoustic-performance words, even if you make mistakes! I love mistakes in writing. Rustic and cozy.

@artemis @johnzajac it really is -all- about external status markers for the worst of them. I've taken to calling them status monkeys. No introspection, very little interior life going on, "a 'successful' human is what those things out there are -that appear to be in charge, let's ape it as best we can"

The key is that this is really the ENTIRE depth they have.

But at the same time we all have -some- of those tendencies. We are social critters and we all do some social learning.

But one thing I am really concerned about is that there are not many constituencies involved in this discussion who are actually aware that "autoformalisation" of theorem statements / specifications can easily produce vacuous specifications, and that as AI capabilities increase, it will be more and more difficult for humans (and AIs) to tell whether the autoformalised specification has anything to do with the mathematical problem at hand.

In this hypothetical world, refereeing goes from a straightforward "spot-check the math, assuming good faith on the part of the author/mechaniser" process to a true needle-in-haystack scenario. Mechanisation, far from increasing confidence in the results, will decrease the level of confidence that we may justifiably have in mathematical results.

I am really concerned when I see professional mathematicians telling me that they are excited that the system will be able to translate their precise English specification into a formalised theorem statement, so that they don't have to understand the code. That is the one part that LLMs will never EVER be able to do with the reliability required by mathematics. The LLM can certainly in many cases fill in the proof, which is cool. But professional mathematicians are always shocked when I tell them that no version of the current AI technology will EVER be able to translate their specification from English into Lean in a way that would not require them to be a Lean expert.
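To make the worry about vacuous specifications concrete, here is a toy Lean 4 sketch (my own illustration, not from the post): an autoformaliser that mistranslates a hypothesis can produce a statement that is vacuously provable, so a machine-checked proof exists even though the theorem says nothing about the intended mathematics.

```lean
-- Hypothetical mistranslation: suppose the English claim was about numbers
-- *less than* their successor, but the autoformaliser emitted `>` instead.
-- The hypothesis `n > n + 1` is unsatisfiable, so the statement is vacuously
-- true and `omega` closes it immediately -- a proof that proves nothing.
theorem vacuous_spec (n : Nat) (h : n > n + 1) : n = 0 := by
  omega
```

The proof checker accepts this happily; only a human who actually reads and understands the formal statement can notice that it fails to capture the informal claim.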

I made a thing! Come help out?

retrorepro.wiki — a catalogue of modern reproduction parts for vintage computers. Reloaded logic boards, 3D-printable brackets, replacement chips, analog board recreations and more.
Mac-heavy right now (that's what my bookmarks look like), but Commodore and IBM PC sections are coming.
Missing something? Account requests are open.
https://retrorepro.wiki/Main_Page

(Who wants to make a logo?)
#vintagemac #vintagecomputers #vintageapple #retrocomputing


The thing that kills me about all this is that after years of building up kind/human/caring community around me, ppl I appreciated because of those traits suddenly drop all semblance of caring about them because.... they have the perception of coding a bit better/faster. That's all it took.

I'll say it straight: you can't voluntarily and willingly use LLMs in a way that's aligned with respect for marginalized folks, with respect for the environment, with respect for labor issues and rights, with respect for art, and with respect for the community aspects of things like open source.

It is simply impossible. Pretending you can is the deepest form of cognitive dissonance and I'm just beyond disturbed to see it happening all around me.

I've already unfollowed so many folks I once respected, blocked some others, and just generally withdrawn from a community that used to be so personally fulfilling to be a part of. It's really sad.

Please in the year of our lord 2026 stop putting hyperlinks only on the word "here" 😭

When I was studying CS (and music) I took one single philosophy class, in Ethics. But it was offered by the philosophy department to philosophy majors, so it wasn't what I think most people mean when they say programmers should study ethics.

We had two class meetings per week. In the first class meeting, the professor would tell us about a system of ethics. Who came up with it and why. How it solved problems. And we could ask questions about what seemed to be shortcomings and he would give us the answers developed by people working on that system. It was finally the answer to all of our conundrums.

Then in the second session, he would tear it to shreds. He would raise a problem with it, maybe a problem we had raised, and show how the answer given was actually a tautology or logically confused or wrong in some other way. This system did not solve ethics and was in fact an incoherent mess!

The last week of the term, he got into the system popular now with tech oligarchs. They do actually have a system of ethics! (Which I don't recall the name of.) And boy, was it obviously a mess of scientific racism.

All through the term, I would get excited during the intro session and try to find holes. But this one was so obviously going to be eviscerated on Thursday, I didn't even try to point out how it was full of shit. I was looking forward to the coming destruction.

Thursday was the course review for the paper or exam or whatever. He let the last one stand.

At the time I thought he might actually be endorsing it and was upset. Later, I thought maybe because it was current rather than historical, counter arguments hadn't solidified.

Only much later did I realise that he had given us the tools to rip it apart ourselves. Indeed, it was the weakest and most poorly constructed of all the systems and we were certainly up to tearing it down.

So when I say CS students should take ethics, I mean, they should take a class like that, where they aren't left with a perfect framework to apply, but the tools to critique frameworks they encounter. They need to be able to spot bullshit. Right now, they are way too credulous of bullshit.

Edit: Effective altruism didn't exist yet. It was the racism stuff left as an exercise.

showing up to the macro meet in my "Global Tetrahedron - Infinite Growth Forever" shirt and getting asked a lot of questions already answered by my shirt.

hey #electronics fedi, anyone wants to do me a solid and glance at the Glasgow Interface Explorer revD design at https://codeberg.org/tachiniererin/glasgow_revD ? it's just a prototype pass right now and i know the layout and routing is far from optimal, i'm just trying to find any show-stoppers so that we can start with testing.

edit: i've gotten enough findings by people in the meantime to warrant enough changes and a re-layout, thank you all!
