Matteo's Diagnostics (Matteo & Team): Skidata
Skidata: Error V80 - Internal Clock Drift - Diagnosis and Solutions
#Skidata #V80:InternalClockDrift
🔍 Full Report: https://www.soscassa.org/skidata/article/skidata-errore-v80-internal-clock-drift
I've been bashing my head against a stone (in this case, silicon) for almost 10 months now converting my #Z80 assembler written in Z80, #v80, from its custom syntax to standard Zilog syntax. I've been reduced to writing a couple of _instructions_ a day due to background struggles in my life and after a recent failed suicide attempt, I'm not even sure why I'm doing this or what it is I should even be doing instead any more.
Ultimately I want to get away from the open hostility, deceit and disregard for users endemic in modern software. It's like "hate" is a malicious dependency that every app can't avoid nowadays. The left-pad of grifting.
I want to write my own text editor, but when you're dragging yourself toward that goal by bloodied fingernails, an inch a day, you're losing sight of what you're really trying to do, which is to find a particular kind of comfortable experience, hopefully as soon as possible.
And yet, still the fascists will come for me.
"Just write it in C" whispers in my ears, but I keep climbing the pavement because I know that compiler suites aren't reproducible or portable. If GCC or LLVM (or increasingly, Rust) don't support, or decide to drop support for my choice of hardware / OS, then I can never hope to rectify that on my own.
Ultimately, I know that I *like* solving excruciating problems of optimising single cycles and bytes, but when is this ever going to matter? What does it matter that my assembler is smaller and more efficient than others when nobody uses it? When this only matters on 8-bit hardware, whilst vibe coders are spending hundreds of dollars each month using AI to uppercase input? (That isn't a joke; it's real.)
I keep cutting myself in half to do the right thing, and what I get for it is fewer friends, fewer people who use my software, which is becoming increasingly obscure and niche. I already have no friends outside of the Internet; how much more must I give up to escape #enshitification?
I have been driven to death and back by stress and worry. I am overwhelmed to the point of meltdown just trying to look after myself. I do not possess the strength to fight for good software any more. I don't know where to start and if I did, progress would be glacial. Yet if I give up writing software, that will be just another part of me I've cut off to escape the destruction of what I love.
And yet, still the fascists will come for me.
We've reached a breaking point, like I have, where this can't go on any more. The software industry cannot be fixed without regulation, and regulation cannot be put into place under fascism -- literally as I was typing this sentence, a toot appeared on my feed about the UK's plan for a unique digital ID that would follow you across every site and real-world application, giving the government a Facebook level of tracking.
#Linux will not save you from Google/Microsoft/Apple etc. mandated software protocols that your bank now requires (because, App Stores), should you desire to feed yourself.
Open source cannot save you when the worst people are in control. #Mastodon devs look up to Twitter and not Web 1.0; "fork it" is their response to basic freedoms like transferring post history. Sure, let me create _yet another_ piece of software nobody uses. That'll show them.
So here I am, pulling myself along the ground a few bytes at a time, because what else am I supposed to do?
Oh great, NOW this gets some attention, right as I’m ripping out the custom mnemonics for standard ones :P They were too much for users to mind-map on top of learning #eZ80 when I add it. I will say, though, that even if you don’t use #v80, the source code is the most heavily commented and described assembly you will likely ever see. If you want to know how to write a real CP/M and/or #Z80 application, look no further.
Attempting a rewrite of my 8-bit multi-platform, multi-ISA, assembler's instruction parser. #v80 uses a static tree for mapping instructions into opcodes (2nd image) which is very fast and efficient but hairy to write.
The new approach (1st image), uses an alphabetical list of instructions with a byte to state how many chars of the previous entry are re-used since alphabetically, the left-most chars repeat the most.
There's no guarantee this will even save space (the #Z80 ISA table is currently 4KB, #MOS6502 about 1.7KB), and it will likely be slower, since more lines will need to be skipped versus a char-by-char branching approach. But it may help for very large ISAs, whose tables can balloon drastically when there are lots of shared prefixes, something I'm worried about with adding #eZ80 support. The alternative is adding 'macro' characters for shared prefixes, but that bloats the native code that needs to be ported between architectures.
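For illustration only, here is a minimal sketch in Python (not v80's actual Z80 code) of the front-coded list described above: each entry stores a count of how many leading characters it shares with the previous entry, plus the differing suffix. The mnemonic list and function names are mine, not from v80.

```python
# A tiny, alphabetically sorted slice of mnemonics, purely illustrative.
MNEMONICS = ["adc", "add", "and", "bit", "call", "ccf", "cp", "cpd"]

def front_encode(sorted_names):
    """Encode a sorted name list as (shared_prefix_len, suffix) pairs."""
    encoded = []
    prev = ""
    for name in sorted_names:
        # Count leading chars shared with the previous entry.
        shared = 0
        limit = min(len(prev), len(name))
        while shared < limit and prev[shared] == name[shared]:
            shared += 1
        encoded.append((shared, name[shared:]))
        prev = name
    return encoded

def front_decode(encoded):
    """Rebuild the full names by replaying shared prefixes."""
    names = []
    prev = ""
    for shared, suffix in encoded:
        name = prev[:shared] + suffix
        names.append(name)
        prev = name
    return names

enc = front_encode(MNEMONICS)
# "adc" -> (0, "adc"); "add" shares "ad" -> (2, "d"); "and" shares "a" -> (1, "nd")
assert front_decode(enc) == MNEMONICS

# Space check: raw chars vs. one prefix byte plus suffix chars per entry.
raw = sum(len(n) for n in MNEMONICS)          # 24 chars
packed = sum(1 + len(s) for _, s in enc)      # 25 bytes
```

Note that on this tiny list the packed form is actually one byte *larger* than the raw list, which matches the point above: the scheme only pays off when shared prefixes are long and plentiful, as in a big ISA table.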
I don't know how many of you out there regularly write both #Z80 and #MOS6502 and have to juggle the two different syntaxes; my multi-platform, multi-ISA assembler #v80 uses a custom syntax designed for parsing speed and simplicity, and in some way unifies instruction syntax between Zilog & MOS ISAs.
v80’s syntax, for example, uses "*" for a memory dereference, so that `ld.HL* $nnnn` = Zilog `ld HL, [$nnnn]` (v80 can use optional brackets for clearer intent here), and because instructions can't be split between parameters, `ld*$.HL $nnnn` is used for `ld [$nnnn], HL`.
For 6502 syntax I'm wondering what the best choice is, either `adc*$.x $nnnn` for MOS `adc $nnnn, x` or should I go with `adc.x* $nnnn` for something simpler but not as consistent with Z80 syntax?
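To make the two candidates concrete, here is a toy Python parse of both spellings, showing they denote the same standard MOS instruction. This is my reading of the post's examples, not v80's real grammar or parser; the function names are invented.

```python
def candidate_a(line):
    """Parse the Z80-consistent form, e.g. `adc*$.x $nnnn`."""
    head, arg = line.split()
    op, idx = head.split(".")     # "adc*$", "x"
    op = op.rstrip("*$")          # drop the dereference markers
    return f"{op} {arg}, {idx}"

def candidate_b(line):
    """Parse the simpler form, e.g. `adc.x* $nnnn`."""
    head, arg = line.split()
    op, idx = head.split(".")     # "adc", "x*"
    idx = idx.rstrip("*")         # marker sits after the index register
    return f"{op} {arg}, {idx}"

# Both spellings map to the same standard MOS 6502 syntax.
assert candidate_a("adc*$.x $nnnn") == "adc $nnnn, x"
assert candidate_b("adc.x* $nnnn") == "adc $nnnn, x"
```

The trade-off the post describes falls out of the code: candidate A keeps the `*$` dereference cluster in the same position as the Z80 forms, while candidate B is a shorter parse at the cost of that consistency.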
I feel like a madman -- my #z80 assembler, #v80, assembles itself! This is it doing so on an #Amstrad #PCW emulator, a #z80 CP/M machine from 1985. It's not optimised for speed on original hardware -- over 50% of the runtime will be just echoing text (thanks, CP/M) -- and I'm surprised by the amount of heap data needed in the end, but we are talking 335 KB of source code (8KB binary), and I can look into that.