I spent some time looking at GBA stuff after posting about it a few days ago. It's been so long since I've touched ARM32 that I'd forgotten the insane shit you can do in one instruction, e.g. LDMEQFD SP!, {R0, R2-R5, PC}.
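For anyone who doesn't read ARM32, here's my gloss of what that one instruction packs in (a sketch, not from the original post):

```asm
; LDMEQFD SP!, {R0, R2-R5, PC} unpacked:
;   LDM  - load multiple registers from memory
;   EQ   - predicated: only executes if the Z flag is set
;   FD   - "full descending" stack addressing mode
;   SP!  - base register SP with writeback, i.e. pop semantics
;   {R0, R2-R5, PC} - loads R0, R2 through R5, and the PC;
;                     loading PC is a branch, so this also returns
LDMEQFD SP!, {R0, R2-R5, PC}   ; conditional multi-pop + return, one instruction
```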
@pervognsen Is that still technically RISC? Or has RISC just shifted its baseline because of how execution architecture has matured?
@nick I'm pretty sure that LDM would have worked as-is with the original ARM1 instruction set so this was there in the beginning. ARM has never been RISC in any meaningful sense. It's a load/store architecture with a bunch of GPRs but that's about it. I guess if you wanted to be snide, you could say that it shares in the earliest RISC tradition of shipping parts of your microarchitecture as the ISA (barrel shifter, predication, etc) like MIPS did with branch delay slots and imprecise exceptions.
@pervognsen @nick FWIW the string-ish multi-loads were in early POWER as well and that was definitely sticker-label RISC
@rygorous @nick The pièce de résistance is combining it with predication and the PC as a pseudo-GPR. Now we're cooking.

@pervognsen @nick reference on early POWER multi-loads https://bitsavers.org/pdf/ibm/IBM_Journal_of_Research_and_Development/341/ibmrd3401E.pdf pp. 7-10 starting with "The RS/6000 architecture has adopted the following strategy for dealing with misaligned data."

The load-multiple section starts on p. 9 with "Another aspect of including string operations..."

@pervognsen @nick I will say that they are IMO bang on the money here on _all_ counts - calling out that

a) mem copies/string copies etc. are important and usually unaligned,
b) Alpha-esque "we give you a way to do SWAR loops for this" only gets you so far, and
c) for load/store multiple, function prologues/epilogues are the key use case

other ISAs have struggled to learn that lesson 30 years later...
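To make (c) concrete, the canonical ARM32 prologue/epilogue pair is exactly a store-multiple/load-multiple bracket (a typical example, not taken from the paper):

```asm
; typical ARM32 function prologue/epilogue using store/load multiple
func:
    STMFD   SP!, {R4-R7, LR}   ; push callee-saved regs + return address, one instruction
    ; ... function body ...
    LDMFD   SP!, {R4-R7, PC}   ; pop them back; loading PC doubles as the return
```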

@rygorous @pervognsen @nick

> The architecture allows for the partial completion of an operation and the generation of an alignment-check interrupt when the data crosses a cache-line boundary. System software can then complete the instruction by fixing up the affected registers or memory locations.

this has EINTR vibes

@wolf480pl @pervognsen @nick also how REP MOVS/STOS, the new ARM memory block copy/set instructions, ARM SVE loads/stores (first-faulting lanes!) etc. work! (At page rather than cache-line granularity.)
@wolf480pl @pervognsen @nick specifically it's very interesting that, 30 years after POWER initially defined this (and, mind, they deprecated this for most of the intervening time), we're now back to a world where more and more ISAs are coming around to their original PoV, for pretty much the exact reasons they gave
@rygorous re: prologues/epilogues, it's also interesting to observe that register windows identified the problem correctly, they just weren't the right solution.

@resistor Yup!

And I think part of the reason the uptake was so delayed was that there was a big detour in the middle: when RISCs were originally defined, it was rare for compilers to do aggressive global opts or aggressive inlining.

First-order, especially for small frequently-called subroutines, inlining is better bang for the buck than making call sequences cheap.

But now we're all the way around to aggressive inlining + deep superscalar + giant code working sets.

@resistor And suddenly we care a lot about decreasing call overhead again, because inlining even big-ish functions just to avoid prologue/epilogue overhead is in many ways a cure worse than the disease.
@rygorous @wolf480pl @pervognsen @nick And gather/scatter 🙂
@TomF @wolf480pl @pervognsen @nick well they don't actually work so.... (ever since GDS)
@rygorous @wolf480pl @pervognsen @nick Oh, I had not kept up to date with this. Fun!
@TomF @rygorous what's gather/scatter?

@TomF @rygorous oh, this stuff? https://en.wikipedia.org/wiki/Gather/scatter_(vector_addressing)

specifically the AVX2 implementation of it?

VGATHERDPS/VGATHERDPD — Gather Packed Single, Packed Double with Signed Dword Indices

@TomF @wolf480pl @pervognsen @nick I mean the instructions are still there but they just bail into full microcode fallback now
@rygorous @wolf480pl @pervognsen @nick I'm a little surprised these cores don't have a segregated mode on a chicken bit for all their register files by now. How many bugs of essentially the same shape is this now?
@TomF not nearly as many as there are distinct named exploits; a lot of them were Intel patching around symptoms, because the real underlying issue was a fundamental problem with the cache access path design that was unfixable without a major uArch rev

@TomF specifically the Spectre stuff (which boils down to data-dependent branches leaking data into branch history) was exploitable ~everywhere, on every uArch and every ISA, and arguably not really Intel's fault; it's a fundamental issue with speculation.

The thing that really reamed Intel, Meltdown/L1TF and friends, was an unforced mistake in their L1 access path design.

@TomF Namely, everyone else either does privilege checks up front, or at most did them in parallel with the access path and made sure to mux in 0 on the data returns in case of privilege check failure.

Intel did the privilege checks in parallel/late and raised the exception at instruction retirement, but forwarded the actual privileged data (that you weren't supposed to be able to read) onwards to dependent insns regardless.
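The classic Meltdown gadget exploits exactly that forwarding window; an x86-flavored sketch of the idea (conceptual only — the real PoCs also need fault suppression via TSX or signal handlers, and `kernel_addr`/`probe_array` here are placeholder names):

```asm
; conceptual Meltdown sequence: the privileged load faults at retirement,
; but its result has already been forwarded to dependent instructions
movzx rax, byte [kernel_addr]    ; privileged load; exception deferred to retirement
shl   rax, 12                    ; scale the secret byte to a page-sized stride
mov   rbx, [probe_array + rax]   ; dependent load leaves a cache footprint
; afterwards, FLUSH+RELOAD over probe_array reveals which page is cached,
; i.e. the value of the secret byte
```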

@TomF As for GDS, I am really surprised that all the Spectre-era exploits apparently did not cause Intel to do an internal audit of all speculative state and see if it might leak to attackers.

I am not surprised that the bug exists in Skylake/SKX era uArchs, and it would be totally fine if Intel found this in a post-Spectre security audit but kept quiet about it until it was discovered externally or similar, but it doesn't look like that's what happened.

@TomF Instead, from the response (and the fact that it affects many post-SKL uArchs), the likely conclusion is that they still hadn't gone over all shared and potentially security-sensitive state in the memory access path with a fine-toothed comb by 2023, 5 years after Meltdown, which is disappointing to say the least.
@rygorous Five years at Intel is like six months anywhere else.
@TomF one would assume that by the third time you step on that particular rake, you maybe start looking for these issues on your own and try to prevent them even if someone hasn't fed you a PoC exploit yet
@rygorous @TomF "surely all the rakes have been stepped on by now"
@JoshJers @rygorous Just going to turn Transactional Memory on again, BRB.

@TomF @JoshJers look, it's simple math, there's a finite number of possible bugs there so eventually we have to run out

...right?

@TomF @JoshJers it's a simple plan, we just take the most cursed problem in comp arch (memory access) and make it worse, what could go wrong?
@rygorous @TomF I see no flaws in this plan
@JoshJers @TomF to be fair, it did work for virtual memory. can't be lucky twice I guess