Happy Mainframe Day!
OTD 1964: IBM announces the System/360 family. 8-bit bytes ftw!

Shown: Operator at the console of Princeton's IBM System/360 Model 91.

@aka_pugs Really was the beginning of the modern era of computing, starting with the normalisation of 8-bit bytes and character addressable architecture.

Well, that's all true so long as we don't mention EBCDIC 🙂

@markd They had ASCII mode, but the peripherals never got the memo.

@aka_pugs @markd ASCII mode was only about how some of the decimal arithmetic instructions behaved. For the printers, the character set was pretty arbitrary, and the Translate instruction would have allowed for easy compatibility no matter what. The real EBCDIC issue was the card reader—and per Fred Brooks, IBM wanted to go with ASCII but their big data processing customers talked them out of it. But that's a story for another post. (And 8-bit bytes? Brooks felt that 8-bit bytes and 32-bit words were among the most important innovations in the S/360 line. It wasn't a foregone conclusion—many scientific computing folks really wanted to stick with 36-bit words, for extra precision. IBM ran *lots* of simulations to assure everyone that 32-bit floating point was ok.)

Why yes, in grad school I did take computer architecture from Brooks…

@SteveBellovin @aka_pugs If you were on the non-EBCDIC side of the fence you got the impression that IBM sales pushed EBCDIC pretty hard as a competitive advantage - even if their engineering covertly preferred ASCII.

The 32-bit word must have been a harder sell for the blue suits, since the competition were selling 60-bit and 36-bit amongst other oddballs.

Fortunately the emergence of commercial customers marked the declining relevance of scientific computing... Did IBM get lucky or were they prescient?

But yeah, the S/360 definitely marked the end of the beginning of computing in multiple ways.

@markd @SteveBellovin IBM had a huge lead in commercial data processing because of their punch card business. And that world did not care about floating point. The model 91 was an ego-relief product, not a real business. IMO.

Data processing and HPC markets never converged - until maybe AI.

@aka_pugs @markd @SteveBellovin
But still, FORTRAN IV got lots of use, especially on 360/50…85 in universities & R&D labs. I suspect not much on /30 or /40.
I still think of 360 as a huge bet to consolidate the chaos of the 701…7094 36-bit path and the 702…7074 &1401 variable-string paths.
And for fun: I asked both Gene Amdahl & Fred Brooks why they used 24-bit addressing, ignoring high 8-bits… which caused a lot of problems/complexity later.
A: save hardware on 360/30, w/8-bit data paths.

@JohnMashey @aka_pugs @markd @SteveBellovin sounds like Motorola copied their reasoning years later, with the MC68000.

For us UMich folks, the 360/67 was the machine that mattered...

@hyc @aka_pugs @markd @SteveBellovin
16MB in S/360 & 68K, ignoring high bits, => clever programmers used high 8 bits for flags, as I did when writing ASSIST in 1970, still running as late as 2015, likely still.
68000 to 68020, 24 to 32-bit caused trouble for Mac II software.
I wrote of this in BYTE 1991, see section
“The mainframe, minicomputer, and microprocessor”
https://www.bourguet.org/v2/comparch/mashey-byte-1991
MIPS R4000 was released later in 1991. It translated 40 bits, but trapped if the high-order bits were not all 0s or all 1s.

@hyc @aka_pugs @markd @SteveBellovin

Some users who knew about R4000 wanted to use the high bits as tag bits. I said NO!

@JohnMashey @hyc @aka_pugs @markd Brooks said that the 24-bit address decision was an economic one. But he also recognized, and stated, that "every successful architecture runs out of address space." (Aside: that's one reason why IPv6 addresses are 128 bits instead of 64—I and a few others insisted on it, and I specifically quoted Brooks' observation.) But there was one really crucial error in the S/360 architecture: the Load Address instruction was defined by the architecture to zero the high-order byte, making it impossible to use that instruction on 32-bit address machines. Since LA was the most common instruction used, per actual hardware traces, this was a serious issue. (It wasn't only used for addresses; indeed, many of the instances were to provide what Brooks called the "indispensable small positive constant".) The I/O architecture was also 24-bit, but that didn't bother the architects—they figured it would be replaced with something smarter later on anyway.

Update: I forgot about the Branch and Link instructions, which were used for subroutine calls. Per the Principles of Operation manual, "The rightmost 32 bits of the PSW, including the updated instruction address, are stored." The high-order 8 bits of the PSW included the "condition code", used for conditional branches, and the "program mask", which could be and was changed by application programs to disable some software-related interrupts, e.g., fixed-point overflow. This instruction was also not 32-bit-address compatible. (In Blaauw and Brooks, they note that extension to 32-bit addressing was seen as desirable and necessary from the very beginning.)

@SteveBellovin @hyc @aka_pugs @markd
Agreed, I wrote a lot of S/360 assembly code & LA was very useful.
Although a software convention rather than architecture, the use of the high-order bit to mark the last pointer in an argument list persisted.

On economics, I do wonder how much $ the 360/30-based decision cost IBM in the long term, in terms of software/hardware complexity.