Doing some research for more #Unix #manpage history goodness... Does this look familiar? The old-timey mv and chmod from the #CTSS programmer's manual, 1963, MIT.

The other is that memory management is way more basic than I'd assumed. In fact, all that happens is that the whole of the memory (or rather, as much of the 'core B' memory as is needed to load the incoming user) is swapped to the drum every time there's a context switch. The documentation pretty much says this is about the dumbest thing you could do, and it really is.

3/3

#retrocomputing #computerhistory #CTSS #IBM7094

Two interesting things: the first is that system calls look more modern than I'd thought. Previously I'd assumed that memory protection would have been so basic that accessing kernel routines would (somehow) just be a regular subroutine call (which aren't massively standardised on the #IBM7094), but it looks like they are implemented through interrupts in a similar way to more recent systems. One difference is that there are no software interrupts, and instead it looks like you have to deliberately trigger a specific kind of protection fault which will be interpreted as an interrupt request by the trap handler in the kernel! Note that memory protection was one of the modifications made to the #MIT systems used for #CTSS - I don't know whether it was present on any other contemporary machines.
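The mechanism above can be sketched as a toy model (my reconstruction, not actual CTSS code; the service numbers and names are invented for illustration): with no software-interrupt instruction available, the user program deliberately violates memory protection, and the kernel's trap handler treats that specific fault as a system-call request.

```python
# Toy model of 'syscall via deliberate protection fault'.
# Service numbers and names are hypothetical placeholders.
KERNEL_SERVICES = {0: "read", 1: "write"}

def user_program():
    # Stand-in for touching a protected address: the 'faulting'
    # access encodes which kernel service is being requested.
    raise MemoryError(1)  # request service number 1

def trap_handler(fault):
    # The kernel inspects the fault and dispatches it as a
    # system call rather than treating it as an error.
    service = fault.args[0]
    return KERNEL_SERVICES.get(service, "genuine protection fault")

try:
    user_program()
except MemoryError as fault:
    print(trap_handler(fault))  # write
```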

2/n

#retrocomputing #computerhistory

I've also just found this excellent technical report on #CTSS:

https://people.csail.mit.edu/saltzer/Multics/CTSS-Documents/TR-016.pdf

It contains a huge amount of detail, including a full list of kernel modules and their entry points, so I can use that to compare against what I've extracted from the source code.

1/n

#retrocomputing #computerhistory #IBM7094

A little more info on the word count in #CTSS ARCHIV - I've worked through some of the source code, and it looks like it's literally the value reported by FSTATE, which is the equivalent of stat(2), i.e. it reads file metadata from the filesystem. It's also the same value used for the loop control in the actual file transfer, so by definition it's the exact number of words written to the output, which I guess means that the difference is down to whatever later processing happened to convert it to ASCII.


Either that or it's not a literal decimal value? There are definitely digits up to 9 in that field though, so it's not octal (which is what tended to be used on the #IBM7094). The code which writes it to the buffer which will eventually be written to the file is:

RCONV.(BZEL.(DEFBC.(CT)),NWLNMK(4),NWLNMK(3))

I need to look up what RCONV, BZEL and DEFBC actually do...

#retrocomputing

Unfortunately, the actual archive files I have (which are modern ASCII files) use neither of these conventions; instead they have 4 blank lines (CR LF terminated) followed by something like:

AARCHV BCD 10/04/70 1545.9 42312 00000

The fields are identifiable, apart from the 00000, which doesn't appear in the source code. However, assuming the 42312 is a word count, I'm really struggling to relate it in any meaningful way to the actual length of the following file (as identified by scanning for the next super line mark). There's a good chance that the files I have have been processed in unknown ways since they were written by the original machine, but I would like to have a complete understanding.
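For my own reference, here's a quick sketch of how I'm reading that header line. The field names are my own guesses, and the trailing 00000 field is left uninterpreted, since it doesn't match anything in the source:

```python
# Hypothetical parser for the modernised ASCII ARCHIV header line.
# Field names are guesses; 'unknown' is the unexplained 00000 field.
def parse_super_line_mark(line):
    name1, name2, date, time, count, unknown = line.split()
    return {
        "name1": name1,            # first half of the file name
        "name2": name2,            # second half (mode?), e.g. BCD
        "date": date,              # MM/DD/YY
        "time": time,              # HHMM.M
        "word_count": int(count),  # claimed length in 36-bit words
        "unknown": unknown,        # the 00000 field, meaning unclear
    }

rec = parse_super_line_mark("AARCHV BCD 10/04/70 1545.9 42312 00000")
print(rec["word_count"])  # 42312
```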

As an aside there are also page and section headers in the file, and I'm not sure whether these are part of the ARCHIV format or from the original files.

I'm going to have to park this for now, but will investigate when I next have some time.

7/7

#CTSS #retrocomputing #IBM7094

The later version is from 1970, and the comment has changed to:

THE FORMAT OF 'SUPER LINE MARK' IN ASCII ARCHIV FILES
IS DEFINED AS FOLLOWS:
FF NL NL NL DEL DEL DEL DEL
NAME1 SP SP NAME2 SP SP MM/DD/YY SP SP
HHMM.M SP SP ZZZZZZ DEL DEL DEL DEL NL NL NL NL
TOTALLING 14 WORDS. ZZZZZZ IS THE WORD COUNT.

So by now we've switched to #ASCII. From the code, NL is what is now called LF. The code also implies that ASCII is internally represented with 9 bits (i.e. 3 octal digits), so 4 characters per 36 bit word.
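That packing is easy to sketch: four 9-bit characters per 36-bit word, so each character is exactly 3 octal digits. (I'm assuming the first character goes in the high bits; I haven't confirmed the ordering from the code.)

```python
# Sketch of the 9-bits-per-character packing implied by the code:
# four 9-bit ASCII characters per 36-bit word, high character first
# (the ordering is an assumption).
def pack_word(chars):
    assert len(chars) == 4
    word = 0
    for c in chars:
        word = (word << 9) | (ord(c) & 0o777)
    return word

def unpack_word(word):
    return "".join(chr((word >> shift) & 0o777) for shift in (27, 18, 9, 0))

w = pack_word("ABCD")
print(oct(w))          # 0o101102103104 - each character is 3 octal digits
print(unpack_word(w))  # ABCD
```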

6/n

#CTSS #retrocomputing #IBM7094

The earlier version is from 1968. The key thing is the format of the 'super line mark', which is the separator between the various files in the archive. The 1968 version has the following comment describing how it works:

THE FORMAT OF THE 'SUPER LINE MARK' IN ARCHIV FILES
IS:
777777000000K,..,..,..,777777000011K,FLN1,FLN2,
$ MM/DD/YY HHMM.M $,
$ZZZZZZ$,$$,$ 000$,$00 $
TOTALLING 14 WORDS, WHERE ZZZZZZ IS FILE WORD COUNT.
THIS INFORMATION SHLD NOT BE KNOWN ANYWHERE IN THE PROGRAM
EXCEPT IN THE FLLWING INTERNAL FUNCTIONS AND V'S STATEMENTS.

This describes a #BCD format (6x 6 bit characters per 36 bit word).
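A minimal sketch of that packing (six 6-bit codes per 36-bit word, each code being 2 octal digits; I'm not reproducing the actual 7094 BCD character table here, just the bit layout). Note how it produces the 777777000000 literal from the comment:

```python
# Generic packing of six 6-bit character codes into one 36-bit word,
# as used by the BCD super line mark. High code first (an assumption);
# the actual BCD code table is not reproduced here.
def pack_bcd_word(codes):
    assert len(codes) == 6 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c
    return word

# Three all-ones codes followed by three zero codes gives the
# 777777000000 marker word from the 1968 comment.
print(oct(pack_bcd_word([0o77, 0o77, 0o77, 0, 0, 0])))  # 0o777777000000
```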

5/n

#CTSS #retrocomputing #IBM7094

The second thing I've been doing is trying to figure out how ARCHIV (roughly equivalent to 'tar') works and therefore how to interpret the file format. This is the file format for most of the preserved source code (which is fortunately pretty readable without understanding the details), but the preserved files also include two versions of the utility itself.

4/n

#CTSS #retrocomputing #IBM7094

One thing I haven't got my head around yet is memory management. I previously assumed that there was no support for virtual memory at all, but I'm not sure that's the case - I've now seen a paper which discusses what might be support for it as one of the hardware modifications which were made to the #MIT machines to support #CTSS, but also suggests it wasn't actually used. In any case, presumably memory fragmentation needs to be dealt with somehow. I think I need to either look at how the emulator is implemented or do some experiments.

3/n

#retrocomputing #IBM7094