It is with Modula that things get more serious in that line of development. (Not with Object Pascal aka Turbo Pascal, in which was written the worst computer code I have ever seen or worked on.)

I am fond of safety but also of simplicity.

It must be admitted, for instance, that ATS (any version of it) is a complicated language. It is much simpler than an Adriaan van Wijngaarden would have made it. It surely is simpler and much more practical than, say, Agda (which is a hugely moving target, anyway), which attempts to be ā€˜theoretical’ whilst ATS does not.

So we want ATS-like safety but we also want simplicity. Can that be done with an ā€˜orthogonal’ language and Pratt parsing?

Note, for instance, the problem of buffer overruns. But there is another problem with arrays that is addressed by ATS, and NOT addressed widely otherwise: aliasing. Entries in arrays can be aliased, because it is easy to refer to the same array from two places at once.

ATS solves both problems by making an array a linear type that is typechecked via a ā€˜view’. A view has a particular length, measured in entries of a given type. The entry type can be changed, and with it the length, but let us ignore that.

A view can also be split in mutually exclusive twos or threes (or more if need be). Whilst split, it becomes that many arrays. Then these must be joined together again at some time, because the types are linear. They must be consumed.

These are all typechecking operations, but they must be written by the programmer or put in subprograms.

Thus, to read or write an array entry, you must split the array into two or three parts and also somehow transform the one entry into an ordinary variable.

Then you have to join it all back together.
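The split-access-join dance can be sketched in plain C terms, though only as bookkeeping: in ATS all of this is typechecking and costs nothing at runtime, whereas here the bounds check is a dynamic assert. The names `split3_t`, `split_at`, and `join` are my own inventions for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* To touch entry i of an n-array, view it as three disjoint pieces:
   prefix [0,i), the single entry (now an ordinary variable), and
   suffix (i,n). Afterwards the pieces are joined back together. */
typedef struct {
    int *prefix;  size_t prefix_len;
    int *entry;                       /* the one entry, as an ordinary variable */
    int *suffix;  size_t suffix_len;
} split3_t;

split3_t split_at(int *a, size_t n, size_t i)
{
    assert(i < n);                    /* the check ATS would do statically */
    split3_t s = { a, i, a + i, a + i + 1, n - i - 1 };
    return s;
}

int *join(split3_t s)                 /* consume the pieces, recover the array */
{
    return s.prefix;
}
```

Usage: `split3_t s = split_at(a, n, i); *s.entry = 99; a = join(s);`.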

But none of this costs you any runtime speed, because it is all typechecking code. So you end up with safe array access that requires no runtime bounds checking!

The question is, can we achieve something similar by being ā€˜orthogonal’ with a Pratt parser? That is, can we do it by adjusting the SYNTAX of the language?

How about if we make fundamental syntactic operations on arrays be splitting and joining?

We can also make assignment and evaluation be entry-wise in the manner of Fortran and Matlab, rather than require it be reduced to a proper variable as in ATS. This way we still avoid the aliasing problem.

For example, if we do a quicksort, the two parts of the array are still fed separately to the two parts of the algorithm. Neither subprogram call can touch the other subprogram call’s subarray.
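Here is a sketch of that quicksort shape in plain C, where the ā€˜split’ is simply handing each recursive call only its own disjoint slice (pointer plus length), so neither call can touch the other's subarray. The function names are my own; this illustrates the discipline, not ATS itself.

```c
#include <stddef.h>

/* Lomuto partition over the slice a[0..n): returns the pivot's
   final position. */
static size_t partition(int *a, size_t n)
{
    int pivot = a[n - 1];
    size_t i = 0;
    for (size_t j = 0; j + 1 < n; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[n - 1]; a[n - 1] = t;
    return i;
}

void quicksort_split(int *a, size_t n)
{
    if (n < 2) return;
    size_t p = partition(a, n);
    /* 'Split' the array at p: [0,p) and [p+1,n) are disjoint views,
       so the two calls cannot alias each other's entries. */
    quicksort_split(a, p);
    quicksort_split(a + p + 1, n - p - 1);
    /* Returning here is the implicit 'join'. */
}
```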

But we have done it with the Pratt parser, so have done it ā€˜orthogonally’, not via semantics. And we have done it without writing an unreadable report.

Mind you, this requires that splitting an array CONSUME the original array, and that the array must be rejoined if it is to be recovered.

We do not have to enforce linear typing, only that splitting and joining be destructive operations.

But that’s just control of your symbol tables and such. That’s just more syntax. You could probably even go full linear through syntax.

It likely does mean you aren’t going to be happy with the classic ā€˜orthogonal’ syntax example, if it should appear as an initialization:

(if b then x else y fi) := 3

Still, if assignment to an initialized variable implies consumption of the previous value, then it is okay if not an initialization.

Oh, let’s write that in Dijkstra’s nondeterministic if-fi:

(if b => x | ~b => y fi) := 3

because let us say we want to use that, if only to introduce a new generation of programmers to it.

The big difference here is that there is no sequence of operations. The code takes any one of the branches in the if-fi whose guard is true; which one is not specified. The only thing that decides whether a branch may be taken is its guard.
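A hypothetical C rendering of that guarded if-fi used as an lvalue: each guard that holds contributes a candidate; any one of the candidates may legally be chosen (this sketch takes the first); and if no guard holds, the program is erroneous and aborts. The function name is invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* (if b => x | ~b => y fi) := 3, sketched as choosing an lvalue. */
int *guarded_lvalue(int b, int *x, int *y)
{
    int *candidates[2];
    int n = 0;
    if (b)  candidates[n++] = x;   /* guard b  => x */
    if (!b) candidates[n++] = y;   /* guard ~b => y */
    if (n == 0) {                  /* no guard true: no fallthrough */
        fprintf(stderr, "if-fi: no guard holds\n");
        abort();
    }
    return candidates[0];          /* any of candidates[0..n-1] is legal */
}
```

Usage: `*guarded_lvalue(b, &x, &y) = 3;` assigns 3 to x or to y according to b.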

(I read stuff here in the Fediverse that makes me think, ā€˜These people have no idea what ā€œnondeterminismā€ and ā€œdeterminismā€ even mean’.)

Of course the compiler could decide that ~b is the negation of b and so produce the same code as ā€˜if b then x else y fi’. But that is for the compiler writer to decide.

Meanwhile, the programmer actually has a simpler situation with which to do proofs. To do proofs with the earlier, non-Dijkstra formulation, one first usually throws away part of the information and mentally transforms to the Dijkstra formulation.

Also, suppose I left out the ~b guard. Then there would be no specified operation for the case where b is false. There is no fallthrough. The program is erroneous and presumably will terminate with an error message.

The program would have been erroneous in this case, anyway, because you cannot assign to ā€˜nothing’. But the principle is more general.

Absence of a guard is a common cause of nondeterminism in the Mercury language, though in Mercury I think guards are gone through in sequence.

Obviously, if the guards cover all the cases and are mutually exclusive, then the program is ACTUALLY deterministic. And a compiler may be able to confirm this.

As indeed a Mercury compiler has to be able to do, for the Mercury language, though there I believe the cases do not have to be mutually exclusive. In one’s mind, one assumes that cases matched by earlier tests are excluded from later tests, and one does proofs by using the Dijkstra formulation.

Strictly speaking, however, the order of the tests is information that is relevant and yet not included in the proofs, and bits of code that would break, were the order changed, are not explicitly accounted for.

I am merely trying to make an ā€˜orthogonal’ language. I can let the program terminate with an error if there is a case left out. At least I need not REQUIRE that a program be deterministic. In fact I think it silly to REQUIRE that a program be deterministic, and most programming languages in fact do not require this.

Indeed, they are happy to let you have, for instance, a loop that one has proved neither terminating nor nonterminating.

ATS, incidentally, has but one method for proving a recursion or loop terminating, which is ā€˜termination metrics’. It involves having a typechecking variable, or a tuple of typechecking variables, move progressively towards zero without passing zero, on each iteration of the recursion or loop. It is mathematical induction mapped to counting backwards, although the counting can be by leaps and bounds (division, for instance).
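The shape of a termination metric can be sketched in C, though checked dynamically here with an assert, whereas ATS checks it statically in the type system. Binary search makes a nice example: the metric is the width of the remaining interval, which at least halves each iteration (ā€˜counting backwards by leaps and bounds’). The function name is my own.

```c
#include <assert.h>

/* Search the sorted array a[0..n) for key; return its index or -1.
   The variable 'metric' plays the role of an ATS termination metric:
   it must strictly decrease each iteration and never pass zero. */
int bsearch_metric(const int *a, int n, int key)
{
    int lo = 0, hi = n;            /* search the half-open range [lo, hi) */
    int metric = n;                /* the termination metric */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < key)      lo = mid + 1;
        else if (a[mid] > key) hi = mid;
        else return mid;
        assert(hi - lo < metric && hi - lo >= 0);  /* metric decreases, stays >= 0 */
        metric = hi - lo;
    }
    return -1;
}
```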

ATS2 has both recursions and loops. Loops are poorly documented, tho’.

In a way it matters little, though, because recursions can be done with reference variables, and so tail recursions effectively come to resemble loops closely. But, as I say, these can be written more properly as loops, resembling C loops, except with a lot of proof notations.

The following occurs to me:

If you start with bytes as the basic type, you can make EVERY type in the language be an array. And that includes records. For accessing a record is merely splitting of an array.

And a multidimensional array is also merely splitting of an array. This is already how it is done in Fortran. Indeed, in Fortran you can change the shape of an array simply by calling a subprogram that refers to the array differently.
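The Fortran view of things can be sketched in a few lines of C: a ā€˜matrix’ is merely a flat vector indexed through a stride, so changing the shape is only a matter of referring to the vector differently, with no data movement. Column-major, entry (i, j) of an m-by-n matrix lives at offset i + j*m. The function name is invented for illustration.

```c
#include <stddef.h>

/* Read entry (i, j) of an m-row matrix stored column-major in the
   flat vector a. Passing a different m 'reshapes' the same vector. */
double get_cm(const double *a, size_t m, size_t i, size_t j)
{
    return a[i + j * m];
}
```

The same 6-vector can thus be read as a 2-by-3 matrix (m = 2) or a 3-by-2 matrix (m = 3) without touching the data.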

In ATS things CAN be done this way in typechecking.

To some degree they are. The sizes of types and of objects are always measured in bytes. So you need to essentially treat each type as if it were an array of bytes. And you can actually make it so for typechecking, but it is a LINEAR cast, and so you must use ONLY that type until you convert it back to the original.

So it would be with our SYNTAX system. You can use only one SYNTAX at a time.

That would be different from Algol 68, I am sure. So we start with an array of 8 bytes...

And it has to be aligned on a proper boundary. You x86 programmers forget about that! It has to be aligned on a proper boundary.

But we will merge the 8-array of BYTE into a 1-array of LONG REAL (or some such name). And that (as it really can in C) can be shorthanded as just a variable of LONG REAL. (*p and p[0] mean the same thing in C.)
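In C terms the merge can be sketched with memcpy, which is the portable way to re-view bytes as a double (compilers compile it away); a plain pointer cast would additionally demand that the bytes already sit on a properly aligned boundary. The function name is my own invention.

```c
#include <string.h>

/* 'Merge' an 8-array of BYTE into a 1-array of LONG REAL: consume
   the byte view, yield the double view. sizeof(double) is 8 on the
   platforms assumed here. */
double merge_bytes_to_double(const unsigned char bytes[8])
{
    double d;
    memcpy(&d, bytes, sizeof d);
    return d;
}
```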

A few little details that might be unusual.

Maybe we don’t use the usual ā€˜indexing by default starts at 1 (or 0) but you can change that.’ Maybe instead we just use ICON INDEXING.

This goes from 1 to n+1, with an equivalent scale from -n to 0. You would not believe how useful this is.

OTOH maybe one could say that whatever scale you devise has an equivalent negative scale of that kind.

Thus if you say there is a 0 to n scale, then you automatically also get -n-1 to -1. But this will not behave as Python people expect. -1 will mean ONE PAST THE END, not the last entry.
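The two equivalent scales amount to one tiny normalization rule: on a 0-to-n scale, each position p also has the negative spelling p - (n + 1), so the negative scale runs from -n-1 to -1, and -1 means one past the end. A sketch, with an invented function name:

```c
/* Map an Icon-style index on a 0-to-n scale to its nonnegative form.
   Negative p is the equivalent spelling p - (n + 1), so -1 maps to n
   (one past the end), and -n-1 maps to 0. */
int normalize_index(int p, int n)
{
    return p < 0 ? p + n + 1 : p;
}
```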

Well, so be it.

Another possibility, of course, is implicitly using modular types for indices. This might not be desirable, especially as for Fortran subprograms you often actually specify the modulus, or rather the stride for multiple dimensions. The compiler for my language might then disagree with what you needed for Fortran.

In Fortran really all arrays are just vectors and the compiler has support for accessing them with arbitrary column major strides.

C programmers are used to row major but I think column major is perhaps very slightly better in some tiny way that obviously is unimportant and so we should forget about it.

Ada can do it either way but defaults to row major. I believe. I haven’t used arrays in Ada much.

@chemoelectric Hey, long time no online here :/ Memory wise, and as an assembly language programmer not a C programmer, I get row major. But note that this is just a cheap sequential file where rows are records. I don't remember enough of what little maths I knew to know if column major is better for those domains.

@troi I always presumed it was row major in C simply because the notation made it look like a multidimensional array was an array of arrays. In D it actually does MEAN that, in general! Same in some assembly languages, no doubt, but my experience is mostly little Z80.

Otherwise it is mostly immaterial. In Fortran or Ada there is no notational reason.

The tiny advantage of column major might be that it keeps the data of matrix columns close together. Columns usually matter more than rows do.

@troi D has C matrices, which are really just vectors with a stride-based ā€˜view’. But the default matrices in D are heap-allocated vectors of heap-allocated vectors. So it really is vectors of vectors.

I forget what such a matrix is called. I used to think it was called a dope vector, but that’s actually something else.

@troi C is very much, and on purpose, like assembly language programming. It was of course a substitute for using PDP-11 assembly language, in its early main use.

(On the PRIME they used FORTRAN in a similar way! Which I can understand, because I used FORTRAN for minor systems programming on the TRS-80. Older versions and standards of Fortran let you get at the system in practically any old way. About the only thing missing is a stack for recursion! Which can be added as an extension.)

@chemoelectric I know the origin there, but the PDP-11 and later Vax systems had a beautiful "Macro" for assembly as they called it. I love the S360 -> early S390 architectures, 6809, and if I'd had more opportunity to use it, the PDP-11's assembly language. I think of C in this specific case as a step backward. (TBH, I always think C is a step backward, but that's just me).

@troi C sucked. It is just starting to be usable with C23. It still needs nested functions, for instance. GNU C has nested functions but they suck. Fortran has better nested subprograms than GNU C does, if what I have read is true.

Also I do not know why everyone thinks they have to use trampolines. GNAT does not use trampolines. ATS does not use trampolines for nested functions, and they nest without end, form closures, and compile to C. That’s right, they form closures.

@troi You can do a lot by lambda lifting. Someone invented lambda lifting long ago. You can handle nested functions by lambda lifting them, for instance. And you can do closures without trampolines and such by simply implicitly adding an argument.
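Lambda lifting can be sketched in a few lines of C: the nested function is lifted to the top level, and the captured variable becomes the implicitly added extra argument, with no trampoline involved. All names here are invented for illustration.

```c
/* Hypothetically, before lifting, 'next' was nested inside a function
   that declared 'int k;' and captured it:
       int next(void) { return ++k; }
   After lifting, the captured environment is passed explicitly. */
typedef struct { int k; } counter_env;   /* the lifted environment */

int next_lifted(counter_env *env)
{
    return ++env->k;
}
```

A closure is then just the pair (next_lifted, &env).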
@troi I’ve used only PDP-11 Forth assembler and only very little. It was very easy to add a definition of ā€˜SELF’ (do a recursion) to PDP-11 fig-FORTH by using the PDP-11 assembler. I did this while at someone’s home for the evening, playing with the LSI-11 machine there.
@troi I did a TINY bit of x86-64 programming for the Unicon project and discovered it was actually MUCH better than x86. AMD did a far better job IMO than Intel did.
@troi The problem with Z80 was its fancy registers and instructions for stack frames were expensive in memory and clock cycles. So hardly anyone used them that way AFAIK. Otherwise it would be much nicer than 8080. Instead it became sort of 8080 with relative jumps and a nicer assembly language.

@troi I honestly think everyone switching to Rust is stupid, though. They write the same bad code in Rust they wrote in C. And they could instead be upgrading to C23, with much less work, but aren’t.

If they seriously had cared about the quality of their code the way they say Rust is doing for them, they could have switched to Ada eons and eons ago. But they act as if they never even heard of the language.

@chemoelectric As I've looked at Rust/Zig/Swift/insert-next-new-thing-here I've just seen stuff that looks bolted onto C. I know I'm oversimplifying, but that's the first and second impression.

A few extra layers of mis- I mean in-direction on top of assembly language.

This old man is going to go chase kids off his lawn now, maybe yell at a cloud :)

@troi I went to a bit of whisky and to bed! Rust certainly is just C. Except D is a better ā€˜just C’. The difference is that Rust can be used at a lower level. However, ATS2 already existed for that and requires no shims. ATS2, however, requires actual EFFORT AND ATTENTION of the programmer. (Also it does not have Unicode built in. So what? Use ICU or libunistring. Rust botches Unicode, anyway.)

Ada, of course, has no relationship to C. It is a result of disaffection with Algol 68.

@troi It was designed from the start to be a language for systems and embedded programming, as well as large-scale stuff. It is constantly updated with new standards that are STANDARDS, and which are entirely in the public domain. And one of the major compilers is GCC.

The GCC Ada front end, GNAT, is maintained by AdaCore, who suck at a lot of things, but the compiler seems to be a good one. The main drawback is that its ABI frequently changes, so you have to recompile. OTOH it is common to compile statically, anyway.

@troi If GNAT were in widespread use by the free software community, AdaCore would likely stabilize the ABI. They probably do not stabilize it because their current customers are largely embedded-systems ones.
@troi You might have to recompile things if you do certain styles of coding even with C, though, you know. What comes to mind is there is no standard ABI for passing structs as arguments or return values. There is the pcc convention, but it is not the default for gcc. And what gcc does probably depends on optimizer settings.
@troi I notice BTW from looking on the Intertubes about Grady Booch that supposedly he was heavily involved in Ada in the 1970s. I also notice he is a big tool. So I’m guessing the big involvement in Ada is ā€˜as related by Grady Booch’ and his influence was there but may be less than advertised.

@troi All these other C-like languages, they try to make it look all spooky and futuristic. Algol 68 did the same thing with Algol 60. That was part of what was wrong with it.

The exceptions for C that I can think of are D, which still looks like C, and a language I knew even before I knew C itself: Ratfor. This is old-style Fortran written in C syntax.

(I distribute a Ratfor preprocessor that is written in C. It is someone else’s but I fixed some bugs.)