So now I am getting sidetracked again, from TGE to making a language that is ‘orthogonal’ but which requires no ‘two-level grammar’ yadda yadda yadda nor a report Wirth, Dijkstra, and Hoare could not read and about which Hoare would end up writing a paper roughly entitled ‘Algol 68 Sucks’.

Indeed, my notion is to not write any formal grammar at all, and to use only A PRATT PARSER.

Which, admittedly, is a thing that did not exist yet for Algol 68, but that hardly matters.

Incidentally, Wirth developed a language called ‘Algol 68 Sucks’ (actually it is called Pascal) and the United States Department of Defense funded development of another language called ‘Algol 68 Sucks’ (actually it is called Ada).

Neither of these languages sucks.

Pascal as originally developed is a whole-program-only language, mind you. There is no separate compilation. You must understand that to understand the language. Everything is nested within ‘program’.

It is with Modula that things get more serious in that line of development. (Not with Object Pascal aka Turbo Pascal, in which was written the worst computer code I have ever seen or worked on.)

I am fond of safety but also of simplicity.

It must be admitted, for instance, that ATS (any version of it) is a complicated language. It is much simpler than an Adriaan van Wijngaarden would have made it. It surely is simpler and much more practical than, say, Agda (which is a hugely moving target, anyway), which attempts to be ‘theoretical’ whilst ATS does not.

So we want ATS-like safety but we also want simplicity. Can that be done with an ‘orthogonal’ language and Pratt parsing?

Note, for instance, the problem of buffer overruns. But there is another problem with arrays that is addressed by ATS, and which is NOT addressed widely otherwise: aliasing. That is, the problem of entries in arrays being aliased, because it is easy to refer to an array in two places.

ATS solves both problems by making an array a linear type that is typechecked via a ‘view’. A view has a particular length, measured in entries of a particular type. The entry type can be changed, and so the length with it, but let us ignore that.

A view can also be split in mutually exclusive twos or threes (or more if need be). Whilst split, it becomes that many arrays. Then these must be joined together again at some time, because the types are linear. They must be consumed.

These are all typechecking operations, but they must be written by the programmer or put in subprograms.

Thus, to read or write an array entry, you must split the array into two or three parts and also somehow transform the one entry into an ordinary variable.

Then you have to join it all back together.

But none of this steals from your code speed, because it is all typechecking code. So you end up with safe array access that requires no runtime bounds checking!
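A hedged analogue in Rust (my example, not ATS code): `split_at_mut` turns one mutable array view into two disjoint mutable views. While the halves are live, the whole array cannot be touched, which is the same no-aliasing guarantee, enforced entirely at compile time with no runtime cost.

```rust
// Sketch: one mutable view is 'split' into two disjoint views; the
// borrow checker forbids touching the whole array while they are live.
fn negate_left_tenfold_right(a: &mut [i32]) {
    let mid = a.len() / 2;
    let (left, right) = a.split_at_mut(mid); // 'split' the view
    for x in left.iter_mut() { *x = -*x; }   // touches a[..mid] only
    for x in right.iter_mut() { *x *= 10; }  // touches a[mid..] only
    // The borrows end here; the 'join' back into one array is implicit.
}

fn main() {
    let mut v = [1, 2, 3, 4];
    negate_left_tenfold_right(&mut v);
    assert_eq!(v, [-1, -2, 30, 40]);
}
```

Rust makes the join implicit when the borrows end; ATS, being fully linear, makes you write the join out.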

The question is, can we achieve something similar by being ‘orthogonal’ with a Pratt parser? That is, can we do it by adjusting the SYNTAX of the language?

How about if we make fundamental syntactic operations on arrays be splitting and joining?

We can also make assignment and evaluation be entry-wise in the manner of Fortran and Matlab, rather than require it be reduced to a proper variable as in ATS. This way we still avoid the aliasing problem.

For example, if we do a quicksort, the two parts of the array are still fed separately to the two parts of the algorithm. Neither subprogram call can touch the other subprogram call’s subarray.
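A sketch of that shape in Rust (my own code, only illustrating the split): each recursive call receives a disjoint mutable slice obtained by splitting, so neither call can touch the other's subarray.

```rust
// Quicksort (Lomuto partition) where the recursion works on the two
// disjoint views produced by splitting around the pivot.
fn quicksort(a: &mut [i32]) {
    if a.len() <= 1 { return; }
    let pivot = a[a.len() - 1];
    let mut i = 0;
    for j in 0..a.len() - 1 {
        if a[j] <= pivot { a.swap(i, j); i += 1; }
    }
    let last = a.len() - 1;
    a.swap(i, last);
    // The split consumes the single view and yields two disjoint views.
    let (lo, hi) = a.split_at_mut(i);
    quicksort(lo);
    quicksort(&mut hi[1..]); // skip the pivot
}

fn main() {
    let mut v = [3, 1, 4, 1, 5, 9, 2, 6];
    quicksort(&mut v);
    assert_eq!(v, [1, 1, 2, 3, 4, 5, 6, 9]);
}
```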

But we have done it with the Pratt parser, so have done it ‘orthogonally’, not via semantics. And we have done it without writing an unreadable report.

Mind you, this requires that splitting an array CONSUME the original array, and that the array must be rejoined if it is to be recovered.

We do not have to enforce linear typing, only that splitting and joining be destructive operations.

But that’s just control of your symbol tables and such. That’s just more syntax. You could probably even go full linear through syntax.

It likely does mean you aren’t going to be happy with the classic ‘orthogonal’ syntax example, if it should appear as an initialization:

(if b then x else y fi) := 3

Still, if assignment to an initialized variable implies consumption of the previous value, then it is okay if not an initialization.

Oh, let’s write that in Dijkstra’s nondeterministic if-fi:

(if b => x | ~b => y fi) := 3

because let us say we want to use that, if only to introduce a new generation of programmers to it.

The big difference here is there is no sequence of operations. The code takes any one of the branches in the if-fi, it is not specified. The only thing that decides which branch is actually taken is the guard.
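A hypothetical runtime sketch of those semantics (mine, not from any real language): an if-fi as a set of (guard, arm) pairs. Any arm whose guard holds may be chosen; this sketch simply takes the first, but a conforming implementation could take any. If no guard holds, the program is erroneous and aborts.

```rust
// Evaluate a Dijkstra-style if-fi: the order of the arms carries no
// meaning; only the guards decide what may run.
fn if_fi<T>(arms: &[(bool, fn() -> T)]) -> T {
    for &(guard, arm) in arms {
        if guard {
            return arm();
        }
    }
    panic!("if-fi: no guard holds -- erroneous program");
}

fn main() {
    let b = false;
    // if b => 10 | ~b => 20 fi
    let v = if_fi(&[(b, (|| 10) as fn() -> i32), (!b, || 20)]);
    assert_eq!(v, 20);
}
```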

(I read stuff here in the Fediverse that makes me think, ‘These people have no idea what “nondeterminism” and “determinism” even mean’.)

Of course the compiler could have decided ~b is the negation of b and so produced the same code as ‘if b then x else y fi’. But that is for the compiler writer to decide.

Meanwhile, the programmer actually has a simpler situation with which to do proofs. To do proofs with the earlier, non-Dijkstra formulation, one first usually throws away part of the information and mentally transforms to the Dijkstra formulation.

Also, suppose I left out the ~b guard. Then there would be no specified operation for when b is false. There is no fallthrough. The program is erroneous and presumably will terminate with an error message.

The program would have been erroneous in this case, anyway, because you cannot assign to ‘nothing’. But the principle is more general.

Absence of a guard is a common cause of nondeterminism in the Mercury language, though in Mercury I think guards are gone through in sequence.

Obviously, if the guards cover all the cases and are mutually exclusive, then the program is ACTUALLY deterministic. And a compiler may be able to confirm this.

As indeed a Mercury compiler has to be able to do, for the Mercury language, though there I believe the cases do not have to be mutually exclusive. In one’s mind, one assumes earlier tests that pass are actually excluded from later tests, and does proofs by using the Dijkstra formulation.

Strictly speaking, however, the order of the expressions is information that is relevant and yet not included in the proofs; bits of code whose proper functioning would break if the order were changed are not explicitly accounted for.

I am merely trying to make an ‘orthogonal’ language. I can let the program terminate with an error if there is a case left out. At least I need not REQUIRE that a program be deterministic. In fact I think it silly to REQUIRE that a program be deterministic, and most programming languages in fact do not require this.

Indeed, they are happy to let you have, for instance, a loop that one has failed to prove either terminating or nonterminating.

ATS, incidentally, has but one method for proving a recursion or loop terminating, which is ‘termination metrics’. It involves having a typechecking variable, or a tuple of typechecking variables, move progressively towards zero without passing zero, on each iteration of the recursion or loop. It is mathematical induction mapped to counting backwards, although the counting can be by leaps and bounds (division, for instance).
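A runtime caricature of such a metric (my example, not ATS syntax): here the metric is n itself, and each recursive call is on n / 2, which for n ≥ 2 satisfies 0 ≤ n / 2 < n, so the metric moves toward zero without passing it, by leaps (division). In ATS this is proved at typechecking; here it is only a comment plus a debug assertion.

```rust
// Floor of the base-2 logarithm, with the termination metric spelled out.
fn log2_floor(n: u64) -> u32 {
    if n <= 1 {
        return 0;
    }
    debug_assert!(n / 2 < n); // the metric strictly decreases toward zero
    1 + log2_floor(n / 2)
}

fn main() {
    assert_eq!(log2_floor(1), 0);
    assert_eq!(log2_floor(8), 3);
    assert_eq!(log2_floor(9), 3);
}
```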

ATS2 has both recursions and loops. Loops are poorly documented, tho’.

In a way it matters little, though, because recursions can be done with reference variables and so tail recursions effectively resemble loops closely. But, as I say, these can be written as loops more properly, resembling C loops, except with a lot of proof notations.

The following occurs to me:

If you start with bytes as the basic type, you can make EVERY type in the language be an array. And that includes records. For accessing a record is merely splitting of an array.

And a multidimensional array is also merely splitting of an array. This is already how it is done in Fortran. Indeed, in Fortran you can change the shape of an array simply by calling a subprogram that refers to the array differently.
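A minimal Rust sketch of the record case (the record layout here is hypothetical, invented for illustration): a record { tag, len, value } stored as a 4-byte array, with every field reached purely by splitting the bytes.

```rust
// 'Field access is merely splitting': carve a 4-byte array into a
// u8 tag, a u8 len, and a little-endian u16 value.
fn fields(rec: &[u8; 4]) -> (u8, u8, u16) {
    let (head, value) = rec.split_at(2); // split off the two 1-byte fields
    (head[0], head[1], u16::from_le_bytes([value[0], value[1]]))
}

fn main() {
    let rec = [7u8, 2, 0x34, 0x12];
    assert_eq!(fields(&rec), (7, 2, 0x1234));
}
```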

In ATS things CAN be done this way in typechecking.

To some degree they are. The sizes of types and of objects are always measured in bytes. So you need to essentially treat each type as if it were an array of bytes. And you can actually make it so for typechecking, but it is a LINEAR cast, and so you must use ONLY that type until you convert it back to the original.

So it would be with our SYNTAX system. You can use only one SYNTAX at a time.

That would be different from Algol 68, I am sure. So we start with an array of 8 bytes...

And it has to be aligned on a proper boundary. You x86 programmers forget about that! It has to be aligned on a proper boundary.

But we will merge the 8-array of BYTE into a 1-array of LONG REAL (or some such name). And that (as it really can in C) can be shorthanded as just a variable of LONG REAL. (*p and p[0] mean the same thing in C.)
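A hedged sketch of that merge in Rust: the safe byte-to-f64 conversion. In the imagined language the merge would be purely a typechecking step with no runtime work; here `from_le_bytes` does an actual (cheap) copy.

```rust
// Merge an 8-array of BYTE into a single LONG REAL (an f64 here),
// assuming little-endian byte order.
fn merge_to_long_real(bytes: [u8; 8]) -> f64 {
    f64::from_le_bytes(bytes)
}

fn main() {
    let b = 1.5f64.to_le_bytes();
    assert_eq!(merge_to_long_real(b), 1.5);
}
```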

A few little details that might be unusual.

Maybe we don’t use the usual ‘indexing by default starts at 1 (or 0) but you can change that.’ Maybe instead we just use ICON INDEXING.

This goes from 1 to n+1, with an equivalent scale from -n to 0. You would not believe how useful this is.

OTOH maybe one could say that whatever scale you devise has an equivalent scale of that kind.

Thus if you say there is a 0 to n scale, then you automatically also get -n-1 to -1. But this will not behave as Python people expect. -1 will mean ONE PAST THE END, not the last entry.

Well, so be it.
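The 1-to-n+1 scale above can be sketched as follows (my code; `icon_pos` is a name I made up): positions run from 1 to n + 1, and the equivalent nonpositive scale runs from -n to 0, with p ≤ 0 mapping to n + 1 + p, so 0 means one past the end.

```rust
// Normalize an Icon-style position into the 1 to n + 1 scale, or
// return None when it is out of range.
fn icon_pos(p: i64, n: i64) -> Option<i64> {
    let q = if p <= 0 { n + 1 + p } else { p };
    if (1..=n + 1).contains(&q) { Some(q) } else { None }
}

fn main() {
    assert_eq!(icon_pos(1, 5), Some(1));  // first position
    assert_eq!(icon_pos(0, 5), Some(6));  // one past the end
    assert_eq!(icon_pos(-5, 5), Some(1)); // same as position 1
    assert_eq!(icon_pos(7, 5), None);     // out of range
}
```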

Another possibility, of course, is implicitly using modular types for indices. This might not be desirable, especially as for Fortran subprograms you often actually specify the modulus, or rather the stride for multiple dimensions. The compiler for my language might disagree with what you needed for Fortran.

In Fortran really all arrays are just vectors and the compiler has support for accessing them with arbitrary column major strides.
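A sketch of that addressing (zero-based here for brevity): in column-major order, element (i, j) of an m-by-n matrix stored as a flat vector lives at i + j * m, so the entries of one column sit next to each other.

```rust
// Column-major index of element (i, j) in a matrix with m rows.
fn col_major(i: usize, j: usize, m: usize) -> usize {
    i + j * m
}

fn main() {
    let m = 3; // rows
    // The 3 x 2 matrix [[1, 4], [2, 5], [3, 6]] stored column by column:
    let flat = [1, 2, 3, 4, 5, 6];
    assert_eq!(flat[col_major(2, 1, m)], 6); // row 2, column 1
}
```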

C programmers are used to row major but I think column major is perhaps very slightly better in some tiny way that obviously is unimportant and so we should forget about it.

Ada can do it either way but defaults to row major. I believe. I haven’t used arrays in Ada much.

In OUR language it is just syntax. We split the storage block up in some way and give that a syntax. Once the block is split up, you cannot refer to it any more by the old syntax. That is to prevent aliasing.

We probably do not require that the block be put back together and consumed. That would be necessary for linearity. ATS would require that. Mind you, this would be in the typechecking sublanguage of ATS, rather than the ‘dynamic’ syntax of executable code.

We are doing it in the latter.

Of course it is in the ‘prelude’ of the language and the programmer does not have to put all this in every program.

Now I must go rest and imagine all this before bootstrapping it.
@chemoelectric Hey, long time no online here :/ Memory wise, and as an assembly language programmer not a C programmer, I get row major. But note that this is just a cheap sequential file where rows are records. I don't remember enough of what little maths I knew to know if column major is better for those domains.

@troi I always presumed it was row major in C simply because the notation made it look like a multidimensional array was an array of arrays. In D it actually does MEAN that, in general! Same in some assembly languages, no doubt, but my experience is mostly little Z80.

Otherwise it is mostly immaterial. In Fortran or Ada there is no notational reason.

The tiny advantage of column major might be that it keeps the data of matrix columns close together. Columns usually matter more than rows do.

@troi D has C matrices, which are really just vectors with a stride-based ‘view’. But the default matrices in D are heap-allocated vectors of heap-allocated vectors. So it really is vectors of vectors.

I forget what such a matrix is called. I used to think it was called a dope vector, but that’s actually something else.

@troi C is very much, and on purpose, like assembly language programming. It was of course a substitute for using PDP-11 assembly language, in its early main use.

(On the PRIME they used FORTRAN in a similar way! Which I can understand, because I used FORTRAN for minor systems programming on the TRS-80. Older versions and standards of Fortran let you get at the system in practically any old way. About the only thing missing is a stack for recursion! Which can be added as an extension.)

@troi (Billg’s Fortran compiler was buggy, though. Assuming Billg wrote it, and not one of the other losers at early Micro Soft.)
@chemoelectric "assuming Billg wrote it" reminds me of one of the books about early Microsoft days (like < 20 employees or something) where some hotshot programmer was describing this stupid bug he'd found in one of the MS Basics (ok, it was probably in all of them) to "the Chairman" Bill himself, not realizing that the bug was Bill's :) _Barbarians at the Gates_ maybe? it's been ages but that stuck with me.
@chemoelectric I know the origin there, but the PDP-11 and later Vax systems had a beautiful "Macro" for assembly as they called it. I love the S360 -> early S390 architectures, 6809, and if I'd had more opportunity to use it, the PDP-11's assembly language. I think of C in this specific case as a step backward. (TBH, I always think C is a step backward, but that's just me).

@troi C sucked. It is just starting to be usable with C23. It still needs nested functions, for instance. GNU C has nested functions but they suck. Fortran has better nested subprograms than GNU C does, if what I have read is true.

Also I do not know why everyone thinks they have to use trampolines. GNAT does not use trampolines. ATS does not use trampolines for nested functions, and they nest without end, form closures, and compile to C. That’s right, they form closures.