How about if we make fundamental syntactic operations on arrays be splitting and joining?
We can also make assignment and evaluation be entry-wise in the manner of Fortran and Matlab, rather than require it be reduced to a proper variable as in ATS. This way we still avoid the aliasing problem.
For example, if we do a quicksort, the two parts of the array are still fed separately to the two parts of the algorithm. Neither subprogram call can touch the other subprogram call's subarray.
Mind you, this requires that splitting an array CONSUME the original array, and that the array must be rejoined if it is to be recovered.
We do not have to enforce linear typing, only that splitting and joining be destructive operations.
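As a sketch of the discipline (in Rust, which is merely my illustration language, not the hypothetical one): the borrow checker enforces exactly this "splitting consumes the original" rule. While the two halves exist, the whole array is unusable, so neither recursive call can touch the other's subarray.

```rust
// Quicksort where the split hands each half to a recursive call
// that cannot alias the other half. split_at_mut "consumes" the
// view of the array until both halves are dropped.
fn quicksort(a: &mut [i64]) {
    if a.len() <= 1 {
        return;
    }
    // Lomuto partition with the last element as pivot.
    let pivot = a[a.len() - 1];
    let mut i = 0;
    for j in 0..a.len() - 1 {
        if a[j] <= pivot {
            a.swap(i, j);
            i += 1;
        }
    }
    let last = a.len() - 1;
    a.swap(i, last);
    // The split: `lo` and `hi` are disjoint, and `a` itself is
    // unusable until both are gone -- no aliasing possible.
    let (lo, hi) = a.split_at_mut(i);
    quicksort(lo);
    quicksort(&mut hi[1..]); // hi[0] is the pivot, already placed
}

fn main() {
    let mut v = [3, 1, 4, 1, 5, 9, 2, 6];
    quicksort(&mut v);
    assert_eq!(v, [1, 1, 2, 3, 4, 5, 6, 9]);
}
```

Here the "rejoin" is implicit: when both halves go out of scope, the original array is usable again.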
But that's just control of your symbol tables and such. That's just more syntax. You could probably even go full linear through syntax.
It likely does mean you aren't going to be happy with the classic "orthogonal" syntax example, if it should appear as an initialization:
(if b then x else y fi) := 3
Still, if assignment to an initialized variable implies consumption of the previous value, then it is okay if not an initialization.
Oh, let's write that in Dijkstra's nondeterministic if-fi:
(if b => x | ~b => y fi) := 3
because let us say we want to use that, if only to introduce a new generation of programmers to it.
The big difference here is that there is no sequence of operations. The code takes any one of the branches in the if-fi; it is not specified which. The only thing that decides whether a branch may be taken is its guard.
(I read stuff here in the Fediverse that makes me think, "These people have no idea what 'nondeterminism' and 'determinism' even mean".)
Of course the compiler could have decided ~b is the negation of b and so produced the same code as "if b then x else y fi". But that is for the compiler writer to decide.
Meanwhile, the programmer actually has a simpler situation with which to do proofs. To do proofs with the earlier, non-Dijkstra formulation, one first usually throws away part of the information and mentally transforms to the Dijkstra formulation.
Also, suppose I left out the ~b guard. Then there would be no specified operation for when b is false. There is no fallthrough. The program is erroneous and presumably will terminate with an error message.
The program would have been erroneous in this case anyway, because you cannot assign to "nothing". But the principle is more general.
Absence of a guard is a common cause of nondeterminism in the Mercury language, though in Mercury I think guards are gone through in sequence.
Obviously, if the guards cover all the cases and are mutually exclusive, then the program is ACTUALLY deterministic. And a compiler may be able to confirm this.
As indeed a Mercury compiler has to be able to do, for the Mercury language, though there I believe the cases do not have to be mutually exclusive. In one's mind, one assumes earlier tests that pass are actually excluded from later tests, and does proofs by using the Dijkstra formulation.
I am merely trying to make an "orthogonal" language. I can let the program terminate with an error if there is a case left out. At least I need not REQUIRE that a program be deterministic. In fact I think it silly to REQUIRE that a program be deterministic, and most programming languages in fact do not require this.
Indeed, they are happy to let you have, for instance, a loop that one has proved neither terminating nor nonterminating.
ATS, incidentally, has but one method for proving a recursion or loop terminating, which is "termination metrics". It involves having a typechecking variable, or a tuple of typechecking variables, move progressively towards zero without passing zero, on each iteration of the recursion or loop. It is mathematical induction mapped to counting backwards, although the counting can be by leaps and bounds (division, for instance).
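A sketch of the idea, checked at run time here rather than by a typechecker as ATS does it: each recursive call must strictly decrease the metric while keeping it nonnegative. Euclid's algorithm is the classic case.

```rust
// Termination metric sketch: the metric is `b`. On each call,
// a % b < b and a % b >= 0, so the metric strictly decreases
// toward zero without passing it -- the recursion must terminate.
fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 {
        a
    } else {
        let next = a % b;
        debug_assert!(next < b); // the metric moves toward zero
        gcd(b, next)
    }
}

fn main() {
    assert_eq!(gcd(48, 18), 6);
    assert_eq!(gcd(7, 0), 7);
}
```

In ATS the `debug_assert!` obligation is discharged statically, so the check costs nothing at run time.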
ATS2 has both recursions and loops. Loops are poorly documented, tho'.
The following occurs to me:
If you start with bytes as the basic type, you can make EVERY type in the language be an array. And that includes records. For accessing a record is merely splitting of an array.
And a multidimensional array is also merely splitting of an array. This is already how it is done in Fortran. Indeed, in Fortran you can change the shape of an array simply by calling a subprogram that refers to the array differently.
In ATS things CAN be done this way in typechecking.
To some degree they are. The sizes of types and of objects are always measured in bytes. So you need to essentially treat each type as if it were an array of bytes. And you can actually make it so for typechecking, but it is a LINEAR cast, and so you must use ONLY that type until you convert it back to the original.
So it would be with our SYNTAX system. You can use only one SYNTAX at a time.
That would be different from Algol 68, I am sure. So we start with an array of 8 bytes...
And it has to be aligned on a proper boundary. You x86 programmers forget about that!
But we will merge the 8-array of BYTE into a 1-array of LONG REAL (or some such name). And that (as it really can in C) can be shorthanded as just a variable of LONG REAL. (*p and p[0] mean the same thing in C.)
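In Rust this conversion can be made explicit (a sketch, with `f64` standing in for LONG REAL): the byte array and the floating-point value are two views of the same 8 aligned bytes, and only one view is in hand at a time.

```rust
// An aligned 8-byte array merged into one f64 and split back,
// mirroring "8-array of BYTE" <-> "1-array of LONG REAL".
fn main() {
    let x: f64 = 2.5;
    let bytes: [u8; 8] = x.to_ne_bytes(); // split: f64 -> 8 bytes
    let y = f64::from_ne_bytes(bytes);    // join: 8 bytes -> f64
    assert_eq!(x, y); // a round trip loses nothing
}
```

`to_ne_bytes`/`from_ne_bytes` use native byte order, so this is the same bit pattern either way, just under two different types, one at a time.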
A few little details that might be unusual.
Maybe we don't use the usual "indexing by default starts at 1 (or 0) but you can change that." Maybe instead we just use ICON INDEXING.
This goes from 1 to n+1, with an equivalent scale from -n to 0. You would not believe how useful this is.
OTOH maybe one could say that what you devise has an equivalent scale of the kind.
Thus if you say there is a 0 to n scale, then you automatically also get -n-1 to -1. But this will not behave as Python people expect. -1 will mean ONE PAST THE END, not the last entry.
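The Icon scheme can be sketched as a position normalizer (my own formulation): for a sequence of length n the positions run 1 through n+1, and any position p <= 0 is shorthand for n + 1 + p, so -n through 0 names the same positions on the other scale.

```rust
// Icon-style positions: 1..=n+1, with -n..=0 as the equivalent
// scale. Position 0 is one past the end, not the last entry.
fn icon_pos(p: i64, n: i64) -> i64 {
    let q = if p <= 0 { n + 1 + p } else { p };
    assert!((1..=n + 1).contains(&q), "position out of range");
    q
}

fn main() {
    let n = 5;
    assert_eq!(icon_pos(1, n), 1);  // before the first element
    assert_eq!(icon_pos(-5, n), 1); // same position, other scale
    assert_eq!(icon_pos(0, n), 6);  // one past the end
    assert_eq!(icon_pos(6, n), 6);
}
```

Note how this differs from the Python convention: here the nonpositive scale names positions between elements, so 0 (or -1 on a 0-based scale) lands past the end rather than on the last entry.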
Well, so be it.
Another possibility, of course, is implicitly using modular types for indices. This might not be desirable, especially as for Fortran subprograms you often actually specify the modulus, or rather the stride for multiple dimensions. The compiler for my language might have disagreed with what you needed for Fortran.
In Fortran, really, all arrays are just vectors, and the compiler has support for accessing them with arbitrary column-major strides.
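The addressing rule itself is one multiply-add (sketched here 0-based for brevity; Fortran's defaults are 1-based): element (i, j) of an m-by-n column-major matrix lives at flat index i + j * m, with m as the column stride.

```rust
// A Fortran-style matrix is a flat vector plus a column-major
// addressing rule: (i, j) -> i + j * m for an m-row matrix.
fn col_major(i: usize, j: usize, m: usize) -> usize {
    i + j * m
}

fn main() {
    // 2 x 3 matrix [[1,2,3],[4,5,6]] stored column by column:
    let flat = [1, 4, 2, 5, 3, 6];
    let m = 2; // number of rows = column stride
    assert_eq!(flat[col_major(0, 2, m)], 3); // row 0, column 2
    assert_eq!(flat[col_major(1, 1, m)], 5); // row 1, column 1
}
```

Row major just swaps the roles: (i, j) -> i * n + j, striding by the row length instead. Passing a different m to a subprogram is exactly the Fortran reshape trick above.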
C programmers are used to row major but I think column major is perhaps very slightly better in some tiny way that obviously is unimportant and so we should forget about it.
Ada can do it either way but defaults to row major. I believe. I havenāt used arrays in Ada much.
@troi I always presumed it was row major in C simply because the notation made it look like a multidimensional array was an array of arrays. In D it actually does MEAN that, in general! Same in some assembly languages, no doubt, but my experience is mostly little Z80.
Otherwise it is mostly immaterial. In Fortran or Ada there is no notational reason.
The tiny advantage of column major might be that it keeps the data of matrix columns close together. Columns usually matter more than rows do.
@troi D has C matrices, which are really just vectors with a stride-based "view". But the default matrices in D are heap-allocated vectors of heap-allocated vectors. So it really is vectors of vectors.
I forget what such a matrix is called. I used to think it was called a dope vector, but thatās actually something else.
@troi C is very much, and on purpose, like assembly language programming. It was of course a substitute for using PDP-11 assembly language, in its early main use.
(On the PRIME they used FORTRAN in a similar way! Which I can understand, because I used FORTRAN for minor systems programming on the TRS-80. Older versions and standards of Fortran let you get at the system in practically any old way. About the only thing missing is a stack for recursion! Which can be added as an extension.)
@troi C sucked. It is just starting to be usable with C23. It still needs nested functions, for instance. GNU C has nested functions but they suck. Fortran has better nested subprograms than GNU C does, if what I have read is true.
Also I do not know why everyone thinks they have to use trampolines. GNAT does not use trampolines. ATS does not use trampolines for nested functions, and they nest without end, form closures, and compile to C. That's right, they form closures.
@chemoelectric For reasons I attribute to terminology I never got into segment register addressing used in the 16 bit Intel. It should have been dead simple for someone who mastered the mainframe base-and-displacement addressing. Many years later I tried to get into the X86-64 assembly but I just don't like Intel. Big effing code museums on a chip that they can't keep track of anymore.
In my limited experience with ARM assembly I find it almost beautiful.
@chemoelectric to the real Don? <insert "we're unworthy" Wayne's World GIF here>. That's the second time Wayne's World has come to mind this morning. I don't know if you've seen the C64 retro-resurrection project going on. I ordered one before I got that bad pathology report. I never owned one, Tandy CoCo guy here, I just like to support such efforts. Instant on BASIC was a great learning tool and Python and other REPLs just aren't the same for most people.
Oh, yeah, I mentioned Wayne's World and then drove into a ditch ... there's an unboxing video posted today I think at commodore.net or .com with Tia Carrera (Cassandra) and I guess her hubby. I did not know she had any geek tendencies.
@troi Friend in Ottawa, Ontario, had a C128 that I used when visiting. For dialup.
Friend of the family back in N.J. had retired from his work with his portfolio of patents on food additives and sold rights to use the stuff. He did his business computing on a CoCo. But I had a Model I.
My FiL had a collection of Model III and IV but by the time I knew him they had already quit working. The 4116 DRAM chips had limited lifespans, really. Probably you could simply have replaced those.
@troi I have looked at 1802 chips and thought, "Maybe". And I just saw kits for some old microprocessor, and briefly thought, "Maybe". But, nah.
I have had a Chinese Arduino clone kit for years, and all I have done with it is run the built-in LED-blink program. And I have a Raspberry Pi Pico kit, and have done even less. It's because I am disabled. I cannot use an ordinary keyboard. Even using a laptop requires I hook up one of my DataHands to it. And they haven't made those in years. It is a mess.
@troi Plus my hand tremor is mild but greater than average. It is the same kind of essential tremor that Katharine Hepburn had more severely and that most people have very mildly. I have it badly enough that my electrical construction work is not good.
Which made my modified TRS-80 a sloppy piece of work!
@troi I could use an Arduino to help me make machines that duplicate the results of the Aspect experiment, you know.
One of the problems with my "Stern-Gerlach" mechanical devices, which behave like Stern-Gerlach magnets, is that they have to be fed with falling objects that are uniformly distributed in space.
It is easy to provide objects distributed according to a bell curve. There is a device with no moving parts called a Galton box, which can be used for that...
@troi But how do I drop beads in a uniform distribution?
I do not know. I am not so clever. Or more likely simply have not tinkered or done semi-geometric calculations enough, because disabled both physically and mentally, and for lack of experience in mechanical work.
But with an Arduino I could use servo motors. And I did not study control engineering (I studied digital signal processing, though never worked in the field), but I can understand moving a bead into a random position!
@troi (Having studied digital signal processing is why I am puzzled that someone like Lov Grover, who seems from the paper he wrote about his stupid algorithm to be a DSP person, could actually fall for J.S. Bell's nonsense. My conclusion is that Stanford University gives the PhD to obedient tools, not to people who actually exercise their innate intelligence.
Grady Booch, to speak of another tool, is these days working for IBM, touting āAIā.)
@troi (See, the thing is, when you study digital signal processing, you are taught how to analyze random processes. And you DO NOT do what John S. Bell did. You are very much taught NOT TO DO what John S. Bell did.
Yet Lov Grover and some other dipshits at Bell Labs fell for it.)
@troi (Speaking of Bell Labs, in grad school we played a little with an AT&T DSP chip that you programmed like microcode. Each line of code ended with the address of the next instruction.
At the undergrad level we taught and I had been taught on a Texas Instruments DSP chip that was unusual amongst microprocessors, aside from having a "delay register" instruction set, in that it was a Harvard architecture. That is, instructions and data on separate memory buses. Typical for DSP, though.)
@troi (End of semester I came into DSP lab with the chip programmed to change the sign of every other sample. This has the effect of converting low frequencies to high and high frequencies to low.
One of the undergrad students just KNEW I would do something weird and so had brought in a cassette of the Beatles White Album. So we played that through the filter. :) )
@troi (BTW the 2022 remix of the Beatles Revolver is absolutely splendid.
This is a random insertion to the thread. I have a special attachment to "Revolver" because it was my first non-kids' record. My father must have brought it home for me. I was a fan of the Beatles sing-along cartoon program for kids. So I had the bowdlerized Capitol Records version, which was still the Beatles' first really whacked out album.
The remix is in a way worse: two of the three Lennon songs Capitol removed suck.
@troi But of course it is still better to have the album restored, and also "Dr. Robert" is now my ringtone. Plus they also remixed both sides of the "Paperback Writer" single.
In my Grado 125 headphones you can make out every word.
In my Grado 80e headphones it still sounds almost as it used to, though. So headphone magnets really do make a difference.)