Thinking about @enkiv2's Guidelines for GUI Composability

https://hackernoon.com/some-tentative-guidelines-for-gui-composability-2900abead1d9

All of which I think are very good, but particularly number 10:

"Every object has a visual representation. Every visual representation corresponds to an object."

I think that one is very important, almost totally alien to mainstream UI/UX thought right now, and I would love to know what it implies.

I think it would lead to a "design language" that was an *actual* language in the formal sense.


@natecull @enkiv2 though he might not be an exact fan of the analogy, the classic Mac Finder might be interesting from that perspective. It has direct manipulation that Mac OS X and Windows lack. You manipulate a control panel? It is the actual control panel itself, not some abstract IHober COM object implemented by a registered Hober32.dll that implements an IShellHober interface loaded by Explorer's "Shell Hobers" virtual folder. It has a name that makes sense to a person, you place it in the Control Panels folder, it gets loaded on startup, and you manipulate it like a file on your desktop, while it remains a physical object.
@natecull @enkiv2 to add to this: to manage fonts and extensions on a Mac, you didn't need a separate extension or program, you just dragged and dropped into a system folder, because everything was a physical object with an obvious meaning through a human-readable name (CD-ROM Access, not libcd.so) that could be directly manipulated in the Finder

@libc @enkiv2

One of the many sadnesses I have about Windows is that there's the filesystem, and then there's the registry, which for some reason isn't a filesystem, and then there's... whatever in heck it is that File Explorer browses, which is not quite the filesystem and not quite not, because it has things like 'Libraries' which have no filesystem representation.

It's just layers and layers of never-completely-finished almost-filesystem-like objects, which obscure each other.

@libc @enkiv2

Also, I kind of wish someone would take good old database CRUD semantics and give us a formalised persistent/pure-functional/non-destructive-update variant of that, so we could think through the semantics of distributed objects run over append-only-log stores and the like. Which I once thought REST was aspiring to be, but now I don't think REST stands for anything anymore.
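A minimal sketch of what non-destructive CRUD over an append-only log might look like (hypothetical Python; the event shape and function names are my own invention, not anything REST or any database actually specifies): every operation is appended to a log, and the "table" is a pure fold over that log, so no past state is ever destroyed.

```python
# Hypothetical sketch: CRUD as a pure fold over an append-only event log.
def apply(state, event):
    op, key, value = event
    new = dict(state)          # pure: never mutate the input state
    if op in ("create", "update"):
        new[key] = value
    elif op == "delete":
        new.pop(key, None)
    return new

def replay(log):
    # The current "table" is just a replay of the whole log.
    state = {}
    for event in log:
        state = apply(state, event)
    return state

log = [("create", "a", 1), ("update", "a", 2), ("delete", "a", None)]
assert replay(log) == {}
assert replay(log[:2]) == {"a": 2}   # any past state is recoverable
```

The point of the sketch is only that "update" and "delete" become log entries like any other, so history is a first-class value rather than something overwritten.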

@libc @enkiv2

(One of the important bits still missing, I think, being the functional-reactive live data stream equivalent of 'tail recursion'. E.g.: if a data stream adds a new version but nobody has observed the old version, it should be safe to optimise that as a mutate in place. Under what circumstances can a system automatically substitute a mutate for a create? It feels like a very similar problem to tail recursion elimination for code, but for data. We should probably have this.)
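The "mutate instead of create when nobody saw the old version" idea can be sketched concretely (hypothetical Python; `Cell` and its observer count are illustrative inventions, not any real reactive library's API):

```python
class Cell:
    """A versioned cell: append a new version on update, but
    overwrite in place when nobody observed the old one."""
    def __init__(self, value):
        self.versions = [value]   # append-only log of versions
        self.observers = 0        # readers holding the latest version

    def observe(self):
        self.observers += 1
        return self.versions[-1]

    def update(self, value):
        if self.observers == 0:
            # nobody saw the old version: safe to mutate in place
            self.versions[-1] = value
        else:
            # old version was observed: preserve it, append new one
            self.versions.append(value)
            self.observers = 0

c = Cell(1)
c.update(2)    # unobserved, so the old version is overwritten
assert c.versions == [2]
c.observe()
c.update(3)    # observed, so a new version is appended
assert c.versions == [2, 3]
```

This is essentially dynamic reference counting of versions rather than values, which is what the replies below gesture at.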

@natecull @libc @enkiv2 Reference counting can do this in the dynamic case. There are also ways to optimise pure functions so that they can reuse a previous computation. There was a paper somewhere that had verified versions of it in Agda for various lambda calculi.
@natecull @libc @enkiv2 Or having linearity information in functions could also help.

@grainloom @libc @enkiv2

I think you're not quite grasping my point. I'm not surprised because I don't quite grasp it either; it's murky, floating at the dim reaches of my intuition.

I mean I'm not talking about functions here, but data. It's an analogy.

@grainloom @libc @enkiv2

What I'm fishing for is something along the lines of finding a way to reliably automate cache-purging, which I think is infamous as being a Very Unsolvable Problem. So maybe it can't be done at all.

But it's sort of something we need if we want to even think about doing functional-reactive sensibly.

If everything is a live versioned data stream, then either our storage keeps filling up with infinite versions of all our stuff, or we throw stuff away at some point.

@grainloom @libc @enkiv2

And what I'm groping towards is:

* 'the cache fills up and then it all breaks' feels a little like 'the stack fills up and then it all breaks'

* tail recursion elimination can fix a lot of the stack filling up problem, meaning the programmer can think in recursion and doesn't have to worry so much

* is a live data stream (if running over immutable storage) something a little like a recursive function?

* what, then, is the data equivalent of a 'tail call position'?
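For the analogy's sake, here is the code side of it, in Python (which notoriously does not do tail-call elimination, so the rewrite has to be done by hand): a tail call reuses the current stack frame the way an unobserved old version could be overwritten in place.

```python
# Recursive sum: each call keeps a stack frame alive, like keeping
# every old version of a value around.
def total_rec(xs, acc=0):
    if not xs:
        return acc
    return total_rec(xs[1:], acc + xs[0])   # call in tail position

# The tail-call-eliminated form: one frame, one accumulator,
# "mutated" in place, just as an unobserved version could be.
def total_iter(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

assert total_rec([1, 2, 3]) == total_iter([1, 2, 3]) == 6
```

The question in the thread is what licence, analogous to "the call is in tail position", would let a runtime perform the data-side rewrite automatically.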

@grainloom @libc @enkiv2

My guess being something like: 'a data equivalent of a tail call is like one cell which is only observed by one other cell, and in which we know there is going to be no need to abort or backtrack the calculation, for some reason, maybe because it's a kind of throwaway value'.

There's probably a lot of throwaway values and it would be nice to not either have to manage them separately or burn them to an eternal permastore. But is there an algebra to help us decide?

@grainloom @libc @enkiv2

(Is "observed by only one other value" what you mean by "linearity"? In which case yeah that's maybe what I'm talking about. But linear logic does my head in.)

@grainloom @libc @enkiv2

and then there's the security side of things where, eg, temporary values used to compute a crypto key MUST BE purged from working storage with extreme prejudice, and I don't know at all how that can be accommodated in an "everything is immutable permanent data operated on by pure functions lifted to reactive streams" model.

But if we had some way of knowing for sure what values get purged and what get immutably stored, we might solve both problems at once.

@natecull @libc @enkiv2 You might find the #Granule language interesting.

@grainloom @libc @enkiv2

https://granule-project.github.io/granule.html

This looks very clever and mathy and I don't understand any of it. But I agree with some of what it's trying to do. Building explicit resource usage bounds and privacy levels into the calculus seems required to make computers slightly less evil than they are now.

(the Cloud will always be evil, sadly. But assuming we own and can trust the hardware, which is a very big assumption, then this seems like the kind of work we need in our languages.)


@grainloom @libc @enkiv2

I think static types are on a hiding to nothing, unfortunately and sadly.

Types must be runtime-computable entities or nothing. Because there is no place inside a computer that is 'not runtime'.

When the type theorists understand that, and give us type theories that can operate on types as objects, and which double as programming languages, then they will begin to make progress. Dependent types are trembling on the brink of that chasm, but not yet stepping across.

@grainloom @libc @enkiv2

Because, e.g., a database table can be thought of as a type. Yet it is an entity that can vary at runtime, and which can be generated by functions running at runtime.

Think about this, type theorists. Give us a type theory that can understand this.

something like "a type is a structured set of data; it can be either known or unknown at any point. If known at any point, a bunch of optimisations can be done at that point. If unknown, those can be done later."
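Python happens to already treat types as ordinary runtime values, which makes the "table as a runtime-generated type" idea easy to sketch (hypothetical; `row_type` and its schema dict are my own illustrative names):

```python
# A function that builds a type at runtime from a "schema" that
# only exists at runtime, much like a database table's columns.
def row_type(name, columns):
    # columns: dict of field name -> expected Python type
    def check(self):
        for field, ftype in columns.items():
            if not isinstance(getattr(self, field), ftype):
                raise TypeError(field)
    def init(self, **kwargs):
        for field in columns:
            setattr(self, field, kwargs[field])
        check(self)
    # type() itself is a runtime type constructor
    return type(name, (), {"__init__": init})

# Schema discovered (or altered) while the system is running:
Person = row_type("Person", {"name": str, "age": int})
p = Person(name="Ada", age=36)
assert isinstance(p, Person)
```

Of course this gives up static checking entirely; the thread's wish is for a theory that can do the checking *when* the type becomes known, not never.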

@grainloom @libc @enkiv2

I think that "parse trees" need to be the native data structure that a language runtime uses, so the compiler doesn't throw useful information away.

I also think that raw S-expressions aren't quite information-dense enough to express parse trees, which is why actual Lisps keep marking up S-exps with ad-hoc syntax to express the bits that don't fit. We should stop and think about why this keeps happening and how to augment S-exps so we don't have to do this.
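One concrete instance of that ad-hoc markup is the quote reader macro: `'x` is not itself an S-expression, it's textual sugar the reader rewrites into `(quote x)` before the language ever sees it. A toy reader in Python (hypothetical sketch, not any real Lisp's reader) shows the mechanism:

```python
# Toy S-expression reader with one reader macro: 'x => (quote x)
def tokenize(src):
    return (src.replace("(", " ( ")
               .replace(")", " ) ")
               .replace("'", " ' ")
               .split())

def read_tokens(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read_tokens(tokens))
        tokens.pop(0)          # drop the closing ")"
        return lst
    if tok == "'":
        # the reader macro: rewrite sugar into a plain S-expression
        return ["quote", read_tokens(tokens)]
    return tok

assert read_tokens(tokenize("'(a b)")) == ["quote", ["a", "b"]]
```

Every such macro (`'`, `` ` ``, `#(...)`, `#'`, ...) is a place where plain S-expressions weren't dense enough and syntax was bolted on at the reader level.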

@natecull @libc @enkiv2 Didn't Racket solve that problem quite well?

@grainloom @libc @enkiv2

In my opinion Racket did not solve that problem.

For example, it still uses a bunch of reader macros with things like '#'.

And then it has its own separate 'syntax' representation which I think is more like S-expressions. I dunno. It was so ugly and repelled me so hard that I didn't delve heavily into it.

While Racket 'works' for the use case of 'implementing Racket', it does not possess the quality of simplicity and beauty I'm looking for in a syntax.

@grainloom @libc @enkiv2

I am arguing that reader macros are a code smell in a Lisp, and that they should be purged from the language.

Until we can take arbitrary RAM structure and be able to 100% unambiguously serialise it out to text and read it back in again with zero data loss, we don't actually have a syntax, just a sort of loose set of guidelines.
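Python illustrates how far short of that bar most languages fall: plain data round-trips through text, but the moment a structure contains a cycle, the standard textual form is lossy.

```python
import ast

# Round-tripping plain, acyclic data works fine:
data = {"xs": [1, 2, 3], "name": "cell"}
assert ast.literal_eval(repr(data)) == data

# But a cyclic structure has no faithful standard textual form:
xs = [1, 2]
xs.append(xs)                        # xs now contains itself
assert repr(xs) == "[1, 2, [...]]"   # the '[...]' throws the cycle away
```

A syntax meeting the 100% bar would need something like stable node labels so that shared and cyclic references survive the trip to text and back.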

I realise 100% is a high bar but I am very tired of using 10,000 interlocking computer systems each of which is 99% implemented.

@grainloom @libc @enkiv2

Maybe someone could take Racket and do to it what R7RS is trying to do to R6RS Scheme: come up with a tiny minimal subset that can be used to implement all the rest of it?

Maybe even Racket itself has already done that, along with everything else it's doing. But last I looked, the minimal base looked pretty darn huge.