@zwarich @dotstdy @ianh @joe Related, but why aren't first-class multiple return values a thing more often?
For example, the C++26 standard library has senders/receivers which essentially work via continuations, so you can write async functions that take N inputs and M outputs natively without resorting to tuples. You can even send outputs of different types without having to box them into a sum type (because they just statically dispatch to different overloads).
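As a rough sketch of the idea in Rust rather than the actual sender/receiver API (the `Continuation` trait and method names here are invented for illustration): a function takes its continuation as a parameter, and each completion "channel" is a separate statically dispatched method, so multiple outputs and differently-typed outputs need neither tuples nor a sum type.

```rust
// A continuation with one method per possible completion "channel".
trait Continuation {
    fn on_pair(self, quotient: i64, remainder: i64); // two outputs, no tuple
    fn on_error(self, msg: &str);                    // different type, no sum type
}

// The function takes its continuation instead of returning a value.
fn divmod<C: Continuation>(a: i64, b: i64, k: C) {
    if b == 0 {
        k.on_error("division by zero"); // statically dispatched "overload"
    } else {
        k.on_pair(a / b, a % b);        // multiple values passed directly
    }
}

struct Print;
impl Continuation for Print {
    fn on_pair(self, q: i64, r: i64) { println!("{q} rem {r}"); }
    fn on_error(self, msg: &str) { println!("error: {msg}"); }
}

fn main() {
    divmod(17, 5, Print); // prints "3 rem 2"
    divmod(1, 0, Print);  // prints "error: division by zero"
}
```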
@joe @foonathan @dotstdy @ianh I think most languages that would allow you to return borrows would also let you store them as struct fields, etc., and would just add a borrowed pointer type, in which case you could just make a tuple of borrows. I guess you could take a purist "parameter modes" approach and define borrowed struct fields via parameter modes on the struct's constructor? However, I think it might be tricky to make this work well with generics.
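This is roughly how it already looks in Rust, where borrows are first-class values with their own pointer type: you can return a tuple of them or store them in lifetime-parameterized struct fields (types and names below are just for illustration):

```rust
struct Pair {
    name: String,
    score: u32,
}

// A struct whose fields are themselves borrows.
struct Views<'a> {
    name: &'a str,
    score: &'a u32,
}

// Returning multiple borrows as an ordinary tuple.
fn parts(p: &Pair) -> (&str, &u32) {
    (&p.name, &p.score)
}

fn main() {
    let p = Pair { name: "ada".into(), score: 42 };
    let (n, s) = parts(&p);
    let v = Views { name: n, score: s }; // borrows stored as fields
    println!("{} {}", v.name, v.score); // prints "ada 42"
}
```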
Another place where a similar distinction comes up is in-place construction of return values. Rust doesn't have this, but I assume some successor language will want it to support internal self-reference, e.g. for efficient containers with inline capacity.
@zwarich @foonathan @dotstdy @ianh yeah return value emplacement was the other thing i had in mind where a tuple (in its naive unexploded representation) isn't the same thing as multiple values.
even with first-class borrows, the way swift tries to allow for tuples to be magically exploded and imploded by the implementation fights against the very concept of a borrow-of-tuple ever existing, since you really want a contiguous representation for that borrow to refer to.
@joe @foonathan @dotstdy @ianh Another funny realization is that if all return values are returned by writing to a passed reference, then you could have functions with no actual return values and only out-params (with the appropriate pointer type that must be written before returning). It's the use of resources like registers (which may be implicitly used by other code in the function) that necessitates presenting a value at the point of return.
You could take this to the next level and actually have out-params that "steal" registers when written to, but at that point you're probably in meme language territory.
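The "every result is an out-param" style can be approximated in today's Rust with `&mut MaybeUninit<T>`, though Rust can't statically enforce that the callee writes before returning; the hypothetical pointer type above would check that. A minimal sketch (function names invented):

```rust
use std::mem::MaybeUninit;

// No return value at all: every "result" is an out-param the callee
// must initialize before returning.
fn min_max_out(a: i32, b: i32, lo: &mut MaybeUninit<i32>, hi: &mut MaybeUninit<i32>) {
    lo.write(a.min(b));
    hi.write(a.max(b));
}

fn main() {
    let mut lo = MaybeUninit::uninit();
    let mut hi = MaybeUninit::uninit();
    min_max_out(9, 3, &mut lo, &mut hi);
    // Safe only because min_max_out initialized both slots; the imagined
    // "must-write" pointer type would make this unsafety unnecessary.
    let (lo, hi) = unsafe { (lo.assume_init(), hi.assume_init()) };
    println!("{lo} {hi}"); // prints "3 9"
}
```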
@joe @slava @foonathan @dotstdy @ianh This is one of the things I'm trying to do in my new language. There are a few new features required, e.g. variable-dependent types (so that indices into an arena can depend on the arena instance), coeffects/implicit context and coeffect polymorphism, "linear" types, etc.
I am hoping this will remove a lot of the complexity of a language like Rust and look more like the C/Zig code one would write (except now with static checking), but I'm sure there will be ways in which things are more complicated compared to an "isolated ownership" language like Rust.
@shac @joe @slava @foonathan @dotstdy @ianh To be a bit more serious, I think it’s hard to predict how easy things will be to understand in advance. When you implement a language, you are usually spending a lot of your time thinking about the dark corners of features or rare interactions between them rather than the phenomenology of actually writing programs.
As an example, I never would have guessed that Rust would reach the level of adoption it currently has outside of its original niche. While a lot of this is due to features outside of its initial value proposition or the wider ecosystem, the people who come for these reasons still have to put up with everything else.
Since I’m making an inherently weird language, I’ve adopted a few principles to make it more understandable:
1) Avoid prematurely adding features like type classes/traits that invite an endless horizon of potential generalizations but don’t have much to do with my basic thesis.
2) Constantly look at everything I have in the language and try to refactor it by reusing common technology, even if this enables users to do things that seem useless to me. To go back to Rust, since its development is the result of a sometimes contentious social process and constrained by strict backwards compatibility, it is incredibly asymmetric.
3) Plan the development of the language in stages, where at each stage I am building something that is actually useful and internally consistent, but is also deliberately missing features that I plan to add in later stages.
I guess I should actually write a post about my goals and ideas. I previously told myself I don’t want to release anything until I have a bootstrapped self-hosted compiler, but maybe that’s too conservative.
@doctorgoktor @shac @joe @slava @foonathan @dotstdy @ianh My basic thesis is that it's possible to make a safe programming language that enables many of the standard programming patterns for improving performance that might be labelled "data-oriented design", e.g. using arenas (including with indices rather than pointers), stealing bits from pointers, AoS/SoA transposition, rare parts of structs that live externally, etc.
I have lots of other smaller theses (including some just related to the implementation techniques rather than the user experience), but this one is the core because if I fail to achieve it then I don't think there is enough to distinguish it from other languages.
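For context, here is the arena-with-indices pattern as it looks in today's Rust, where nothing ties an index to the arena it came from (names invented for illustration); the variable-dependent types mentioned above would make an index's type depend on the specific arena instance, turning cross-arena index use into a static error:

```rust
#[derive(Clone, Copy)]
struct Idx(u32); // plain index; nothing ties it to a particular arena

struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self { Arena { items: Vec::new() } }
    fn alloc(&mut self, value: T) -> Idx {
        let i = Idx(self.items.len() as u32);
        self.items.push(value);
        i
    }
    fn get(&self, i: Idx) -> &T {
        // Can panic (or silently alias) if `i` came from another arena --
        // exactly the bug instance-dependent index types would rule out.
        &self.items[i.0 as usize]
    }
}

fn main() {
    let mut arena: Arena<&str> = Arena::new();
    let a = arena.alloc("node a");
    let b = arena.alloc("node b");
    println!("{} {}", arena.get(a), arena.get(b)); // prints "node a node b"
}
```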
@joe @slava @foonathan @dotstdy @ianh There is another approach that I didn’t mention but that Swift also dabbles in, which is copy-on-write based on dynamic reference uniqueness (as determined by a reference count). We also used this in Lean, in a more “pure” form where the mutable updates were made more explicit, but it has a few major drawbacks:
1) You are forced to use immediate reference counting, which means immediate *atomic* reference counting in a multithreaded environment. Lean goes a step further than Swift and uses non-atomic reference updates if the object hasn’t yet escaped to multiple threads, but in a way this makes things worse because the gap between single-threaded and multi-threaded code gets even wider.
2) I’m not sure it’s possible to make a compiler that always meets user expectations of when copies will occur. Ensuring uniqueness often requires a linear flow of use-to-def edges, and it is not obvious when a def should “depend” on a prior use, especially across function calls (and especially across ABI boundaries). There are also situations where the optimizer can recognize that a value is non-unique anyways and thus all use-to-def edges can be ignored, although at least this has heavy overlap with RC-related optimizations that you probably want to do anyways.
I had some conversations with Andy and Michael on the Swift team long ago about these problems, and am somewhat familiar with the “ownership SSA” solution that they came up with, but I don’t know how it turned out in practice. When I worked on Lean, some of my biggest improvements were in this area, but the logical next steps seemed like a huge upgrade in compiler complexity. It felt like dealing with this problem would slowly become the main focus of the compiler.
3) It’s difficult to make concurrency primitives (e.g. a mutex owning its contents) that stay on the happy path of single-threaded reference uniqueness. We never did this for Lean, but I think it would require some dynamic escape analysis that goes back and poisons the value stored in the mutex if any sufficiently derived value escapes.
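For anyone unfamiliar with the mechanism being discussed: Rust's `Rc::make_mut` exposes the same dynamic-uniqueness copy-on-write directly, mutating in place when the refcount is 1 and cloning first otherwise, which is the check Swift's CoW collections and Lean's in-place updates of pure values also hinge on.

```rust
use std::rc::Rc;

fn main() {
    let mut a = Rc::new(vec![1, 2, 3]);

    // Unique (refcount 1): mutates in place, no copy.
    Rc::make_mut(&mut a).push(4);

    let b = Rc::clone(&a); // now shared (refcount 2)

    // Non-unique: make_mut clones the vector first, then mutates the copy.
    Rc::make_mut(&mut a).push(5);

    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
    assert_eq!(*b, vec![1, 2, 3, 4]); // b kept the pre-copy value
}
```

Point 2 above is about how hard it is to predict which of those two `make_mut` calls actually copies once the compiler, rather than the programmer, is inserting the uniqueness checks.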