if the goal of "free software" is to give the user control over how the software behaves, then is minecraft[1] more free than, say, gcc[2]? thread locked by moderators after reaching 100 pages of intense debate

[1]: where an end-user can usually install >100 mods, many of which change the game in nontrivial ways, with relative ease and without major conflicts
[2]: where combining multiple features from different forks requires lengthy conflict resolution, usually dependent on deep knowledge of the systems being modified
what i'm trying to say is github.com/FabricMC/Mixin & github.com/LlamaLad7/MixinExtras rock, and we need clones of them for more languages and environments
GitHub - FabricMC/Mixin: Mixin is a trait/mixin framework for Java using ASM
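For readers unfamiliar with Mixin: a loose Python analogy of what an `@Inject(at = @At("HEAD"), cancellable = true)` handler does on the JVM, namely running injected code at the head of a target method and optionally cancelling the original. The `Player` class and `inject_at_head` helper here are hypothetical illustrations, not Mixin's actual API.

```python
import functools

def inject_at_head(target_holder, name, callback):
    """Loosely mimics Mixin's @Inject(at = @At("HEAD"), cancellable = true):
    run `callback` before the original method; if it returns a non-None
    value, "cancel" the original call and return that value instead."""
    original = getattr(target_holder, name)

    @functools.wraps(original)
    def patched(*args, **kwargs):
        result = callback(*args, **kwargs)
        if result is not None:  # the injected code "cancelled" the call
            return result
        return original(*args, **kwargs)

    setattr(target_holder, name, patched)

# hypothetical "game" class standing in for some Minecraft internal
class Player:
    def damage(self, amount):
        return f"took {amount} damage"

# a "mod" injects at the head of Player.damage and cancels small hits
inject_at_head(Player, "damage",
               lambda self, amount: "blocked" if amount < 5 else None)

print(Player().damage(3))   # blocked
print(Player().damage(10))  # took 10 damage
```

Several independent "mods" can stack patches this way without editing the original source, which is the property the thread is pointing at.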
@kopper i feel like jvm being jvm makes this stuff a lot easier than like c

@j0 @kopper yea, the jvm has an idea of what a local variable is, and only ever inlines things when they're constants (depends on compiler / obfuscator settings too).

something similar for native is, while not impossible, going to be much harder and much less powerful (again, the jvm generally has only one form for certain things, while native has as many forms as codegens * their versions)

even just injecting at head is going to be difficult, because where does the function prologue end and the body begin?

@j0 @kopper i think there is much more value in making our software's features more decoupled, so instead of rewriting parts, you swap them in and out for ones that suit your needs better
@SRAZKVT @j0 this bit is interesting to me. how granular is the swapping parts bit supposed to be? executables? libraries? what's the real difference between the two?

@kopper @j0 it's imo better to standardise executables, because a library's api shapes how it is implemented, while executables give a lot more freedom (heh) in how to do things

but there's cases where we need libraries

there's also the possibility of standardising a daemon's protocol for certain operations, though that's mostly useful for things which would need some form of privilege (like ping)
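A minimal sketch of that standardised-daemon idea, assuming a newline-delimited JSON protocol over a local socket. The "ping" operation and message shapes are hypothetical, not any existing daemon's protocol; a real privileged daemon would of course validate and sandbox requests.

```python
import json
import socket
import threading

def serve(conn):
    """Daemon side: read one JSON request per line, answer in kind."""
    with conn, conn.makefile("rw") as f:
        for line in f:
            req = json.loads(line)
            if req.get("op") == "ping":
                reply = {"ok": True, "host": req["host"]}
            else:
                reply = {"ok": False, "error": "unknown op"}
            f.write(json.dumps(reply) + "\n")
            f.flush()

def request(conn, op, **params):
    """Client side: any program speaking the protocol can be the client."""
    with conn.makefile("rw") as f:
        f.write(json.dumps({"op": op, **params}) + "\n")
        f.flush()
        return json.loads(f.readline())

# socketpair stands in for a unix socket the daemon would normally listen on
client, server = socket.socketpair()
threading.Thread(target=serve, args=(server,), daemon=True).start()
print(request(client, "ping", host="example.org"))
```

The point of standardising at this layer is that either side can be reimplemented freely as long as the wire protocol is honoured.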

@SRAZKVT @kopper @j0 spack has done the most work on swapping libraries that i'm familiar with, both through the concretization that reconciles compatibility directives and through the installation process, which e.g. rewrites RPATH. there's not really a distinction there between executables and libraries, but it's not magic, and as you say there are library design decisions that affect this
@SRAZKVT @kopper @j0 i also agree that a cli api is standardizable like a daemon protocol, but this is not done for some reason; in fact, google engineers will say outright that a cli is not an api when they whine on twitter about linus banning them from the kernel for breaking the perf cli

@hipsterelectron @kopper @j0 a cli is an interface that a program can use to perform the operations it needs. it's an API.

a restricted one, sure, but it's still one

@SRAZKVT @hipsterelectron @kopper @j0 it's a terrible boundary to cross though

@natty @kopper @hipsterelectron @j0 how so? your build system does it every day by calling your compiler; i don't see why we shouldn't generalise it

API implementation freedom is inversely proportional to how specific to the internals the API is; executables, in that regard, are a very generic boundary. that's why we can have wildly different C compilers and yet all of them can work together to build something that works
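The build-system point can be sketched directly: the caller depends only on the contract of argv in, stdout and exit status out, so any tool honouring that contract can be swapped in. Here the python interpreter stands in for the compiler, purely to keep the sketch self-contained.

```python
import subprocess
import sys

def run_tool(argv):
    """The whole 'API': spawn argv, fail on nonzero exit, return stdout.
    Nothing here depends on which binary implements the tool."""
    proc = subprocess.run(argv, capture_output=True, text=True, check=True)
    return proc.stdout

# would work identically for gcc, clang, tcc, ... given compatible flags;
# here we just demonstrate the mechanism with the interpreter itself
out = run_tool([sys.executable, "-c", "print(6 * 7)"])
print(out.strip())  # 42
```

This is exactly the boundary make, ninja, etc. cross on every compiler invocation.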

@SRAZKVT @kopper @hipsterelectron @j0 I have no idea what you picked up from our reply; we are working on one such project ourselves (it works well in shells)

Generalizing CLIs might improve the interface and UX/DX by avoiding repeated parsing and serializing; however, it's unlikely to open any new opportunities given how sloooow executing a program is. Microservices already show roughly how small you can go

It's kinda like RPC in that sense, without the R. Replacing argparse with something like a Protobuf definition won't really fix the inherent limitations. It would be an improvement to the status quo tho

You could have the processes stay running and talk via IPC... hold on, that's what Mojo is
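The "keep the process running and talk via IPC" idea, sketched with a persistent worker speaking over stdin/stdout pipes: interpreter startup is paid once, and each subsequent request is just a pipe round-trip. The uppercasing worker is a hypothetical stand-in for real work.

```python
import subprocess
import sys

# a long-lived worker: one line of input in, one line of output out
WORKER = "import sys\nfor line in sys.stdin: print(line.strip().upper(), flush=True)"

proc = subprocess.Popen([sys.executable, "-c", WORKER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)

def ask(text):
    """One request/response round-trip; no new process is spawned."""
    proc.stdin.write(text + "\n")
    proc.stdin.flush()
    return proc.stdout.readline().strip()

first = ask("hello")
second = ask("mojo")
print(first, second)  # HELLO MOJO

proc.stdin.close()  # closing stdin lets the worker loop end cleanly
proc.wait()
```

Compare with spawning a fresh `python -c` per request, which repeats startup and parsing cost every time.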
@natty @kopper @SRAZKVT @j0 RPC as IPC, with shared memory for immutable shared data, is what i started working on many years ago: https://codeberg.org/cosmicexplorer/upc. i don't know that i would call executing a process slow; rather, the interfaces exposed for local execution (particularly linear path traversal) are not only inefficient but also nonreproducible. serializing the result of that process could well be done through the filesystem
upc

Ultra-high-performance local IPC framework with Zipkin tracing to conduct a beautiful symphony of (brotherhood) build tooling.

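A sketch of the shared-memory-for-immutable-data idea using Python's `multiprocessing.shared_memory`: one side publishes a payload under a name, and readers attach to it zero-copy by that name instead of receiving serialized copies over a pipe. This illustrates the general technique, not upc's actual design.

```python
from multiprocessing import shared_memory

# publisher: place an immutable payload in a named shared segment
payload = b"immutable build metadata"
pub = shared_memory.SharedMemory(create=True, size=len(payload))
pub.buf[:len(payload)] = payload

# reader (normally a different process): attach by name, read-only by
# convention since the data is immutable once published
reader = shared_memory.SharedMemory(name=pub.name)
data = bytes(reader.buf[:len(payload)])
print(data.decode())  # immutable build metadata

reader.close()
pub.close()
pub.unlink()  # publisher owns the segment's lifetime
```

Because the payload never changes after publication, any number of readers can map it concurrently without locks or copies, which is what makes it a good fit for build-tool metadata.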