"Installation: we recommend that you use Docker."

what I'm supposed to see: "hey, it's a simple one-liner! Such clean install, much wow."

what I actually see: "we couldn't figure out how to install this thing on anything but our own machine, but hey, here is a well-compressed image of our entire disk, use this instead so that we can stop trying"

@ssafar Couldn't agree more. Docker is making developers lazy and leading to software that's impossible to install outside of the very specific hand-tweaked environment provided by the docker image.

@dfs @ssafar

make clean
make depend
make
make install

This is the way :)

@hhardy01 @dfs @ssafar honestly, your Makefile should allow folding this into just `make install`
@hhardy01 @ssafar Everything I write works that way, although there may be a ./configure step before the first make. (Yes, I do use autotools for some projects.)
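A minimal sketch of what folding the chain into a single `make install` might look like, assuming a hypothetical one-binary C project (the program name, object list, and `.depend` convention are all made up for illustration):

```make
# Hypothetical Makefile: `make install` alone pulls in the whole chain.
PREFIX ?= /usr/local
OBJS    = main.o util.o

all: depend myprog

depend:
	$(CC) -MM *.c > .depend

myprog: $(OBJS)
	$(CC) -o $@ $(OBJS)

install: all
	install -m 755 myprog $(PREFIX)/bin/myprog

clean:
	rm -f myprog $(OBJS) .depend

.PHONY: all depend install clean
-include .depend
```

The trick is just `install: all` — the install target depends on the build, so `make install` on a clean tree does everything.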

@dfs @ssafar

I thought about including ./configure, but I was making a reference to the most ancient, BSD-ish way I learned it — I think on SunOS 4.1.1.

I'm just making an obscure joke really, though I do get aggravated when package manglers gunna mangle. :)

@dfs
@ssafar
Between docker and static linking we are going back 30 years in security, maintainability and modularity. Goodbye library updates, goodbye auditing.
@dfs @ssafar sure, but i think that docker was so successful because environment reproducibility was already really bad; it wasn't bad just because of docker
It's not necessarily about being lazy but, e.g., about being able to install software in a repeatable manner regardless of OS/library versions.

@ssafar That’s not that awful. There’s software with weird enough dependencies that I really don’t want to figure out how to install the dependencies properly. Granted, extremely wrong things can and do also happen (like ad-hoc patching of files in /usr/lib :blobfoxterrified:).

If the Dockerfile is clear enough, it is a self-contained recipe to install/build all the dependencies and build the software, so it can even help proper packaging downstream. I’d much rather have a Dockerfile that I can run in a reproducible manner than a bunch of unmaintained instructions in a text file, even if I ultimately wanted to install or package (AUR) a tool for my own native system.
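As a sketch of that point: even a short Dockerfile reads as a dependency list a downstream packager can follow (the base image, package names, and tool name below are illustrative, not from any real project):

```dockerfile
# Hypothetical Dockerfile: each line doubles as install documentation.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libpcre2-dev zlib1g-dev
COPY . /src
WORKDIR /src
RUN make && make install
CMD ["mytool"]
```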

@kristof yeah, if it's a complex install, having a Dockerfile is a lot better than not having one; at least you now have _one_ way this thing can be seen working. It's also great for reproducible builds... but if the default way of installing something on your OS involves installing another OS, something might be wrong with our idea of what an OS is supposed to be :)

(I've seen projects where "non-docker" installs were in the "compiling / hacking / advanced" section...)

@ssafar I’d say a “non-docker” install could be reasonably considered “compiling / hacking / advanced” from the project’s standpoint — if I wanted to install something “natively”, but not compile it / hack on it, I’d turn to the distro I use, and not necessarily towards the upstream.

Of course, an OS where Docker is the only way to install something is probably bad (unless you consider Kubernetes an OS… ugh :blobcatglare:), but upstream projects are likely not part of any particular OS. Linking to OS packages would still be nice, though.

@kristof @ssafar kubernetes is an OS, and in that OS, Ubuntu is a library, and docker image is a statically linked executable.

It's not a particularly good OS (e.g. it has no pipes) and I don't like how many layers it has below it, but it's easier to think about it as an OS.

@ssafar i've never encountered such a thing to be true.

reading the dockerfile is by far the best documentation on how to install

@ssafar

This 100% made me laugh, but honestly, Docker images are no substitute for proper packages; still, complex software distributed as Docker images is often easier than the alternative.

@ssafar now EVERY machine is my machine 
@ssafar Indeed. At a previous job I discovered that our build process had an entire Docker container devoted to running a single Python script. Extracting that script to run on the host (which it was perfectly capable of doing) was a satisfying win.

@ssafar Oh, I forgot to mention that the script's only job was generating a new semantic-versioning release number for the software being built.

We were spinning up an entire docker container for the task of incrementing an integer.

@pbx wow that's indeed a whole new level :)

@ssafar "well compressed" is a pretty big stretch.

I'll take "install via docker" over "it's super easy, just `curl | sudo sh`!", but either way I'm almost certainly looking for an alternative, since I probably don't have time to deal with your broken, poorly defined build system just to get a potentially useful new tool installed.

@ssafar hmm... sure... but have you dived into the sysadmin world?

@ssafar Exactly. Personally, I consider any software that an ordinary person cannot install other than via Docker or Flatpak to be bloatware. Unfortunately (or rather fortunately) I found out that I can't run any such bloatware, because Flatpak depends entirely on systemd, which I don't have (and don't want to have).

On the other hand, some isolation would be useful in Linux. But not in the style of Flatpak or Docker. I would rather see it already at the level of the packaging system: dynamic chroots would be created for each program, mixed according to its needs (Docker does something similar, but works with the whole system image; this would be at a lower level). For example, if I wanted to install nginx, the "packages" pcre, zlib, openssl, geoip, mailcap and libxcrypt would be dynamically mixed into the chroot.

Each chroot would be mounted to restrict the software as much as possible (noexec, nosuid, nodev on most directories; where exec is needed, the whole directory read-only).

Maybe there is a distribution which works like that? I think it could be possible to replace, for instance, pacman and make it install existing Arch Linux packages in chroots, just reusing the existing repositories. All you need is mount --bind, a layered filesystem, and/or maybe cgroups.

And then, of course, a tool to bind shared directories into chroots - but only those needed. For instance, I would like to isolate my Firefox so it only sees my Downloads folder.
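A dry-run sketch of that idea in shell: it only prints the bind mounts such a tool might set up for the nginx example above. The jail path and the hard-coded dependency list are illustrative; a real implementation would resolve dependencies from the package database and would need root to actually perform the mounts.

```shell
#!/bin/sh
# Dry run only: print the bind mounts a per-package chroot for nginx
# might use. Paths and the dependency list are illustrative.
plan_mounts() {
    jail=/srv/jails/nginx
    for dep in pcre zlib openssl geoip mailcap libxcrypt; do
        echo "mount --bind -o ro,nosuid,nodev /usr/lib/$dep $jail/usr/lib/$dep"
    done
    # The binary itself needs exec, so that tree stays read-only instead.
    echo "mount --bind -o ro /usr/sbin $jail/usr/sbin"
}
plan_mounts
```

Nothing here touches the system; it is just a way to see what the mount table for one isolated package would look like.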

@ssafar but you can actually get a Docker image with a one liner, no? šŸ¤”

Whereas non-containerised solutions might depend on your operating system, its version, and whether you used only its stable & supported packages or third-party repositories, your own builds, etc.

@ssafar yes! I keep saying this and people be like: "you're a luddite!". No, Linux app packaging is so fucked up that shipping disk images and containers is seen as the viable solution. šŸ˜•

@AbbieNormal @ssafar

"Installation: we recommend that you use Docker." is the new "it works on my machine"

😭 😭