https://stackoverflow.com/questions/33051108/how-to-get-around-the-linux-too-many-arguments-limit/33278482

> I have to pass 256Kb of text as an argument to the "aws sqs"

what, uhhh, what

> MAX_ARG_STRLEN is defined as 32 times the page size in linux/include/uapi/linux/binfmts.h:
> The default page size is 4 KB so you cannot pass arguments longer than 128 KB.
> I modified linux/include/uapi/linux/binfmts.h to #define MAX_ARG_STRLEN (PAGE_SIZE * 64), recompiled my kernel and now your code produces

casually patching the kernel to send a quarter megabyte as a *single* argument oh my god i'm laughing hard
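For anyone who wants to see the limit bite without recompiling anything, here's a quick demo. It assumes a stock Linux with 4 KiB pages (so MAX_ARG_STRLEN = 32 * PAGE_SIZE = 131072 bytes), and that count includes the terminating NUL, so 131071 characters is the longest single argument that fits:

```shell
# build an argument exactly at the limit: 131071 'x' characters
arg=$(head -c 131071 /dev/zero | tr '\0' 'x')

/bin/true "$arg" && echo "131071 chars: fits"
# one more byte and execve() fails with E2BIG
/bin/true "${arg}x" 2>/dev/null || echo "131072 chars: Argument list too long"
```

On a kernel with bigger pages (16 or 64 KiB, common on arm64) the second call succeeds, which is exactly the PAGE_SIZE dependence the quoted answer is exploiting.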
@navi well, in the early Rust for Linux days we hit this limit when passing kconfig options to rustc. Fun times

@kloenk @navi Back when 128 kB was the limit for argv+envp, Google was hitting it too because they passed all the configuration for their whole software stack on the command line as --long-option=value switches.

Their solution? Compress the command line. So every binary started by ungzipping argv[1] and parsing it to get the configuration.

The person explaining this to me saw my horrified face, and said with the perfect Hide The Pain Harold smile: "a series of individually completely rational and reasonable decisions led to this." and I have been thinking a lot about it since.
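A rough sketch of what that pattern could look like (pure speculation on the mechanics; the option names are made up): the launcher gzips the flag soup into one compact base64 argument, and every binary starts by unpacking argv[1] before parsing it:

```shell
# hypothetical "compress the command line": pack thousands of long options
# into a single gzipped, base64-encoded argument
flags=$(for i in $(seq 1 2000); do printf -- '--option-%s=value-%s ' "$i" "$i"; done)
packed=$(printf '%s' "$flags" | gzip -c | base64 -w0)
echo "raw command line: ${#flags} bytes, compressed argv[1]: ${#packed} bytes"

# what the hypothetical binary would do first thing on startup:
unpacked=$(printf '%s' "$packed" | base64 -d | gunzip)
[ "$unpacked" = "$flags" ] && echo "round-trip ok"
```

Repetitive `--long-option=value` text compresses extremely well, which is presumably why this bought them so much headroom under the old 128 kB limit.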

@ska @kloenk @navi
I love one of the first rational decisions here: command-line arguments in scripts should be long-form to minimize reader confusion. Things go off the rails well before you hit 128kB of args though. You need to throw that in a config file or something, folks.

@c0dec0dec0de @kloenk @navi Actually, *that* particular decision made sense: when you have a huge software stack with configuration switches, you have to use long options because you just don't have enough characters for short options. And when you have a cluster manager running a command line on thousands of machines, you don't want to have to copy a config file, it's good to have the config on the command line.

The questionable decisions were upstream (is it good to have a whole software stack with configuration switches in every binary? hmmm) and downstream (what to do if we hit the command line limit), but *that one* was sound. 😅

@ska @c0dec0dec0de @kloenk

i would honestly take the configuration from stdin at that point, and it can even look similar to the bazillion flags in a script by using a here-doc

wouldn't work if they need stdin for something else, but i kinda doubt that a program that has this many flags actually uses stdin directly
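That could look something like this (`parse_config` and the flag names are hypothetical, just to show the shape): the script still reads like a long-option list, but none of it goes through execve(), so no argument-length limit applies:

```shell
# hypothetical sketch: the program reads its configuration from stdin,
# one option per line, fed via a here-doc
parse_config() {
  while IFS='=' read -r key value; do
    printf 'set %s = %s\n' "${key#--}" "$value"
  done
}

parse_config <<'EOF'
--frontend-port=8080
--backend-timeout=30
--log-level=debug
EOF
```

The quoted `'EOF'` delimiter also suppresses shell expansion inside the block, so the config text arrives verbatim.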

@navi @kloenk @c0dec0dec0de @ska

Considering that a lot of commands parse arguments in order (and sometimes don't even need to store anything from previous arguments), streaming them could be more efficient.

And `exec foo "$@"` would not need to store the whole argument list just to pass it on again.

@sertonix @kloenk @c0dec0dec0de @ska `exec foo "$@"` doesn't help though, since the issue is too many args, so i'm confused a bit

@navi @kloenk @c0dec0dec0de @ska

I mean if all arguments (including some $@ equivalent) were done via pipes on the OS level. A bit off topic

@sertonix @navi @kloenk @c0dec0dec0de @ska unlike many uses of pipes, you generally want to know when the command line is done, because you want that configuration complete before you start initializing the rest of the system (and I recall chasing down demons in one of those initialization subsystems in a place where we were somewhat misusing the system -- again, best of intentions, but threads and fork+exec were not good friends at that time).