"Announcing wcurl: a #curl wrapper to download files"
https://samueloph.dev/blog/announcing-wcurl-a-curl-wrapper-to-download-files/
Curl is the ideal tool both for downloading files and for interacting with HTTP APIs. Many people use two different tools just because each one's command-line interface specializes in one use or the other.
Functionally, you could often replace wget with a spicy Bash alias of curl.
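For instance, a minimal sketch of such an alias (the flags are real curl options; the alias name is made up here, and this is not wcurl's actual implementation):

```shell
# A rough wget-like alias built on plain curl (illustrative sketch only):
#   -f          fail on HTTP errors instead of saving the error page
#   -L          follow redirects
#   -O          save under the remote file name instead of writing to stdout
#   --retry 3   retry transient failures, roughly as wget does by default
alias wgetish='curl -fLO --retry 3'
```

Invoked as `wgetish https://example.com/file.tar.gz`, it saves `file.tar.gz` in the current directory, though it still lacks wget niceties such as clobber protection.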
So my point is: you are saying that the two-tools solution is THE solution to this API problem, so much so that it’s worth replicating it in a domain (the curl brand) you have complete control over. It’s surprising.
@pierreprinetti @bagder No, I am not putting out a dogma about anything.
I see that people made something that they consider useful. It is a script on top of curl, it seems to fit into what curl distributes.
On Debian it was already added to the curl distribution. So, if people using other platforms also want it, the curl project seems the best place to keep and maintain it - assuming its author wants that, of course.
@pierreprinetti @icing @bagder There clearly are two mutually-exclusive needs for the CLI commands:
1) One that writes the response to stdout by default
2) One that saves the output in a file by default
The use case is so common that people naturally stick with the tool that requires no parameters for the majority of cases within its main goal (e.g. retries by default). Thus it's impossible for a single command to address both needs at the same time.
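The two defaults can be sketched concretely (the curl invocations use real flags; the `wcurl` line assumes the wrapper is installed, and the URL is only an example):

```shell
# Need (1): response body goes to stdout by default - plain curl
curl https://example.com/data.json

# Need (2): response body goes to a file by default - wcurl (or wget)
wcurl https://example.com/data.json    # saves ./data.json

# Getting behavior (2) out of curl alone means typing flags every time:
#   -f fail on HTTP errors, -L follow redirects, -O keep the remote name
curl -fLO https://example.com/data.json
```

The defaults, not the capabilities, are what differ: curl can do everything wcurl does, but only the wrapper makes the file-saving behavior the zero-flag path.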