Go is spectacular in every way except the actual language itself
@Seirdy LMAO what is this supposed to mean

@h top of the line:

  • fuzzers
  • linters
  • formatter
  • language server
  • native cryptographic libraries (not just bindings)
  • build speed
  • portability (far better than most LLVM langs)
  • property testing (literally in the stdlib)
  • sanitizers
  • HTTP protocols (conformant native libs for HTTP/2, HTTP/3)
  • large ecosystem
  • docs
  • stability (the Go Compatibility Promise)
  • a spec
  • other usable (mediocre but still usable!) implementations
  • dead-simple OS/arch cross-compilation
  • green threads

Go has everything except a good language.

@Seirdy i see very few flaws with the language besides things like "i want ternary operators" and "i want try-catch" (slowly realising it's for the best that we don't have try-catch though, so i wouldn't actually want this to change), so i just cannot understand this take i think
@h I like a concise functional style, so i find Go's very imperative and verbose style grating.
@h once elixir gets static types that can actually be used by the BEAM (it’s being worked on right now) i might have to finally come off my static-binary high horse.
@h @Seirdy (Genuine question, not an attempt at a gotcha - you both are clearly working at a much higher level than I am) I'd love to know more about why you think it's good to have errors returned rather than thrown? To me, it seems incorrect both philosophically (the type of the representation of exceptional behaviour should be orthogonal to the return type of a function) and practically (it shouldn't be possible to forget to handle an exception) - I'd love to better understand the benefits!
@scubbo @h @[email protected] Because in practice, most devs suck at handling thrown errors. They wrap the whole thing in a big try catch, have a general catch at the bottom and then just don't do anything with it.

Returning errors per function lets you do much more fine-grained error handling, or crash with a lot more info.
@privateger @Seirdy @scubbo yep, this, ty!
@privateger @Seirdy @scubbo also i find using try-catch makes it way easier to forget to consider exceptions. that the function can throw isn't part of the shape of the function, so you can always forget to use a try-catch. if it's part of the return type, you're forced to consider it, at the very least using an underscore variable if the error is not useful to you for whatever reason

@h @privateger @scubbo I’m not so sure about this since everyone uses linters which should flag this.

right?

crickets

right????

@h "you can always forget to use a try-catch" - how? Even if your linter doesn't catch it, that's just a straight-up compilation failure in type-safe languages, right?

(Not trying to "No True Scotsman", here - when I say "type-safe languages" I'm explicitly not thinking JavaScript or Python, where types are optional-at-best)

In fact I thought

@scubbo idk, my experience is with typescript, which doesn't give a warning for forgetting to handle an exception at all. but yeah, maybe other languages handle this; regardless, i think returning errors is a better way of handling exceptions
@h Gotcha. Thanks for clarifying! I feel like I understand the situation a little better now :)

@privateger 100% with you on the first paragraph - that's a bad practice, for sure, and I agree that it should be discouraged.

I need a bit more insight to understand the 2nd para, though. How does returning an error _let_ you do those things, in a way that you cannot (note - "cannot", not "don't, in practice") do with thrown exceptions?

(If the claim is actually "both approaches have equal maximal specificity, but explicit per-function error-handling promotes better practice", then I buy it!)

@scubbo This isn't a cannot thing, but it's never done. try catch just lets you be lazy, and that's what ends up being done.

I basically never see a try/catch around a single function call, even though that's the granularity of error handling you should be operating at most of the time.
@privateger Fair! In which case, I understand and agree with your claim. Thanks for clarifying! :)

@h @Seirdy That's mostly because try/catch is a very weak paradigm for error handling and profoundly limiting.

The #CommonLisp #ConditionSystem is rather the way to look.

@lispi314 @h just use formal verification to prove that the function always returns its return type correctly. /hj

@Seirdy @h That works right up until you have to rely on hardware systems that can fail.

So even in Ada SPARK (which has such formal verification apparatus) you still need to handle such failures.

But more importantly, the Condition System isn't solely limited to signaling error conditions, it can be used to handle a variety of other conditions. You can also vary how you decide to respond to them (if at all), and you can even make that dynamically configurable.

@Seirdy @h Of course "hardware failure" doesn't necessarily mean permanent or catastrophic failure, transient faults are a thing too.
@lispi314 @h /me cries in non-ecc ram

@Seirdy @h I will never forgive Intel. <insert anime meme>

Seriously though, it should be the bare minimum for the CPU and memory to be able to both detect and *correct* errors, and to signal faults to the operator.

Mainframes do it. They also support hotswap of the components involved.

Workstations fail on the hotswap and CPU fault tolerance parts.

PCs fail on everything. Thanks Intel⸮

At least even on PCs it's possible to put redundant and hot-swappable storage.

@lispi314 @h cries in the inevitable drive failure during a RAID restore operation

@Seirdy @h That's only a problem if you foolishly use parity-based redundancy.

Mirrored pairs or larger tuples have no such issue.

Error recovery or restoring is a straight copy off other correct copies on other media.

A slight bit less storage-efficient, perhaps, but oh so much more reliable.

@Seirdy @h Something file/record-aware does best, as you can only correct the one record that has gotten corrupted (like ZFS & btrfs do).

Then you can also do things like storing said records on blocks of a given size so you can more easily support diverse hardware while respecting policy constraints the user configured (btrfs uses 1GB blocks, which means you don't have to use identically-sized or larger drives in tuples, it dynamically allocates according to the profile constraints).