Oh cool, another Chrome 0-day abusing integer overflow.
Neat.
Great.
Awesome.
Meanwhile, we'll be writing about how we need "high-impact libraries that help lots of users", giving examples like CLI parsing or JSON parsing, before we finally sit down and go "maybe we should have some standard library types/functions for integers...?".
v.v.v.v. cool prioritization we do here.
We keep calling ourselves software engineers, but engineers elsewhere advance their industry by analyzing failures, building tools to prevent them, and making those tools standard industry practice!
But we'll just have the same 6 problems, on a regular spin cycle, for like 40 years.
@thephd The thing that is also different in other industries is that the *engineers* are held liable when they sign off on designs, be that in writing or even verbally.
Meanwhile, in software, we just don't give a flying fuck about it, because the consequences of our decisions are going to be felt not by us but by some other random person.
And we even encourage this behaviour, by promoting people that "just ship" shit instead of people that reliably test and think about their choices.
@Girgias @thephd Fun fact: the term "software engineer" is protected in Canada, and genuine Canadian software engineers are held to the same level of liability as other engineers. Big tech companies have gotten in trouble with regulatory bodies over this and have changed their job titles for Canadian postings.
@thephd Here's a concrete (no pun intended) example of a very basic fuckup that should have been caught, wasn't, and killed a bunch of people: the Hyatt Regency suspended walkways that collapsed during an event with hundreds of people, killing over 100 and injuring over 200 more.
The cause? A sudden design change that was inadequately discussed and reviewed, which made one walkway connection carry twice the load of the original design, a design which *itself* was not fully up to proper specs. The engineer who signed off on the original plans later said "any first-year engineering student could figure it [the error] out." https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse
Oh, they definitely do. At one department I knew, there was a research project for aligning data in memory to "look tidier".
No, nobody could explain what that would be good for.
(This was not about memory alignment or paging or effective use of space.
It was literally a GUI program that allowed users to align little memory rectangles on a 2D canvas.)
@thephd Oh, but we'll create a new framework or API or maybe... update a standard?
That'll solve the problem.
@thephd tbh, most of software crafting is more akin to artisanal craftsmanship: highly technical, very personal, with a less-than-ideal result.
The title of "Engineer" does not fit 90% of what I do, but "Software Artisan" does not ring a bell to anyone.
The Engineering part of Software Engineering happens very early in development, near the planning phase: it's when technology choices are made, the architecture is chosen, and so on. Not when the actual coding happens.
Software engineers are expected to code too, which is unusual compared to other industries: you do not ask a production engineer to actually build and assemble a production line, nor do you ask a logistics engineer to do delivery runs themselves.
Yet software engineers do the planning, the ideation, the problem solving, and the implementation. In that way, it's more of an artisan/craftsmanship job than other engineering jobs.
@thephd I always found this (and follow-up) post an interesting treatment of this question.
https://www.hillelwayne.com/post/are-we-really-engineers/
I was somewhat surprised that one of the conclusions appears to be "it's just as bad in ((other engineering discipline))" by some oft-cited measures.
@thephd I can never decide if the problem lies more on the engineering or the management side (or whether that's a more modern problem in the more dystopian corporate hell we have today).
I do know a number of people who absolutely hate this, but at least for people in the industry, there's no ability (without unions) to refuse ridiculous requests from management, such as throwing away whole-ass useful closed-source tools just because the C-suite saw a new shiny.
@thephd by fits and starts i think we're getting there. things like CI and coverage based testing are very normal now.
my vision of the future is one where "you're able to write what you can prove" and i think we'll get there too in time as tools for automatic proof assistance get better and the ux of languages with proof systems get better.
@thephd that's why I support moving core libraries to rust, zig and julia. They are languages which actually try to fix core issues.
I would actually be fine with making a breaking change in C# and Java to make them null-safe, for example. Yes, you wouldn't be able to compile an old program with it, but it seems so extremely necessary.
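For a sense of what "null-safe by default" looks like in a language that had it from the start, here's a minimal Swift sketch (the `greet` function is just an illustration): a plain `String` can never be nil, and a `String?` must be unwrapped before use.

```swift
// A non-optional String can never be nil; only String? can,
// and it must be unwrapped before use.
func greet(_ name: String?) -> String {
    // `name.count` would not compile here: the optional must be handled first.
    guard let name = name else { return "Hello, stranger" }
    return "Hello, \(name)"
}

print(greet(nil))    // prints "Hello, stranger"
print(greet("Ada"))  // prints "Hello, Ada"
```

This is exactly the kind of guarantee a breaking "null-safe" change to C# or Java would buy: the compiler, not the runtime, catches missing nil checks.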
@thephd
Swift really got this one right IMO:
+ - * / all trap on overflow.
If you want mod-2^n operations, there are entirely separate wrapping operators for that: &+ &- &*
There is no implicit conversion between ints of different bit widths, or between ints and floats (in either direction).
Literals implicitly take the contextual type, so there’s no need to explicitly cast, say, `17` to assign it to an Int8 or a UInt64.
Hard to accomplish this consistency anywhere except at the language level.
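A minimal sketch of those semantics in Swift itself (the values are arbitrary examples):

```swift
let a: Int8 = 100

// `a + a` would trap at runtime: 200 overflows Int8.
// The wrapping operator &+ gives mod-2^8 arithmetic instead:
let wrapped = a &+ a                      // 200 mod 256, i.e. -56 as Int8

// Overflow can also be detected without trapping:
let (sum, overflowed) = a.addingReportingOverflow(a)

// No implicit widening: Int8 + Int needs an explicit conversion.
let widened = Int(a) + Int(a)             // 200

// Literals take the contextual type, so no casts are needed:
let small: Int8 = 17
let big: UInt64 = 17
```

Note that `wrapped` and `sum` are both -56; the only difference is whether you trap, wrap, or inspect the `overflow` flag.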
@inthehands @thephd Better still, actually use the type system. If you write:
i = j * k
then make sure, at compile time, that the type of i can hold the result for any possible values of the types of j and k.
And uint16, etc., aren't types; they're implementations/representations of types. The types we should be using would be something like 0..65536 (or, better, some number which makes sense for the application rather than an arbitrary number chosen because we happen to be running on a binary computer today).
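One way to get part of this guarantee today, without any new type system, is to widen before multiplying: the product of two 16-bit values always fits in 32 bits, so the result type provably holds any input. A small sketch (the function name is mine, not from the thread):

```swift
// Widening multiply: UInt16 x UInt16 always fits in UInt32
// (65535 * 65535 = 4294836225 < 2^32), so overflow is impossible.
func widenedMultiply(_ j: UInt16, _ k: UInt16) -> UInt32 {
    UInt32(j) * UInt32(k)
}

let r = widenedMultiply(.max, .max)   // 4294836225, no trap possible
```

This is the "type of i can hold the result" rule checked by hand; a compiler enforcing it automatically is what the post is asking for.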
@revk Oops, yes, silly of me to have picked a number which had anything to do with binary, let alone get it wrong for many cases.
But a 0..65536 type would be useful in some cases. E.g., to contain the number of bytes in a 64 kB circular buffer.
@penguin42 Indeed. I doubt it happens that much in practice, though. For most real software you can easily work out what the bounds for various variables are; automatically proving them is a bit more tricky, hence @inthehands 's comment about dependent types, e.g., var i: 0..len(buffer). But that's where I think the next "Rust-like" step forward will come from. Even handling the simple cases would take us a long way with most software, even if some more difficult cases need a bit more manual intervention to make things really solid.
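Lacking compile-time range types, one can approximate a 0..65536 type with a runtime-checked wrapper. A hypothetical sketch (the `BoundedInt` name and failable initializer are mine; a dependently typed language would check this at compile time instead):

```swift
// A runtime-checked stand-in for a range type like 0..65536.
struct BoundedInt {
    let value: Int
    let range: ClosedRange<Int>

    // Fails (returns nil) if the value is outside the range.
    init?(_ value: Int, in range: ClosedRange<Int>) {
        guard range.contains(value) else { return nil }
        self.value = value
        self.range = range
    }
}

let offset = BoundedInt(40_000, in: 0...65_536)  // ok
let bad = BoundedInt(70_000, in: 0...65_536)     // nil: out of range
```

The check here happens at run time and at the boundary only; the payoff of dependent types is moving that check to compile time.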
@penguin42 @edavies @thephd
The concept you’re reaching toward here is “dependent types:” https://en.wikipedia.org/wiki/Dependent_type
In short: types can carry logical constraints, e.g. not just “integer” but “integer in a range;” not just “list of T,” but “list of T whose elements are in sorted order.” You can statically verify types like that if and only if you can show the compiler how to •prove• those logical conditions are met — and so…
@penguin42 @edavies @thephd
…you end up passing around •mathematical proofs• as part of your programming language, which are machine-generated and/or human-written + machine-verified.
It’s a grandiose, confusing dream that is either The Future of Programming™ or a quixotic research exercise, depending on who you ask.
If you’re wondering what it looks like in practice, Lean might be a place to start:
https://lean-lang.org
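As a tiny illustration of what this looks like in Lean: the standard type `Fin n` is "a natural number together with a proof that it is less than n", so the range constraint travels with the value.

```lean
-- `Fin 65536` is the type of naturals < 65536; the proof is part of the value.
def bufferIndex : Fin 65536 := ⟨40000, by decide⟩

-- This would be a compile-time error: no proof that 70000 < 65536 exists.
-- def bad : Fin 65536 := ⟨70000, by decide⟩
```

The `by decide` tactic discharges the proof obligation automatically here; for bounds that depend on runtime data, the proof has to be constructed or carried explicitly, which is the hard (and interesting) part.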