This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
> Industry trend of building high-level language-specific IRs
"Trend"?
This was always the best practice. It's not a "trend".
I'm too much of an anarchist for that.
I believe what I said:
> I think it would be net better for the public if they just made Mythos available to everyone.
I don't trust a corpo to choose what is "most critical".
That's what's messed up about it.
That's a really good point!
But:
- Coordinated disclosure is ethically sketchy. I know why we do it, and I'm not saying we shouldn't. But it's not great.
- This isn't a single disclosure. This is a new technology that dramatically increases capability. So, even if we thought that coordinated disclosure was unambiguously good, I think we'd still need to have a new conversation about Mythos.
It's messed up that Anthropic claims to be a public benefit corp while also picking who gets to benefit from their newly enhanced cybersecurity capabilities. It means the economic benefit goes to the existing industry heavyweights.
(And no, the Linux Foundation being in the list doesn't imply broad benefit to OSS. Linux Foundation has an agenda and will pick who benefits according to what is good for them.)
I think it would be net better for the public if they just made Mythos available to everyone.
> I want to be there with you, but the definition this piece uses is, I think, objectively the correct one --- "memory safety", at least as used in things like "The Case For Memory Safe Roadmaps" government guidance, is simply the property of not admitting to memory corruption vulnerabilities.
This piece does not define memory safety as "not admitting memory corruption vulnerabilities". If it were using that definition, then:
- You and I would be on the same page.
- I would have a different complaint, which is that now we have to define "memory corruption vulnerability". (Admittedly, that's maybe not too hard, but it does get a bit weird when you get into the details.)
The definition in TFA is quoted from Hicks, and it enumerates a set of things that should never happen. It's not defining memory safety the way you want.
This article comes up with yet another definition of memory safety. Thankfully, it does not conflate thread safety with memory safety. But it does a thing that makes it both inaccurate (I think) and also unhelpful for having a good discussion:
TFA hints at memory safety requiring static checking, in the sense that it's written in a way that would satisfy folks who think that way, by saying things like "never occur" and including null pointer safety.
Is it necessary for the checking to be static? No. I think reasonable folks would agree that Java is memory safe, yet it does so much dynamic checking (null and bounds). Even Rust does dynamic checking (for bounds).
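To make the static-vs-dynamic point concrete, here's a minimal Rust sketch (my example, not from TFA): the out-of-range index compiles fine, and safety is preserved by a runtime check at the access site rather than anything proven statically.

```rust
fn main() {
    let v = vec![1, 2, 3];
    let i = 10; // not known to be in-bounds at compile time

    // This compiles: Rust makes no static guarantee about `i`.
    // Safety comes from a *dynamic* bounds check at the access site.
    match v.get(i) {
        Some(x) => println!("v[{i}] = {x}"),
        None => println!("index {i} rejected by a runtime check"),
    }

    // Direct indexing also compiles; it would panic at runtime
    // (a checked crash, not memory corruption):
    // let _x = v[i];
}
```

So "never occurs" is satisfied here, but by a dynamic mechanism, just like Java's null and bounds checks.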
But even setting that aside, the way the definition is written in TFA doesn't make it unambiguous whether the author thinks the checking should be static or dynamic, so it's hard to debate what they're saying.
EDIT: The definition in TFA has another problem: it enumerates things that should not happen from a language standpoint, but I don't think that definition is adequate for avoiding weird execution. For example, it says nothing about bad casts, or misuses of esoteric language features (like misusing longjmp). We need a better definition of memory safety.
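To illustrate the bad-cast gap with a sketch of my own (Rust, where reinterpretation at least requires `unsafe`; in C/C++ the same shape needs no ceremony):

```rust
fn main() {
    // Reinterpreting one type's bytes as another is a "bad cast".
    // This particular transmute (u64 -> [u8; 8]) happens to be defined
    // behavior, but the same operation targeting a pointer-bearing type
    // would manufacture a bogus pointer -- exactly the kind of weird
    // execution an enumerated never-happens list can fail to name.
    let x: u64 = 0x0102_0304_0506_0708;
    let bytes: [u8; 8] = unsafe { std::mem::transmute(x) };
    println!("{bytes:?}"); // byte order depends on the platform
}
```

No item on a list like "no out-of-bounds reads, no use-after-free, ..." obviously forbids this, which is why an enumeration feels inadequate as a definition.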
This is really sad to read!
Can folks who live in Chicago confirm/deny/comment on the extent to which this article gets it right?
(I have no reason to believe that it's an exaggeration, but I sincerely hope that it is.)