No. The issue is that an assumption they make in the unsafe block does not actually always hold. They changed the safe Rust code to strengthen the assumption they relied on in the first place, because that is far easier than rearchitecting the unsafe part. In other words, if the unsafe part were somehow rewritten safely, the mitigation they introduced would make no difference in behaviour; it would be correct both before and after.
Tldr: the problem lies in the unsafe part
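To make the pattern concrete, here is a minimal sketch (entirely hypothetical names, not the actual code from the project being discussed): the unsafe block assumes an invariant that safe code is supposed to uphold, and the "mitigation" tightens the safe code so the assumption holds, rather than removing the unsound assumption itself.

```rust
// Hypothetical illustration of the pattern: unsafe code assumes
// "idx < data.len()", an invariant only the safe code maintains.
struct Buffer {
    data: Vec<u8>,
    idx: usize, // invariant assumed by the unsafe block: idx < data.len()
}

impl Buffer {
    fn new(data: Vec<u8>) -> Self {
        Buffer { data, idx: 0 }
    }

    // The safe-side "mitigation": clamp idx so the assumption holds.
    // Note that if read() were written safely (e.g. using indexing or
    // .get()), this clamp would change observable behaviour not at all.
    fn set_idx(&mut self, idx: usize) {
        self.idx = idx.min(self.data.len().saturating_sub(1));
    }

    fn read(&self) -> u8 {
        // The unsafe block relying on the invariant. If set_idx ever
        // let idx >= data.len() through, this would be undefined
        // behaviour -- which is why the real problem lives here.
        unsafe { *self.data.get_unchecked(self.idx) }
    }
}
```

The point of the sketch: clamping in `set_idx` makes the program behave identically whether `read` uses `get_unchecked` or a safe bounds-checked access, which is exactly the "correct both before and after" observation above.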
No, I think you misinterpreted (or the original commenter was not specific enough about) what "black box" refers to here. I don’t mean that the models are proprietary or trained in a private/secret way; I mean the model itself is so huge and impossible to understand that it is effectively a black box. There are millions or billions of connections and parameters that don’t adhere to any well-defined structure; they just emerged, as if by magic, from the learning process. You can look at a neural network and have absolutely no idea why it works.
This is one of the biggest challenges of bringing AI into the automotive industry, for example. A neural network by itself is not certifiable, because you cannot prove that it works. I heard about a new-ish field that is trying to engineer structured networks specifically for automotive and similar applications, but I haven’t heard anything since, and can’t find an article about it on Wikipedia.