anthropic has proudly announced[1] that their latest model can burn down basically everything by automating the discovery of significant vulnerabilities in open source code. and that you should pay them to use their llm to find and fix those vulnerabilities first
i'm a programmer, not a computer security expert. the vulnerabilities discussed in the blog post seem significant, but it's not clear to me how serious they actually are or how often software like this turns up such bugs. i'm also not sure how different this is in practice from fuzz testing. anyone in infosec able to weigh in? how big of a deal is this?