I think there are essentially two main areas in #AIRisk / #AIAssurance, and from a risk/security practitioner's perspective they require different kinds of frameworks and tooling. They can still reside under the same governance structure, though.
Essentially, I'd be looking to measure different things depending on whether I'm assessing the building of an AI model, or its use/deployment as part of another product.
I'd love to see standardisation on an AI Model Attestation format: a set of metadata delivered alongside a model that attests to how it was built, trained, etc. This would be an output of employing one of the many Responsible AI / AI Ethics frameworks I see around. Metadata, a little like an AI Model BOM perhaps, that can be used to understand and evaluate the risk of that model for your use cases. A rough sketch of what I mean is below.
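To make that concrete, here's a minimal sketch of what such an attestation might contain, expressed as Python dataclasses. Every field name here is my own assumption for illustration, not an existing standard:

```python
# A hypothetical "AI Model BOM" attestation. All field names are
# illustrative assumptions, not a published schema.
from dataclasses import dataclass, field


@dataclass
class TrainingDataSource:
    name: str            # e.g. "Common Crawl snapshot 2023-06"
    licence: str         # licence under which the data was used
    pii_handling: str    # how personal data was filtered or anonymised


@dataclass
class ModelAttestation:
    model_name: str
    version: str
    builder: str                                    # organisation that trained the model
    training_data: list[TrainingDataSource] = field(default_factory=list)
    safeguards: list[str] = field(default_factory=list)           # e.g. "RLHF", "output filtering"
    evaluations: dict[str, float] = field(default_factory=dict)   # benchmark name -> score
    signature: str = ""                             # detached signature over the metadata
```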
And then, a separate set of measurements, tools, processes, etc. that let me assess an implementation, of which the model and its "BOM"/metadata are an input.
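Continuing the sketch above, a deployment-side check might consume that attestation against local policy. The policy rules here are invented for illustration:

```python
# Hypothetical deployment-side risk check that takes the ModelAttestation
# sketched above as an input. The rules are made-up examples, not a standard.
def assess_model(attestation: ModelAttestation, required_safeguards: set[str]) -> list[str]:
    """Return a list of findings; an empty list means no policy violations found."""
    findings = []
    if not attestation.signature:
        findings.append("Attestation is unsigned and cannot be verified.")
    missing = required_safeguards - set(attestation.safeguards)
    if missing:
        findings.append(f"Missing required safeguards: {sorted(missing)}")
    for source in attestation.training_data:
        if source.licence.lower() == "unknown":
            findings.append(f"Training data source '{source.name}' has an unknown licence.")
    return findings
```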
I think we need the first piece, though: a set of standard measurements we can apply to a model to better understand the risks it poses in specific use cases.
It'd also allow for benchmarking and rating of models.
Does such a thing exist? I think this is not "done" until, for example, a model I download on Hugging Face comes with this metadata, and companies are not willing to use or consume a model that doesn't come with this ingredients-style label detailing how it was built, what data it was trained on, what internal safeguards it might have, etc., all of which can be verified and validated.
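On the "verified and validated" point: in practice that implies signing the metadata. As a toy illustration only (a real scheme would use asymmetric signatures and a transparency log, rather than the shared key shown here), this is the shape of a verify-before-consume step:

```python
# Toy integrity check, continuing the ModelAttestation sketch above.
# Real attestations would use asymmetric signing, not a shared secret;
# this only illustrates "verify the label before you trust it".
import hashlib
import hmac
import json
from dataclasses import asdict


def verify_attestation(attestation: ModelAttestation, shared_key: bytes) -> bool:
    payload = asdict(attestation)
    claimed = payload.pop("signature")                  # detach the signature from the signed payload
    canonical = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(shared_key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```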
A Trustworthy AI Label for Models?
#AI #riskmanagement #infosec #AIAssurance