AIensured Secures Funding from STPI and Pontaq to Advance Responsible and Ethical AI Deployment – Tycoon World

New Delhi: AIensured, a company focused on enabling organizations to test, validate, and govern their AI systems responsibly, has secured funding from the

I think there are essentially two main areas in #AIRisk / #AIAssurance, and from a risk/security practitioner's perspective they require different kinds of frameworks and tooling. I think they can reside under the same governance structure, though.

Essentially, I'd be looking to measure different things depending on whether I am assessing the building of an AI model, or the use/deployment of an AI model as part of another product.

I'd love to see standardisation on an AI Model Attestation format: essentially a set of metadata delivered as part of a model to attest to how it was built, trained, etc. This would be an output of employing one of the many frameworks I see around that talk about Responsible AI, AI Ethics, and so on. Metadata, a little like an AI Model BOM perhaps, that can be used to understand and evaluate the risk of that model for your use cases.

And then, a separate set of measurements, tools, processes, etc. that let me assess an implementation, of which the model and its "BOM/metadata" are an input.
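To make the idea concrete, here's a minimal sketch in Python of what such an attestation record and a completeness check might look like. The field names are purely illustrative assumptions on my part (loosely in the spirit of model cards), not an existing standard:

```python
# Hypothetical "AI Model BOM" attestation record. Field names are
# illustrative, not taken from any published standard.
def validate_attestation(bom: dict) -> list:
    """Return the sorted list of missing required fields.

    An empty list means the attestation covers the minimum
    metadata a consumer would need to evaluate the model.
    """
    required = {
        "model_name",
        "version",
        "training_data_sources",
        "intended_use",
        "known_limitations",
        "safety_mitigations",
    }
    return sorted(required - bom.keys())


# Example attestation a model publisher might ship alongside weights.
bom = {
    "model_name": "example-llm",          # hypothetical model
    "version": "1.0",
    "training_data_sources": ["public web crawl (hypothetical)"],
    "intended_use": "text summarisation",
    "known_limitations": ["may produce inaccurate statements"],
    "safety_mitigations": ["output filtering"],
}

print(validate_attestation(bom))                    # -> []
print(validate_attestation({"model_name": "x"}))    # -> missing fields
```

A consumer-side policy could then be as simple as "refuse any model whose attestation fails this check", which is the "ingredients label" gate described above.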

I think we need the first, though: a set of standard measurements we can apply to a model to further understand the risks it poses in specific use cases.

It'd also allow for benchmarking and rating of models.

Does such a thing exist? I think this is not "done" until, for example, a model I download on Hugging Face comes with this metadata, and companies are not willing to use or consume a model that doesn't come with this ingredients-type label detailing how it was built, what data it was trained on, what internal safeguards it might have, etc., all of which can be verified and validated.

A Trustworthy AI Label for Models?

#AI #riskmanagement #infosec #AIAssurance

Anyone worked on / working on any kind of risk-classification / vector-type measurement for the usage of #AI within enterprises?

Basically, a way to classify a use case based on the risk it poses to the business.
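As a starting point for that conversation, here's a toy Python sketch of vector-style scoring for a use case. The dimensions, weights, and tier thresholds are all assumptions I've made up for illustration, not from any published framework:

```python
# Hypothetical risk-vector scoring for an enterprise AI use case.
# Each dimension is rated 1 (low) to 5 (high); dimensions and
# thresholds are illustrative assumptions only.
def risk_tier(use_case: dict) -> str:
    """Map a scored use case to a coarse risk tier."""
    score = (
        use_case["data_sensitivity"]    # e.g. public data vs. PII
        + use_case["decision_autonomy"]  # human-in-the-loop vs. fully automated
        + use_case["user_impact"]        # cosmetic vs. life-affecting outcomes
    )
    if score >= 12:
        return "high"
    if score >= 7:
        return "medium"
    return "low"


# Two hypothetical use cases for comparison.
faq_chatbot = {"data_sensitivity": 2, "decision_autonomy": 2, "user_impact": 2}
loan_scoring = {"data_sensitivity": 5, "decision_autonomy": 4, "user_impact": 5}

print(risk_tier(faq_chatbot))   # -> low
print(risk_tier(loan_scoring))  # -> high
```

The interesting design questions are which dimensions belong in the vector and whether a simple sum is enough, or whether any single dimension at 5 should force the "high" tier regardless of the total.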

Looking for others to chat with about it.

#infosec #airisks #aiassurance

At last, an actually useful view on #AI Risk / #AI Assurance - https://developer.nvidia.com/blog/nvidia-ai-red-team-an-introduction/

Real talk, not just high-level waffling and generics that no one can actually apply.

#airisk #aiassurance #ai

NVIDIA AI Red Team: An Introduction | NVIDIA Technical Blog

Machine learning has the promise to improve our world, and in many ways it already has. However, research and lived experiences continue to show this technology has risks. Capabilities that used to be…

I think #ai assurance is the next area I am super interested in. Anyone got cool resources or research in this area to share?

#aiassurance #airisk #airiskmanagement