RE: https://mastodon.nz/@leighelse/116149727745113480

I worked on a large-scale project testing medical transcription (maybe one of the largest). Hundreds of doctors reviewed the output and called out the issues.

It was not, and still is not, ready. Public health teams that roll this out without red teaming, remediation, feedback, and a way to influence the models' weights are irresponsible.

In fact, I am willing to offer up to five hours of my time — free — to any public sector team or nonprofit (with annual operating costs below USD 2M) anywhere in the world that needs help figuring out what makes sense and how to respond to top-down pressure telling you to implement AI.

And if they’ve already chosen something for you, I am willing to help you figure out how to sand down the risks.

email me: adrianna (at) futureethics.ai

Edit: for public servants who technically can’t accept ‘free’ things from a vendor, consider this one-on-one coaching/advice or a pre-sales call.

@skinnylatte Are there white papers or things you'd recommend? For-profit caveat. :/

I regularly talk to someone in veterinary med who's dealing with a top-down AI push and other doctors creating inaccurate notes from AI. The incentives there are all out of whack, since the majority of doctors don't do notes at all. And then extend that to AI for blood work and cytology.

@NegativeK yep, my lab is working on some white papers and webinars in this space. Will share some.