“@royperlis This would be exempt. The model was used to suggest responses for help providers, who could opt in to use it or not. We didn’t use any PII, all anonymous data, no plan to publish. But MGH's IRB is formidable... Couldn't even use red ink in our study flyers if I recall...”
@cfiesler I believe the author later clarified that the people were not directly chatting with the model. It was used more as a tool to help peers craft their responses.
While having a human in the loop does mitigate some of the PII issues, the lack of informed consent still stands.
@emmatonkin @cfiesler I think the demo video showed that the operators had the option of forwarding messages directly to the model. I'm assuming (hoping) the humans acted as filters for personal stuff.
What's worse is that stuff like this just sets a precedent for even more outrageous applications of LLMs.
@rajatsahay @cfiesler
Ack, though. A) I wonder what guidance, training, and evaluation they were given, because that's quite a task to carry out in a hurry. And B) hang on, is the LLM responding with no context other than the last message received, then? It's more usual to give it conversational context so it can produce a (seemingly) relevant answer.
Totally agreed re precedent. Not only does it need careful regulation, but I suspect this is already in breach of existing regs.
@emmatonkin @cfiesler your response perfectly highlights a huge problem with AI hype. Most companies cite human moderation to deploy borderline illegal services, claiming their "AI model" gives unreal performance, all while staying within the letter of the law.
When the model inevitably fails, any blame for the misaligned decisions is put directly on those same moderators, who usually receive little to no training in how to handle these situations.