The co-founder of Koko (a non-profit that offers peer mental health support) has a Twitter thread (https://twitter.com/RobertRMorris/status/1611450197707464706) about an experiment in which they fed requests for help to GPT-3, and help providers could choose to send those AI-generated support messages rather than their own. They found that the AI responses were rated higher, but also that "once people learned the messages were co-created by a machine, it didn't work." But there have been some interesting questions about the ethics... 🧵 #gpt3
Rob Morris on Twitter

“We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened 👇”

I'm a little confused by this response about informed consent (https://twitter.com/RobertRMorris/status/1611582827224797185), but I think it illustrates a significant problem among some researchers: conflating "research ethics" with "would an IRB allow me to do it," which is potentially really harmful. I would hope that the reason to seek informed consent isn't that a regulatory body forces you to, but that it is the right and ethical thing to do. (2/n)
Rob Morris on Twitter

“@royperlis This would be exempt. The model was used to suggest responses for help providers, who could opt in to use it or not. We didn’t use any PII, all anonymous data, no plan to publish. But MGH's IRB is formidable... Couldn't even use red ink in our study flyers if i recall...”

But regardless: based on the thread, though the help providers were aware of the AI (since they were choosing to use it), it seems that the people seeking help were not. Though given the "once people learned" finding, at least some of them must have been debriefed? Were they essentially following the typical protocol for a deception experiment? (If that were the case, though, I would have expected that as the answer re: consent rather than "we didn't have to.") (3/n)
The Twitter thread emphasizes that they weren't using PII, but prompts from people seeking mental health support are still potentially quite sensitive, and some folks on Twitter were concerned about that data going back to OpenAI. I assume that GPT-3 can run internally, though? In which case I suppose the privacy risks would be the same as when people choose to use the system at all. (4/n)
But I think that even outside of privacy concerns, a lot of people just don't like the idea of such sensitive content potentially being used to train AI without their consent, which is something we should already know from the backlash against Crisis Text Line. (5/n)
In fact, a lot of people are upset about being "experimented on" without their consent regardless of the context. Even though this is sometimes framed as "it's just A/B testing!" when it happens on a platform/product, sensitive contexts (e.g. mental health, emotion) are a special case. (We actually found this when studying reactions to the Facebook emotional contagion study: https://cmci.colorado.edu/~cafi5706/UnexpectedExpectationsNMSPreprint.pdf ) (6/n)
@cfiesler let me know if you want an intro to Rob, in case he's able to share more about the protocol
@natematias I would encourage him to share it in the Twitter thread! There are a lot of very upset people.
@cfiesler @natematias honest question, does doing this on twitter ever actually work? i'd personally advise not doing anything on twitter: write a long-form description of the protocol, and post it on their website eventually
@jbigham @natematias Well, he chose to share the findings on Twitter without sharing anything about the ethical considerations. If he wants to address the accusations of unethical research, those accusations are on Twitter, so I think it makes sense to address them on Twitter.
@cfiesler @natematias idk, i think the dynamics of twitter make that basically impossible. glad i'm not there anymore!
@jbigham @natematias I mean if for some reason it's impossible to address the ethical considerations for research on Twitter then I would suggest not posting about research there at all. 🤷‍♀️ (Which one might well argue lol.)

@cfiesler @natematias specifically, my argument is it's not possible to do this on Twitter **now**… even a great explanation wouldn't spread, nobody's looking to RT it. much more likely is people would be super primed to argue with it even if it's kind of reasonable. and all the while the old stuff keeps spreading without knowledge of the new stuff.

but, yeah, twitter is double-edged like that. great way to get your message out, but watch out!