I believe that crisis line conversations, and the people who have them, should be protected from exploitation. "Use" of those conversations afterwards, without informed, meaningful #consent, is exploitation.
#CrisisLines #988Lifeline
Some of these "uses" include perpetual storage (for future exploitation) and in-house algorithms that report "effectiveness"; and, I suspect, calibration of so-called sentiment-detection algorithms and "AI"-style so-called predictors of risk.
I'm trying to locate the sources of the data that certain companies, such as #BehavioralHealthLink and #VibrantEmotionalHealth, are using. In the US we have a problem where private corporations contract with government agencies, then shield themselves with proprietary-software claims and argue they are not subject to public-records laws. It can be unraveled, but it's going to take an organized effort, many more people, and time. For awareness. #NLP " #AI "