🚀 #AIWriting #ContentCreation #Grammarly #JasperAI #Writesonic #WritingTools #AIAssistant #ContentMarketing #Copywriting
Writesonic AI: The complete solution for top-tier content
Read the article: https://www.tantilink.net/2025/06/Writesonic-AI-La%20soluzione-completa-per-contenuti-al-Top.html
Good evening 🙂
Are there ethical and #PrivacyFriendly #AIs that avoid saving data, conversations, and so on? 🤔
#chatgpt #openai #opensource #claudeai #intelligenzaartificiale #artificialintelligence #perplexityai #Writesonic #midjourney #MicrosoftDesigner #StabilityAI #suno #udio #heygen
TITLE: Criteria for an AI to write psychotherapy chart notes (or medical chart notes)
Note: Reposting to get it out to a few additional groups.
I am informed that a new product called #Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.
I have no firm opinion yet on Mentalyc, but it's expensive ($39-$69 per month per clinician) and I'd personally need to know a lot more about what's in that dataset and who is benefiting from it.
**So I'm asking the community for thoughts on what acceptable ethical and practical criteria would be for an AI to write psychotherapy notes or medical notes.**
Here are MY thoughts so far:
1) REQUIRED: The AI either:
1a) Invents NOTHING and takes 100% of the information in the note from the clinician, or
1b) Prompts the clinician for additional symptoms often present in the condition before writing the note, or
1c) Presents a very clear information page before writing that lets the clinician approve, delete, or modify anything the AI invented that it was not explicitly told to include. (For example, in an experiment with Bard, a clinician found that Bard added sleep problems as an invented symptom to a SOAP note for a person with depression and anxiety. This is a non-bizarre symptom addition that makes lots of sense and is very likely, but it would have to be approved as valid for the person in question.)
2) OPTIONAL: The AI is on MY computer and NOT reporting anything back to the Internet. This will not be on everyone's list, but I've seen too many #BAA subcontractors playing fast and loose with the definition of #HIPAA (medical privacy), and there is more money to be made in data sales than in clinician subscriptions to an AI.
3) OPTIONAL: Inexpensive (There are several free AI tools emerging.)
4) OPTIONAL: Open Source
5) Inputting data to the AI to write the note is less work than just writing the note personally. (Maybe a complex tablet-based clickable form? But then, a pretty high percentage of a note can be in a clickable form format anyway.)
6) The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictation of note directions, much as doctors dictate notes to transcribers today.)
I think I may be envisioning a checkbox and drop-down menu form along with a space for a clinician to write a few keywords and phrases, then the AI (on my laptop) takes this and writes a note -- possibly just a paragraph to go along with the already existing form in the official note. I think. It's early days in my thinking.
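The workflow envisioned above — a checkbox/drop-down form plus a few clinician keywords feeding a note generator that invents nothing (criterion 1a) — could be sketched roughly as follows. All names and fields here are hypothetical illustrations; a real tool would presumably hand these structured fields to a local language model for smoother prose rather than use a plain template:

```python
from dataclasses import dataclass

# Hypothetical sketch: a structured session form whose contents are the ONLY
# source of information in the generated note (criterion 1a: invent nothing).
@dataclass
class SessionForm:
    client_initials: str
    interventions: list       # checked boxes, e.g. ["CBT", "psychoeducation"]
    reported_symptoms: list   # only symptoms the clinician actually entered
    keywords: str             # free-text phrases from the clinician

def draft_note(form: SessionForm) -> str:
    """Assemble a note paragraph strictly from clinician-supplied fields."""
    parts = [f"Session with {form.client_initials}."]
    if form.reported_symptoms:
        parts.append("Reported symptoms: " + ", ".join(form.reported_symptoms) + ".")
    if form.interventions:
        parts.append("Interventions used: " + ", ".join(form.interventions) + ".")
    if form.keywords:
        parts.append(f"Clinician notes: {form.keywords}.")
    return " ".join(parts)

form = SessionForm(
    client_initials="J.D.",
    interventions=["CBT", "psychoeducation"],
    reported_symptoms=["low mood", "anxiety"],
    keywords="discussed sleep hygiene plan",
)
print(draft_note(form))
```

Because nothing appears in the output that the clinician did not supply, the approval step in criterion 1c would only be needed if a language model were allowed to embellish the template's output.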
--
Michael Reeder, LCPC
@psychology
@socialpsych
@socialwork
@psychiatry
#Bias #Ethics #EthicalAI #AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #bard #security #dataanalytics #artificialintelligence #CopyAI #Simplified #Writesonic #Rytr #Writecream #CreaitorAI #Quillbot #Grammarly #SmartCopy #TextBlaze #HIPAA #privacy #psychology #counseling #socialwork #psychotherapy #research #SOAP #EHR #mentalhealth #technology #psychiatry #healthcare #medical #doctor
This #indian #ai #chatbot came out a year before #chatgpt.
#writesonic #chatsonic is trained on Google data, so it understands current events, though it does lack accuracy at times.
ChatSonic is the perfect Chrome extension for faster, smarter and more productive content creation and communication. With ChatGPT-like AI, it helps you save time by generating, summarising and replying to emails, tweets, LinkedIn posts and more.
So, I used an artificial intelligence (AI) bot to write a conference session proposal. Here's what I did, why I did it, and what I think about the process and product.
https://wiobyrne.com/make-your-next-conference-proposal-a-smash-hit-with-an-ai-writing-assistant/
Literarily, there is not much to show for the chatbots.
I threw the following input down its throat:
"And if you want to book scarcity from the account
Have your business model shod
Do you have a similar rhyme for me?"
It replied:
"If you want to see the account booking
It will shelter your business model
Is there a solution in times of need
It makes more sense than it is meant to."
I don't quite get the hype ...