I'm speaking at #AMIA2024 in San Francisco. My first talk is at the #NLP Pre-Symposium Workshop on Sunday. On Monday, I'll be presenting my colleague's poster on Stressful Life Events and Colon Cancer Screening. On Tuesday, I have a talk in the afternoon about using time-sensitive NLP extractions to help predict patient visit outcomes and a poster on #BehavioralTesting and pejorative language in patient notes.

Register here to see me give a talk this Tuesday at noon Eastern (US) time: https://amia.org/webinar-library/behavioral-testing-and-evaluation-probe-language-models-algorithmic-bias

I'll be talking about using #BehavioralTesting to probe for #AlgorithmicBias in several of my clinical #NLP projects as part of #AMIA's Working Group Webinar Series.

Behavioral Testing and Evaluation to Probe Language Models for Algorithmic Bias

With growing legal and scientific evidence for the importance of reducing model bias, both model developers and deployers need tools to quantify that bias. Unfortunately, algorithmic bias can take as many forms as there are implementations. In this talk, Paul M. Heider covers a range of clinical NLP use cases, such as de-identification and diagnosis prediction, highlighting the utility of behavioral testing and comparative evaluation methods for identifying the scope of a model's bias.

AMIA - American Medical Informatics Association