It seems a fair AI would treat you the same if you had a different race/gender/disability/etc., but how can we ever test counterfactual fairness? In #NeurIPS2023 w Victor Veitch, we show you sometimes can with simple, observed metrics like group parity! 🧵 https://arxiv.org/abs/2310.19691
Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness

Counterfactual fairness requires that a person would have been classified in the same way by an AI or other algorithmic system if they had a different protected class, such as a different race or gender. This is an intuitive standard, as reflected in the U.S. legal system, but its use is limited because counterfactuals cannot be directly observed in real-world data. On the other hand, group fairness metrics (e.g., demographic parity or equalized odds) are less intuitive but more readily observed. In this paper, we use causal context to bridge the gaps between counterfactual fairness, robust prediction, and group fairness. First, we motivate counterfactual fairness by showing that there is not necessarily a fundamental trade-off between fairness and accuracy because, under plausible conditions, the counterfactually fair predictor is in fact accuracy-optimal in an unbiased target distribution. Second, we develop a correspondence between the causal graph of the data-generating process and which, if any, group fairness metrics are equivalent to counterfactual fairness. Third, we show that in three common fairness contexts (measurement error, selection on label, and selection on predictors), counterfactual fairness is equivalent to demographic parity, equalized odds, and calibration, respectively. Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
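The "simple, observed metrics" the abstract mentions are straightforward to compute from predictions. A minimal sketch, assuming a binary classifier and two groups encoded as 0/1 (the function names and this two-group setup are my simplification for illustration, not code from the paper):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_pred, y_true, group):
    """Largest gap in true-positive or false-positive rate across two groups."""
    y_pred, y_true, group = map(np.asarray, (y_pred, y_true, group))
    gaps = []
    for label in (0, 1):  # label=0 compares FPRs, label=1 compares TPRs
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)
```

Per the paper's result, which of these observed gaps certifies counterfactual fairness depends on the causal context: a zero demographic-parity gap corresponds to it under measurement error, a zero equalized-odds gap under selection on label.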

Final day of our roadshow in December 2023 - it has been a pleasure to represent #AI4Health at #NeurIPS2023

I was an invited speaker at the NeurIPS conference in New Orleans in Dec 2023 for the NeuroAI social.

I was more than surprised to be invited to what is now primarily an AI/ML conference (despite "Neural" being the first word, and the conference's origins in computational neuroscience). To say that the successful AI systems currently deployed and neuroscience/the study of biological intelligence have diverged would be an understatement, so it was a somewhat odd choice for the organizers to invite a neurophysiologist like me.

So, I took the invite as an opportunity to talk about attention in biological vision, and how whatever they now call attention in AI/ML/CNNs/transformers is almost orthogonal to what many others and I study within visual neuroscience, psychology, and cognitive science.

While the talk was a partial critique of current AI models, it was more a call for them to take seriously the one instance of intelligence we have (i.e., the biological world) and to recognize how much it still has to offer towards designing better AI systems.

If attention is not one of the cognitive ingredients that make up the intelligence recipe for autonomous systems, I don't know what is.

The talk slides can be found here: https://www.dropbox.com/scl/fi/927f50bfvqpwtserizgl5/NeuroAI_Neurips_KS2023.pdf?rlkey=r3pgvsyoudwczapjijx80pj7l

#Neurips2023 #NeuroAI #Attention #Vision #BiologicalVision #ActiveVision #SpaceVariance #NonlinearCompression #EyeMovements #Neurodynamics #AutonomousSystems #AI #ML


From #NeurIPS2023: EEG-to-text generation. 40% accuracy in the best case, but consider that it's a wearable and not fMRI.... "DeWave: Discrete #EEG Waves Encoding for #Brain Dynamics to Text Translation"
https://arxiv.org/abs/2309.14030

#LLM #neuroscience

DeWave: Discrete EEG Waves Encoding for Brain Dynamics to Text Translation

The translation of brain dynamics into natural language is pivotal for brain-computer interfaces (BCIs). With the swift advancement of large language models, such as ChatGPT, the need to bridge the gap between the brain and languages becomes increasingly pressing. Current methods, however, require eye-tracking fixations or event markers to segment brain dynamics into word-level features, which can restrict the practical application of these systems. To tackle these issues, we introduce a novel framework, DeWave, that integrates discrete encoding sequences into open-vocabulary EEG-to-text translation tasks. DeWave uses a quantized variational encoder to derive discrete codex encoding and align it with pre-trained language models. This discrete codex representation brings forth two advantages: 1) it realizes translation on raw waves without marker by introducing text-EEG contrastive alignment training, and 2) it alleviates the interference caused by individual differences in EEG waves through an invariant discrete codex with or without markers. Our model surpasses the previous baseline (40.1 and 31.7) by 3.06% and 6.34%, respectively, achieving 41.35 BLEU-1 and 33.71 Rouge-F on the ZuCo Dataset. This work is the first to facilitate the translation of entire EEG signal periods without word-level order markers (e.g., eye fixations), scoring 20.5 BLEU-1 and 29.5 Rouge-1 on the ZuCo Dataset.
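The "discrete codex encoding" step can be pictured as vector quantization: each continuous encoder output is snapped to its nearest entry in a learned codebook, yielding discrete tokens that a pretrained language model can consume. A toy nearest-neighbor sketch of that idea (my illustration, not DeWave's implementation; the codebook would be learned jointly with the encoder):

```python
import numpy as np

def quantize(features, codebook):
    """Map each continuous feature vector to its nearest codebook entry.

    features: (T, d) array of encoder outputs (e.g., one per EEG window).
    codebook: (K, d) array of code vectors.
    Returns (indices, quantized): discrete token ids and their embeddings.
    """
    # Squared distance between every feature and every code vector.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)        # one discrete "codex" token per step
    return indices, codebook[indices]  # quantized representation
```

The discreteness is what lets the method drop word-level event markers: alignment happens between token sequences and text rather than between raw waves and individual words.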

Research in mechanistic interpretability and neuroscience often relies on interpreting internal representations to understand systems, or manipulating representations to improve models. I gave a talk at the UniReps workshop at NeurIPS on a few challenges for this area, summary thread: 1/12
#ai #ml #neuroscience #computationalneuroscience #interpretability #NeuralRepresentations #neurips2023

Ok, a confession about attending #NeurIPS2023. I was there for the cutting-edge AI/ML innovation and science, sure. But I was *also* there for the food, and to see old friends. But *also*, really maybe the first thing I thought of?

Jazz.

Saw a fantastic concert at Preservation Hall, lots of great music on Frenchmen St. And I went to the Jazz Museum, really nice.

What I wasn't expecting? Jazz museum is in the old Mint. Which had this very cool old calculator. #SeeImStillNerdy

HuggingFace talking now about Regulatable ML at #NeurIPS2023

My top three voice recognition errors from the #neurips live transcript (ML in structural biology workshop):

3. Kagglers => Cavaliers
2. AlphaFold => Alcohol
1. Generative models => Genital models

#neurips2023

Max Welling: “I’ve been long, long in denial that text based models could help you solve some physics problem… still actually believe that, but never mind. My uncertainty is getting bigger on that one” #neurips2023 #neurips
NeurIPS 2023 Day 7 (Main Conference + Workshops) in New Orleans, LA still going strong. Even on the last day, sessions are well-attended. #NeurIPS2023