Steven Atkinson

@Sdatkinson
19 Followers
76 Following
30 Posts
Statistics, machine learning, physics, engineering, scientific computing, music, woodworking.
Today, we open-sourced Fortuna (https://github.com/awslabs/fortuna), a library for uncertainty quantification.
Deep neural networks are often overconfident and do not know what they don’t know. Quantifying the uncertainty in the predictions they make will help deploy deep learning more responsibly and more safely.
#responsibleAI #ConformalPrediction #BayesianInference #UncertaintyQuantification #deeplearning #opensource
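The #ConformalPrediction tag refers to a simple, distribution-free recipe for turning point predictions into calibrated intervals. A minimal sketch of split conformal regression in plain numpy, to show the idea; this is illustrative only, not Fortuna's API, and the data and model are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = 2x + noise, with a stand-in "fitted" predictor.
x = rng.uniform(0, 1, size=200)
y = 2.0 * x + rng.normal(0, 0.1, size=200)
model = lambda x: 2.0 * x  # any pretrained model would do here

# 1. Score a held-out calibration set with absolute residuals.
x_cal, y_cal = x[:100], y[:100]
residuals = np.abs(y_cal - model(x_cal))

# 2. Take a finite-sample-corrected quantile of the residuals.
alpha = 0.1  # target 90% coverage
n = len(residuals)
q = np.quantile(residuals, np.ceil((n + 1) * (1 - alpha)) / n)

# 3. Intervals model(x) +/- q cover the truth ~90% of the time,
#    with a distribution-free guarantee (for exchangeable data).
x_test, y_test = x[100:], y[100:]
lo, hi = model(x_test) - q, model(x_test) + q
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"empirical coverage: {coverage:.2f}")
```

The point of the calibration split is that the guarantee needs no assumptions about the model being well specified, only that calibration and test points are exchangeable.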

Announcing the #ICLR2023 workshop on "Physics for Machine Learning"🔥

Send us your work on equivariant NNs, Lie algebra approaches, neural ODEs, fluid/molecular/particle/multi-scale physics, and more!

Site: https://physics4ml.github.io

OpenReview: https://openreview.net/group?id=ICLR.cc/2023/Workshop/Physics4ML

Deadline 3rd Feb!⚡️

#machinelearning #deeplearning

1/2

I suspect that the distribution of chats people are having in private with #ChatGPT is quite different when they're not making content to share on the internet.

Can anyone point me to #HCI research on what people do when given a chatbot like this?

I'm curious what picture is emerging from the data on the >1M users #OpenAI has so far. Reflecting on my own impressions, I have a few guesses.

*whispers* most people don't care about decentralization and governance models, they just want someplace to hang out that's not a hassle.

The Algorithmic Imprint

In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences.

@upol @Riedl @jakemetcalf

#responsibleAI
https://arxiv.org/abs/2206.03275v1

When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences. We operationalize this concept and its implications through the 2020 events surrounding the algorithmic grading of the General Certificate of Education (GCE) Advanced (A) Level exams, an internationally recognized UK-based high school diploma exam administered in over 160 countries. While the algorithmic standardization was ultimately removed due to global protests, we show how the removal failed to undo the algorithmic imprint on the sociotechnical infrastructures that shape students', teachers', and parents' lives. These events provide a rare chance to analyze the state of the world both with and without algorithmic mediation. We situate our case study in Bangladesh to illustrate how algorithms made in the Global North disproportionately impact stakeholders in the Global South. Chronicling more than a year-long community engagement consisting of 47 interviews, we present the first coherent timeline of "what" happened in Bangladesh, contextualizing "why" and "how" they happened through the lenses of the algorithmic imprint and situated algorithmic fairness. Analyzing these events, we highlight how the contours of the algorithmic imprints can be inferred at the infrastructural, social, and individual levels. We share conceptual and practical implications around how imprint-awareness can (a) broaden the boundaries of how we think about algorithmic impact, (b) inform how we design algorithms, and (c) guide us in AI governance.

Lorna Shore goes so hard, damn
This seems like an appropriate first comic for Mastodon.

#Neural nets struggle to #guarantee that their predictions always conform to our prior knowledge!

We devise a drop-in replacement layer that injects given #symbolic #constraints while retaining #exact and #efficient gradient optimization in our #NeurIPS paper:

https://openreview.net/forum?id=o-mxIWAY1T8

1/🧵

Semantic Probabilistic Layers for Neuro-Symbolic Learning

We design a predictive layer for structured-output prediction (SOP) that can be plugged into any neural network guaranteeing its predictions are consistent with a set of predefined symbolic...

OpenReview

From #routing to #hierarchical multi-label classification and user #preference learning, SPLs outperform other baselines that relax constraints or use problem-specific architectures.

Even when they predict the wrong labels, they still form a valid configuration!
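To make "valid configuration" concrete: in hierarchical multi-label classification, a predicted label must come with all of its ancestors. A toy brute-force illustration of picking the best *valid* labeling (hypothetical labels and scores; SPLs get this guarantee exactly and tractably, not by enumeration as here):

```python
import itertools

# Toy hierarchy: whenever a label is on, its parent must be on too.
parent = {"cat": "animal", "dog": "animal", "animal": None}
labels = list(parent)

def is_valid(assignment):
    """Valid iff every 'on' label has its parent 'on' as well."""
    return all(not on or parent[l] is None or assignment[parent[l]]
               for l, on in assignment.items())

def best_valid(scores):
    """Exhaustively pick the highest-scoring valid configuration."""
    best, best_score = None, float("-inf")
    for bits in itertools.product([False, True], repeat=len(labels)):
        assignment = dict(zip(labels, bits))
        if not is_valid(assignment):
            continue
        s = sum(scores[l] for l, on in assignment.items() if on)
        if s > best_score:
            best, best_score = assignment, s
    return best

# Unconstrained thresholding would predict "cat" without "animal";
# the constrained decoder turns "animal" on to keep the output valid.
scores = {"cat": 2.0, "dog": -1.0, "animal": -0.5}
print(best_valid(scores))
```

Even when the scores point at the wrong classes, the output is always a consistent point in the constraint set, which is the property the thread is describing.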

Join Kareem Ahmed, Stefano Teso, Kai-Wei Chang, @guy and me at #NeurIPS2022 to talk about #SPLs and how to get #neural #nets to behave the way we #expect them to!

📜https://openreview.net/forum?id=o-mxIWAY1T8
🖥️https://github.com/KareemYousrii/SPL

6/6
