We're so excited to be part of this conversation and to have our work highlighted at #NeurIPS22.

We look forward to seeing how continuing collaborations create positive change! #MozFest #TAIWG

Thank you for your great work, @jen_gineered! ⭐️ https://twitter.com/jen_gineered/status/1599093957224374272

Original tweet: https://twitter.com/mozillafestival/status/1622915177514426368

Jennifer Ding on Twitter

“Closing out the first week of #NeurIPS22 with this workshop on broadening ML collaborations. Happy to be able to highlight the great work of open collaboratives like @turingway @BigscienceW and @mozillafestival TAIWG in a talk!”

Twitter

A paper at #NeurIPS22

Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism

https://bytez.com/read/neurips/53479 #NeurIPS2022 #bytez #friendly-papers via @[email protected]

Random Rank: The One and Only Strategyproof and Proportionally Fair Randomized Facility Location Mechanism

TL;DR: Proportionality is an attractive fairness concept that has been applied to a range of problems, including the facility location problem. In our work, we propose a concept called Strong Proportionality, which ensures that when there are two groups of agents at different locations, both groups incur the same total cost. We show that no deterministic strategyproof mechanism satisfies the property.

Bytez
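To make the definition concrete, here is a minimal Python sketch of Strong Proportionality in the standard 1-D facility location setting, assuming agents pay the absolute distance to the facility. The numbers and names are illustrative, not from the paper.

```python
# Illustrative sketch (not from the paper): Strong Proportionality for two
# groups on a line, assuming each agent pays |facility - location|.
def group_costs(facility, groups):
    """groups: list of (location, size); returns total cost per group."""
    return [size * abs(facility - loc) for loc, size in groups]

# Two groups: n_a agents at x=0, n_b agents at x=1.
n_a, n_b = 3, 5
groups = [(0.0, n_a), (1.0, n_b)]

# Strong Proportionality demands equal total cost for both groups:
#   n_a * y = n_b * (1 - y)  =>  y = n_b / (n_a + n_b)
y = n_b / (n_a + n_b)
print(group_costs(y, groups))  # both entries equal: [1.875, 1.875]
```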

Hello Mastodon world! New to this, still learning & hopeful for a kind platform.

As a first post, a late announcement of my #NeurIPS22 work 📢 (still available online if you want to check it out):

- Talk @ attention workshop, org:
@Neurograce @AkankshaSaran et al
- Keynote @ memory workshop, org:
@alex_ander @kennethanorman et al
- 2 panels
- 3 papers at the main conf & workshops

Not sick, but still recovering my energy! Send Qs if you came across any of the work :)

Celebrating along with everyone else that we have finally come up with a fantastic #bullshit generator that can help harried students and professors everywhere answer their bullshit exam questions, author their bullshit papers, and write their bullshit funding proposals.

#ai #agi #neurips22

Given the enormous attention people have given to #ChatGPT in the last few days, it's fun to read the lukewarm reception of the technical innovations and data collection for the #InstructGPT paper it is based on.

(Although it did still get a solid accept for #NeurIPS22, of course)

https://openreview.net/forum?id=TG8KACxEON

Training language models to follow instructions with human feedback

We fine-tune GPT-3 using data collected from human labelers. The resulting model, called InstructGPT, outperforms GPT-3 on a range of NLP tasks.

OpenReview
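The key data-collection step there is training a reward model on human preference comparisons. As a rough illustration, here is a minimal PyTorch sketch of a pairwise ranking loss of the kind used for that step; the scalar tensors are toy stand-ins for a reward model's outputs, not anything from the paper's code.

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen, r_rejected):
    """Pairwise ranking loss for reward modeling from human comparisons:
    push the reward of the preferred response above the rejected one."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy stand-ins for reward-model outputs on a batch of comparison pairs.
r_chosen = torch.tensor([1.2, 0.3, 0.8], requires_grad=True)
r_rejected = torch.tensor([0.4, 0.9, -0.1])
loss = preference_loss(r_chosen, r_rejected)
loss.backward()  # gradients would update the reward model's parameters
```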
Had so much fun at the #PyTorch conference and even had the chance to meet and chat with Refik Anadol! Feeling starstruck 😊 #NeurIPS22 #NeurIPS2022 #AI #PTC2022
Very nice ending to the excellent #NeurIPS22 HILL Human-in-the-loop learning workshop: Best paper award to Differentiable User Models https://arxiv.org/abs/2211.16277. Congrats to Alex Hämäläinen and @mert_celikok @FCAI_fi @idsai_uom #TuringAIFellows
Differentiable User Models

Probabilistic user modeling is essential for building machine learning systems in the ubiquitous cases with humans in the loop. However, modern advanced user models, often designed as cognitive behavior simulators, are incompatible with modern machine learning pipelines and computationally prohibitive for most practical applications. We address this problem by introducing widely-applicable differentiable surrogates for bypassing this computational bottleneck; the surrogates enable computationally efficient inference with modern cognitive models. We show experimentally that modeling capabilities comparable to the only available solution, existing likelihood-free inference methods, are achievable with a computational cost suitable for online applications. Finally, we demonstrate how AI-assistants can now use cognitive models for online interaction in a menu-search task, which has so far required hours of computation during interaction.

arXiv.org
We then continued in mixed-mode, with Alex Hämäläinen remotely and me in person, on the contributed talk "Differentiable User Models". The idea is to meta-learn a surrogate for simulator-type user models having intractable likelihoods https://arxiv.org/pdf/2211.16277.pdf @FCAI_fi @OfficialUoM
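A minimal sketch of that surrogate idea, under simplified assumptions (a toy 1-D simulator, not the paper's cognitive models or architecture): train a small network to imitate a black-box behavior simulator, then run gradient-based inference of user parameters through the surrogate, which the raw simulator's intractable likelihood would not allow.

```python
import torch
import torch.nn as nn

def simulator(params):
    """Black-box, non-differentiable stand-in for a cognitive user model:
    maps user parameters to (noisy) observed behavior."""
    with torch.no_grad():
        return torch.sin(3.0 * params) + 0.05 * torch.randn_like(params)

# Differentiable surrogate: learn params -> expected behavior.
surrogate = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(2000):
    p = torch.rand(256, 1) * 2 - 1              # sample user parameters
    loss = ((surrogate(p) - simulator(p)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Gradient-based inference of user parameters from observed behavior,
# done through the differentiable surrogate.
obs = simulator(torch.tensor([[0.3]]))
p_hat = torch.zeros(1, 1, requires_grad=True)
opt2 = torch.optim.Adam([p_hat], lr=1e-1)
for _ in range(200):
    loss = ((surrogate(p_hat) - obs) ** 2).mean()
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```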
---
RT @samikaski
https://twitter.com/samikaski/status/1598734467878637568
Very happy to speak in the #NeurIPS22 HILL, Human-In-the-Loop Learning Workshop. Outstanding set of talks, and lots of interested people present. I wonder how many remotely. @FCAI_fi @OfficialUoM
---
RT @xinw_ai
Prof @samikaski is giving a talk now on collaborative AI for assisting virtual laboratories.
https://twitter.com/xinw_ai/status/1598726338239766533
RT @octonion: The relaxation is interesting: the parameters of a relaxed logic gate are the probabilities of its inputs, and its output is the probability (or expectation) of the gate's output. A conventional categorical softmax then chooses among the different types of relaxed logic gates. QT @alfcnz: I found «Deep Differentiable Logic Gate Networks» by @fhkpetersen rather interesting. Learning combinational networks comprising logic gates such as AND and XOR, which allow for very fast execution and hardware implementation. #NeurIPS22 https://arxiv.org/abs/2210.08277 2022-12-02 14:46:33 UTC
Deep Differentiable Logic Gate Networks

Recently, research has increasingly focused on developing efficient neural network architectures. In this work, we explore logic gate networks for machine learning tasks by learning combinations of logic gates. These networks comprise logic gates such as "AND" and "XOR", which allow for very fast execution. The difficulty in learning logic gate networks is that they are conventionally non-differentiable and therefore do not allow training with gradient descent. Thus, to allow for effective training, we propose differentiable logic gate networks, an architecture that combines real-valued logics and a continuously parameterized relaxation of the network. The resulting discretized logic gate networks achieve fast inference speeds, e.g., beyond a million images of MNIST per second on a single CPU core.

arXiv.org
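A minimal NumPy sketch of the relaxation described in the retweet above (the gate list and numbers are illustrative): inputs are probabilities, each gate type gets a real-valued relaxation, and a softmax over learnable logits mixes the gate types so the choice of gate becomes differentiable.

```python
import numpy as np

# Real-valued relaxations of binary gates: inputs a, b in [0, 1] are
# probabilities; each gate returns the probability its output is 1,
# assuming independent inputs.
GATES = [
    lambda a, b: a * b,              # AND
    lambda a, b: a + b - a * b,      # OR
    lambda a, b: a + b - 2 * a * b,  # XOR
    lambda a, b: 1 - a * b,          # NAND
]

def relaxed_gate(a, b, logits):
    """Expected output of one learnable node: a softmax over gate types
    weights each relaxed gate's output. Gradients w.r.t. `logits` train
    which gate this node becomes; after training, the node is
    discretized to its argmax gate for fast inference."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * g(a, b) for wi, g in zip(w, GATES))

# One node whose logits lean towards XOR after (hypothetical) training:
logits = np.array([0.1, 0.2, 2.5, 0.0])
print(relaxed_gate(0.9, 0.2, logits))  # ~0.72, near XOR(0.9, 0.2) = 0.74
```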