Kimon Fountoulakis (@kfountou)

A massive study evaluating open-world object counting in SOTA LVLMs (roughly 40 pages of experiments) has been accepted at TMLR; the large-scale experiments systematically probe LVLM object-counting performance.

https://x.com/kfountou/status/2017294637661102366

#lvlm #visionlanguage #objectcounting #tmlr #research

Kimon Fountoulakis (@kfountou) on X

The amount of work that has gone into evaluating SOTA LVLM models for open-world object counting is impressive. Basically, 40 pages of experiments. Accepted at TMLR.

X (formerly Twitter)

🧠 Just came across #BeyondPDF by #TMLR. It introduces a new submission format for #ScientificPublishing based on #Markdown and #HTML, supporting interactive figures, videos, and other rich media. This enables direct interaction with content beyond what static PDFs allow. Awesome idea!

🌍 https://tmlr-beyond-pdf.org/about

#OpenScience

Now out in #TMLR:

🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇

There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.

We introduce an RL-based and a GFlowNet-based sampler and show that the approach performs well on heterophilic graphs.

https://openreview.net/forum?id=QI0l842vSq

#machinelearning #graphs #graph_learning #paper
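To give a flavour of what "learning to sample" means here, a minimal sketch follows. This is my toy illustration, not the paper's GRAPES implementation: the node features, the reward proxy, and the plain REINFORCE update are all assumptions standing in for a GNN's real training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 200 nodes with 2 features; feature 0 secretly marks the nodes
# that matter for the downstream task (a stand-in for the GNN's loss signal).
n_nodes = 200
X = rng.normal(size=(n_nodes, 2))
relevant = X[:, 0] > 0.5

w = np.zeros(2)          # parameters of the learned sampler
lr, baseline = 0.5, 0.0  # step size and a running reward baseline

for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # per-node inclusion probability
    s = rng.random(n_nodes) < p          # sample a node subset (the "subgraph")
    # Reward proxy: fraction of sampled nodes that are actually task-relevant.
    reward = relevant[s].mean() if s.any() else 0.0
    # REINFORCE update: gradient of the log-probability of the sampled mask,
    # scaled by the advantage (reward minus baseline).
    grad = X.T @ (s.astype(float) - p)
    w += lr * (reward - baseline) * grad / n_nodes
    baseline = 0.9 * baseline + 0.1 * reward

p_final = 1.0 / (1.0 + np.exp(-(X @ w)))
print(p_final[relevant].mean(), p_final[~relevant].mean())
```

After training, the sampler assigns higher inclusion probability to the task-relevant nodes, which is the adaptive behaviour the tweet describes.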

I started to serve as action editor for #TMLR.

Although I was not really looking for additional editorial duties, I felt that high-standard online journals like this (no APC-collecting crap) that can publish papers at scale may be exactly what ML currently needs as a more responsible alternative to gigantic conferences.

Our work towards the design of deeper and competitive Forward-Forward Networks has been accepted at #TMLR.
https://openreview.net/forum?id=a7KP5uo0Fp

This was joint work with Inton Tsang (@inton) and Thomas Dooms. Kudos to Thomas, as he conducted this work as part of his CS Master's thesis project @UAntwerpen

#locallearning #FF @IDLabResearch

The Trifecta: Three simple techniques for training deeper...

Massive backpropagated models can outperform humans on a variety of tasks but suffer from high power consumption and poor generalisation. Local learning, which focuses on updating subsets of a...

OpenReview
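For context on the local-learning setup this paper builds on, here is a sketch of the basic Forward-Forward mechanic (Hinton's "goodness" objective) for a single layer. This is my toy numpy illustration, not the Trifecta code: the data, the threshold, and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy positive/negative data for one layer (in real FF, negatives come from
# corrupted or wrongly-labelled inputs rather than a shifted Gaussian).
pos = rng.normal(loc=1.0, size=(256, 10))
neg = rng.normal(loc=-1.0, size=(256, 10))

W = rng.normal(scale=0.1, size=(10, 16))  # one ReLU layer's weights
theta, lr = 8.0, 0.01                     # goodness threshold, step size

def goodness(x):
    h = np.maximum(x @ W, 0.0)
    return (h ** 2).sum(axis=1), h        # goodness = sum of squared activations

for _ in range(200):
    for x, sign in ((pos, 1.0), (neg, -1.0)):
        g, h = goodness(x)
        # Logistic objective on (goodness - theta): push positives above the
        # threshold and negatives below it, using only this layer's own
        # activations -- no gradients flow through other layers.
        p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
        coef = sign * (1.0 - p)           # derivative of log p w.r.t. goodness
        W += lr * (x.T @ (coef[:, None] * 2.0 * h)) / len(x)

g_pos, _ = goodness(pos)
g_neg, _ = goodness(neg)
print(g_pos.mean(), g_neg.mean())
```

Each layer is trained with this local rule independently; the paper's contribution is making stacks of such layers deeper and competitive.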

Our work on sensitivity-aware amortized Bayesian inference is now published in #TMLR: https://openreview.net/forum?id=Kxtpa9rvM0

TL;DR: Statistical analyses involve countless choices, but systematically evaluating the impact of these choices quickly becomes infeasible for complex models. Our framework enables amortized and thus efficient sensitivity analyses for all major choices in a (simulation-based) Bayesian workflow.

@ho @MarvinSchmitt @paul_buerkner

Sensitivity-Aware Amortized Bayesian Inference

Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses. While theoretically appealing, they are overwhelmingly inefficient for complex...

OpenReview
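To unpack the TL;DR above, here is a toy illustration of the amortization idea, not the paper's framework: a conjugate normal model where the prior scale is the analyst's choice, with polynomial least squares standing in for the conditional neural estimator. The model, features, and sweep values are all my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conjugate toy model: y_i ~ N(mu, 1) for i = 1..n, prior mu ~ N(0, s^2),
# where the prior scale s is an analyst's choice. The exact posterior mean
# is ybar * n*s^2 / (n*s^2 + 1), which the amortized estimator must learn
# as a function of BOTH the data summary and the choice.
n = 20

def simulate(s):
    mu = rng.normal(0.0, s)
    y = rng.normal(mu, 1.0, size=n)
    return y.mean(), s, y.mean() * n * s**2 / (n * s**2 + 1.0)

# One upfront "amortization" fit over (summary, choice) pairs; polynomial
# least squares stands in for a conditional neural estimator.
train = np.array([simulate(s) for s in rng.uniform(0.1, 2.0, size=5000)])
ybar, s, target = train.T
features = np.column_stack([ybar, ybar * s, ybar * s**2, ybar * s**3,
                            np.ones_like(s)])
coef, *_ = np.linalg.lstsq(features, target, rcond=None)

def amortized_post_mean(ybar_obs, s_obs):
    f = np.array([ybar_obs, ybar_obs * s_obs, ybar_obs * s_obs**2,
                  ybar_obs * s_obs**3, 1.0])
    return float(f @ coef)

# Sensitivity analysis: sweep the prior-scale choice for one observed dataset
# with no refitting or re-simulation per choice.
sweep = [amortized_post_mean(1.5, s_choice) for s_choice in (0.2, 0.5, 1.0, 2.0)]
print(sweep)
```

The upfront cost is paid once; afterwards every alternative choice is a cheap forward evaluation, which is what makes sensitivity analysis over "countless choices" feasible.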
https://twitter.com/kyo_takano/status/1685200714119933952
Kyo on Twitter

“My single-author paper "Self-Supervision is All You Need for Solving Rubik's Cube" has been accepted at #TMLR '23 🎉 I show that the simple task of predicting the final move of a scramble suffices to solve combinatorial search problems like the Rubik's Cube near-optimally (surpassing the prior method DeepCubeA and achieving SOTA).”

Twitter
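The "predict the last move of a scramble" idea transfers to any permutation puzzle. Here is a toy tabular analog on a 5-element puzzle, my sketch rather than the paper's neural solver; the two generator moves and scramble lengths are assumptions.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# A 5-element permutation puzzle scrambled with two generator moves.
MOVES = {
    "swap": (1, 0, 2, 3, 4),   # swap the first two positions
    "cycle": (1, 2, 3, 4, 0),  # rotate all positions left by one
}

def apply_move(state, move):
    return tuple(state[i] for i in MOVES[move])

def scramble(length):
    state, moves = tuple(range(5)), []
    for _ in range(length):
        move = random.choice(list(MOVES))
        state = apply_move(state, move)
        moves.append(move)
    return state, moves

# Self-supervision: the label of a scrambled state is the LAST move applied;
# no solved examples or search are needed to build the training set.
counts = defaultdict(Counter)
for _ in range(20000):
    state, moves = scramble(random.randint(1, 6))
    counts[state][moves[-1]] += 1

def predict_last(state):
    c = counts.get(state)
    return c.most_common(1)[0][0] if c else "swap"

# Held-out evaluation: accuracy well above the 50% of random guessing.
hits = sum(predict_last(st) == mv[-1]
           for st, mv in (scramble(random.randint(1, 6)) for _ in range(500)))
print(hits / 500)
```

A solver then repeatedly undoes the predicted last move; the paper does this at full Rubik's Cube scale with a neural predictor in place of the count table.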

New work by Heiko, Fredrik, Jan-Willem, and me interpreting generative flow networks (GFNs) as generative models trained by variational inference!

#TMLR #GenerativeAI #GFN #MachineLearning

https://openreview.net/forum?id=AZ4GobeSLq

A Variational Perspective on Generative Flow Networks

Generative flow networks (GFNs) are a class of probabilistic models for sequential sampling of composite objects, proportional to a target distribution that is defined in terms of an energy...

OpenReview
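For readers meeting GFNs for the first time, the variational reading can be stated compactly. This is my sketch in the field's standard (trajectory balance) notation; the paper's exact objectives and notation may differ:

```latex
% Variational (forward) distribution over complete trajectories:
q(\tau) = P_F(\tau) = \prod_t P_F(s_{t+1} \mid s_t),
\qquad
% Target distribution, defined via the reward and a backward policy:
p(\tau) = \frac{R(x(\tau))}{Z}\, P_B(\tau \mid x(\tau)),
\qquad Z = \sum_x R(x).

% One variational objective for training the forward policy:
D_{\mathrm{KL}}\bigl(q \,\|\, p\bigr)
  = \mathbb{E}_{\tau \sim P_F}\!\left[
      \log \frac{Z\, P_F(\tau)}{R(x(\tau))\, P_B(\tau \mid x(\tau))}
    \right] \;\ge\; 0,

% with equality iff  Z\,P_F(\tau) = R(x(\tau))\,P_B(\tau \mid x(\tau))
% for every trajectory (the trajectory balance condition), i.e. iff the
% GFN samples x with probability proportional to R(x).
```

Minimizing such trajectory-level divergences over the forward (and, symmetrically, backward) policy is the flavour of correspondence between GFN training and variational inference that the paper develops.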
I am happy to announce that our work on Evidential Deep Learning methods for Uncertainty Quantification with @ch_hardmeier and @jesfrellsen got accepted at #TMLR! 🥳 (1/3) 🧵 https://openreview.net/forum?id=xqS8k9E75c
Prior and Posterior Networks: A Survey on Evidential Deep Learning...

Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or...

OpenReview
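The single-forward-pass alternative surveyed here can be illustrated with the Dirichlet mechanics behind prior/posterior networks. This is my toy illustration, not any of the surveyed models: a real network would output the concentration parameters per input, whereas here two vectors are hand-picked.

```python
import numpy as np

def dirichlet_summary(alpha):
    # The network outputs Dirichlet concentrations alpha over the classes,
    # so one forward pass carries both the prediction and its evidence.
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()                         # total evidence collected
    mean = alpha / alpha0                        # expected class probabilities
    var = mean * (1.0 - mean) / (alpha0 + 1.0)   # spread of the Dirichlet
    return mean, var

confident = dirichlet_summary([50.0, 1.0, 1.0])  # strong evidence for class 0
uncertain = dirichlet_summary([1.2, 1.0, 1.0])   # barely any evidence at all

# Both examples favour class 0, but only the second flags high uncertainty.
print(confident[0].argmax(), uncertain[0].argmax())
print(confident[1].sum(), uncertain[1].sum())
```

This is why such models can separate "I predict class 0 with lots of evidence" from "I predict class 0 but have barely seen anything like this", without ensembles or Markov chain sampling.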

I also need to mention the #RLDM and #TMLR communities. Having venues such as these, that welcome cross-disciplinary work, is such a benefit to our ML research community.

Our paper can be seen at: https://openreview.net/forum?id=oKlEOT83gI

And our code (soon!): https://github.com/MLforHealth/DistDeD

(7/7)

Risk Sensitive Dead-end Identification in Safety-Critical Offline...

In safety-critical decision-making scenarios being able to identify worst-case outcomes, or dead-ends is crucial in order to develop safe and reliable policies in practice. These situations are...

OpenReview