Kimon Fountoulakis (@kfountou)
A large-scale study evaluating open-world object counting in SOTA LVLMs (roughly 40 pages of experiments) has been accepted at TMLR; the extensive experiments systematically benchmark LVLMs' object-counting performance.
🧠 Just came across #BeyondPDF by #TMLR. It introduces a new submission format for #ScientificPublishing based on #Markdown and #HTML, supporting interactive figures, videos, and other rich media. This enables direct interaction with content beyond what static PDFs allow. Awesome idea!
Now out in #TMLR:
🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇
There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the graph that is relevant for your task.
We introduce an RL-based and a GFlowNet-based sampler and show that the approach performs well on heterophilic graphs.
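A minimal sketch of the idea of *learned* (adaptive) subgraph sampling, not the GRAPES implementation itself: a linear scorer assigns each candidate neighbor an inclusion probability, a toy task-specific reward scores the sampled subset, and REINFORCE-style updates push the sampler toward task-relevant nodes. The graph, features, and reward here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: node 0 is the target; nodes 1..19 are candidate neighbors.
n = 20
feats = rng.normal(size=(n, 4))

# Learnable sampler parameters: a linear scorer over node features.
w = np.zeros(4)

def sample_probs(w):
    """Per-candidate Bernoulli inclusion probabilities (sigmoid of logits)."""
    logits = feats[1:] @ w
    return 1.0 / (1.0 + np.exp(-logits))

def reward(mask):
    """Hypothetical task reward: higher when the sampled neighbors'
    mean feature vector is close to the target node's features."""
    if mask.sum() == 0:
        return -10.0
    sel = feats[1:][mask.astype(bool)]
    return -float(np.linalg.norm(sel.mean(axis=0) - feats[0]))

# REINFORCE loop with a running baseline for variance reduction.
lr, baseline = 0.1, 0.0
for step in range(500):
    p = sample_probs(w)
    mask = (rng.random(n - 1) < p).astype(float)
    r = reward(mask)
    baseline = 0.9 * baseline + 0.1 * r
    # Gradient of the log-probability of the Bernoulli sample w.r.t. w.
    grad = ((mask - p)[:, None] * feats[1:]).sum(axis=0)
    w += lr * (r - baseline) * grad
```

The GFlowNet variant mentioned in the post replaces this policy-gradient objective with a flow-matching one, but the core idea is the same: sampling probabilities are parameters trained against the downstream task rather than fixed heuristics.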
I started to serve as action editor for #TMLR.
Although I was not really looking for additional editorial duties, I felt that high-standard online journals like this (no APC-collecting nonsense), which can publish papers at scale, are exactly what ML needs right now as a more responsible alternative to gigantic conferences.
Our work towards the design of deeper and competitive Forward-Forward Networks has been accepted at #TMLR.
https://openreview.net/forum?id=a7KP5uo0Fp
This was joint work with Inton Tsang (@inton) and Thomas Dooms. Kudos to Thomas as this was work he conducted as part of his CS Master Thesis project @UAntwerpen
#locallearning #FF @IDLabResearch
Our work on sensitivity-aware amortized Bayesian inference is now published in #TMLR: https://openreview.net/forum?id=Kxtpa9rvM0
TL;DR: Statistical analyses involve countless choices, but systematically evaluating the impact of these choices quickly becomes infeasible for complex models. Our framework enables amortized and thus efficient sensitivity analyses for all major choices in a (simulation-based) Bayesian workflow.
New work by Heiko, Fredrik, Jan-Willem and myself interpreting Generative Flow Networks (GFN) as generative models trained by variational inference!
I also need to mention the #RLDM and #TMLR communities. Having venues such as these, that welcome cross-disciplinary work, is such a benefit to our ML research community.
Our paper can be seen at: https://openreview.net/forum?id=oKlEOT83gI
And our code (soon!): https://github.com/MLforHealth/DistDeD
(7/7)