Lenka Zdeborova (@zdeborova)

She introduces the Rules-and-Facts model to probe the balance between memorization and generalization. The work focuses on evaluating realistic tasks that demand both rule learning and fact memorization, not rote memorization alone.

https://x.com/zdeborova/status/2037522113750302758

#llm #machinelearning #memorization #generalization #research

Lenka Zdeborova (@zdeborova) on X

Memorization is often treated as something that can be tolerated without harming generalization - or studied in isolation. But many real tasks require *both learning rules and memorizing facts*. We introduce the Rules-and-Facts model to probe this: https://t.co/hPSYC2eyUV
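A hedged toy illustration of that mix (my own Python construction, not the paper's actual Rules-and-Facts model): some inputs follow a deterministic rule a learner can generalize, while others are arbitrary key-value "facts" that can only be memorized.

# Toy sketch of a task that needs both rule learning and fact memorization.
# This is an illustrative assumption, not the construction from the paper.
import random

random.seed(0)

def rule(x):
    # The "rules" part: a pattern a learner can generalize (parity here).
    return x % 2

# The "facts" part: arbitrary labels with no pattern; they must be memorized.
facts = {key: random.randint(0, 1) for key in range(1000, 1020)}

def label(x):
    return facts[x] if x in facts else rule(x)

train = [(x, label(x)) for x in list(range(100)) + list(facts)]
# A pure memorizer fails on unseen rule inputs (e.g. x = 100..199), while a
# pure rule-learner mislabels roughly half of the fact keys.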


Lossfunk (@lossfunk)

Answering a question about Esolang-Bench, they explain that the project began out of curiosity and an interest in understanding human sample-efficiency and OOD generalization. They share the benchmark's purpose: seeing how much models can learn zero/few-shot.

https://x.com/lossfunk/status/2034832598930006135

#llm #benchmark #research #generalization

Lossfunk (@lossfunk) on X

@daniel_mac8 https://t.co/76JNICAMas


fly51fly (@fly51fly)

The YOR (Your Own Mobile Manipulator) paper proposes the design and implementation of a generalizable mobile manipulator. Authors include M. H. Anjaria, M. E. Erciyes, V. Ghatnekar, and N. Navarkar, affiliated with New York University (2026, arXiv). It aims to contribute on the axes of robot modularity, portability, and generalization across environments.

https://x.com/fly51fly/status/2023152096129044751

#robotics #mobilemanipulator #yor #generalization

fly51fly (@fly51fly) on X

[RO] YOR: Your Own Mobile Manipulator for Generalizable Robotics M H Anjaria, M E Erciyes, V Ghatnekar, N Navarkar... [New York University] (2026) https://t.co/wiBZtyOvj4


Xin Eric Wang (@xwang_lk)

To test the generalization of GEA (an evolved agent), the acting module's coding model was swapped for GPT-series and Claude-series backbones at evaluation time. Comparing the iteration-0 (initial) agent against the final GEA-evolved agent, this thread update reports that the GEA-evolved agent consistently scored higher across the different backbones.

https://x.com/xwang_lk/status/2019969070129639752

#gea #gpt #claude #generalization #agents

Xin Eric Wang (@xwang_lk) on X

@WengZhaoti39773 @anton_iades @deepaknathani11 @zhenzhangzz @XiaoSophiaPu 🧵4/N GEA transfers across models To test generalization, we swapped the acting module’s coding model with different GPT-series and Claude-series backbones at eval time—then compared iteration-0 vs. the best GEA-evolved agent. Result: the GEA agent consistently beats the
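A rough sketch of that protocol (all names below are hypothetical stand-ins; the thread does not show the authors' actual code):

# Hypothetical sketch of the eval-time backbone swap described above.
# make_agent / run_benchmark are stubs, not the authors' API.

BACKBONES = ["gpt-series", "claude-series"]  # assumed backbone families

def make_agent(config, coding_model):
    # Stub: build an agent whose acting module uses `coding_model`.
    return {"config": config, "coding_model": coding_model}

def run_benchmark(agent):
    # Stub: score the agent on the task suite (dummy value here).
    return 0.0

for backbone in BACKBONES:
    baseline = run_benchmark(make_agent("iteration-0", backbone))
    evolved = run_benchmark(make_agent("best-gea-evolved", backbone))
    # The thread reports the evolved agent winning across all backbones.
    print(backbone, baseline, evolved)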


#ontology of
#time-qualified-objects
#Ingarden
#StudiaPhilosophica #1935

/3
Ingarden here tries to describe esp. a 'general structure' [formale Aufbau] of, say, everyday macroscopic 'objects of discourse', anticipating them terminologically as "zeitbestimmte, individuelle, seinsautonome Gegenstände" [time-qualified, individual, autonomously existing objects]. And in the light of some 90 years of "analytical philosophy" since
[ which have sharpened our reservations wrt, say, "Seinsbestimmungen" ]
this verbal packaging seems, as said above, unfortunate. But what (from my p.o.v.) Ingarden wants to draw attention to is a kind of #ontological #generalization of what nowadays in #philbio is habitually discussed under the heading of #homeostatic systems/processes/objects.
#autopoiesis #homeostasis

It’s incredible that these two identities can explain so much.

Link: https://arxiv.org/abs/2505.03754

#math #calculus #euler #unification #generalization #integrals #integration

Ilya Sutskever argues that we’re shifting from the age of scaling to the age of research: today’s models excel on benchmarks but still generalize far worse than humans. The interview highlights why future progress will depend on new learning principles, continual learning, and a deeper understanding of generalization — not just more compute.
https://www.dwarkesh.com/p/ilya-sutskever-2
#AIResearch #Generalization #FutureOfAI
Ilya Sutskever – We're moving from the age of scaling to the age of research

“These models somehow just generalize dramatically worse than people. It's a very fundamental thing.”

Dwarkesh Podcast

Day 11 of the #30DayMapChallenge: minimal map.

For this one, I used the R package rmapshaper to generalise the German states using the Douglas-Peucker algorithm.
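In R this is presumably a call along the lines of rmapshaper::ms_simplify(states, method = "dp") (an assumption; the exact call isn't shown in the post). As a comparable sketch in Python, shapely's simplify() offers the same Douglas-Peucker algorithm; the file name and tolerance below are illustrative:

import json
from shapely.geometry import shape, mapping

with open("german_states.geojson") as f:  # hypothetical input file
    states = json.load(f)

for feature in states["features"]:
    geom = shape(feature["geometry"])
    # preserve_topology=False selects plain Douglas-Peucker; tolerance is in
    # the data's coordinate units (larger tolerance = more generalized).
    feature["geometry"] = mapping(geom.simplify(0.05, preserve_topology=False))

with open("german_states_minimal.geojson", "w") as f:
    json.dump(states, f)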

#gis #cartography #rstats #generalization

Researchers isolate memorization from problem-solving in AI neural networks

Basic arithmetic ability lives in the memorization pathways, not logic circuits.

Ars Technica
How does the #brain transfer #MotorSkills between hands? This study reveals that transfer relies on re-expressing the neural patterns established during initial learning in distributed higher-order brain areas, offering new insights into learning #generalization @PLOSBiology https://plos.io/41LOAWf