#Economists and others are used to building #forecasts on the assumption that the agents involved in what they’re forecasting are #rational #optimizers. That makes it difficult when the most important actor is a #narcissist with an inexhaustible need for ego gratification.
Sinon's Blog

The Optimizer Advantage?

This is not how I’d expect an optimizer system to work, at least based on how it’s advertised.

https://solarboi.com/2025/01/23/the-optimizer-advantage/

derek the solarboi
This MicroAdam paper from #NeurIPS2024 is nicely written! The algorithm is walked through in plain language first, and all the equations and proofs placed in the appendix. Super understandable, kudos to the authors.
https://arxiv.org/abs/2405.15593
#AI #MachineLearning #LLMs #optimizers
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

We propose a new variant of the Adam optimizer called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical *error feedback* mechanism from distributed optimization in which *the error correction information is itself compressed* to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. Our code is available at https://github.com/IST-DASLab/MicroAdam.
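The compressed error-feedback idea the abstract highlights can be sketched in a few lines. This is a toy top-k illustration, not the paper's actual algorithm; the function names and the choice of top-k as the compressor are assumptions for illustration only:

```python
import numpy as np

def top_k_compress(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def compressed_error_feedback_step(grad, error, k):
    # One step of error feedback with a compressed error buffer.
    # The gradient plus accumulated error is compressed; whatever the
    # compressor dropped becomes the new error, which is itself
    # compressed before being stored -- the twist the abstract describes.
    corrected = grad + error
    compressed = top_k_compress(corrected, k)
    new_error = top_k_compress(corrected - compressed, k)
    return compressed, new_error

# Toy usage: feed `compressed` into the optimizer state in place of the raw gradient.
g = np.array([0.5, -0.1, 0.02, 0.3, -0.4])
e = np.zeros_like(g)
c, e = compressed_error_feedback_step(g, e, k=2)
```

Keeping only a compressed copy of the residual is what bounds the memory of the error buffer itself, at the cost of discarding some correction information each step.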

arXiv.org

'PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates', by Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell.

http://jmlr.org/papers/v25/23-1187.html

#optimizers #optimization #preconditioned

'PyPop7: A Pure-Python Library for Population-Based Black-Box Optimization', by Qiqi Duan et al.

http://jmlr.org/papers/v25/23-0386.html

#optimizers #optimization #pypop7

'Multi-Objective Neural Architecture Search by Learning Search Space Partitions', by Yiyang Zhao, Linnan Wang, Tian Guo.

http://jmlr.org/papers/v25/23-1013.html

#optimizers #optimizer #optimizations

'Robust Black-Box Optimization for Stochastic Search and Episodic Reinforcement Learning', by Maximilian Hüttenrauch, Gerhard Neumann.

http://jmlr.org/papers/v25/22-0564.html

#reinforcement #optimizers #optimizes

'Neural Feature Learning in Function Space', by Xiangxiang Xu, Lizhong Zheng.

http://jmlr.org/papers/v25/23-1202.html

#features #feature #optimizers

'Win: Weight-Decay-Integrated Nesterov Acceleration for Faster Network Training', by Pan Zhou, Xingyu Xie, Zhouchen Lin, Kim-Chuan Toh, Shuicheng Yan.

http://jmlr.org/papers/v25/23-1073.html

#accelerated #optimizers #adaptive
