'A minimax optimal approach to high-dimensional double sparse linear regression', by Yanhang Zhang, Zhifan Li, Shixiang Liu, Jianxin Yin.

http://jmlr.org/papers/v25/23-0653.html

#sparse #thresholding #sparsity

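In a double sparse model the coefficients are sparse at two levels: only a few groups are active, and within each active group only a few entries are nonzero. As a rough illustration of that constraint (not the authors' estimator), here is the Euclidean projection onto the set of (s, s0)-double-sparse vectors; the group layout and sizes below are made up for the example.

```python
import numpy as np

def double_sparse_projection(beta, groups, s, s0):
    """Project beta onto the (s, s0)-double-sparse set:
    at most s active groups, each with at most s0 nonzero entries.
    `groups` is a list of index arrays partitioning the coordinates."""
    out = np.zeros_like(beta)
    kept, scores = [], []
    for g in groups:
        b = beta[g]
        # keep the s0 largest-magnitude entries within this group
        idx = np.argsort(np.abs(b))[-s0:]
        mask = np.zeros_like(b)
        mask[idx] = b[idx]
        kept.append(mask)
        scores.append(np.sum(mask ** 2))
    # keep the s groups with the largest surviving energy
    for j in np.argsort(scores)[-s:]:
        out[groups[j]] = kept[j]
    return out

# toy usage: 12 coordinates split into 3 groups of 4
rng = np.random.default_rng(0)
beta = rng.normal(size=12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
print(double_sparse_projection(beta, groups, s=2, s0=2))
```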

'skscope: Fast Sparsity-Constrained Optimization in Python', by Zezhi Wang, Junxian Zhu, Xueqin Wang, Jin Zhu, Huiyang Pen, Peng Chen, Anran Wang, Xiaoke Zhang.

http://jmlr.org/papers/v25/23-1574.html

#sparse #optimization #sparsity

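For context, a minimal usage sketch in the spirit of the skscope README: write a differentiable objective with jax.numpy and hand it to ScopeSolver. Treat the exact constructor and solve signatures as assumptions and check the package documentation.

```python
# Sparsity-constrained least squares with skscope (sketch; signatures may
# differ across versions -- see the project documentation).
import numpy as np
import jax.numpy as jnp
from skscope import ScopeSolver

n, p, k = 100, 50, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:k] = 1.0
y = X @ beta_true + 0.1 * rng.normal(size=n)

def objective(params):
    # loss written with jax.numpy so the solver can autodiff it
    return jnp.sum((y - X @ params) ** 2)

solver = ScopeSolver(dimension=p, sparsity=k)
beta_hat = solver.solve(objective)
print(np.nonzero(beta_hat)[0])  # indices of the selected variables
```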

Let's start designing a new course for applied mathematics students at #UCLouvain, #EPL on high-dimensional data analysis, with 3 wonderful reference books #inverseproblem #highDimensional #statistics #optimization #Sparsity #teaching

#mistral's 8x22B is ~260GB

the trend is to get models smaller, not bigger:

#pruning, #sparsity, #quantization, #distillation

so why such a huge model?

does Mistral have no other models?
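
For a rough sanity check on that figure, a back-of-the-envelope sketch; the ~141B total parameter count for Mixtral 8x22B is an assumption for illustration, not an official spec.

```python
# Back-of-the-envelope size check (assumed figures, not official specs):
# Mixtral 8x22B reportedly has roughly 141B total parameters
# (experts share the attention layers, so it is less than 8 * 22B).
total_params = 141e9
bytes_per_param = 2  # bf16 / fp16 storage
print(f"~{total_params * bytes_per_param / 2**30:.0f} GiB of weights")  # ~263 GiB, i.e. the quoted ~260GB

# 4-bit quantization alone would shrink the weights by ~4x:
print(f"~{total_params * 0.5 / 2**30:.0f} GiB at 4 bits/param")
```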

The last talk of the second day of #DIPOpt, by Rémi Gribonval, “Rapture of the deep: highs and lows of sparsity in a world of depths”.
#Sparsity #inverseproblems

Yasuhisa Kuroda released a spectral data processing program for chemical analysis called SPANA https://www.eonet.ne.jp/~spana-lsq/index-e.html. He has been kind enough to incorporate our BEADS algorithm (baseline estimation & denoising w/ #sparsity) to separate peaks, baseline and noise using sparsity priors! https://doi.org/10.1016/j.chemolab.2014.09.014 #analyticalchemistry

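BEADS models the measurement as sparse peaks plus a low-pass baseline plus noise and estimates the components jointly through a penalized formulation. The snippet below is only a toy illustration of that decomposition idea (smooth baseline from a moving average, peaks from soft-thresholding the residual); it is not the BEADS algorithm, and the filter width and threshold are arbitrary.

```python
import numpy as np

def toy_peak_baseline_split(y, baseline_width=101, threshold=0.2):
    """Toy illustration of the peaks + baseline + noise decomposition
    behind BEADS-style methods (NOT the BEADS algorithm itself)."""
    kernel = np.ones(baseline_width) / baseline_width
    baseline = np.convolve(y, kernel, mode="same")   # crude low-pass baseline
    residual = y - baseline
    # sparsity prior: soft-thresholding keeps only the large residual features
    peaks = np.sign(residual) * np.maximum(np.abs(residual) - threshold, 0.0)
    noise = residual - peaks
    return peaks, baseline, noise

# toy chromatogram: slow drift + two narrow peaks + noise
t = np.linspace(0, 1, 2000)
y = 0.5 * t + np.exp(-((t - 0.3) / 0.005) ** 2) + np.exp(-((t - 0.7) / 0.005) ** 2)
y += 0.05 * np.random.default_rng(0).normal(size=t.size)
peaks, baseline, noise = toy_peak_baseline_split(y)
```
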
Damian Bogunowicz, Neural Magic: On revolutionising deep learning with CPUs

AI News spoke with Damian Bogunowicz, a machine learning engineer at Neural Magic, to shed light on the company’s innovative approach to deep learning model optimisation and inference on CPUs.


Revisiting Sparsity Hunting in Federated Learning: Why the Sparsity Consensus Matters?

https://openreview.net/forum?id=iHyhdpsnyi

#sparse #sparsity #distributed


Edge devices can benefit remarkably from federated learning due to their distributed nature; however, their limited resources and compute power pose limitations for deployment. A possible solution to...

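The "sparsity consensus" issue is easy to see with a toy experiment: when clients prune to their own top-k masks, averaging the updates produces a much denser global model, whereas agreeing on a shared mask keeps the aggregate at the target sparsity. The majority-style vote below is purely for illustration, not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, k = 1000, 10, 100   # dimension, clients, nonzeros per client

# each client prunes to its own top-k mask (no consensus)
client_updates = []
for _ in range(n_clients):
    w = rng.normal(size=d)
    mask = np.zeros(d, dtype=bool)
    mask[np.argsort(np.abs(w))[-k:]] = True
    client_updates.append(w * mask)

avg = np.mean(client_updates, axis=0)
print("nonzeros without consensus:", np.count_nonzero(avg))  # union of masks: far denser than k

# with a consensus mask (here: keep the k most-voted coordinates), sparsity is preserved
votes = np.sum([u != 0 for u in client_updates], axis=0)
consensus_mask = np.zeros(d, dtype=bool)
consensus_mask[np.argsort(votes)[-k:]] = True
avg_consensus = np.mean([u * consensus_mask for u in client_updates], axis=0)
print("nonzeros with consensus:", np.count_nonzero(avg_consensus))  # <= k
```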

'Fundamental limits and algorithms for sparse linear regression with sublinear sparsity', by Lan V. Truong.

http://jmlr.org/papers/v24/21-0543.html

#sparse #sparsity #interpolation

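For readers not steeped in this literature, "sublinear sparsity" usually means the number of nonzeros k grows more slowly than the dimension p, in contrast with the linear-sparsity regime assumed in much of the AMP literature. The display below states that general usage, not the paper's exact scaling assumptions.

```latex
% Sparsity regimes for a k-sparse signal in dimension p (general usage,
% not necessarily the paper's precise scaling):
\[
\text{linear sparsity: } \frac{k}{p} \to \varepsilon \in (0,1),
\qquad
\text{sublinear sparsity: } \frac{k}{p} \to 0,
\qquad \text{as } p \to \infty .
\]
```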

New podcast from @thegradient with Hattie Zhou (twitter: https://twitter.com/oh_that_hat):

`Lottery Tickets and Algorithmic Reasoning in LLMs`

https://thegradientpub.substack.com/p/hattie-zhou-lottery-tickets-and-algorithmic

The first half is focused on the lottery ticket hypothesis, which is a favorite topic of mine.

#ML #sparsity #LLMs

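Since the first half of the episode is about the lottery ticket hypothesis, here is a schematic of the usual iterative magnitude pruning loop with weight rewinding (Frankle & Carbin, 2019). The training function is a dummy placeholder, and nothing here is specific to the episode.

```python
import numpy as np

def lottery_ticket_search(init_weights, train_fn, prune_fraction=0.2, rounds=5):
    """Schematic iterative magnitude pruning with weight rewinding
    (lottery ticket hypothesis). `train_fn(weights, mask)` is a placeholder
    standing in for real training of the masked subnetwork."""
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        # rewind: every round restarts from the ORIGINAL initialization, masked
        trained = train_fn(init_weights * mask, mask)
        # prune the smallest-magnitude surviving weights
        threshold = np.quantile(np.abs(trained[mask]), prune_fraction)
        mask &= np.abs(trained) > threshold
    return init_weights * mask, mask

# toy usage with a dummy "training" step (hypothetical stand-in for real SGD)
rng = np.random.default_rng(0)
w0 = rng.normal(size=1000)
dummy_train = lambda w, m: w + 0.1 * rng.normal(size=w.shape) * m
ticket, mask = lottery_ticket_search(w0, dummy_train)
print("remaining weights:", mask.sum(), "of", mask.size)
```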