🔬 New #SpectralUnmixing paper by Ashesh et al. ( @florianjug): A #DeepLearning-based framework #MicroSplit for #FluorescenceMicroscopy that separates highly overlapping #fluorophore signals directly from multiplexed #ImagingData. Improves signal separation, reduces crosstalk, & enables more accurate multi-channel #imaging w/o requiring extensive reference measurements. Can't wait to try this out on our in vivo #2P/ #3P imaging data:

๐Ÿ“ https://doi.org/10.1038/s41592-026-03082-1

#Bioimaging

Now let's learn how to save and load a trained model.

#pytorch #deeplearning #machinelearning
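A minimal sketch of the usual state_dict workflow, with a hypothetical `TinyNet` module standing in for your own network:

```python
# Save/load a trained PyTorch model via its state_dict (the
# recommended approach: store learned parameters, not the whole
# pickled object). TinyNet is a hypothetical stand-in network.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet()
# ... training loop would go here ...

# Save only the learned parameters.
torch.save(model.state_dict(), "tinynet.pt")

# Load: instantiate the same architecture, then restore the weights.
restored = TinyNet()
restored.load_state_dict(torch.load("tinynet.pt"))
restored.eval()  # switch off dropout/batchnorm training behavior
```

Saving the `state_dict` rather than the full model object keeps the checkpoint decoupled from the class definition's file location, which makes it more robust to refactoring.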

Ah, the infinite wisdom of deep learning equated to a man who remembers every pointless leaf 🍃 and ripple 🌊 yet can't think his way out of a wet paper bag! 🤔 Surely, this profound revelation will revolutionize our understanding of how machines shouldn't think! 🚀
https://elonlit.com/scrivings/a-theory-of-deep-learning/ #deepLearning #absurdity #AIhumor #machineIntelligence #techIrony #HackerNews #ngated
A Theory of Deep Learning

We finally know why deep learning works.

elonlit.com | Elon Litman

Recent @DSLC club meetings:

:Python: Deep Learning with Python (3e): Image segmentation https://youtu.be/5VMR5NmsTKI #PyData #DeepLearning #AI

From the @DSLC archives:

 R para Ciencia de Datos Club de Lectura: Capítulo 5 https://youtu.be/N4V2NL-TTg8 #RStats

 Fundamentals of Numerical Computation: Krylov methods in linear algebra https://youtu.be/3S1m0SIQZOY #JuliaLang

Support the Data Science Learning Community at https://patreon.com/DSLC
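The Krylov-methods recording above covers iterative linear solvers; conjugate gradient is the classic Krylov method for symmetric positive-definite systems. A pure-Python sketch on a hypothetical 2×2 test system:

```python
# Conjugate gradient (a Krylov method) for a symmetric
# positive-definite system A x = b, in pure Python (no NumPy).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n        # initial guess x0 = 0
    r = b[:]             # residual r = b - A x (here x = 0)
    p = r[:]             # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)          # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:              # converged
            break
        beta = rs_new / rs_old               # direction update
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# SPD test system with exact solution (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic, CG on an n×n system converges in at most n iterations, which is why it finishes this 2×2 example in two steps.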


Hackable PyTorch RL Library with Distributional Algorithms (D4PG, DSAC, DPPO)
e3rl is a PyTorch-based reinforcement learning (RL) library designed to run fully on the GPU, and it includes distributional RL algorithms such as D4PG, DSAC, and DPPO. It supports CUDA, Apple Silicon (MPS), and CPU, and ships examples and hyperparameters that make it easy to experiment across a variety of gym environments. Researchers and developers can use it as an open-source tool for quickly applying and experimenting with distributional RL.

https://github.com/e3ntity/e3rl

#reinforcementlearning #pytorch #distributionalrl #gpu #deeplearning
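The post notes CUDA, Apple Silicon (MPS), and CPU support; a hedged sketch (not e3rl's actual API) of the usual PyTorch device-selection fallback behind that kind of claim:

```python
# Pick the best available PyTorch device in the order the post
# mentions: CUDA, then Apple Silicon (MPS), then CPU fallback.
# This is a generic pattern, not e3rl's actual API.
import torch

def best_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = best_device()
# Create tensors directly on the chosen device to avoid host/device copies.
x = torch.randn(8, 4, device=device)
```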

GitHub - e3ntity/e3rl: Fast and simple implementation of RL algorithms, designed to run fully on GPU.


Elon Litman (@elon_lit)

๋”ฅ๋Ÿฌ๋‹์˜ ์ผ๋ฐ˜ํ™”์— ๋Œ€ํ•œ ํ†ตํ•ฉ ์ด๋ก ์„ ์ œ์‹œํ•˜๋ฉฐ grokking, double descent, benign overfitting, implicit bias๋ฅผ ํ•˜๋‚˜์˜ ํ‹€๋กœ ์„ค๋ช…ํ•˜๋Š” ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋‹ค. ์‹ ๊ฒฝ๋ง์˜ population risk ์ตœ์ ํ™”๊ฐ€ ์ž‘์€ ๋ณ€ํ™”๋กœ ๊ท€๊ฒฐ๋œ๋‹ค๋Š” ์ ์„ ์ œ์•ˆํ•ด ์ด๋ก ์ ์œผ๋กœ ์ค‘์š”ํ•œ ๋ฐœ๊ฒฌ์ด๋‹ค.

https://x.com/elon_lit/status/2051713061036167253

#deeplearning #generalization #research #neuralnetworks #machinelearning

Elon Litman (@elon_lit) on X

We developed a unified theory of generalization in deep learning. It explains grokking, double descent, benign overfitting, and implicit bias. But theory is only half the story. It turns out that optimizing the population risk of any neural network amounts to a small change to


Build tiny models fast, minimize loss on FineWeb under limits.

#ai #research #deeplearning

RT @Michaelzsguo: Users are publishing Qwen 3.6 configurations that reach a high throughput (TPS) with only 12 GB of VRAM. Anyone who understands what the parameters used there mean can follow the underlying principle.

More at Arint.info

#AI #DataScience #DeepLearning #MachineLearning #Qwen3 #TechTips #arint_info

https://x.com/Michaelzsguo/status/2050380832007721213#m

Arint - SEO+KI (@[email protected])
