When Dimensionality Hurts: The Role of #LLM Embedding Compression for Noisy Regression Tasks https://d.repec.org/n?u=RePEc:arx:papers:2502.02199&r=&r=cmp
"… suggest that the optimal dimensionality is dependent on the signal-to-noise ratio, exposing the necessity of feature compression in high noise environments. The implication of the result is that researchers should consider the #noise of a task when making decisions about the dimensionality of text.

… findings indicate that sentiment and emotion-based representations do not provide inherent advantages over learned latent features, implying that their previous success in similar tasks may be attributed to #regularisation effects rather than intrinsic informativeness."
#ML #autoencoders #Overfitting
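The claim above can be illustrated with a toy experiment: when the regression target is noisy and the features are high-dimensional, compressing the features first can beat regressing on all of them. This is a minimal sketch on synthetic data, not the paper's method; all dimensions, noise levels and the closed-form ridge/PCA choices are illustrative assumptions.

```python
import numpy as np

# Synthetic setup (illustrative, not from the paper): features carry a
# low-dimensional signal, the regression target is very noisy.
rng = np.random.default_rng(0)
n, d, k = 200, 100, 5            # samples, ambient dim, true signal dim
W = rng.normal(size=(d, k))
Z = rng.normal(size=(n, k))      # latent signal
X = Z @ W.T + rng.normal(size=(n, d))        # high-dim noisy features
beta = rng.normal(size=k)
y = Z @ beta + 2.0 * rng.normal(size=n)      # high-noise target

def ridge_fit_predict(Xtr, ytr, Xte, lam=1e-3):
    """Closed-form ridge regression (near-OLS for small lam)."""
    p = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)
    return Xte @ w

def pca_compress(Xtr, Xte, dim):
    """Project train/test features onto the top-`dim` principal directions."""
    mu = Xtr.mean(0)
    _, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
    P = Vt[:dim].T
    return (Xtr - mu) @ P, (Xte - mu) @ P

Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

# Full-dimensional features: the regression fits target noise via the
# many spurious directions.
mse_full = np.mean((ridge_fit_predict(Xtr, ytr, Xte) - yte) ** 2)

# Compressed features: restricting to a few components regularises.
Ztr, Zte = pca_compress(Xtr, Xte, k)
mse_comp = np.mean((ridge_fit_predict(Ztr, ytr, Zte) - yte) ** 2)
print(mse_full, mse_comp)
```

With this signal-to-noise ratio the compressed features give a clearly lower test error; shrinking the noise on `y` shrinks (and can reverse) the advantage, which is the dependence the abstract describes.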

I just added some extra chapters on #ANN. Since we are using #autoencoders, I thought it could be useful to provide a general introduction to #NeuralNetworks and how they can be tuned.

'Manifold Learning by Mixture Models of VAEs for Inverse Problems', by Giovanni S. Alberti, Johannes Hertrich, Matteo Santacesaria, Silvia Sciutto.

http://jmlr.org/papers/v25/23-0396.html

#autoencoders #manifold #manifolds


'The Power of Contrast for Feature Learning: A Theoretical Analysis', by Wenlong Ji, Zhun Deng, Ryumei Nakada, James Zou, Linjun Zhang.

http://jmlr.org/papers/v24/21-1501.html

#autoencoders #supervised #generative


'Be More Active! Understanding the Differences Between Mean and Sampled Representations of Variational Autoencoders', by Lisa Bonheme, Marek Grzes.

http://jmlr.org/papers/v24/21-1145.html

#autoencoders #disentangled #representations


New preprint from our group! 🧠 💻

*Whole-brain modelling of low-dimensional manifold modes reveals organising principle of brain dynamics*
https://www.biorxiv.org/content/10.1101/2023.11.20.567824v1

#brain #modeling #autoEncoders #variationalAutoEncoder #restingStateNetworks #manifold

Real-Time Anomaly Detection of NAB Ambient Temperature Readings using the TensorFlow/Keras Autoencoder

Today we will discuss anomaly detection in time series data using autoencoders. In this approach, anomalies are data points with large reconstruction errors. In the context of predictive…
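The reconstruction-error criterion can be sketched without the article's TensorFlow/Keras model: here a linear autoencoder, fitted in closed form via PCA, stands in for the trained network, the temperature series is synthetic rather than the NAB data, and the 99th-percentile threshold is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an ambient-temperature stream: a noisy cycle.
t = np.arange(1000)
normal = 20 + 5 * np.sin(2 * np.pi * t / 100) + 0.3 * rng.normal(size=t.size)

def windows(x, w=20):
    """Slice the series into overlapping windows (the autoencoder's inputs)."""
    return np.stack([x[i:i + w] for i in range(len(x) - w + 1)])

def fit_linear_ae(X, dim=3):
    """Linear autoencoder via PCA: encode into `dim` components, decode back,
    and score inputs by mean squared reconstruction error."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:dim].T                       # shared encoder/decoder weights
    def recon_error(Xnew):
        R = (Xnew - mu) @ P @ P.T + mu   # encode then decode
        return np.mean((Xnew - R) ** 2, axis=1)
    return recon_error

err = fit_linear_ae(windows(normal))
# Illustrative threshold: 99th percentile of errors on normal data.
threshold = np.percentile(err(windows(normal)), 99)

# A spike the model never saw reconstructs poorly and gets flagged.
corrupted = normal.copy()
corrupted[500:505] += 15.0
flags = err(windows(corrupted)) > threshold
print(flags.any())
```

Windows overlapping the injected spike produce reconstruction errors far above the threshold, while the rest of the series stays mostly below it — the same logic the Keras version applies with a learned nonlinear encoder/decoder.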

Digital High Science

'Lifted Bregman Training of Neural Networks', by Xiaoyu Wang, Martin Benning.

http://jmlr.org/papers/v24/22-0934.html

#autoencoders #classifiers #denoising


Conditional Sampling of Variational Autoencoders via Iterated Approximate Ancestral Sampling

https://openreview.net/forum?id=I5sJ6PU6JN

#autoencoders #sampling #sampler


Conditional sampling of variational autoencoders (VAEs) is needed in various applications, such as missing data imputation, but is computationally intractable. A principled choice for...

OpenReview

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

https://openreview.net/forum?id=pjdxPts6er

#autoencoders #autoencoder #imagenet


Hybrid self-supervised learning methods that combine masked image modelling and contrastive learning have demonstrated state-of-the-art performance across many vision tasks. In this work we...
