🧐 Learning Artificially Intelligently 💻 – As a computer scientist, Theresa Eimer studies how machines can learn artificially intelligently. Especially important here: hyperparameters. But how does machine learning actually work? 🎓
Watch the talk here: https://youtu.be/UQ4y5ovvp2Y

#forschung #wissenschaft #scienceslam #science #lernen #study #AI #informatik #maschinelleslernen #Hyperparameter #künstlicheintelligenz #daten #datenkrake #blackbox

How You Can Get Artificially Intelligent Learning Right (Theresa Eimer - Science Slam)

Sample Average Approximation for Black-Box Variational Inference

https://openreview.net/forum?id=Lvg10LZ5nL

#variational #optimization #hyperparameter

We present a novel approach for black-box VI that bypasses the difficulties of stochastic gradient ascent, including the task of selecting step-sizes. Our approach involves using a sequence of...
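The core idea behind sample average approximation is simple: freeze one set of Monte Carlo samples up front, so the noisy ELBO estimate becomes a fixed deterministic function that an off-the-shelf deterministic optimizer can maximize, with no step-size schedule to hand-tune. A minimal stdlib-only sketch of that idea on a toy Gaussian target (all names and the specific target are illustrative, not the paper's implementation):

```python
import random

random.seed(0)

# Toy target: log-density of N(3, 1) up to a constant.
# Variational family: N(mu, 1) with the scale fixed, so only mu is learned.
def log_p(z):
    return -0.5 * (z - 3.0) ** 2

# SAA step: draw base samples ONCE and freeze them. Via the
# reparameterization z = mu + eps, the ELBO surrogate below is now a
# deterministic function of mu.
N = 200
eps = [random.gauss(0.0, 1.0) for _ in range(N)]

def saa_objective(mu):
    # deterministic surrogate of E_q[log p(z)] over the frozen samples
    return sum(log_p(mu + e) for e in eps) / N

def saa_grad(mu):
    # exact gradient of the surrogate (not a stochastic estimate)
    return sum(-(mu + e - 3.0) for e in eps) / N

# Any deterministic optimizer works; plain gradient ascent suffices here.
mu = 0.0
for _ in range(200):
    mu += 0.5 * saa_grad(mu)

# mu converges to 3 - mean(eps), i.e. close to the true mean 3
```

Because the frozen objective is deterministic, one could just as well hand `saa_objective` to a quasi-Newton routine such as L-BFGS, which is exactly the kind of step-size-free optimizer the stochastic-gradient setting rules out.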

No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL

Han Wang, Archit Sakhadeo, Adam M. White et al.

https://openreview.net/forum?id=AiOUi3440V

#hyperparameters #hyperparameter #learns

The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different...

2/10) #SSL models show great promise and can learn #representations from large-scale unlabelled data. But identifying the best model across different #hyperparameter configs requires measuring downstream task performance, which requires #labels and adds to the #compute time and resources. 😕

#AI #ML #deeplearning

I wonder if anyone has written about #ASHA #hyperparameter optimization vs restart strategies.

I've been playing around with ASHA in #ray on my CMA-ES optimization task. It turns out my hyperparameters (population_size, sigma_0) are almost uncorrelated with performance, but the final performance varies a lot from run to run.

So what ASHA actually does for me is just pick the lucky runs, which helped a lot. Maybe I should try the BIPOP restart strategy of CMA-ES instead.
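The "picking the lucky runs" effect above follows directly from how successive halving works: sample many configs, evaluate each at a small budget, and promote only the top fraction to the next rung. A stdlib-only sketch of that principle (not Ray Tune's actual ASHAScheduler, and the noisy objective with its `seed`/`sigma_0` fields is a made-up stand-in for a task where hyperparameters barely matter):

```python
import random

random.seed(1)

def run(config, budget):
    # Hypothetical objective: the score depends almost entirely on
    # run-to-run luck (fixed per trial via its own seed), only weakly
    # on the hyperparameter and the budget.
    rng = random.Random(config["seed"])
    luck = rng.gauss(0.0, 1.0)
    return luck + 0.01 * config["sigma_0"] + 0.001 * budget

# 27 random configs; three rungs with a promotion rate of 1/3.
trials = [{"seed": s, "sigma_0": random.uniform(0.1, 2.0)} for s in range(27)]
for budget in [1, 3, 9]:
    ranked = sorted(trials, key=lambda c: run(c, budget), reverse=True)
    trials = ranked[: max(1, len(ranked) // 3)]  # promote the top third

best = trials[0]  # with luck dominating, this is simply the luckiest seed
```

When the score is mostly noise, each rung's ranking is driven by the `luck` term, so the surviving trial is the lucky one rather than a genuinely better hyperparameter setting. A restart strategy like BIPOP attacks the same variance from the other side, by rerunning the optimizer itself instead of multiplying configs.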