IRIS Insights | Nico Formanek: Are hyperparameters vibes?
April 24, 2025, 2:00 p.m. (CEST)
Our second IRIS Insights talk will take place with Nico Formanek.
🟦
This talk will discuss the role of hyperparameters in optimization methods for model selection (currently often called ML) from a philosophy-of-science point of view. Special consideration is given to the question of whether there can be principled ways to fix hyperparameters in a maximally agnostic setting; a toy sketch of the problem follows at the end of this post.
🟦
This is a Webex talk to which everyone interested is cordially invited. It will be held in English and moderated by our IRIS speaker, Jun.-Prof. Dr. Maria Wirzberger. Following Nico Formanek's presentation, there will be an opportunity to ask questions. We look forward to your active participation.
🟦
Please join this Webex talk using the following link:
https://lnkd.in/eJNiUQKV
🟦
#Hyperparameters #ModelSelection #Optimization #MLMethods #PhilosophyOfScience #ScientificMethod #AgnosticLearning #MachineLearning #InterdisciplinaryResearch #AIandPhilosophy #EthicsInAI #ResponsibleAI #AITheory #WebTalk #OnlineLecture #ResearchTalk #ScienceEvents #OpenInvitation #AICommunity #LinkedInScience #TechPhilosophy #AIConversations
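To make the talk's question concrete, here is a toy sketch (my own illustration, not the speaker's): the ridge penalty is a hyperparameter that the data alone do not fix, and the standard fix, cross-validation, imports hyperparameters of its own, such as the fold count and the candidate grid.

# Toy illustration (assumed example, not from the talk): the ridge penalty
# alpha is a hyperparameter the model cannot estimate from the data alone.
# Cross-validation "fixes" it, but only relative to further hyperparameters:
# the fold count cv and the candidate grid alphas.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.5 * rng.standard_normal(200)

alphas = np.logspace(-3, 3, 13)        # one choice among many
model = RidgeCV(alphas=alphas, cv=5)   # another choice among many
model.fit(X, y)
print("selected alpha:", model.alpha_)  # change grid or folds, get another answer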

'Empirical Design in Reinforcement Learning', by Andrew Patterson, Samuel Neumann, Martha White, Adam White.

http://jmlr.org/papers/v25/23-0183.html

#reinforcement #experiments #hyperparameters

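A minimal sketch of one practice the paper examines: reporting performance over many independent seeds with a bootstrap confidence interval instead of quoting a single run. The run_experiment function is a hypothetical stand-in for training an agent.

import numpy as np

rng = np.random.default_rng(0)

def run_experiment(seed: int) -> float:
    # Stand-in for training an agent and returning its final episodic return.
    r = np.random.default_rng(seed)
    return 100.0 + 15.0 * r.standard_normal()

# Many independent seeds, not one lucky run.
returns = np.array([run_experiment(s) for s in range(30)])

# Bootstrap a 95% confidence interval for the mean return.
boot = np.array([
    rng.choice(returns, size=returns.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean return {returns.mean():.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")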

#CausalML update - I am now fitting my first #CausalForest on real data!

Does anyone have advice on the most important #hyperparameters (after the number of trees and tree depth)?

I'm working on large imbalanced data sets and a large number of treatment variables, so it's not like anything you see in the economics literature. 🤔 #ML #AI #causal
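A hedged sketch of knobs that often matter next for causal forests: leaf size, the per-tree subsample fraction, and honest splitting. This uses econml's CausalForestDML on synthetic data with a single binary treatment for simplicity; the settings are illustrative starting points, not recommendations.

import numpy as np
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 5_000
X = rng.standard_normal((n, 10))
T = rng.binomial(1, 0.1, size=n)                 # imbalanced binary treatment
Y = X[:, 0] * T + X[:, 1] + rng.standard_normal(n)

est = CausalForestDML(
    model_y=RandomForestRegressor(min_samples_leaf=20),
    model_t=RandomForestClassifier(min_samples_leaf=20),
    discrete_treatment=True,
    n_estimators=2000,       # many trees stabilize CATE estimates
    min_samples_leaf=50,     # larger leaves guard against imbalance and noise
    max_samples=0.3,         # per-tree subsample fraction
    honest=True,             # split and estimate on disjoint subsamples
    random_state=0,
)
est.fit(Y, T, X=X)
print(est.effect(X)[:5])     # conditional average treatment effects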

'On the Hyperparameters in Stochastic Gradient Descent with Momentum', by Bin Shi.

http://jmlr.org/papers/v25/22-1189.html

#sgd #hyperparameters #stochastic

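For concreteness, a minimal sketch of the heavy-ball iteration in which the two hyperparameters live, a learning rate s and a momentum coefficient mu, run on a toy quadratic (my notation, not necessarily the paper's):

import numpy as np

A = np.diag([1.0, 10.0])            # ill-conditioned quadratic f(x) = 0.5 x'Ax
grad = lambda x: A @ x

s, mu = 0.05, 0.9                   # learning rate, momentum coefficient
x = np.array([5.0, 5.0])
v = np.zeros_like(x)

for _ in range(200):
    v = mu * v - s * grad(x)        # v_{k+1} = mu v_k - s grad f(x_k)
    x = x + v                       # x_{k+1} = x_k + v_{k+1}

print(x)                            # close to the minimizer at the origin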

'Pre-trained Gaussian Processes for Bayesian Optimization', by Zi Wang et al.

http://jmlr.org/papers/v25/23-0269.html

#priors #prior #hyperparameters

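A hedged miniature of the idea using scikit-learn, not the paper's method: fit a GP's kernel hyperparameters on data from a related past task, freeze them, and reuse the kernel as the prior for Bayesian optimization on a new task (here, a single expected-improvement step).

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# "Past task" data used to pre-train the kernel hyperparameters.
X_old = rng.uniform(0, 10, size=(100, 1))
y_old = np.sin(X_old[:, 0]) + 0.1 * rng.standard_normal(100)
pretrain = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
pretrain.fit(X_old, y_old)           # ML-II fit of the kernel hyperparameters

# New task: reuse the fitted kernel, frozen via optimizer=None.
gp = GaussianProcessRegressor(pretrain.kernel_, optimizer=None, normalize_y=True)
X_obs = rng.uniform(0, 10, size=(3, 1))
y_obs = np.sin(1.2 * X_obs[:, 0])
gp.fit(X_obs, y_obs)

# One Bayesian-optimization step: maximize expected improvement.
Xc = np.linspace(0, 10, 200).reshape(-1, 1)
mu, sd = gp.predict(Xc, return_std=True)
z = (mu - y_obs.max()) / np.maximum(sd, 1e-9)
ei = (mu - y_obs.max()) * norm.cdf(z) + sd * norm.pdf(z)
print("next query point:", Xc[np.argmax(ei)])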

'An Algorithmic Framework for the Optimization of Deep Neural Networks Architectures and Hyperparameters', by Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel.

http://jmlr.org/papers/v25/23-0166.html

#forecasting #algorithmic #hyperparameters

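To give a feel for the kind of mixed search space such a framework optimizes over, a toy sketch of joint architecture/hyperparameter search by random sampling; the score function is a hypothetical stand-in for training and validating a network.

import random

random.seed(0)

SPACE = {
    "n_layers": [1, 2, 3, 4],            # architecture choices
    "width": [32, 64, 128, 256],
    "activation": ["relu", "tanh"],
    "lr": [1e-4, 1e-3, 1e-2],            # training hyperparameters
    "batch_size": [32, 128],
}

def sample():
    # Draw one joint architecture/hyperparameter configuration.
    return {k: random.choice(v) for k, v in SPACE.items()}

def score(cfg):
    # Hypothetical stand-in for validation performance after training.
    return (
        -abs(cfg["n_layers"] - 3)
        - abs(cfg["width"] - 128) / 64
        - abs(cfg["lr"] - 1e-3) * 100
    )

best = max((sample() for _ in range(50)), key=score)
print("best configuration found:", best)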

'Low-rank Variational Bayes correction to the Laplace method', by Janet van Niekerk, Haavard Rue.

http://jmlr.org/papers/v25/21-1405.html

#variational #hyperparameters #approximations

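For context, the Laplace method being corrected approximates the posterior by a Gaussian at its mode; in standard notation (mine, not necessarily the paper's):

% Laplace approximation: Gaussian at the posterior mode \hat{\theta},
% with covariance given by the inverse Hessian of the negative log-posterior.
p(\theta \mid y) \approx \mathcal{N}\big(\hat{\theta},\, H^{-1}\big),
\qquad
\hat{\theta} = \arg\max_{\theta} \log p(\theta \mid y),
\qquad
H = -\nabla^2_{\theta} \log p(\theta \mid y)\,\big|_{\theta = \hat{\theta}}

The paper's contribution, per its title, is a low-rank variational Bayes correction on top of this Gaussian baseline.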

📢 Publication alert: "The Role of Hyperparameters in Machine Learning Models and How to Tune Them" with Luka Biedebach, Andreas Küpfer, and Marcel Neunhoeffer in Political Science Research and Methods. Margeret is loving #hyperparameters. Do you? #sciencerocks #machinelearning #socialdatascience https://doi.org/10.1017/psrm.2023.61 🧵 [1/5]

New working paper: "The Role of #Hyperparameters in #MachineLearning Models and How to Tune Them". We suggest handling HPs with the same loving care as parameter estimates; otherwise you could end up choosing the wrong model. https://tinyurl.com/mr2akrn3
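In that spirit, a minimal sketch of the baseline advice, tuning by cross-validated search rather than accepting library defaults. Synthetic data; the grid below is an assumption for illustration, not taken from the paper.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = {
    "n_estimators": [100, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5, 20],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5)
search.fit(X_tr, y_tr)

print("best hyperparameters:", search.best_params_)
print("held-out accuracy:", search.score(X_te, y_te))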

'Beyond the Golden Ratio for Variational Inequality Algorithms', by Ahmet Alacaoglu, Axel Böhm, Yura Malitsky.

http://jmlr.org/papers/v24/22-1488.html

#ascent #constrained #hyperparameters

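For context, the problem class in the title, in its standard form (not excerpted from the paper): find x^* in a closed convex set C such that

\langle F(x^*),\, x - x^* \rangle \;\ge\; 0 \qquad \text{for all } x \in C,

where F is a monotone operator. The golden ratio of the title is, as I understand it, the constant \varphi = (1 + \sqrt{5})/2 appearing in the step-size rule of Malitsky's golden ratio algorithms, which this paper relaxes.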