🚀 Our latest benchmark shows hyperparameter tuning with Optuna hits 0.9617 validation accuracy in just 64.59 seconds! Using Bayesian optimization and the Tree‑structured Parzen Estimator, we ran 100 trials to squeeze out every percent. Dive into the details of the experiment and see how you can apply these tricks to your own models. #HyperparameterTuning #Optuna #BayesianOptimization #ModelOptimization

🔗 https://aidailypost.com/news/hyperparameter-tuning-reaches-09617-accuracy-6459-seconds

On 17 December, Professor Jin Xu (East China Normal University) will give a talk on Bayesian Optimization via Exact Penalty. 🎓📊

🕒 14:15–15:00
📍 TU Dortmund, M/E 21

EPBO shows how complex equality and resource constraints can be handled more efficiently – relevant for researchers, students, and anyone using data-driven methods. 🔍✨

What interests you most about optimization?

#Statistics #BayesianOptimization #Research #UARuhr #TUDortmund #DataScience

Meta releases Ax 1.0 for automated machine learning optimization: Meta launches Ax 1.0, an open-source platform using Bayesian optimization to automate complex experimentation across AI development, infrastructure tuning, and hardware design. https://ppc.land/meta-releases-ax-1-0-for-automated-machine-learning-optimization/ #Meta #MachineLearning #ArtificialIntelligence #BayesianOptimization #OpenSource

Yoshua Bengio and I are hiring a postdoctoral researcher @Mila_Quebec! For this call, we are prioritizing candidates with experience in reinforcement learning, scientific discovery, or high-impact applications of ML. Apply here (https://docs.google.com/forms/d/e/1FAIpQLScqXiMClkgDBvrIZyxdtx60Pcbj3JzZeC-LFg3yiUOZlvgyLw/viewform?usp=sf_link)

We are looking for someone with skills in the following areas:
#Reinforcementlearning
#ML4science
#Bayesianoptimization
#Foundationalmodels for #decisionmaking
#RealworldML

https://fracturedplane.notion.site/Open-PostDoc-Position-in-Machine-Learning-fcbcc0e8759441b2b6be12f8fe30080c

Model-based Causal Bayesian Optimization

How should we intervene on an unknown structural causal model to maximize a downstream variable of interest? This optimization of the output of a system of interconnected variables, also known as causal Bayesian optimization (CBO), has important applications in medicine, ecology, and manufacturing. Standard Bayesian optimization algorithms fail to effectively leverage the underlying causal structure. Existing CBO approaches assume noiseless measurements and do not come with guarantees. We propose model-based causal Bayesian optimization (MCBO), an algorithm that learns a full system model instead of only modeling intervention-reward pairs. MCBO propagates epistemic uncertainty about the causal mechanisms through the graph and trades off exploration and exploitation via the optimism principle. We bound its cumulative regret, and obtain the first non-asymptotic bounds for CBO. Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form, so we show how the reparameterization trick can be used to apply gradient-based optimizers. Empirically we find that MCBO compares favorably with existing state-of-the-art approaches.
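The reparameterization trick mentioned in the abstract can be illustrated in isolation: writing a Gaussian sample as x = μ + σ·z with z ~ N(0, 1) moves the randomness out of the distribution's parameters, so Monte Carlo estimates of E[f(x)] become differentiable in μ and σ. A minimal NumPy sketch of the pathwise gradient estimator (this is a generic illustration, not the MCBO acquisition itself; f(x) = x² is chosen only because the exact gradient ∂/∂μ E[f] = 2μ is known):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x ** 2       # toy objective with known E[f] = mu^2 + sigma^2

def df(x):
    return 2.0 * x      # its derivative, used by the pathwise estimator

mu, sigma = 1.5, 0.7
z = rng.standard_normal(200_000)   # parameter-free noise
x = mu + sigma * z                 # reparameterized samples

# Pathwise (reparameterized) gradient estimate of d/dmu E[f(x)]:
# by the chain rule, d f(mu + sigma * z) / d mu = f'(mu + sigma * z).
grad_mu_estimate = df(x).mean()

print(grad_mu_estimate)            # close to the exact value 2 * mu = 3.0
```

In MCBO the same idea lets an autodiff framework push gradients through Monte Carlo samples of the acquisition function, enabling gradient-based optimizers even though the acquisition has no closed form.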

Meta-Learning Priors for Safe Bayesian Optimization

📜 https://arxiv.org/abs/2210.00762

❓ Task: Query-efficient Bayesian Optimization subject to safety constraints.

💡 Idea: Set prior kernel hyper-params based on stds and calibration frequencies observed on related data. Search by exploiting monotonicity to efficiently prune unsafe and safe but sub-optimal solutions.

📈 Result: >2x convergence speedup for tuning the controller of a high-speed wafer inspection robot.

#Robotics #BayesianOptimization

In robotics, optimizing controller parameters under safety constraints is an important challenge. Safe Bayesian optimization (BO) quantifies uncertainty in the objective and constraints to safely guide exploration in such settings. Hand-designing a suitable probabilistic model can be challenging, however. In the presence of unknown safety constraints, it is crucial to choose reliable model hyper-parameters to avoid safety violations. Here, we propose a data-driven approach to this problem by meta-learning priors for safe BO from offline data. We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity. As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner via empirical uncertainty metrics and a frontier search algorithm. On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches while maintaining safety.
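The core mechanic of safe BO referenced here can be sketched generically: use the GP posterior to form a pessimistic lower confidence bound on the safety constraint, restrict candidates to points certified safe, and pick the point with the highest optimistic bound on the objective within that set. A self-contained toy in plain NumPy (a single 1D function serves as both objective and safety constraint; the kernel, `beta`, and threshold are illustrative assumptions, and none of this is the paper's F-PACOH meta-learned prior):

```python
import numpy as np

def rbf(a, b, ls=0.3, var=1.0):
    """Squared-exponential kernel."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y
    var = np.clip(rbf(Xs, Xs).diagonal() - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mean, np.sqrt(var)

f = lambda x: np.sin(3 * x)       # unknown function; "safe" means f(x) >= -0.5
X = np.array([0.2, 0.5])          # initial observations, assumed safe
y = f(X)
grid = np.linspace(0.0, 2.0, 400)
beta, threshold = 2.0, -0.5

for _ in range(10):
    mean, std = gp_posterior(X, y, grid)
    safe = mean - beta * std >= threshold     # pessimistic safety certificate
    ucb = np.where(safe, mean + beta * std, -np.inf)
    x_next = grid[np.argmax(ucb)]             # optimistic pick in the safe set
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(X[np.argmax(y)], y.max())
```

The paper's contribution sits upstream of this loop: it meta-learns the GP prior (here hard-coded as an RBF kernel with a guessed lengthscale) from offline data so that the confidence bounds are actually well-calibrated, which is what makes the safety certificate trustworthy.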
