Bayesian optimization

Course type: Seminar
Time: Thursday, 12:30-14:00 (first meeting, October 24th, 12:15-13:45)
Location: Building 101, Room 01 018
Organizers: Frank Hutter, Katharina Eggensperger, Matthias Feurer, Noor Awad, Arbër Zela
Web page: HisInOne

Seminar on Bayesian Optimization

The seminar language will be English (even if all participants are German speakers), to practice presentation skills in English.

First meeting:
  • October 24th, 12:15-13:45, SR 01 018 (Building 101).
Regular meetings:
  • Every Thursday, 12:30-14:00, SR 01 018 (Building 101).

Background

Bayesian optimization is a popular method for blackbox function optimization. Blackbox functions are functions about which no assumptions are made: neither their derivatives nor their smoothness are known. Function evaluations may be noisy and are typically assumed to be expensive. The only way to interact with a blackbox function is through its call signature, which allows one to query the function for different input values and observe the outcomes. These properties make Bayesian optimization an ideal method for hyperparameter optimization. In this seminar we will read papers on both the foundations of Bayesian optimization and recent research aiming to apply it to state-of-the-art deep learning models.
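To make the loop concrete, here is a minimal sketch of Bayesian optimization in one dimension, assuming a Gaussian-process surrogate with a fixed RBF kernel and the expected-improvement acquisition function; the helper names and the toy objective are illustrative, not from any of the papers on the schedule.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # GP posterior mean and standard deviation at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = 1.0 - np.sum(K_s * (K_inv @ K_s), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: expected improvement over the best value seen.
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def blackbox(x):
    # Toy "expensive" objective; in practice this could be a model's
    # validation loss as a function of a hyperparameter.
    return np.sin(3 * x) + x ** 2

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=3)   # a few random initial evaluations
y_train = blackbox(x_train)
candidates = np.linspace(-1, 1, 200)   # discretized search space

for _ in range(10):
    # Fit the surrogate, pick the candidate maximizing EI, evaluate it.
    mu, sigma = gp_posterior(x_train, y_train, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, blackbox(x_next))

print(x_train[np.argmin(y_train)])  # best input found so far
```

Note the key property exploited here: each iteration uses only queried input/output pairs, never derivatives, which is exactly the blackbox setting described above.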

Schedule

14.11
  1. Practical Bayesian Optimization of Machine Learning Algorithms – Spearmint
  2. A Tutorial on Bayesian Optimization – with focus on the knowledge gradient and entropy search

21.11
  1. Scalable Bayesian Optimization Using Deep Neural Networks
  2. Automating Bayesian optimization with Bayesian optimization

28.11
  1. Portfolio Allocation for Bayesian Optimization
  2. Meta-Learning Acquisition Functions for Transfer Learning in Bayesian Optimization

05.12
  1. Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets
  2. BOHB: Robust and Efficient Hyperparameter Optimization at Scale

09.01
  1. Neural Architecture Search with Bayesian Optimization and Optimal Transport
  2. Scalable Hyperparameter Transfer Learning

16.01
  1. A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations
  2. High-Dimensional Bayesian Optimization via Additive Models with Overlapping Groups

23.01
  1. Constrained Bayesian Optimization with Noisy Experiments
  2. Hyperparameter Importance Across Datasets

13.02
  1. Scalable Meta-Learning for Bayesian Optimization using Ranking-Weighted Gaussian Process Ensembles
  2. Scalable Global Optimization via Local Bayesian Optimization

Requirements

  • Machine Learning
  • Statistical Pattern Recognition
  • Automated Machine Learning (recommended)

Material

Further information

For questions, please send an email to the organizer responsible for the seminar: Arbër Zela