You can register for the seminar via HISinOne.
Background
Hyperparameter optimization is a powerful approach for achieving the best performance on many different problems. However, automated approaches to this problem tend to ignore the iterative nature of many algorithms. The dynamic algorithm configuration (DAC) framework generalizes over prior optimization approaches and can handle hyperparameters that need to be adjusted over multiple time steps. In this seminar, we will discuss applications (such as temporally extended epsilon-greedy exploration in RL) and domains (e.g., reinforcement learning, evolutionary algorithms, or deep learning) that can benefit from dynamic configuration methods. A large portion of the seminar will be dedicated to discussing papers that describe DAC methods employing reinforcement learning to learn hyperparameter optimization policies for various domains. The sketch below illustrates this framing.
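To make the framing concrete, here is a minimal sketch of DAC as a sequential decision process: a configuration policy observes the state of a running target algorithm and sets a hyperparameter (here, the exploration rate epsilon) at every time step. All names (`run_with_dac`, `target_algorithm_step`) and the hand-crafted linear-decay policy are illustrative assumptions, not a method from the seminar literature.

```python
import random

def run_with_dac(dac_policy, total_steps=1000):
    """Run a target algorithm while a DAC policy sets a hyperparameter each step.

    `dac_policy` maps state features of the running algorithm to a
    hyperparameter value; in actual DAC work this policy is typically
    learned with reinforcement learning rather than hand-crafted.
    """
    state = {"step": 0, "total_steps": total_steps, "recent_reward": 0.0}
    for t in range(total_steps):
        epsilon = dac_policy(state)              # dynamic hyperparameter choice
        reward = target_algorithm_step(epsilon)  # one iteration of the target algorithm
        state = {"step": t + 1, "total_steps": total_steps, "recent_reward": reward}

def target_algorithm_step(epsilon):
    """Stand-in for one step of, e.g., an epsilon-greedy RL agent (assumption)."""
    explored = random.random() < epsilon
    return 0.1 if explored else 1.0  # dummy reward signal

def linear_decay_policy(state, eps_start=1.0, eps_end=0.05):
    """Hand-crafted baseline policy: decay epsilon linearly over the run."""
    frac = state["step"] / state["total_steps"]
    return eps_start + frac * (eps_end - eps_start)

run_with_dac(linear_decay_policy)
```

A fixed decay schedule like this is exactly the kind of static heuristic that DAC methods aim to replace with a learned, state-dependent policy.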
Requirements
We require that you have taken lectures on
- Machine Learning, and/or
- Deep Learning
We strongly recommend that you have attended lectures on
- Automated Machine Learning
- Reinforcement Learning
Organization
Every week, all students read the relevant literature. Two students prepare presentations on the week's topics and present them in the session. After each presentation, there is time for questions and discussion, in which all participants are expected to take part. Each student also has to write a short paper on their assigned topic, due one week after their presentation.
Grading
- Presentation: 40% (20 min + 20 min Q&A)
- Paper: 40% (4 pages in AutoML Conf format, due one week after your presentation)
- Participation in Discussions: 20%
Schedule
Literature
Relevant literature can be found at https://www.automl.org/automated-algorithm-design/dac/literature-overview. This list contains many, though not all, of the papers we intend to cover in the seminar.