
Please note: We do not offer any summer internships. The theses and projects listed here are only available to students enrolled at the University of Freiburg. If you are not enrolled and are interested in working with our group, we encourage you to apply to our MSc programme. Information on this programme and how to apply can be found here: http://www.tf.uni-freiburg.de/studies/degree_programmes/master/msc_informatik.html

Getting in Touch about Projects and Theses

Since machine learning is currently one of the hottest topics around, our small group is flooded with requests, and we may not be able to offer projects to every interested student. Generally, it helps in finding a matching project if you have taken and excelled at our courses. All projects in our research group should be carried out in teams of at least 2 students (max. 4). Theses can also be done in synergy with theses/projects by others, but each student has to write their thesis by themselves, appropriately acknowledging work done by others. We strongly recommend doing an MSc project with our group before inquiring about an MSc thesis.

In order to make the selection process of students and topics as effective as possible, we ask potential candidates for projects/theses to send us an email (ideally, already as a team) to aad-staff@cs.uni-freiburg.de (or only to the person(s) in charge of a specific project you're interested in) with the following information (per team member):

1) Are you a BSc or MSc student? Which term? If BSc, are you planning on staying in Freiburg for an MSc?
2) Which ML-related courses have you taken?
3) Can you please attach your transcript of records?
4) Which projects have you done so far (in Freiburg and elsewhere)?
5) Which topics interest you most?

Since different projects require different skill sets, please also rate your skills in the following categories on a scale from ++ (very good) to -- (no knowledge/skill):
6) Creativity / ideas for developing new algorithms
7) Getting someone else's large code base to run
8) Running comprehensive experimental studies / keeping track of results
9) Self-motivation to push through even if things don't work for a while
10) Coding skills
         a. Python
         b. TensorFlow
         c. Keras
         d. PyTorch
         e. C/C++
11) Ability to read an RL paper, implement it, and get it to work
12) Ability to read a DL paper, implement it, and get it to work
13) Formal background: linear algebra
14) Formal background: proofs

Possible Topics

Here, we collect possible topics for BSc/MSc projects or theses. Please note that the list of projects below is typically incomplete, since we always have new ideas as time goes by and we do not always update the website right away. The most up-to-date list of projects can be found in the following (partially overlapping) presentations: Open student projects (Deep Learning course WS 2017/18) and Open student projects (Reinforcement learning course WS 2017/18). Below, you can find additional details for some projects.

Joint Architecture and Hyperparameter Search for Deep Neural Networks

Deep learning has celebrated many successes, establishing new state-of-the-art performance in many important domains (e.g., image recognition and speech recognition). However, applying it in practice can be challenging because it requires many crucial design decisions, such as choosing a neural network architecture and its hyperparameters. Recent advances in Automated Machine Learning (AutoML) have led to automated tools that can compete with machine learning experts on supervised learning tasks. However, current AutoML tools do not yet support modern neural networks effectively.

Implementation of a new, state-of-the-art Auto-Net

Finding a well-performing architecture for a deep neural network is not an easy task. To this day, that task is still mainly solved by a combination of expert knowledge and many experiments. However, as shown in recent years, various automated approaches can solve this task as well as, and sometimes even better than, human experts. The goal of this project is to build a new, state-of-the-art, easy-to-use Auto-Net tool based on several approaches and tools that were developed in our group.

Requirements

  • Some hands-on experience in deep learning
  • [preferred] Practical experience in applying deep networks to data sets
  • Basic knowledge of hyperparameter optimization/automated algorithm design
  • Strong Python programming skills

Literature

Contact: Marius Lindauer and Frank Hutter

Hyperparameter Importance for Neural Architecture Search

Hyperparameter tuning and neural architecture search are hot topics in the ML community. However, it is still unknown which hyperparameter or architecture decisions are really crucial for achieving good performance. For example, it is well known that a good learning rate or a good learning rate schedule is very important. But what about the number of layers, the number of neurons, the type of activation function, and so on? Which of these are important? The goal of this project is to answer this question with a thorough empirical study, using tools for parameter importance analysis.
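
To make this concrete, the following is a minimal sketch of the kind of analysis involved: it fits a random forest to synthetic (configuration, validation error) pairs and reads off impurity-based feature importances as a crude stand-in for dedicated importance tools such as fANOVA. All hyperparameter names and data below are made up.

    # Minimal sketch: estimate hyperparameter importance from observed
    # (configuration, validation error) pairs, using a random forest's
    # impurity-based feature importances as a simple stand-in for
    # dedicated tools such as fANOVA. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.RandomState(0)
    names = ["log_lr", "n_layers", "n_neurons"]  # hypothetical hyperparameters
    X = np.column_stack([rng.uniform(-5, -1, 200),
                         rng.randint(1, 8, 200),
                         rng.randint(16, 512, 200)])
    # Synthetic "validation error", dominated by the learning rate
    y = (X[:, 0] + 3) ** 2 + 0.01 * X[:, 1] + rng.normal(0, 0.1, 200)

    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    for name, importance in zip(names, forest.feature_importances_):
        print(f"{name}: {importance:.2f}")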

Requirements

  • Some hands-on experience in deep learning
  • [preferred] Practical experience in applying deep networks to data sets
  • Basic knowledge of hyperparameter optimization/automated algorithm design
  • Strong Python programming skills

Literature

Contact: Marius Lindauer and Frank Hutter

Algorithm Configuration and Hyperparameter Optimization

The performance of many algorithms depends on their parameter settings, which have to be adjusted to the set of input instances at hand. For example, the parameters of AI planners have to be changed based on the planning task at hand. Unfortunately, manual parameter tuning is a tedious and error-prone task. Therefore, automated algorithm configuration tools (e.g., our state-of-the-art tool SMAC) can help users by automatically optimizing the parameters of their algorithms.

Multi-Objective Algorithm Configuration

So far, our algorithm configuration tool SMAC is only able to optimize a single objective. But what if a user is interested in multiple objectives (e.g., accuracy, runtime and memory consumption)? A common approach for multi-objective problems is to return not a single solution (here: a parameter configuration) but a set of non-dominated solutions (a so-called Pareto front). The goal of this project is to extend our tool SMAC such that it can also return a Pareto front.
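
To make the notion concrete, here is a minimal sketch of extracting a Pareto front from a set of already-evaluated configurations, assuming all objectives are to be minimized; the data is hypothetical.

    # Minimal sketch: return the non-dominated subset (Pareto front) of a
    # list of objective vectors, assuming all objectives are minimized.
    def pareto_front(points):
        front = []
        for i, p in enumerate(points):
            dominated = any(
                all(q[k] <= p[k] for k in range(len(p))) and q != p
                for j, q in enumerate(points) if j != i
            )
            if not dominated:
                front.append(p)
        return front

    # Hypothetical objectives per configuration: (error rate, runtime in s)
    results = [(0.10, 50.0), (0.08, 120.0), (0.12, 40.0), (0.11, 130.0)]
    print(pareto_front(results))
    # -> [(0.10, 50.0), (0.08, 120.0), (0.12, 40.0)]; the last point is dominated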

Requirements

  • Basic knowledge of hyperparameter optimization/automated algorithm design
  • Strong Python programming skills

Literature

Contact: Marius Lindauer and Frank Hutter

Predicting and Optimizing Algorithm Runtime using Deep Learning

Our algorithm configuration (AC) tool SMAC uses random forests to predict the running time of an algorithm on a given instance. Recent advances indicate that neural networks (NNs) could be a worthwhile alternative to random forests: they can handle the large, high-dimensional data sets that arise in AC. This project starts with evaluating the performance of NNs on simple AC data and then, based on these findings, extends stepwise towards using NNs within SMAC.
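
As a rough sketch of the underlying prediction task (not SMAC's actual model code), the following compares a random forest and a small neural network on synthetic runtime data built from algorithm parameters and instance features.

    # Minimal sketch: predict (log) runtime from algorithm parameters plus
    # instance features, comparing a random forest to a small neural
    # network. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    X_params = rng.uniform(0, 1, (500, 3))   # 3 algorithm parameters
    X_feats = rng.uniform(0, 1, (500, 5))    # 5 instance features
    X = np.hstack([X_params, X_feats])
    # Synthetic log-runtime with a parameter/feature interaction
    y = X_params[:, 0] * X_feats[:, 0] + 0.5 * X_feats[:, 1] \
        + rng.normal(0, 0.05, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                      random_state=0).fit(X_tr, y_tr)
    print("RF R^2:", rf.score(X_te, y_te), "NN R^2:", nn.score(X_te, y_te))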

Requirements

  • Basics in algorithm configuration
  • Basic understanding of neural networks
  • Strong Python programming skills

Literature

Contact: Katharina Eggensperger

Constructing Better Benchmark Problems

Progress in hyperparameter optimization and algorithm configuration relies on the availability of relevant and realistic benchmark problems. Unfortunately, these are often expensive to evaluate; e.g., when optimizing the hyperparameters of a neural network, one has to train and evaluate multiple networks. With cheap-to-evaluate but meaningful problems, new approaches could be evaluated in early stages in a fraction of the time it would take to run the real benchmark problem. This project is about constructing and evaluating machine learning models for use as surrogate benchmark problems.
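
A minimal sketch of the surrogate idea, with synthetic data standing in for expensive real benchmark evaluations:

    # Minimal sketch: fit a regression model once on (configuration,
    # observed performance) pairs from the expensive benchmark, then let
    # optimizers query the cheap model instead of the real benchmark.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.RandomState(0)
    configs = rng.uniform(0, 1, (300, 4))                  # 4 hyperparameters
    perf = np.sin(3 * configs[:, 0]) + configs[:, 1] ** 2  # stand-in for real runs

    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(configs, perf)

    def cheap_benchmark(config):
        # Millisecond-scale stand-in for a run that would take hours
        return surrogate.predict(np.asarray(config).reshape(1, -1))[0]

    print(cheap_benchmark([0.2, 0.5, 0.1, 0.9]))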

Requirements

  • Basics in algorithm configuration and/or hyperparameter tuning
  • Strong Python programming skills

Literature

Contact: Katharina Eggensperger

Efficient Meta-Learning in Bayesian optimization with Deep Neural Networks

Meta-learning in Bayesian optimization is an active field of research which speeds up Bayesian optimization by making use of previously solved optimization tasks. Recently, Perrone et al. (2017) proposed to use a deep neural network with Bayesian linear regression output layers to jointly model several tasks in Bayesian optimization and thereby obtain large speed-ups. While the proposed method solves the issue of scalability, it only captures the task information in the task-dependent output layer. The goal of this project is to re-implement the proposed model, extend it with a special input layer which can be cheaply adapted to the task at hand during the optimization process, and compare it against several state-of-the-art methods.
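
As a starting point, here is a minimal sketch of the Bayesian linear regression part on a fixed feature map (a sinusoidal basis standing in for the trained shared network of Perrone et al.); the precisions alpha and beta are assumed fixed rather than learned.

    # Minimal sketch: Bayesian linear regression on features produced by a
    # shared feature map. Here the map is a fixed sinusoidal basis; in the
    # actual model it would be a neural network trained across tasks.
    import numpy as np

    rng = np.random.RandomState(0)

    def features(x):                      # stand-in for the shared network
        return np.sin(np.outer(x, np.linspace(1, 5, 5)))

    x_train = rng.uniform(0, 1, 20)
    y_train = np.sin(4 * x_train) + rng.normal(0, 0.1, 20)

    alpha, beta = 1.0, 100.0              # prior precision, noise precision
    Phi = features(x_train)
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m = beta * S @ Phi.T @ y_train        # posterior mean of the weights

    Phi_test = features(np.array([0.3]))
    mean = Phi_test @ m
    var = 1.0 / beta + np.sum((Phi_test @ S) * Phi_test, axis=1)
    print("predictive mean/variance:", mean[0], var[0])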

Requirements

  • Knowledge of deep learning and Bayesian hyperparameter optimization
  • Strong Python programming skills

Literature

Contact: Matthias Feurer

Deep Reinforcement Learning for learning programs

Deep reinforcement learning is arguably the hottest topic in machine learning, and this set of projects will explore its use for learning and improving programs.

Improving Deep RL methods for architecture search

This is a core methods project.

Requirements

  • Solid knowledge of reinforcement learning
  • Good knowledge of deep learning and recurrent neural networks
  • Strong Python programming skills

Literature

Contact: Aaron Klein, Stefan Falkner, Frank Hutter and Joschka Bödecker

BLBT: Deep Learning for EEG Data

Decoding EEG data is a challenging task and has recently been approached with deep learning. All of the following projects aim to improve an existing decoding pipeline by applying recent state-of-the-art techniques.

Requirements

  • Understanding of neural networks
  • Strong Python programming skills
  • Understanding of EEG (optional, but beneficial)

Literature

Recurrent neural networks for decoding motor control data

Currently, our pipeline uses feedforward networks, which by construction do not capture information over time. As EEG data can be considered a time series, a promising extension is to use recurrent neural networks instead.
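
A minimal sketch of what such a recurrent decoder could look like in PyTorch; the trial shape and the number of classes below are hypothetical.

    # Minimal sketch: an LSTM-based decoder for EEG trials given as
    # (batch, time, channels) tensors; shapes and class count are made up.
    import torch
    import torch.nn as nn

    class RecurrentDecoder(nn.Module):
        def __init__(self, n_channels=64, hidden=128, n_classes=4):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):              # x: (batch, time, channels)
            _, (h_n, _) = self.lstm(x)     # final hidden state per trial
            return self.head(h_n[-1])      # class logits

    model = RecurrentDecoder()
    trials = torch.randn(8, 500, 64)       # 8 trials, 500 steps, 64 channels
    print(model(trials).shape)             # torch.Size([8, 4])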

Architecture Search

The current pipeline has been handcrafted and strongly builds on domain expert knowledge. This project aims at exploiting hyperparameter search to automatically design well-performing decoders, thereby supporting and accelerating research.
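
As a simple illustration of the search loop (in practice one would use model-based optimizers such as SMAC rather than pure random search), consider this sketch over a hypothetical decoder design space.

    # Minimal sketch: random search over decoder design choices; the space
    # and the evaluation function below are hypothetical stand-ins.
    import random

    random.seed(0)
    space = {
        "n_layers": [2, 3, 4, 5],
        "n_filters": [16, 32, 64],
        "activation": ["relu", "elu"],
        "dropout": [0.0, 0.25, 0.5],
    }

    def sample_config():
        return {k: random.choice(v) for k, v in space.items()}

    def evaluate(config):
        # Stand-in: would train a decoder and return validation accuracy
        return random.random()

    best = max((sample_config() for _ in range(20)), key=evaluate)
    print(best)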

Data Augmentation for EEG data

EEG data for training a model is usually scarce, as one recording contains fewer than 1000 samples. One way to overcome this limitation is data augmentation: applying transformations to the training data and making the resulting model invariant to these transformations. Transformations such as rotation and rescaling are known to improve generalization performance for image classification, but for EEG data it is not clear which transformations will work. In this project, you will create and evaluate possible transformations for EEG data.
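
A minimal sketch of candidate transformations, assuming trials are stored as (channels, time) arrays; whether these actually preserve the class label is exactly what this project would evaluate.

    # Minimal sketch: three candidate augmentations for EEG trials stored
    # as (channels, time) arrays; parameter ranges are made up.
    import numpy as np

    rng = np.random.RandomState(0)

    def add_noise(trial, sigma=0.1):
        return trial + rng.normal(0, sigma, trial.shape)

    def rescale(trial, low=0.9, high=1.1):
        return trial * rng.uniform(low, high)

    def time_shift(trial, max_shift=20):
        return np.roll(trial, rng.randint(-max_shift, max_shift + 1), axis=-1)

    trial = rng.randn(64, 500)             # 64 channels, 500 time steps
    augmented = [f(trial) for f in (add_noise, rescale, time_shift)]
    print([a.shape for a in augmented])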

Contact: Robin Schirrmeister and Katharina Eggensperger