Winter Semester 2024

Please note: This page is currently under construction and all info listed is subject to change.

Foundations of Deep Learning

Course type: Lecture + Exercise
Time: Lecture: Tuesday, 10:15 - 11:45; Optional exercises: Friday, 10:15 - 11:45
Location: The course will be in-person.
- Weekly flipped classroom sessions will be held on Tuesday in HS 00 006 (G.-Köhler-Allee 082)
- Optional exercise sessions will take place on Friday in HS 00 006 (G.-Köhler-Allee 082)
Organizers: Steven Adriaensen, Abhinav Valada, Mahmoud Safari, Rhea Sukthanker, Johannes Hog
Web page: ILIAS - under construction (please make sure to also register for all elements of this course module in HISinOne)

Foundations of Deep Learning

Deep learning is one of the fastest-growing and most exciting fields. This course will provide you with a clear understanding of the fundamentals of deep learning, from the foundations of neural network architectures to learning techniques, and everything in between.

Course Overview

The course will be taught in English and will follow a flipped classroom approach.

Every week there will be:

  • a video lecture
  • an exercise sheet
  • a flipped classroom session (Tuesdays, 10:15 - 11:45)
  • an attendance-optional exercise session (Fridays, 10:15 - 11:45)

At the end, there will be a written exam (likely an ILIAS test).

Exercises must be completed in groups and submitted 2 weeks (+ 1 day) after their release.
Your submissions will be graded, and you will receive weekly feedback.
Your final grade will be based solely on the written examination; however, a passing grade on the exercises is a prerequisite for passing the course.

Course Material: All material will be made available in ILIAS, and course participation will not require in-person presence. That being said, we offer ample opportunity for direct interaction with the professors during live Q&A sessions and with our tutors during the weekly attendance-optional in-class exercise sessions.

Exam: The exam will likely be a test you complete on ILIAS. In-person presence will be required.

Course Schedule

The following are the dates for the flipped classroom sessions (tentative, subject to change):

15.10.24 - Kickoff: Info on Course Organisation
22.10.24 - Week 1: Intro to Deep Learning
29.10.24 - Week 2: From Logistic Regression to MLPs
05.11.24 - Week 3: Backpropagation
12.11.24 - Week 4: Optimization
19.11.24 - Week 5: Regularization
26.11.24 - Week 6: Convolutional Neural Networks (CNNs)
03.12.24 - Week 7: Recurrent Neural Networks (RNNs)
10.12.24 - Week 8: Attention & Transformers
17.12.24 - Week 9: Practical Methodology
07.01.25 - Week 10: Auto-Encoders, Variational Auto-Encoders, GANs
14.01.25 - Week 11: Uncertainty in Deep Learning
21.01.25 - Week 12: AutoML for DL
28.01.25 - Round-up / Exam Q&A

In the first session (on 15.10.24) you will receive additional information about the course and have the opportunity to ask general questions. While there is no need to prepare for this first session, we encourage you to already think about forming teams.
The last flipped classroom session will be held on 28.01.25.

Questions?

If you have a question, please post it in the ILIAS forum (so everyone can benefit from the answer).

Seminar: Automated Reinforcement Learning


Course type: Block Seminar
Time: Kickoff Session: 17.10.24 14:00 - 16:00
Presentation Sessions: TBD (likely first week of February)
Location: Kickoff Session: R 00 017 (G.-Köhler-Allee 101)
Presentation Sessions: TBD
Organizers: André Biedenkapp, Noor Awad, Raghu Rajan, M Asif Hasan, Baohe Zhang
Web page: HISinOne, Local Page

Seminar: Large Language Models, Deep Learning, and Foundation Models for Tabular Data

The field of tabular data has recently been exploding with advances through large language models (LLMs), deep learning algorithms, and foundation models. In this seminar, we want to dive deep into these very recent advances to understand them.

Course type: Seminar
Time: Five slots, to be determined with all participants. Kick-off is likely on the 23rd of October, from 10 to 11 am.
Location: In-person; meeting room in our ML Lab
Organizers: Lennart Purucker
Registration: Via HISinOne (maximum of six students; registration opens on the 14th of October)
Language: English

Prerequisites

We require that you have taken lectures on or are familiar with the following:

  • Machine Learning
  • Deep Learning
  • Automated Machine Learning

Organization

After the kick-off meeting, everyone is assigned a paper about recent advances in deep learning (one or multiple papers, depending on the content). Then, everyone is expected to understand and digest their assigned papers and to prepare two presentations. The first presentation is given during the midterms (two separate slots), and the second during the endterms (two separate slots).

  • The first presentation will focus on the relationship between the papers, any relevant related work, any background to understand the paper, and the greater context of the work.
  • The second presentation will focus on the paper's contributions, describing them in detail.

In addition to the second presentation, students are expected to contribute an "add-on" related to the paper for the final report. This includes but is not limited to reproducing some experiments, implementing a part of the paper, providing a greater literature survey, fact-checking citations, experiments, or methodology, building a WebUI or demo for the paper, etc. Students can (e-)meet with Lennart Purucker for feedback and any questions (e.g., to discuss a potential "add-on").


Grading

  • Presentations: 40% (two times 20min + 20min Q&A)
  • Report: 40% (4 pages in AutoML Conf format, due one week after the last endterm)
  • Add-on: 20%

Short(er) List of Potential Papers / Directions:

LLMs

Deep Learning

Foundation Models / In-Context Learning

Multi-Modal

Seminar: Pruning and Efficiency in LLMs

Large language models (LLMs) exhibit remarkable reasoning abilities, allowing them to generalize across a wide range of downstream tasks, such as commonsense reasoning or instruction following. However, as LLMs scale, inference costs become increasingly prohibitive, accumulating significantly over their life cycle. In this seminar we will dive into methods like quantization, pruning and knowledge distillation to optimize LLM inference.

Course type: Seminar
Time: Five slots, to be determined with all participants. Kick-off is likely on the 24th of October, from 10 to 11 am.
Location: In-person; meeting room in the ML Lab
Organizers: Rhea Sukthanker
Registration: Via HISinOne (maximum of six students; registration opens on the 14th of October)
Language: English

Prerequisites

We require that you have taken lectures on or are familiar with the following:

  • Machine Learning
  • Deep Learning
  • Automated Machine Learning

Organization

After the kick-off meeting, everyone is assigned a paper (one or two, depending on the content). Then, everyone is expected to understand the paper(s) assigned to them and to prepare two presentations.

  • The first presentation will focus on establishing the background and motivation for the work, along with a concise overview of the approach proposed in the paper.
  • The second presentation will focus on the details of the approach, the results and takeaways from the paper, and an "add-on" (described below).

Students will contribute an "add-on" related to the paper for the final report. This includes but is not limited to reproducing some experiments, profiling inference latency of the LLMs, implementing a part of the paper or providing a colab demo on applying the method in the paper to a different LLM. Students can (e-)meet with Rhea Sukthanker for feedback and any questions (e.g., to discuss a potential "add-on").


Grading

  • Presentations: 40% (two times 20min + 20min Q&A)
  • Report: 40% (4 pages in AutoML Conf format, due one week after the last endterm)
  • Add-on: 20%

List of Potential Papers