1st Lecture 15.02.2022 – Introduction, history, and base structures (SLP, MLP) – not recorded
2nd Lecture 22.02.2022 – Feed Forward, Backprop, Introduction to training (Igor Jánoš)
3rd Lecture 01.03.2022 – Optimization of NN, regularization techniques (Igor Jánoš)
4th Lecture 08.03.2022 – Optimizers, Optimization, Regularization
5th Lecture 15.03.2022 – Convolutional Neural Networks – SOTA
6th Lecture 22.03.2022 – CNN Modifications (AE, Siamese, …)
7th Lecture 29.03.2022 – Midterm + CNN Modifications (finishing)
8th Lecture 05.04.2022 – Generative models (VAE + GANs) (Maroš Kollár)
9th Lecture 12.04.2022 – Sequence Data, Recurrent models
10th Lecture 19.04.2022 – Reinforcement Learning (Igor Jánoš)
11th Lecture 26.04.2022 – Interpretability and Explainability
12th Lecture 02.05.2022 – DL Good practices, DL in industrial projects (Lukáš Hudec + guest Marek Šebo)
1st Lecture 16.02.2021 – Introduction and history of DL+NN
2nd Lecture 23.02.2021 – Base structures, SLP, MLP
3rd Lecture 02.03.2021 – Feed Forward, introduction to training (we apologize for the low quality of the recording)
4th Lecture 09.03.2021 – Optimization of NN, regularization techniques
5th Lecture 16.03.2021 – Optimizers, Optimization, Regularization
6th Lecture 23.03.2021 – Finishing Regularization, Convolutional Neural Networks
7th Lecture 30.03.2021 – SOTA CNNs, CNN architecture modifications
8th Lecture 06.04.2021 – Sequences, RNN, LSTM, Transformer networks
9th Lecture 13.04.2021 – (Midterm) Generative models – VAE, CNN
10th Lecture 20.04.2021 – Generative models – GANs, Reinforcement Learning
11th Lecture 27.04.2021 – Visualization of NN, Interpretability and Explainability
12th Lecture 04.05.2021 – Best practices for DL projects, DL debugging
The semester is over; all lectures are publicly available here. All that remains is the final exam. Good luck on 4.6.2021.
Course supervisor: Doc. Ing. Wanda Benešová, PhD
Course lecturers: Ing. Lukáš Hudec, Ing. Igor Jánoš, Ing. Štefan Grivalský
Course objective: After completing the course, the student will understand the basic principles of connectionism (artificial neural networks), know the basic neural network models, and know how to use them to solve various problems (e.g. pattern recognition, classification, time-series prediction, pattern memorization). Lectures are combined with computer modeling during exercises.
Keywords: deep learning, neural networks
Form of teaching: lecture, seminar, 3 weekly tasks, 3 projects
Course methods: lectures and seminars; each week, a 2-hour lecture and a 2-hour seminar
Time allowance (lecture/seminar): 2/3
Course completion: credit and examination
- Weekly assignments: min 2 of 7 pts
- Semestral projects: min 11 of 38 pts
- Mid-term assessment: min 3 of 15 pts
- Necessary minimum: 16 pts out of 60
- Final assessment: final exam, min 5 pts out of 40
Mode of completion and credits: Exam (6 credits)
Type of study: usual
Taught for the form of: full-time, attendance method
Prerequisites for registration: none
Regular assessment: weekly assignments (7%), semester project (38%), midterm test (15%)
Final assessment: final test (40%)
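The assessment rules above combine per-component minima with an overall semester minimum of 16 of 60 points and a final-exam minimum of 5 of 40 points. A minimal Python sketch of that logic (all function and variable names are ours, not part of the course materials):

```python
# Illustrative check of the stated grading thresholds; names are hypothetical.
SEMESTER_MINIMA = {
    "weekly_assignments": (2, 7),    # (required minimum, maximum) points
    "semestral_projects": (11, 38),
    "midterm": (3, 15),
}
SEMESTER_TOTAL_MIN = 16  # out of 60
FINAL_EXAM_MIN = 5       # out of 40

def admitted_to_exam(points: dict) -> bool:
    """True if every per-component minimum and the 16/60 total are met."""
    for name, (minimum, maximum) in SEMESTER_MINIMA.items():
        if not (minimum <= points.get(name, 0) <= maximum):
            return False
    return sum(points.values()) >= SEMESTER_TOTAL_MIN

def passes_course(points: dict, final_exam: int) -> bool:
    """Semester minima plus the final-exam minimum (assumed conjunction)."""
    return admitted_to_exam(points) and final_exam >= FINAL_EXAM_MIN

print(passes_course(
    {"weekly_assignments": 5, "semestral_projects": 20, "midterm": 8}, 25))
# → True: every minimum is met and 5 + 20 + 8 = 33 ≥ 16
```

The sketch assumes the per-component minima and the overall minimum must all hold simultaneously, which is the natural reading of the table.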
Lectures
- Introduction to neural networks (NN): history, inspiration from biology, basic concepts of connectionism, learning approaches, nonlinearity.
- Structures of neural networks: binary/continuous perceptron, MLP, activation functions, types of architectures.
- Optimization algorithms and learning: classification, regression, forward propagation and vectorization, the backward error propagation algorithm, SGD. Visualization tools for monitoring NN training.
- Optimization algorithms (2): implementation
- Optimization and regularization: recommended practices.
- Convolutional neural networks.
- Generative models and other modifications of CNN.
- Networks with memory, recurrent models, models with attention.
- Reinforcement learning.
- Prerequisites for the use of NN in practice: building intuition and NN mindset, best practices, troubleshooting.
- Visualization and Interpretability of decisions: visualization of weights and activations, interpretability, explainability of decisions.
- Other architectures and NN use-cases.
Exercises use TensorFlow, PyTorch, and NumPy.
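As a small taste of the lecture topics above (forward propagation, vectorized backward error propagation, SGD) in one of the listed tools, here is a NumPy sketch of a two-layer MLP trained on toy XOR data. The architecture, learning rate, and data are illustrative choices of ours, not the course's reference implementation:

```python
import numpy as np

# Toy XOR dataset; a single perceptron cannot separate it, an MLP can.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5  # SGD step size (illustrative)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward propagation, vectorized over the whole batch.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward error propagation for mean binary cross-entropy loss:
    # (out - y) is the gradient w.r.t. the output pre-activation.
    d_out = (out - y) / len(X)
    dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T @ d_h;    db1 = d_h.sum(axis=0)

    # SGD parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(pred.ravel())  # expected to approach [0, 1, 1, 0]
```

The same model can be written in a few lines of PyTorch or TensorFlow with automatic differentiation; the point of the manual version is to make the gradient flow of backpropagation explicit.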