Neural Networks

Lectures

1st Lecture 16.02.2021
2nd Lecture 23.02.2021

Tuesday, 14:00 – 15:50    MS Teams

Seminars 2021

Syllabus

Course supervisor: Doc. Ing. Wanda Benešová, PhD
Course lecturers: Ing. Lukáš Hudec
Ing. Igor Jánoš
Ing. Štefan Grivalský
Supervising department:

Institute of Computer Engineering and Applied Informatics (UPAI FIIT)

Course objective: After completing the course, the student will understand the basic principles of connectionism (artificial neural networks), know the basic neural network models, and know how to use them to solve various problems (e.g. pattern recognition, classification, time series prediction, pattern memorization). Lectures are combined with computer modeling during exercises.
Keywords: deep learning, neural networks
Form of teaching: lecture, seminar, 3 weekly tasks, 3 projects
Course methods: lectures and seminars
Weekly: 2-hour lecture + 2-hour seminars
Time allowance (lecture/seminar): 2/3
Course completion: credit and examination
Weekly assignments: min 2/7 pts
Semestral projects: min 11/38 pts
Mid-term assessment: min 3/15 pts
Necessary minimum: 16 pts out of 60

Final assessment: final exam, min 5 pts out of 40
Mode of completion and credits: Exam (6 credits)
Type of study: usual
Taught for the form of: full-time, attendance method
Prerequisites for registration: none
Regular assessment: weekly assignments (7%)
semester project (38%)
midterm test (15%)
Final assessment: final test (40%)
Lecture topics:
  1. Introduction to neural networks (NN): history, inspiration from biology, basic concepts of connectionism, learning approaches, nonlinearity.
  2. Structures of neural networks: binary/continuous perceptron, MLP, activation functions, types of architectures.
  3. Optimization algorithms and learning: classification, regression, forward propagation and vectorization, the backpropagation-of-error algorithm, SGD. Visualization tools for monitoring NN training (see the minimal sketch after this topic list).
  4. Optimization algorithms (2): implementation.
  5. Optimization and regularization: recommended practices.
  6. Convolutional neural networks.
  7. Generative models and other modifications of CNN.
  8. Networks with memory, recurrent models, models with attention.
  9. Reinforcement learning.
  10. Prerequisites for the use of NN in practice: building intuition and NN mindset, best practices, troubleshooting.
  11. Visualization and Interpretability of decisions: visualization of weights and activations, interpretability, explainability of decisions.
  12. Other architectures and NN use-cases.
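The sketch below is a minimal illustration of the ideas named in topics 2 and 3: a vectorized forward pass through a one-hidden-layer MLP, backward propagation of the error, and an SGD update, written in NumPy. It is not part of the official course materials; the toy XOR data, layer sizes, learning rate, and variable names are illustrative assumptions.

# Minimal NumPy sketch: forward propagation, backpropagation, SGD step
# for a one-hidden-layer MLP (illustrative only; data and hyperparameters
# are assumptions, not course material).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR problem, inputs X (4 x 2), targets y (4 x 1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-8-1 network
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):
    # Forward propagation (vectorized over the whole batch)
    z1 = X @ W1 + b1
    a1 = np.tanh(z1)
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)                    # network output

    # Binary cross-entropy loss
    loss = -np.mean(y * np.log(a2) + (1 - y) * np.log(1 - a2))

    # Backward propagation of the error
    dz2 = (a2 - y) / len(X)             # dL/dz2 for sigmoid + cross-entropy
    dW2 = a1.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - a1 ** 2)  # tanh derivative
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # SGD parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", a2.round(3).ravel())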

TensorFlow, PyTorch, and NumPy are used in the exercises.
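
As a hedged illustration of how the same training loop looks when a framework's automatic differentiation replaces hand-written backpropagation, a minimal PyTorch sketch is given below; the model, loss, and hyperparameters are assumptions and not the actual exercise templates.

# Minimal PyTorch sketch: the same MLP trained with autograd and SGD
# (illustrative assumption; the real exercise templates may differ).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(2, 8),
    nn.Tanh(),
    nn.Linear(8, 1),
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

# Toy XOR data again
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for epoch in range(5000):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()   # backpropagation handled by autograd
    optimizer.step()  # SGD update

print(torch.sigmoid(model(X)).detach())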