generated by TTT-TomasTeachingTools on 2023-07-25, 15:39:24
Formal and informal analysis of the memory and time complexity of all data structures and algorithms taught is an integral part of the course; it is not explicitly listed under particular topics. 1. Asymptotic complexity of algorithms. Graphs, their properties and memory representation. 2. Minimum spanning tree. Union-Find problem. 3. Euler paths. Directed graphs: connectivity, acyclic graphs. 4. Heaps. Fibonacci heap. Heap performance comparison. 5. Dynamic data structures. Garbage collector. 6. Generating, enumeration and isomorphism of data structures and combinatorial objects. Permutations, combinations, variations, trees. 7. Generating other combinatorial structures: k-element subsets, Gray code, non-isomorphic graphs. 8. Search in sequences - linear and quadratic interpolation. Median search. 9. Finite automata, implementation, automaton reduction. 10. Regular expressions and text search using regular expressions. 11. Approximate text search using finite automata, dictionary automata. 12. Search in higher dimensions, K-D trees, Quadtree. 13. Search trees: B and B+; 2-3-4 and R-B trees. 14. Search trees: Trie, suffix tree, splay tree.
labs/seminars:
Exercises and related homeworks are devoted mostly to the implementation of lecture topics. Consequently, the theme of each exercise corresponds to that of the respective lecture.
literature:
R. Sedgewick: Algoritmy v C, SoftPress, 2003. T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001. B. Melichar: Jazyky a překlady, ČVUT, Praha, 1996. J. E. Hopcroft, R. Motwani, J. D. Ullman: Introduction to Automata Theory, Languages, and Computation, 2nd ed., Addison-Wesley, 2001.
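Topic 2 above pairs minimum spanning trees with the Union-Find problem, since a disjoint-set structure is what makes Kruskal's MST algorithm efficient. A minimal illustrative sketch (not part of the official course materials) of a disjoint-set forest with union by rank and path compression:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the sets of a and b; return False if already joined."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach the shallower tree
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

In Kruskal's algorithm, `union` returning False is exactly the signal to discard an edge that would close a cycle.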
The course will cover the following topics: - Empirical risk minimization, consistency, bounds - Maximum likelihood estimators and their properties - Unsupervised learning, EM algorithm, mixture models - Bayesian learning - Deep (convolutional) networks - Supervised learning for deep networks - Hidden Markov models - Structured output SVMs - Ensemble learning, random forests
labs/seminars:
Labs will be dedicated to practical implementations of selected methods discussed in the course, as well as seminar classes with task-oriented assignments.
literature:
1. M. Mohri, A. Rostamizadeh and A. Talwalkar, Foundations of Machine Learning, MIT Press, 2012 2. K.P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, 2012 3. T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, Springer, 2010 4. I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016
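As a small illustration of the maximum-likelihood topic listed above: for a one-dimensional Gaussian the ML estimates have a closed form, the sample mean and the biased variance with divisor n. This is a generic sketch, not taken from the course materials:

```python
def gaussian_mle(samples):
    """Closed-form maximum-likelihood estimates for a 1-D Gaussian.

    Returns (mu, var); note the variance uses divisor n (the biased
    MLE), not n - 1 (the unbiased sample variance).
    """
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var
```

The bias of the variance estimator (it underestimates by a factor (n-1)/n) is a standard example of an ML estimator that is consistent but not unbiased.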
1. Introduction to multi-agent systems 2. Reactive Agents 3. Belief-Desire-Intention (BDI) architecture 4. Introduction to Game Theory 5. Solving Normal-Form Games 6. Games in Extensive Form 7. Solving Extensive-Form Games 8. Cooperative Game Theory 9. Distributed constraint reasoning 1 (DCSP) 10. Distributed constraint reasoning 2 (DCOP) 11. Social Choice, Voting 12. Resource allocation and Auctions 13. Mechanism Design 14. Wrap-up
labs/seminars:
1. Agent architectures 2. Belief-Desire-Intention, Jason 3. Jason 4. Introduction to Game Theory 5. Solving Normal-Form Games 6. Extensive-Form Games 7. Solving Extensive-Form Games 8. Cooperative Game Theory 9. Distributed constraint satisfaction (DCSP) 10. Distributed constraint optimization (DCOP) 11. Social Choice, Voting 12. Introduction to Auctions 13. Auctions, Mechanism Design 14. Wrap-up
literature:
Shoham, Y. and Leyton-Brown, K.: Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, Cambridge University Press, 2008, ISBN 9780521899437. Weiss, G. (eds): Multiagent Systems, second edition, MIT Press, 2013 Vidal, J. M.: Fundamentals of Multiagent Systems with NetLogo Examples, 2009
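The normal-form game topics above can be illustrated by brute-force enumeration of pure-strategy Nash equilibria: a cell (i, j) is an equilibrium when each player's action is a best response to the other's. A minimal sketch; the Prisoner's Dilemma payoffs in the usage note are an assumed example, not from the course:

```python
def pure_nash_equilibria(A, B):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.

    A[i][j] is the row player's payoff, B[i][j] the column player's,
    when row plays i and column plays j.
    """
    m, n = len(A), len(A[0])
    equilibria = []
    for i in range(m):
        for j in range(n):
            # i must be a best response to column j, and vice versa.
            row_best = all(A[i][j] >= A[k][j] for k in range(m))
            col_best = all(B[i][j] >= B[i][l] for l in range(n))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria
```

For the Prisoner's Dilemma with actions (cooperate, defect) and payoffs A = [[-1, -3], [0, -2]], B = [[-1, 0], [-3, -2]], the only pure equilibrium is mutual defection, (1, 1).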
1. Digital image processing vs. computer vision. Role of interpretation. Objects in images. Digital image. Concepts. 2. Physical foundation of images. Image acquisition from the geometric and radiometric point of view. 3. Brightness and geometric transformations, interpolation. 4. Fourier transform. Derivation of the sampling theorem. Frequency filtering of images. Image restoration. 5. Processing in the spatial domain. Convolution. Correlation. Noise filtering. Homomorphic filtering. 6. Edge detection. Multiscale image processing. Canny detector. 7. Principal component analysis. Wavelet transform. 8. Color images and processing of color images. 9. Image compression. Video compression. 10. Mathematical morphology. 11. Image segmentation - thresholding, k-means, EM algorithm. 12. Image segmentation - mean shift, search for the optimal graph cut. 13. Registration of images and of objects in images.
labs/seminars:
1. MATLAB. Homework 1 assignment (image acquisition). 2. Consultations. Solving the homework. 3. Consultations. Solving the homework. 4. Consultations. Solving the homework. 5. Homework 1 handover. Homework 2 assignment (Fourier transform). 6. Consultations. Solving the homework. 7. Consultations. Solving the homework. 8. Consultations. Solving the homework. 9. Homework 2 handover. Homework 3 assignment (image segmentation). 10. Consultations. Solving the homework. 11. Consultations. Solving the homework. 12. Consultations. Homework 3 handover. 13. Written test. Presentation of several best student homeworks.
literature:
1. Šonka M., Hlaváč V., Boyle R.: Image Processing, Analysis and Machine vision, 4th edition, Thomson Learning, Toronto, Canada, 2015, 912p., ISBN-10: 1133593607. 2. Svoboda, T., Kybic, J., Hlaváč, V.: Image processing, analysis and machine vision. The MATLAB companion, Thomson Learning, Toronto, Canada, 2007.
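Lecture 5 above covers convolution in the spatial domain. A minimal sketch of 2-D convolution with 'valid' output size; illustrative only (the labs use MATLAB, this sketch is in plain Python), and note the kernel flip that distinguishes convolution from correlation:

```python
def convolve2d(image, kernel):
    """Plain 2-D convolution with 'valid' output size.

    image and kernel are lists of rows; the kernel is flipped in both
    axes (true convolution, as opposed to correlation).
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    # Flipped kernel indices implement the convolution sum.
                    s += image[y + dy][x + dx] * kernel[kh - 1 - dy][kw - 1 - dx]
            row.append(s)
        out.append(row)
    return out
```

For symmetric kernels (box blur, Gaussian) convolution and correlation coincide, which is why the distinction is easy to overlook in practice.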
1. The pattern recognition problem. Overview of the course. Basic notions. 2. The Bayesian decision-making problem, i.e. minimization of expected loss. 3. Non-Bayesian decision problems. 4. Parameter estimation. The maximum likelihood method. 5. The nearest neighbour classifier. 6. Linear classifiers. Perceptron learning. 7. The AdaBoost method. 8. Learning as a quadratic optimization problem. SVM classifiers. 9. Feed-forward neural nets. The backpropagation algorithm. 10. Decision trees. 11. Logistic regression. 12. The EM (Expectation-Maximization) algorithm. 13. Sequential decision-making (Wald's sequential test). 14. Recap.
labs/seminars:
Students solve four or five pattern recognition problems, for instance a simplified version of OCR (optical character recognition), face detection, or spam detection, using either classical methods or trained classifiers. 1. Introduction to MATLAB and the STPR toolbox, a simple recognition experiment. 2. The Bayes recognition problem. 3. Non-Bayesian problems I: the Neyman-Pearson problem. 4. Non-Bayesian problems II: the minimax problem. 5. Maximum likelihood estimates. 6. Non-parametric estimates, Parzen windows. 7. Linear classifiers, the perceptron algorithm. 8. AdaBoost. 9. Support Vector Machines I. 10. Support Vector Machines II. 11. EM algorithm I. 12. EM algorithm II. 13. Submission of reports. Discussion of results. 14. Submission of reports. Discussion of results.
literature:
1. Duda, Hart, Stork: Pattern Classification, 2001. 2. Bishop: Pattern Recognition and Machine Learning, 2006. 3. Schlesinger, Hlavac: Ten Lectures on Statistical and Structural Pattern Recognition, 2002.
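Lecture 6 above covers perceptron learning. A minimal sketch of the classic perceptron update rule, which is guaranteed to converge on linearly separable data; illustrative only, not course code (the labs use MATLAB and the STPR toolbox):

```python
def perceptron_train(X, y, epochs=100):
    """Rosenblatt perceptron for labels y in {-1, +1}.

    Learns (w, b) such that sign(w . x + b) classifies the data;
    each misclassified point pulls the hyperplane toward it.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            activation = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * activation <= 0:        # misclassified (or on boundary)
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                errors += 1
        if errors == 0:                     # converged on separable data
            break
    return w, b
```

On non-separable data the loop simply runs out of epochs; that limitation is one motivation for the SVM and logistic regression lectures that follow.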
1. Analyzing algorithms and problems, classifying functions by their growth rates, time and space complexity. 2. Correctness of algorithms, variants and invariants. 3. Decision problems and optimization problems. 4. Turing machine and its variants. 5. Relation between the Turing machine and the RAM machine. 6. Classes P and NP. 7. Reduction and polynomial reduction of problems. 8. NP-complete problems, Cook's Theorem. 9. Classes PSPACE and NPSPACE. 10. Randomized algorithms with polynomial time complexity. 11. Classes RP and ZPP. 12. Undecidable problems. 13. Reserve.
labs/seminars:
1. Determining time and space complexity of well-known algorithms. 2. Verifying correctness of algorithms using variants and invariants. 3. Turing machines. 4. Polynomial reductions of problems. 5. Examples of randomized algorithms. 6. Examples of undecidable problems.
literature:
[1] Kozen, D. C.: The Design and Analysis of Algorithms, Springer-Verlag, 1991. [2] Harel, D.: Algorithmics: The Spirit of Computing, Addison-Wesley, Reading, MA, 2002. [3] Talbot, J., Welsh, D.: Complexity and Cryptography, Cambridge University Press, 2006.
1. Introduction to Basic Terms of Combinatorial Optimization, Example Applications and a Test of Preliminary Knowledge 2. Integer Linear Programming - Algorithms 3. Problem Formulation by Integer Linear Programming 4. The Shortest Paths. Problem Formulation by Shortest Paths. 5. Problem Formulation by Shortest Paths. 6. Flows and Cuts - Algorithms and Problem Formulation. Test I. 7. Multicommodity network flows 8. Knapsack Problem and Pseudo-polynomial Algorithms 9. Traveling Salesman Problem and Approximation Algorithms 10. Monoprocessor Scheduling 11. Scheduling on Parallel Processors. Test II. 12. Project Scheduling with Time Windows. 13. Constraint Programming. 14. Reserved
labs/seminars:
1. Policy and Individual Project Market 2. Introduction to the Experimental Environment and Optimization Library 3. Integer Linear Programming 4. Individual Project I - Assignment and Problem Classification 5. Modeling Languages for Solving Combinatorial Problems 6. Individual Project II - Related Work and Solution 7. Applications of Network Flows and Cuts 8. Individual Project III - Consultation 9. Test III 10. Scheduling 11. Advanced Methods for Solving Combinatorial Problems 12. Individual Project IV - hand in a code and a written report 13. Ungraded Assessment 14. Reserved
literature:
B. H. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms. Springer, third ed., 2006. J. Blazewicz, Scheduling Computer and Manufacturing Processes. Springer, second ed., 2001. J. Demel, Grafy a jejich aplikace. Academia, second ed., 2002. TORSCHE http://rtime.felk.cvut.cz/scheduling-toolbox/
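Lecture 8 above pairs the knapsack problem with pseudo-polynomial algorithms: dynamic programming over capacities runs in O(n * capacity), polynomial in the numeric value of the capacity rather than in its bit length. An illustrative sketch of the 0/1 variant:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack via dynamic programming over capacities.

    dp[c] holds the best achievable value with capacity c; iterating
    capacities downward ensures each item is used at most once.
    Runs in O(n * capacity) time - pseudo-polynomial.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The term "pseudo-polynomial" is earned precisely because doubling the number of bits in `capacity` squares the running time, so the algorithm is exponential in the input size in the formal sense.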
1. Introduction to automated planning in artificial intelligence 2. Representation as search in the space of states (state-space planning) 3. Heuristic planning using relaxations 4. Heuristic planning using abstractions 5. Structural heuristics 6. The Graphplan algorithm 7. Compilation of planning problems 8. Representation of the planning problem as search in the space of plans (plan-space planning) 9. Hierarchical planning 10. Planning under uncertainty 11. Model of a planning problem as a Markov Decision Process (MDP) 12. Model of a planning problem as a Partially Observable Markov Decision Process (POMDP) 13. Introduction to planning in robotics 14. Applications of automated planning
labs/seminars:
1. Planning basics, representation, PDDL and planners 2. State-space planning, Assignment 1 3. Relaxation heuristics, Assignment 1 Consultations 4. Abstraction heuristics, Assignment 1 Deadline 5. Landmark heuristics, Assignment 1 Results/0-point Deadline 6. Linear Program formulation of heuristics 7. Compilations 8. Partial-order planning 9. Hierarchical Planning 10. Planning with uncertainty, Assignment 2 11. Planning for MDPs, Assignment 2 Consultations 12. Planning for POMDPs, Assignment 2 Consultations 13. Monte Carlo tree search, Assignment 2 Deadline 14. Consultations of exam topics, Assignment 2 Results/0-point Deadline, Credit
literature:
* Malik Ghallab, Dana Nau, Paolo Traverso: Automated Planning: Theory & Practice, Elsevier, May 21, 2004 * https://www.coursera.org/course/aiplan
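The state-space planning topic above (lecture 2) can be sketched as breadth-first forward search over STRIPS-like actions given by precondition, add, and delete sets. The tiny door domain in the test is an invented example, not from the course:

```python
from collections import deque

def plan_bfs(init, goal, actions):
    """Forward state-space planning by breadth-first search.

    States are frozensets of ground facts; each action is a tuple
    (name, preconditions, add_effects, delete_effects), all sets.
    Returns the shortest plan as a list of action names, or None.
    """
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # all goal facts hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                    # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None
```

Uninformed BFS like this is exactly what the relaxation, abstraction, and landmark heuristics of lectures 3-5 are designed to improve upon.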
1. Agent-environment model, interaction principles 2. Concept learning, on-line learnability, version space 3. Learning from i.i.d. data, PAC-learnability, VC-dimension 4. Learnability of propositional-logic concepts 5. Learning a graphical probabilistic model 6. Learning a graphical probabilistic model (2) 7. PAC-learning predicate-logic CNF 8. Learning predicate-logic clauses 9. Learning a relational graphical probabilistic model 10. Learning a relational graphical probabilistic model (2) 11. Active learning 12. Reinforcement learning 13. Reinforcement learning (2) 14. Solomonoff induction and universal AI
labs/seminars:
NA
literature:
Course textbook available at https://cw.fel.cvut.cz/wiki/courses/smu/start Stuart Russell and Peter Norvig: Artificial Intelligence: A Modern Approach, Prentice Hall 2010 Luc De Raedt: Logical and Relational Learning, Springer 2008 Marcus Hutter: Universal artificial intelligence, Springer 2005
1. Introduction. Course map. Overview of covered problems and application areas. 2. Detectors of interest points and distinguished regions. Harris interest point (corner) detector, Laplace detector and its fast approximation as Difference of Gaussians, maximally stable extremal regions (MSER). Descriptions of algorithms, analysis of their robustness to geometric and photometric transformations of the image. 3. Descriptors of interest regions. The local reference frame method for geometrically invariant description. The SIFT (scale invariant feature transform) descriptor, local binary patterns (LBP). 4. Detection of geometric primitives, Hough transform. RANSAC (Random Sample Consensus). 5. Segmentation I. Image as a Markov random field (MRF). Algorithms formulating segmentation as a min-cut problem in a graph. 6. Segmentation II. Level set methods. 7. Inpainting. Semi-automatic simple replacement of the content of an image region without any visible artifacts. 8. Object detection by the "scanning window" method, the Viola-Jones approach. 9. Using local invariant description for object recognition and correspondence search. 10. Tracking I. KLT tracker, Harris and correlation. 11. Tracking II. Mean-shift, condensation. 12. Image Retrieval I. Image descriptors for large databases. 13. Image Retrieval II. Search in large databases, indexing, geometric verification. 14. Reserve
labs/seminars:
1. - 5. Image stitching. Given a set of images with some overlap, automatically find corresponding points and estimate the geometric transformation between images. Create a single panoramic image by adjusting intensities of individual images and by stitching them into a single frame. 6. - 9. Segmentation and inpainting. Implement a simple inpainting method, i.e. a method allowing semi-automatic replacement of the content of an image region without any visible artifacts. 10. - 12. Detection of an instance of a class of objects (faces, cars, etc.) using the scanning window approach (Viola-Jones type detector). 13. - 14. Submission and review of reports.
literature:
1.M. Sonka, V. Hlavac, R. Boyle. Image Processing, Analysis and Machine Vision. Thomson 2007 2.D. A. Forsyth, J. Ponce. Computer Vision: A Modern Approach. Prentice Hall 2003
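Lecture 4 above covers RANSAC. A minimal sketch for robust line fitting: repeatedly fit a line to two randomly sampled points and keep the model explaining the most inliers. Illustrative only; the parameters `iters`, `tol`, and `seed` are assumptions, not values from the course:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Robustly fit y = a*x + b to 2-D points with RANSAC.

    Each iteration fits an exact line through a random 2-point sample
    and counts points within distance tol (measured vertically here,
    for simplicity). Returns (best_model, best_inliers).
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                    # skip degenerate vertical samples
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

In practice the winning model is refined by least squares on its inlier set; the sketch stops at the consensus step to keep the core idea visible.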
NA
labs/seminars:
NA
literature:
NA
1 Introduction, propositional logic, and SAT 2 SAT solving - resolution, DPLL, and CDCL 3 Prolog 4 Prolog 5 Prolog 6 Prolog 7 Answer set programming 8 First-order logic and semantic tableaux 9 Model finding methods 10 Resolution and equality in FOL 11 ANL loop, superposition calculus 12 Proof assistants 13 Complexity and decidability issues, SMT
labs/seminars:
NA
literature:
Bundy, A.: The Computational Modelling of Mathematical Reasoning, Academic Press, 1983. Clarke, E. M. Jr., Grumberg, O. and Peled, D. A.: Model Checking, The MIT Press, 1999, Fourth Printing 2002. Newborn, M.: Automated Theorem Proving: Theory and Practice. Robinson, J. A., Voronkov, A. (Eds.): Handbook of Automated Reasoning (in 2 volumes), Elsevier and MIT Press, 2001. Weidenbach, Ch.: SPASS: Combining Superposition, Sorts and Splitting, 1999. Wos, L. and Pieper, G. W.: A Fascinating Country in the World of Computing: Your Guide to Automated Reasoning. Flach, P.: Simply Logical - Intelligent Reasoning by Example, John Wiley, 1998. Bratko, I.: Prolog Programming for Artificial Intelligence, Addison-Wesley, 2011.
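The DPLL procedure from lecture 2 above can be sketched as unit propagation plus branching on a literal. This is a compact illustrative version without watched literals or the clause learning that distinguishes CDCL:

```python
def dpll(clauses, assignment=None):
    """DPLL SAT solver; clauses are lists of nonzero ints, where a
    negative literal -v means "variable v is false".
    Returns a satisfying {var: bool} assignment, or None if UNSAT."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:                 # some clause became empty: conflict
        return None
    if not clauses:                     # all clauses satisfied
        return assignment
    for c in clauses:                   # unit propagation: forced literals
        if len(c) == 1:
            lit = c[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    lit = clauses[0][0]                 # branch on an arbitrary literal
    for val in (lit > 0, lit <= 0):
        result = dpll(clauses, {**assignment, abs(lit): val})
        if result is not None:
            return result
    return None

def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals; None on conflict."""
    out = []
    for c in clauses:
        reduced, satisfied = [], False
        for lit in c:
            val = assignment.get(abs(lit))
            if val is None:
                reduced.append(lit)
            elif (lit > 0) == val:
                satisfied = True
                break
        if satisfied:
            continue
        if not reduced:                 # every literal falsified
            return None
        out.append(reduced)
    return out
```

Modern CDCL solvers keep this skeleton but replace chronological backtracking with conflict-driven clause learning and non-chronological backjumping.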
- Computational models of autonomous systems - Path planning, randomized search techniques, multi-goal path planning, and informative path planning - Robotic exploration, online decision-making, persistent environmental monitoring, decision-making with limited resources - Methods of game theory and safety games in mobile robotics tasks, solving antagonistic conflict - Reactive and behavioral models in tasks of collective robotics - Coordination and cooperation in autonomous systems
labs/seminars:
In laboratory exercises, students will practice formulating robotic problems and implementing practical solutions in a realistic robotic simulator or with consumer mobile robots. The lab themes follow the lecture topics listed above.
literature:
1st chapter: Robin R. Murphy: Introduction to AI Robotics, MIT Press, Cambridge, MA, 2001 Steven M. LaValle: Planning Algorithms, Cambridge University Press, 2006 (http://planning.cs.uiuc.edu )
1. 3D computer vision, its goals and applications, course overview. 2. Geometry of points and lines in plane, ideal points and lines, point and line representations, line intersection and point join, homography. 3. Perspective camera model, center of projection, principal point, optical axis, optical ray and optical plane. Vanishing point and vanishing line, cross-ratio of four collinear points and its use for camera calibration. Radial distortion models. 4. Camera resection from six points, external camera orientation from three points. 5. Epipolar geometry, its representation by the fundamental matrix, essential matrix and its decomposition. 6. The seven-point algorithm for fundamental matrix estimation and the five-point algorithm for essential matrix estimation. Triangulation of points in 3D space from image correspondences. 7. The concept of algebraic and reprojection errors and Sampson approximation for reprojection error. Sampson error for fundamental matrix estimation. 8. Local optimization of Sampson error, derivation of a robust error by marginalization of a probabilistic model. 9. Robust optimization of geometric problems in 3D vision by MCMC methods. 10. Reconstruction of a multi-camera system, the bundle adjustment method, minimal and non-minimal representations for some basic geometric objects and mappings on them. 11. Introduction to stereoscopic vision, epipolar rectification of images. 12. The uniqueness, occlusion, ordering, coherence and continuity constraints in stereo and their representation in the stereoscopic matching table. 13. Marroquin's greedy algorithm and the maximum-likelihood algorithm for stereoscopic matching. 14. Photometric stereo, its calibrated and uncalibrated versions.
labs/seminars:
1. Introduction, term project specification, instructions on how to select an object suitable for 3D reconstruction, on image capture, and on camera calibration. 2. An introductory computer programming exercise with points and lines in a plane. 3. An exercise on the geometric description of the perspective camera. Robust maximum likelihood estimation of a planar line. 4. Computing sparse correspondences by the WBS matcher. 5. A computer exercise with matching and estimation of two homographies in an image pair. 6. Calibration of poses of a set of cameras. 7. Midterm test. 8. Sparse point cloud reconstruction. 9. Optimization of point and camera estimates by bundle adjustment. 10. Epipolar rectification and dense stereo matching. Dense point cloud reconstruction. 11. 3D surface reconstruction. 12. Presentation and submission of resulting models.
literature:
R. Hartley and A. Zisserman. Multiple View Geometry. 2nd ed. Cambridge University Press 2003.
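The cross-ratio of four collinear points (lecture 3 above) is invariant under perspective projection, which is what makes it usable for camera calibration. In 1-D coordinates along the line, a minimal illustrative sketch (the projective map in the test is an assumed example):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by
    1-D coordinates along their common line.

    This quantity is preserved by any projective (Moebius) map
    x -> (p*x + q) / (r*x + s) with p*s - q*r != 0, and hence by
    perspective projection onto an image line.
    """
    return ((a - c) * (b - d)) / ((a - d) * (b - c))
```

The invariance is what lets one recover metric information (e.g. a vanishing point position) from a single image of four points with known mutual ratios.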
NA
labs/seminars:
NA
literature:
NA