Field | CODE | Code meaning |
---|---|---|
Intended learning outcomes | WM-WI_1-_??_W01 | The student has elementary knowledge of AI problems and of the algorithmic techniques applicable to solving them. |
Course objective | C-1 | Familiarization with various search techniques for practical problems. |
 | C-2 | Introducing elements of two-person games of perfect information and algorithms for playing them. |
 | C-3 | Building up an understanding of notions such as: heuristic, pay-off, strategy, search horizon. |
 | C-4 | Familiarization with classification and approximation as exemplary machine-learning tasks. Introducing simple artificial neural networks for that purpose. |
 | C-5 | Teaching how optimization problems can be solved by means of randomized methods (genetic algorithms). |
 | C-6 | Giving a historical background on AI and the problems posed within it. |
Course content | T-W-1 | Definitions of AI and problems posed within it, e.g.: graph and game-tree search problems: n-queens, sliding puzzle, sudoku, minimal sudoku, jeep problem, knapsack problem, traveling salesman problem, prisoner's dilemma, iterated prisoner's dilemma, pattern recognition / classification, imitation game (Turing's test), artificial life and cellular automata, Conway's Game of Life. Minsky's views on AI. |
 | T-W-2 | Graph search algorithms: Breadth-First Search, Best-First Search, A*, Dijkstra's algorithm. The notion of a heuristic. Efficient data structures for implementing the above algorithms: hash map, priority queue (heap). (See the A* sketch after this table.) |
 | T-W-3 | Algorithms for two-person games of perfect information: MIN-MAX, alpha-beta pruning, and their computational complexity. The horizon effect. (See the alpha-beta sketch after this table.) |
 | T-W-4 | Data classification (binary, linear) using the simple perceptron (Rosenblatt's perceptron). Forward pass. Learning algorithm. Linear separability of data. Novikoff's theorem on learning convergence (with proof). (See the perceptron sketch after this table.) |
 | T-W-5 | The Multi-Layer Perceptron (MLP) artificial neural network. The sigmoid as an activation function. On-line vs. off-line learning. Derivation of the back-propagation algorithm. Possible variants. Overfitting and complexity selection for the MLP via testing or cross-validation. (See the MLP sketch after this table.) |
 | T-W-6 | Genetic algorithms for optimization problems. Scheme of the main genetic loop. Fitness function. Selection methods in GAs: roulette selection, rank selection, tournaments. The "exploration vs. exploitation" problem. Remarks on convergence and premature convergence (population diversity). Crossing-over methods: one-point, two-point, and multi-point crossing-over. Mutation and its role in GAs (discrete and continuous). Example problems: knapsack problem, TSP. Exact solution of the knapsack problem via dynamic programming. (See the knapsack GA sketch after this table.) |
 | T-W-7 | Exam. |
 | T-L-1 | Getting familiar with Java, the Eclipse IDE, and a set of classes prepared for implementations of search algorithms. Initial implementation of the sudoku solver. |
 | T-L-2 | Implementation of the sudoku solver. Testing: variations on the initial state (making the sudoku harder). Observing the number of visited states and the number of solutions. Posing the homework task: programming a solver for the sliding puzzle. |
 | T-L-3 | Testing homework programs (sliding-puzzle solvers). Getting familiar with Java classes prepared for game-tree searches (alpha-beta pruning engine). Posing the homework task: programming an AI playing the connect4 game. |
 | T-L-4 | Testing homework programs (connect4): experiments with different search depths, program-vs-program games, comments on the introduced heuristics (position evaluation). |
 | T-L-5 | Programming the simple perceptron (in MATLAB). Two-class separation of points on a plane. Observing how the number of update steps of the learning algorithm is influenced by: the learning-rate coefficient, the number of data points (sample size), and changes in the separation margin. Posing the homework task: implementation of non-linear separation using the simple perceptron together with the kernel trick. |
 | T-L-6 | Implementation of an MLP neural network (in MATLAB) for approximation of a function of two variables. Testing accuracy with respect to: the number of neurons, the learning coefficient, and the number of update steps. Posing the homework task: complexity selection for the MLP via cross-validation. |
 | T-L-7 | Genetic algorithm implementation for the knapsack problem, including at least two selection methods and two crossing-over methods. Posing the homework task: comparison of GA solutions with exact solutions based on dynamic programming (computation times). |
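
A minimal A* sketch complementing T-W-2: it finds a shortest path on a small grid using a Manhattan-distance heuristic and a priority queue ordered by f = g + h. The grid, class name, and method names are made up for this sketch; they are not the course's prepared Java classes.

```java
import java.util.*;

// Illustrative A* on a small grid with the Manhattan-distance heuristic.
public class AStarGrid {
    static final int[][] MOVES = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    // Returns the length of a shortest path from (sx,sy) to (gx,gy),
    // or -1 if the goal is unreachable; grid[y][x] == 1 marks a wall.
    static int shortestPath(int[][] grid, int sx, int sy, int gx, int gy) {
        int h = grid.length, w = grid[0].length;
        int[][] g = new int[h][w];                              // cost so far
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        // Open list ordered by f = g + heuristic: the key A* ingredient.
        PriorityQueue<int[]> open =
            new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[2]));
        g[sy][sx] = 0;
        open.add(new int[]{sx, sy, Math.abs(gx - sx) + Math.abs(gy - sy)});
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int x = cur[0], y = cur[1];
            if (x == gx && y == gy) return g[y][x];
            for (int[] m : MOVES) {
                int nx = x + m[0], ny = y + m[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[ny][nx] == 1) continue;
                int ng = g[y][x] + 1;
                if (ng < g[ny][nx]) {                           // cheaper route found
                    g[ny][nx] = ng;
                    open.add(new int[]{nx, ny, ng + Math.abs(gx - nx) + Math.abs(gy - ny)});
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {0, 0, 0, 0},
            {1, 1, 1, 0},
            {0, 0, 0, 0},
            {0, 1, 1, 1},
            {0, 0, 0, 0},
        };
        System.out.println(shortestPath(grid, 0, 0, 3, 4));     // prints 13
    }
}
```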
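
A minimal alpha-beta pruning sketch complementing T-W-3. For clarity it searches an explicitly built game tree rather than generating positions of a real game such as connect4; the tree shape and pay-off values are made up for this sketch.

```java
import java.util.List;

// Illustrative alpha-beta pruning over an explicitly built game tree.
// Leaf pay-offs are from the maximizing player's point of view.
public class AlphaBeta {
    final double payoff;            // used by leaves only
    final List<AlphaBeta> children; // null for leaves

    AlphaBeta(double payoff) { this.payoff = payoff; this.children = null; }
    AlphaBeta(List<AlphaBeta> children) { this.payoff = 0; this.children = children; }

    static double alphaBeta(AlphaBeta node, double alpha, double beta, boolean maximizing) {
        if (node.children == null) return node.payoff;
        if (maximizing) {
            double best = Double.NEGATIVE_INFINITY;
            for (AlphaBeta child : node.children) {
                best = Math.max(best, alphaBeta(child, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (alpha >= beta) break;      // beta cut-off: MIN avoids this branch
            }
            return best;
        } else {
            double best = Double.POSITIVE_INFINITY;
            for (AlphaBeta child : node.children) {
                best = Math.min(best, alphaBeta(child, alpha, beta, true));
                beta = Math.min(beta, best);
                if (alpha >= beta) break;      // alpha cut-off: MAX avoids this branch
            }
            return best;
        }
    }

    public static void main(String[] args) {
        // Depth-2 tree: MAX to move at the root, MIN at the three inner nodes.
        AlphaBeta root = new AlphaBeta(List.of(
            new AlphaBeta(List.of(new AlphaBeta(3), new AlphaBeta(5))),
            new AlphaBeta(List.of(new AlphaBeta(6), new AlphaBeta(9))),
            new AlphaBeta(List.of(new AlphaBeta(1), new AlphaBeta(2)))  // leaf 2 is pruned
        ));
        System.out.println(alphaBeta(root,
            Double.NEGATIVE_INFINITY, Double.POSITIVE_INFINITY, true)); // prints 6.0
    }
}
```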
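
A minimal sketch of Rosenblatt's perceptron learning rule, complementing T-W-4 and T-L-5. The labs use MATLAB; Java is used here for consistency with the other sketches. The six labeled points are made up and linearly separable, so by Novikoff's theorem the learning loop terminates.

```java
// Illustrative Rosenblatt perceptron for two-class separation of points on a plane.
public class SimplePerceptron {
    public static void main(String[] args) {
        double[][] x = {{0, 0}, {0.2, 0.3}, {1, 1}, {0.9, 0.8}, {0.1, 0.7}, {1.2, 0.4}};
        int[] y = {-1, -1, +1, +1, -1, +1};    // class labels
        double[] w = {0, 0};                   // weights
        double b = 0;                          // bias
        double eta = 0.5;                      // learning-rate coefficient
        int updates = 0;
        boolean converged = false;
        while (!converged) {                   // repeat until a full clean pass
            converged = true;
            for (int i = 0; i < x.length; i++) {
                double s = w[0] * x[i][0] + w[1] * x[i][1] + b;   // forward pass
                if (y[i] * s <= 0) {                              // misclassified point
                    w[0] += eta * y[i] * x[i][0];                 // Rosenblatt update
                    w[1] += eta * y[i] * x[i][1];
                    b += eta * y[i];
                    updates++;
                    converged = false;
                }
            }
        }
        System.out.printf("w = (%.2f, %.2f), b = %.2f, updates = %d%n", w[0], w[1], b, updates);
    }
}
```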
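
A minimal one-hidden-layer MLP trained by on-line back-propagation, complementing T-W-5 and T-L-6; as in the lab task it approximates a function of two variables. The target function sin(x1)·cos(x2), the network size, the learning coefficient, and the number of update steps are illustrative assumptions, not course-mandated values.

```java
import java.util.Random;

// Illustrative MLP: one hidden layer of sigmoid neurons, linear output,
// on-line back-propagation with squared-error loss.
public class MlpSketch {
    static final int H = 12;            // number of hidden neurons
    static final double ETA = 0.05;     // learning coefficient
    static final Random RND = new Random(7);
    static double[] wHidden = new double[H * 3];   // per neuron: w1, w2, bias
    static double[] wOut = new double[H + 1];      // output weights + bias

    static double sigmoid(double s) { return 1.0 / (1.0 + Math.exp(-s)); }

    // Forward pass; hidden activations are written into 'hid'.
    static double forward(double x1, double x2, double[] hid) {
        double out = wOut[H];                       // output bias
        for (int j = 0; j < H; j++) {
            hid[j] = sigmoid(wHidden[3*j] * x1 + wHidden[3*j+1] * x2 + wHidden[3*j+2]);
            out += wOut[j] * hid[j];                // linear output neuron
        }
        return out;
    }

    public static void main(String[] args) {
        for (int i = 0; i < wHidden.length; i++) wHidden[i] = RND.nextGaussian() * 0.5;
        for (int i = 0; i < wOut.length; i++) wOut[i] = RND.nextGaussian() * 0.5;
        double[] hid = new double[H];
        for (int step = 0; step < 200_000; step++) {
            double x1 = RND.nextDouble() * Math.PI, x2 = RND.nextDouble() * Math.PI;
            double y = forward(x1, x2, hid);
            double err = y - Math.sin(x1) * Math.cos(x2);        // d(0.5*err^2)/dy
            // Back-propagation: gradients use pre-update output weights.
            for (int j = 0; j < H; j++) {
                double gradHid = err * wOut[j] * hid[j] * (1 - hid[j]);  // sigmoid'
                wOut[j]        -= ETA * err * hid[j];
                wHidden[3*j]   -= ETA * gradHid * x1;
                wHidden[3*j+1] -= ETA * gradHid * x2;
                wHidden[3*j+2] -= ETA * gradHid;
            }
            wOut[H] -= ETA * err;                   // output bias
        }
        double mse = 0;                             // accuracy on a fresh sample
        for (int i = 0; i < 1000; i++) {
            double x1 = RND.nextDouble() * Math.PI, x2 = RND.nextDouble() * Math.PI;
            double d = forward(x1, x2, hid) - Math.sin(x1) * Math.cos(x2);
            mse += d * d;
        }
        System.out.println("test MSE ~ " + mse / 1000);
    }
}
```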
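
A compact genetic-algorithm sketch for the 0/1 knapsack problem, complementing T-W-6 and T-L-7, together with the exact dynamic-programming solution for comparison. It uses tournament selection, one-point crossing-over, and bit-flip mutation; the instance data and all parameter values (population size, rates) are illustrative choices.

```java
import java.util.Random;

// Illustrative GA for 0/1 knapsack, with the exact DP optimum for comparison.
public class KnapsackGA {
    static final int[] VALUE  = {10, 40, 30, 50, 35, 25, 15};
    static final int[] WEIGHT = { 1,  4,  3,  5,  4,  3,  2};
    static final int CAPACITY = 10;
    static final Random RND = new Random(1);

    // Fitness: total value of packed items, 0 if the capacity is exceeded.
    static int fitness(boolean[] genes) {
        int v = 0, w = 0;
        for (int i = 0; i < genes.length; i++)
            if (genes[i]) { v += VALUE[i]; w += WEIGHT[i]; }
        return (w <= CAPACITY) ? v : 0;
    }

    // Tournament selection of size 2.
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[RND.nextInt(pop.length)], b = pop[RND.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Main genetic loop: selection, crossing-over, mutation, replacement.
    static int runGA(int popSize, int generations, double mutationRate) {
        int n = VALUE.length;
        boolean[][] pop = new boolean[popSize][n];
        for (boolean[] ind : pop)
            for (int i = 0; i < n; i++) ind[i] = RND.nextBoolean();
        int best = 0;
        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[popSize][];
            for (int k = 0; k < popSize; k++) {
                boolean[] p1 = select(pop), p2 = select(pop);
                int cut = RND.nextInt(n);                      // one-point crossing-over
                boolean[] child = new boolean[n];
                for (int i = 0; i < n; i++) child[i] = (i < cut) ? p1[i] : p2[i];
                for (int i = 0; i < n; i++)                    // bit-flip mutation
                    if (RND.nextDouble() < mutationRate) child[i] = !child[i];
                next[k] = child;
                best = Math.max(best, fitness(child));
            }
            pop = next;
        }
        return best;
    }

    // Exact solution via dynamic programming over capacities 0..CAPACITY.
    static int knapsackDP() {
        int[] dp = new int[CAPACITY + 1];
        for (int i = 0; i < VALUE.length; i++)
            for (int c = CAPACITY; c >= WEIGHT[i]; c--)
                dp[c] = Math.max(dp[c], dp[c - WEIGHT[i]] + VALUE[i]);
        return dp[CAPACITY];
    }

    public static void main(String[] args) {
        System.out.println("GA best value: " + runGA(30, 50, 0.05));
        System.out.println("DP optimum   : " + knapsackDP());   // prints 100
    }
}
```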

Field | CODE | Code meaning |
---|---|---|
Teaching methods | M-1 | Lecture. |
Assessment method | S-4 | Summative assessment: the final grade for the lecture part is based on a written test (1.5 h). |
Assessment criteria | Grade | Grading criterion |
---|---|---|
 | 2.0 |  |
 | 3.0 | Obtaining at least 50% in the final test. |
 | 3.5 |  |
 | 4.0 |  |
 | 4.5 |  |
 | 5.0 |  |