Code | Course content (laboratories) | Hours |
---|---|---|
T-L-1 | Getting familiar with Java, the Eclipse IDE, and a set of classes prepared for implementations of search algorithms. Initial implementation of the sudoku solver. | 2 |
T-L-2 | Implementation of the sudoku solver. Testing - variations on the initial state (making the sudoku harder). Observing the number of visited states and the number of solutions. Posing the homework task - programming a solver for the sliding puzzle. | 2 |
T-L-3 | Testing homework programs - sliding puzzle solvers. Getting familiar with Java classes prepared for game tree searches (alpha-beta pruning engine). Posing the homework task - programming an AI playing the connect4 game. | 3 |
T-L-4 | Testing homework programs - connect4 program: experimentations with different search depths, program vs program games, comments on introduced heuristics (position evaluation). | 2 |
T-L-5 | Genetic algorithm implementation for the knapsack problem, including: at least two selection methods, and two crossing-over methods. Posing the homework task: comparison of GA solutions with exact solutions based on dynamic programming (computation times). | 2 |
T-L-6 | Programming the simple perceptron (in MATLAB). Two-class separation of points on a plane. Observing how the number of update steps of the learning algorithm is influenced by: the learning rate coefficient, the number of data points (sample size), and changes in the separation margin. Posing the homework task - implementation of non-linear separation using the simple perceptron together with the kernel trick. | 2 |
T-L-7 | Implementation of MLP neural network (in MATLAB) for approximation of a function of two variables. Testing accuracy with respect to: number of neurons, learning coefficient, number of update steps. Posing the homework task: complexity selection for MLP via cross-validation. | 2 |
T-L-8 | Applications of RBF neural networks in modeling of technical and economic problems. Applications of RBF neural networks in classification tasks. | 4 |
T-L-9 | Application of unsupervised learning networks to the data clustering problem. | 2 |
T-L-10 | Hopfield network - application to the pattern recognition problem. | 2 |
T-L-11 | Discovering fuzzy phenomena, fuzzy variables, and fuzzy notions in the world. Identification of membership functions for uncertain quantities detected by the students in science, technology, medicine, economics, biology, etc. Describing membership functions by mathematical formulas. | 2 |
T-L-12 | Creating rule bases for real systems. Design and implementation of the simple SISO fuzzy system. | 2 |
T-L-13 | Design and implementation of the MISO fuzzy system. Application of the fuzzy model in the control system. | 3 |
**Total** | | 30 |
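The laboratories implement the simple perceptron in MATLAB (T-L-6); as a language-neutral sketch of that learning loop, one might write the following in Python (the function name, data layout, and default parameters are illustrative assumptions, not part of the course materials):

```python
def train_perceptron(points, labels, lr=1.0, max_epochs=100):
    """Rosenblatt's simple perceptron for two-class separation on a plane.

    points: list of (x, y) pairs; labels: +1 or -1 per point.
    Returns (weights, bias, number_of_updates)."""
    w = [0.0, 0.0]
    b = 0.0
    updates = 0
    for _ in range(max_epochs):
        errors = 0
        for (x, y), t in zip(points, labels):
            s = w[0] * x + w[1] * y + b
            pred = 1 if s > 0 else -1
            if pred != t:
                # Misclassified point: move the separating line toward it.
                w[0] += lr * t * x
                w[1] += lr * t * y
                b += lr * t
                updates += 1
                errors += 1
        if errors == 0:  # converged: every point classified correctly
            break
    return w, b, updates
```

Counting `updates` while varying `lr`, the sample size, and the separation margin reproduces the observations the lab asks for; on non-separable data the loop simply stops after `max_epochs`.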
Code | Course content (lectures) | Hours |
---|---|---|
T-W-1 | Definitions of AI and problems posed within it, e.g.: graph and game tree search problems - n-queens, sliding puzzle, sudoku, minimal sudoku, jeep problem, knapsack problem, traveling salesman problem, prisoner's dilemma, iterated prisoner's dilemma, pattern recognition / classification, imitation game (Turing's test), artificial life and cellular automata, Conway's game of life. Minsky's views on AI. | 2 |
T-W-2 | Graph search algorithms: Breadth-First Search, Best-First Search, A*, Dijkstra's algorithm. Notion of heuristics. Efficient data structures for implementations of the above algorithms: hash map, priority queue (heap). | 2 |
T-W-3 | Algorithms for two-person games of perfect information: MIN-MAX, alpha-beta pruning, and their computational complexity. Horizon effect. | 2 |
T-W-4 | Genetic algorithms for optimization problems. Scheme of the main genetic loop. Fitness function. Selection methods in GAs: roulette selection, rank selection, tournaments. The "exploration vs. exploitation" problem. Remarks on convergence and premature convergence (population diversity). Crossing-over methods: one-point, two-point, multi-point crossing-over. Mutation and its role in GAs (discrete and continuous). Examples of problems: knapsack problem, TSP. Exact solution of the knapsack problem via dynamic programming. | 2 |
T-W-5 | Data classification (binary, linear) using the simple perceptron (Rosenblatt's perceptron). Forward pass. Learning algorithm. Linear separability of data. Novikoff's theorem on learning convergence (with the proof). | 3 |
T-W-6 | Multi-Layer-Perceptron (MLP) artificial neural network. Sigmoid as activation function. On-line vs off-line learning. Derivation of the back-propagation algorithm. Possible variants. Overfitting and complexity selection for MLP via testing or cross-validation. | 3 |
T-W-7 | Neural networks with radial basis function - RBF neural networks. Structure and learning methods. Examples of applications. Probabilistic neural networks. | 3 |
T-W-8 | Self-organizing networks - unsupervised learning algorithms. The structure and operation of the networks. Kohonen's network and its learning algorithm. Examples of applications of self-organizing networks. | 2 |
T-W-9 | Recurrent networks - Hopfield network, Hamming network. Construction, operation, learning methods. Examples of network applications. | 3 |
T-W-10 | Difference between classical and fuzzy logic. Examples of fuzziness in the real world. Mathematical models of fuzzy linguistic and numerical evaluations: membership functions. Examples of membership functions. Identification of membership functions by experts. | 2 |
T-W-11 | Fuzzy models of systems. Components of fuzzy models: fuzzification, premise evaluation, determination of the activated membership functions of particular rules, determination of the resulting membership function of the rule base, and its defuzzification. Constructing fuzzy models for chosen real problems and calculating model outputs for given model inputs. Fuzzy control and its structure. | 4 |
T-W-12 | Exam. | 2 |
**Total** | | 30 |
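The game-tree labs (T-L-3, T-L-4) build on a prepared Java alpha-beta engine; a minimal language-neutral sketch of the pruning rule covered in T-W-3 (the dictionary-based tree and all names here are illustrative assumptions, not the course's Java classes) could look like:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """MIN-MAX search with alpha-beta pruning over a generic game tree.

    children(node) -> list of successor nodes (empty at a leaf);
    evaluate(node) -> static position score from MAX's point of view."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: MIN would never allow this branch
                break
        return value
    value = math.inf
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:      # alpha cutoff: MAX already has a better option
            break
    return value

# Toy depth-2 tree: MAX root, two MIN children with leaf scores (3, 12, 8) and (2, 4, 6).
tree = {"root": ["m1", "m2"], "m1": ["a", "b", "c"], "m2": ["d", "e", "f"]}
scores = {"a": 3, "b": 12, "c": 8, "d": 2, "e": 4, "f": 6}
best = alphabeta("root", 2, -math.inf, math.inf, True,
                 lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
# best == 3; leaves "e" and "f" are pruned after "d" returns 2 <= alpha.
```

The cutoff in the second branch is the source of the complexity savings discussed in T-W-3: in the best case, alpha-beta examines roughly the square root of the leaves plain MIN-MAX visits.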