Zachodniopomorski Uniwersytet Technologiczny w Szczecinie

Central University Administration - International Exchange (S1)

Course syllabus: Artificial Intelligence

Basic information

Field of study: International Exchange
Mode of study: full-time | Level: first-cycle studies
Professional title of graduate:
Areas of study:
Profile:
Module:
Course: Artificial Intelligence
Specialization: common course
Teaching unit: Katedra Metod Sztucznej Inteligencji i Matematyki Stosowanej
Responsible teacher: Przemysław Klęsk <pklesk@wi.zut.edu.pl>
Other teachers: Joanna Kołodziejczyk <Joanna.Kolodziejczyk@zut.edu.pl>
ECTS (planned): 5,0 | ECTS (forms): 5,0
Form of credit: credit | Language: English
Elective block: | Elective group:

Forms of instruction

Form of instruction | Code | Semester | Hours | ECTS | Weight | Credit
lectures | W | 1 | 30 | 2,0 | 0,30 | credit
laboratory classes | L | 1 | 30 | 3,0 | 0,70 | credit

Prerequisites

Code | Prerequisite
W-1 | mathematics
W-2 | algorithms and data structures
W-3 | programming
W-4 | object-oriented programming

Course objectives

Code | Module/course objective
C-1 | Familiarization with various search techniques for practical problems.
C-2 | Introducing elements of two-person games of perfect information and algorithms for that purpose.
C-3 | Building up an understanding of notions such as: heuristics, pay-off, strategy, search horizon.
C-4 | Familiarization with classification and approximation as exemplary tasks within machine learning. Introducing simple artificial neural networks for that purpose.
C-5 | Teaching how optimization problems can be solved by means of randomized methods (genetic algorithms).
C-6 | Giving a historical background on AI and the problems within it.
C-7 | Acquiring competence and practice in the construction of fuzzy models of systems, fuzzy calculations, and fuzzy control of plants.

Program content by form of classes

Code | Program content | Hours
laboratory classes
T-L-1 | Getting familiar with Java, the Eclipse IDE, and a set of classes prepared for implementing search algorithms. Initial implementation of a sudoku solver. | 2
T-L-2 | Implementation of the sudoku solver. Testing - variations on the initial state (making the sudoku harder). Observing the number of visited states and the number of solutions. Posing the homework task - programming a solver for the sliding puzzle. | 2
T-L-3 | Testing homework programs - sliding puzzle solvers. Getting familiar with Java classes prepared for game tree searches (alpha-beta pruning engine). Posing the homework task - programming an AI playing the connect4 game. | 3
T-L-4 | Testing homework programs - the connect4 program: experimentation with different search depths, program vs. program games, comments on the introduced heuristics (position evaluation). | 2
T-L-5 | Genetic algorithm implementation for the knapsack problem, including at least two selection methods and two crossover methods. Posing the homework task: comparison of GA solutions with exact solutions based on dynamic programming (computation times). | 2
T-L-6 | Programming the simple perceptron (in MATLAB). Two-class separation of points on a plane. Observing how the number of update steps in the learning algorithm is influenced by: the learning rate coefficient, the number of data points (sample size), and changes in the separation margin. Posing the homework task - implementation of non-linear separation using the simple perceptron together with the kernel trick. | 2
T-L-7 | Implementation of an MLP neural network (in MATLAB) for approximation of a function of two variables. Testing accuracy with respect to: number of neurons, learning coefficient, number of update steps. Posing the homework task: complexity selection for the MLP via cross-validation. | 2
T-L-8 | Applications of RBF neural networks in modeling technical and economic problems. Applications of RBF neural networks in classification tasks. | 4
T-L-9 | Application of unsupervised learning networks to the data clustering problem. | 2
T-L-10 | Hopfield network - application to the pattern recognition problem. | 2
T-L-11 | Discovering fuzzy phenomena, fuzzy variables, and fuzzy notions in the world. Identification of membership functions for self-detected uncertain quantities from science, technology, medicine, economics, biology, etc. Describing membership functions by mathematical formulas. | 2
T-L-12 | Creating rule bases for real systems. Design and implementation of a simple SISO fuzzy system. | 2
T-L-13 | Design and implementation of a MISO fuzzy system. Application of the fuzzy model in a control system. | 3
30
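The game-tree labs (T-L-3, T-L-4) build on an alpha-beta pruning engine. As an illustrative sketch only, over a complete binary tree of given leaf pay-offs rather than the actual Java classes handed out in the lab (the class and method names below are our own, not the SaC library's):

```java
// Minimal alpha-beta pruning (cf. T-L-3/T-L-4 and lecture T-W-3) over a
// complete binary game tree whose leaf pay-offs are given as an array.
// Illustrative sketch only; not the course's engine.
public class AlphaBeta {
    public static int search(int[] leaves, int lo, int hi, boolean maximizing,
                             int alpha, int beta) {
        if (hi - lo == 1) return leaves[lo];              // leaf: return its pay-off
        int mid = (lo + hi) / 2;
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (int[] r : new int[][]{{lo, mid}, {mid, hi}}) {
            int v = search(leaves, r[0], r[1], !maximizing, alpha, beta);
            if (maximizing) { best = Math.max(best, v); alpha = Math.max(alpha, best); }
            else            { best = Math.min(best, v); beta  = Math.min(beta, best); }
            if (alpha >= beta) break;                     // cut-off: prune remaining sibling
        }
        return best;
    }
    public static void main(String[] args) {
        int[] leaves = {3, 5, 6, 9, 1, 2, 0, -1};         // 3-ply tree, root is MAX
        System.out.println(search(leaves, 0, leaves.length, true,
                                  Integer.MIN_VALUE, Integer.MAX_VALUE)); // prints 5
    }
}
```

With the full (MIN_VALUE, MAX_VALUE) window at the root, the returned value equals the plain MIN-MAX value; pruning only skips subtrees that cannot change it.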
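T-L-5 asks for a GA with at least two selection and two crossover methods. A minimal single-variant sketch (roulette selection, one-point crossover, bit-flip mutation) on a small made-up knapsack instance; the class name, parameters, and data are all our own assumptions, not lab material:

```java
import java.util.Random;

// Illustrative GA for the 0/1 knapsack problem (cf. T-L-5 and lecture T-W-4).
public class KnapsackGA {
    static final int[] VALUES  = {6, 5, 8, 9, 6, 7, 3};   // made-up instance
    static final int[] WEIGHTS = {2, 3, 6, 7, 5, 9, 4};
    static final int CAPACITY  = 9;                        // optimum value here is 15

    static int fitness(boolean[] g) {
        int v = 0, w = 0;
        for (int i = 0; i < g.length; i++) if (g[i]) { v += VALUES[i]; w += WEIGHTS[i]; }
        return w <= CAPACITY ? v : 0;                      // infeasible genotypes score 0
    }

    static boolean[] roulette(boolean[][] pop, Random rnd) {
        int total = 0;
        for (boolean[] g : pop) total += fitness(g);
        if (total == 0) return pop[rnd.nextInt(pop.length)];
        int r = rnd.nextInt(total);
        for (boolean[] g : pop) { r -= fitness(g); if (r < 0) return g; }
        return pop[pop.length - 1];
    }

    static boolean[] evolve(int popSize, int generations, long seed) {
        Random rnd = new Random(seed);
        int n = VALUES.length;
        boolean[][] pop = new boolean[popSize][n];
        for (boolean[] g : pop) for (int i = 0; i < n; i++) g[i] = rnd.nextBoolean();
        boolean[] best = pop[0].clone();
        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[popSize][];
            for (int k = 0; k < popSize; k++) {
                boolean[] a = roulette(pop, rnd), b = roulette(pop, rnd);
                int cut = 1 + rnd.nextInt(n - 1);          // one-point crossover
                boolean[] child = new boolean[n];
                for (int i = 0; i < n; i++) child[i] = i < cut ? a[i] : b[i];
                if (rnd.nextDouble() < 0.05) {             // bit-flip mutation
                    int i = rnd.nextInt(n);
                    child[i] = !child[i];
                }
                next[k] = child;
                if (fitness(child) > fitness(best)) best = child.clone();
            }
            pop = next;
        }
        return best;
    }

    public static void main(String[] args) {
        boolean[] best = evolve(30, 50, 42);
        System.out.println(fitness(best)); // a feasible value; the exact optimum is 15
    }
}
```

The homework comparison against dynamic programming then checks whether the GA reaches the exact optimum and at what computational cost.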
lectures
T-W-1 | Definitions of AI and problems posed within it, e.g.: graph and game tree search problems - n-queens, sliding puzzle, sudoku, minimal sudoku, jeep problem, knapsack problem, traveling salesman problem, prisoner's dilemma, iterated prisoner's dilemma, pattern recognition / classification, the imitation game (Turing's test), artificial life and cellular automata, Conway's Game of Life. Minsky's views on AI. | 2
T-W-2 | Graph search algorithms: Breadth-First Search, Best-First Search, A*, Dijkstra's algorithm. The notion of heuristics. Efficient data structures for implementing the above algorithms: hash map, priority queue (heap). | 2
T-W-3 | Algorithms for two-person games of perfect information: MIN-MAX, alpha-beta pruning, and their computational complexity. The horizon effect. | 2
T-W-4 | Genetic algorithms for optimization problems. Scheme of the main genetic loop. Fitness function. Selection methods in GAs: roulette selection, rank selection, tournaments. The "exploration vs. exploitation" problem. Remarks on convergence and premature convergence (population diversity). Crossover methods: one-point, two-point, and multiple-point crossover. Mutation and its role in GAs (discrete and continuous). Example problems: knapsack problem, TSP. Exact solution of the knapsack problem via dynamic programming. | 2
T-W-5 | Data classification (binary, linear) using the simple perceptron (Rosenblatt's perceptron). Forward pass. Learning algorithm. Linear separability of data. Novikoff's theorem on learning convergence (with proof). | 3
T-W-6 | The Multi-Layer Perceptron (MLP) artificial neural network. The sigmoid as an activation function. On-line vs. off-line learning. Derivation of the back-propagation algorithm. Possible variants. Overfitting and complexity selection for the MLP via testing or cross-validation. | 3
T-W-7 | Neural networks with radial basis functions - RBF neural networks. Structure and learning methods. Examples of applications. Probabilistic neural networks. | 3
T-W-8 | Self-organizing networks - unsupervised learning algorithms. The structure and operation of the networks. Kohonen's network and learning algorithm. Examples of applications of self-organizing networks. | 2
T-W-9 | Recursive networks - the Hopfield network, the Hamming network. Construction, operation, learning methods. Examples of network applications. | 3
T-W-10 | The difference between classical and fuzzy logic. Examples of fuzziness in the real world. Mathematical models of fuzzy linguistic and numerical evaluations: membership functions. Examples of membership functions. Identification of membership functions by experts. | 2
T-W-11 | Fuzzy models of systems. Components of fuzzy models: fuzzification, premise evaluation, determination of the activated membership functions of particular rules, determination of the resulting membership function of the rule base, and its defuzzification. Constructing fuzzy models for chosen real problems and calculating model outputs for given model inputs. Fuzzy control and its structure. | 4
T-W-12 | Exam. | 2
30
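As an illustration of the T-W-2 material, a sketch of Dijkstra's algorithm backed by a binary-heap priority queue; the graph encoding (arrays of {target, weight} edges) is our own choice for the example, not lecture code:

```java
import java.util.Arrays;
import java.util.PriorityQueue;

// Dijkstra's algorithm with a priority queue (heap), cf. lecture T-W-2.
public class Dijkstra {
    public static int[] shortestDistances(int n, int[][][] adj, int src) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        PriorityQueue<int[]> pq =
            new PriorityQueue<int[]>((a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[]{src, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0], d = top[1];
            if (d > dist[u]) continue;                 // skip outdated queue entries
            for (int[] e : adj[u]) {                   // relax every outgoing edge
                int v = e[0], nd = d + e[1];
                if (nd < dist[v]) { dist[v] = nd; pq.add(new int[]{v, nd}); }
            }
        }
        return dist;
    }
    public static void main(String[] args) {
        // 0 --1--> 1 --2--> 2, plus a direct edge 0 --5--> 2
        int[][][] adj = {{{1, 1}, {2, 5}}, {{2, 2}}, {}};
        System.out.println(Arrays.toString(shortestDistances(3, adj, 0))); // [0, 1, 3]
    }
}
```

The "skip outdated entries" check replaces a decrease-key operation, which `java.util.PriorityQueue` does not provide.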
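The simple perceptron of T-W-5 is implemented in MATLAB during the labs; as a sketch only, the forward pass and the classic mistake-driven update rule can be ported to Java like this (class, data, and parameter choices are illustrative, not course material):

```java
// Rosenblatt's perceptron, cf. lecture T-W-5 and lab T-L-6. Illustrative sketch.
public class Perceptron {
    // w[0] is the bias weight; the input x is implicitly prefixed with a constant 1
    public static int predict(double[] w, double[] x) {
        double s = w[0];
        for (int j = 0; j < x.length; j++) s += w[j + 1] * x[j];
        return s >= 0 ? 1 : -1;
    }
    // on each mistake apply the update rule: w <- w + eta * y * (1, x)
    public static double[] train(double[][] xs, int[] ys, double eta, int maxEpochs) {
        double[] w = new double[xs[0].length + 1];
        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            boolean mistake = false;
            for (int i = 0; i < xs.length; i++) {
                if (predict(w, xs[i]) != ys[i]) {
                    w[0] += eta * ys[i];
                    for (int j = 0; j < xs[i].length; j++) w[j + 1] += eta * ys[i] * xs[i][j];
                    mistake = true;
                }
            }
            if (!mistake) break;  // converged; guaranteed for linearly separable data
        }
        return w;
    }
    public static void main(String[] args) {
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] ys = {-1, -1, -1, 1};                   // linearly separable (AND-like)
        double[] w = train(xs, ys, 1.0, 100);
        for (int i = 0; i < xs.length; i++)
            System.out.println(predict(w, xs[i]) == ys[i]); // true on every point
    }
}
```

Novikoff's theorem (T-W-5) bounds the number of such updates for separable data, which is what the lab's experiments with the learning rate and separation margin probe empirically.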
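For T-W-10/T-W-11, a toy SISO fuzzy system with triangular membership functions, two rules, and defuzzification by a weighted mean of rule-consequent centers (a simplification of full centroid defuzzification). The temperature-to-heater-power rules and all numbers are invented for illustration:

```java
// Toy SISO fuzzy controller, cf. lectures T-W-10/T-W-11 and lab T-L-12.
public class FuzzySiso {
    // triangular membership function with feet a, c and peak b
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }
    // Rule 1: IF temperature is LOW  THEN heater power is HIGH (center 80)
    // Rule 2: IF temperature is HIGH THEN heater power is LOW  (center 20)
    static double control(double temp) {
        double low  = tri(temp, 0, 10, 20);   // fuzzification: premise degrees
        double high = tri(temp, 10, 20, 30);
        double den = low + high;
        return den == 0 ? 0 : (low * 80 + high * 20) / den; // weighted-mean defuzzification
    }
    public static void main(String[] args) {
        System.out.println(control(10.0)); // fully LOW  -> 80.0
        System.out.println(control(15.0)); // half LOW, half HIGH -> 50.0
    }
}
```

The lab's MISO variant (T-L-13) extends this by taking the minimum (or product) of several premise degrees per rule before the same defuzzification step.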

Student workload - forms of activity

Code | Form of activity | Hours
laboratory classes
A-L-1 | Participation in lab classes. | 30
A-L-2 | Programming the sliding puzzle solver in Java. Preparation for the short test on searching graphs. | 8
A-L-3 | Programming an AI for the connect4 game. Preparation for the short test on searching game trees. | 8
A-L-4 | Programming a complexity selection method for the MLP via cross-validation. Preparation for a short test on MLPs. | 4
A-L-5 | Programming a comparison of the GA vs. the dynamic programming approach for the knapsack problem. Preparation for a short test on GAs. | 4
A-L-6 | Getting familiar with PDF materials on non-linear classification by means of the kernel trick (Gaussian kernels + Rosenblatt's perceptron). | 4
A-L-7 | Getting familiar with the SaC Java library and its documentation. | 16
74
lectures
A-W-1 | Participation in lectures. | 30
A-W-2 | Self-preparation for the exam. | 18
A-W-3 | Sitting the exam. | 2
50

Teaching methods / didactic tools

Code | Teaching method / didactic tool
M-1 | Lecture.
M-2 | Case study method.
M-3 | Didactic games.
M-4 | Computer programming.
M-5 | Demonstration.

Assessment methods

Code | Assessment method
S-1 | Formative assessment: short tests (10 minutes long) at the end of each topic during the lab.
S-2 | Formative assessment: grades for the programs written as homework.
S-3 | Summative assessment: final grade for the lab calculated as a weighted mean of the partial grades: tests (weight: 40%), programs (weight: 60%).
S-4 | Summative assessment: final grade for the lectures, based on the test (1.5 h).

Intended learning outcomes - knowledge

Intended learning outcome: WM-WI_1-_??_W01 - The student has elementary knowledge of AI problems and the algorithmic techniques applicable to solving them.
Course objectives: C-1, C-2, C-3, C-4, C-5, C-6
Program content: T-W-1, T-W-2, T-W-3, T-W-4, T-W-5, T-W-6, T-W-12, T-L-1, T-L-2, T-L-3, T-L-4, T-L-5, T-L-6, T-L-7
Teaching methods: M-1
Assessment method: S-4

Intended learning outcomes - skills

Intended learning outcome: WM-WI_1-_??_U01 - The student can design and implement elementary AI algorithms.
Course objectives: C-1, C-2, C-3, C-4, C-5, C-6
Program content: T-W-1, T-W-2, T-W-3, T-W-4, T-W-5, T-W-6, T-W-12, T-L-1, T-L-2, T-L-3, T-L-4, T-L-5, T-L-6, T-L-7
Teaching methods: M-4
Assessment method: S-2

Assessment criteria - knowledge

Learning outcome: WM-WI_1-_??_W01 - The student has elementary knowledge of AI problems and the algorithmic techniques applicable to solving them.
Grade | Criterion
2,0 |
3,0 | Obtaining at least 50% in the final test.
3,5 |
4,0 |
4,5 |
5,0 |

Assessment criteria - skills

Learning outcome: WM-WI_1-_??_U01 - The student can design and implement elementary AI algorithms.
Grade | Criterion
2,0 |
3,0 | Obtaining a positive average grade for the homework programming tasks.
3,5 |
4,0 |
4,5 |
5,0 |

Basic literature

  1. S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 2010, 3rd edition
  2. A. Piegat, Fuzzy modelling and control, Physica-Verlag, A Springer-Verlag Company, 2001
  3. D. Kriesel, A Brief Introduction to Neural Networks, 2012

Additional literature

  1. P. Klęsk, Electronic materials available at: http://wikizmsi.zut.edu.pl, 2015

(*) 1 ECTS point corresponds to approximately 30 hours of student activity.