CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
COURSE OBJECTIVES:
The main objectives of this course are to:
• Study uninformed and heuristic search techniques.
• Learn techniques for reasoning under uncertainty.
• Introduce machine learning and supervised learning algorithms.
• Study ensembling and unsupervised learning algorithms.
• Learn the basics of deep learning using neural networks.
UNIT I PROBLEM SOLVING
Introduction to AI – AI applications – Problem-solving agents – Search algorithms – Uninformed search strategies – Heuristic search strategies – Local search and optimization problems – Adversarial search – Constraint satisfaction problems (CSP).
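As an illustration of the uninformed search strategies listed in this unit, the following is a minimal Python sketch of breadth-first search over a graph stored as an adjacency list; the example graph and node names are hypothetical.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: shortest path by edge count from start to goal."""
    frontier = deque([start])      # FIFO queue of nodes awaiting expansion
    parent = {start: None}         # doubles as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:           # reconstruct the path by following parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph.get(node, []):
            if neighbour not in parent:
                parent[neighbour] = node
                frontier.append(neighbour)
    return None                    # goal not reachable

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(bfs(graph, 'A', 'E'))        # ['A', 'B', 'D', 'E']
```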
UNIT II PROBABILISTIC REASONING
Acting under uncertainty – Bayesian inference – Naïve Bayes models – Probabilistic reasoning – Bayesian networks – Exact inference in BN – Approximate inference in BN – Causal networks.
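As a small worked example of the Bayesian inference topic in this unit, the sketch below applies Bayes' rule to a hypothetical diagnostic test; all probabilities are invented for illustration.

```python
# Bayes' rule: P(H | e) = P(e | H) * P(H) / P(e), where P(e) is obtained by
# summing over both values of the hypothesis (inference by enumeration).

prior_disease = 0.01           # P(Disease) -- hypothetical prior
p_pos_given_disease = 0.95     # P(Test+ | Disease), the test's sensitivity
p_pos_given_healthy = 0.05     # P(Test+ | no Disease), the false-positive rate

# Evidence probability P(Test+) by total probability
p_pos = (p_pos_given_disease * prior_disease
         + p_pos_given_healthy * (1 - prior_disease))

posterior = p_pos_given_disease * prior_disease / p_pos
print(f"P(Disease | Test+) = {posterior:.3f}")   # about 0.161
```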
UNIT III SUPERVISED LEARNING
Introduction to machine learning – Linear regression models: least squares, single and multiple variables, Bayesian linear regression, gradient descent – Linear classification models: discriminant function, probabilistic discriminative model (logistic regression), probabilistic generative model (naïve Bayes), maximum margin classifier (support vector machine), decision trees, random forests.
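To illustrate two of the topics in this unit together, here is a minimal sketch of single-variable linear regression fitted by batch gradient descent on synthetic data; the learning rate and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic data: y is roughly 2x + 1 plus Gaussian noise (hypothetical)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)

w, b = 0.0, 0.0    # slope and intercept, initialised to zero
lr = 0.5           # learning rate (illustrative)

for _ in range(2000):
    error = (w * x + b) - y
    # Gradients of the mean squared error (1/n) * sum((wx + b - y)^2)
    grad_w = 2.0 * np.mean(error * x)
    grad_b = 2.0 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")   # close to w = 2, b = 1
```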
UNIT IV ENSEMBLE TECHNIQUES AND UNSUPERVISED LEARNING
Combining multiple learners: model combination schemes, voting – Ensemble learning: bagging, boosting, stacking – Unsupervised learning: K-means – Instance-based learning: KNN – Gaussian mixture models and expectation maximization.
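As an illustration of the unsupervised learning portion of this unit, below is a compact NumPy sketch of K-means clustering on two synthetic blobs; the number of clusters, iteration cap, and data are illustrative choices.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain K-means: alternate the assignment step and the centroid-update step."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its assigned points
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated synthetic blobs (hypothetical data)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(5, 0.5, (30, 2))])
labels, centres = kmeans(data, k=2)
print(centres)
```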
UNIT V NEURAL NETWORKS
Perceptron – Multilayer perceptron, activation functions, network training – Gradient descent optimization – Stochastic gradient descent, error backpropagation – From shallow networks to deep networks – Unit saturation (aka the vanishing gradient problem) – ReLU, hyperparameter tuning, batch normalization, regularization, dropout.
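To illustrate the first topic of this unit, here is a minimal Python sketch of the perceptron learning rule trained on the linearly separable logical AND function; the learning rate and epoch count are arbitrary illustrative values.

```python
import numpy as np

# Training data for logical AND: four input pairs and their 0/1 targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (illustrative)

def step(z):
    """Threshold activation of the perceptron."""
    return 1 if z > 0 else 0

for _ in range(20):
    for xi, target in zip(X, y):
        pred = step(w @ xi + b)
        # Perceptron update rule: w <- w + lr * (target - prediction) * x
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print("weights:", w, "bias:", b)
print("predictions:", [step(w @ xi + b) for xi in X])   # expected [0, 0, 0, 1]
```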
PRACTICAL EXERCISES:
1. Implementation of uninformed search algorithms (BFS, DFS)
2. Implementation of informed search algorithms (A*, memory-bounded A*); an illustrative sketch follows this list
3. Implement naïve Bayes models
4. Implement Bayesian networks
5. Build regression models
6. Build decision trees and random forests
7. Build SVM models
8. Implement ensembling techniques
9. Implement clustering algorithms
10. Implement EM for Bayesian networks
11. Build simple NN models
12. Build deep learning NN models
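For exercise 2 above, here is a minimal Python sketch of A* search on a small weighted graph; the graph, edge costs, and heuristic values are hypothetical, and the heuristic is assumed to be admissible and consistent.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: graph maps node -> [(neighbour, step_cost)], h maps node -> heuristic estimate."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float('inf')

# Hypothetical weighted graph and admissible heuristic values
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 3)]}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 6)
```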
COURSE OUTCOMES:
At the end of this course, the students will be able to:
CO1: Use appropriate search algorithms for problem solving
CO2: Apply reasoning under uncertainty
CO3: Build supervised learning models
CO4: Build ensembling and unsupervised models
CO5: Build deep learning neural network models
TEXT BOOKS:
1. Stuart Russell and Peter Norvig, “Artificial Intelligence – A Modern Approach”, Fourth Edition, Pearson Education, 2021.
2. Ethem Alpaydin, “Introduction to Machine Learning”, Fourth Edition, MIT Press, 2020.
REFERENCES:
1. Dan W. Patterson, “Introduction to Artificial Intelligence and Expert Systems”, Pearson Education, 2007.
2. Kevin Knight, Elaine Rich, and B. Nair, “Artificial Intelligence”, McGraw Hill, 2008.
3. Patrick H. Winston, “Artificial Intelligence”, Third Edition, Pearson Education, 2006.
4. Deepak Khemani, “Artificial Intelligence”, Tata McGraw Hill Education, 2013 (http://nptel.ac.in/).
5. Christopher M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006.
6. Tom Mitchell, “Machine Learning”, McGraw Hill, 1997.
7. Charu C. Aggarwal, “Data Classification: Algorithms and Applications”, CRC Press, 2014.
8. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, “Foundations of Machine Learning”, MIT Press, 2012.
9. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”, MIT Press, 2016.