Course Syllabus

Instructor:

Professor Chandrajit Bajaj

  • Lecture hours – Mon/Wed 3:30 - 5:00 p.m., GDC 6.202 or Zoom
  • Office hours – Mon/Wed 5:00 p.m. - 6:00 p.m., or by appointment (Zoom or POB 2.324)
  • Contact: bajaj@oden.utexas.edu

NOTE: All questions related to class should be posted through Piazza. You can register via the Piazza tab on the Canvas course page.

Teaching Assistant

Trung Nguyen

  • Office hours – Mon/Wed 2:00 p.m. - 3:00 p.m., GDC 1.302 (TA station, Desk 2) or Zoom
  • Contact: trungnguyen@utexas.edu

Note: Please reserve office-hours slots a day in advance to avoid conflicts.

Course Motivation and Synopsis

This fall course covers foundational mathematical, statistical, and computational learning theory, and the application of data-learned predictive models. Students will be exposed to modern machine learning approaches to optimized decision making and multi-player games involving stochastic dynamical systems and optimal control. These latter topics are foundational to training multiple neural networks (agents), both cooperatively and in adversarial scenarios, to optimize the learning of all the agents.

An initial listing of lecture topics is given in the schedule below. This is subject to modification, depending on background and the speed at which we cover ground. Homework exercises will be given roughly bi-weekly. Assignment solutions turned in late will suffer a 10% per-day reduction in credit, and a 100% reduction once solutions are posted. There will be an in-class midterm exam; its content will be similar to the homework exercises. A list of topics will also be assigned as take-home final projects, to train the best machine-learned decision-making agents. The projects will involve ML programming, an oral presentation, and a written report submitted at the end of the semester.

The project will be graded and is in lieu of a final exam.

The course is aimed at senior undergraduates and junior graduate students; students in the five-year master's programs, especially in CS, CSEM, ECE, STAT, and MATH, are welcome. Prerequisites are algorithms, data structures, numerical methods, and programming experience (e.g., Python) at the level of a CS senior; mathematics and statistics at the level of CS, Math, Stat, or ECE majors; plus linear algebra, computational geometry, introductory functional analysis, and combinatorial and numerical optimization.

Course Material

  1. [B1] Chandrajit Bajaj, A Mathematical Primer for Computational Data Sciences (frequently updated)
  2. [BHK] Avrim Blum, John Hopcroft, and Ravindran Kannan, Foundations of Data Science
  3. [BV] Stephen Boyd and Lieven Vandenberghe, Convex Optimization
  4. [M] Kevin Murphy, Machine Learning: A Probabilistic Perspective
  5. [MU] Michael Mitzenmacher and Eli Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis
  6. [SD] Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms
  7. [SB] Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction
  8. [Basar] Tamer Basar, Lecture Notes on Non-Cooperative Game Theory
  9. Extra reference materials

TENTATIVE COURSE OUTLINE (in flux)

Date Topic Reading Assignments

Mon 08-22-2022

1. Introduction to Geometry of Data, High-Dimensional Spaces, Belief and Decision-Making Spaces [Lec1]

Dynamical Systems and Deep Learning [notes]

Modern Statistical Machine Learning [notes]

[M] Ch 1.1, 1.2, 1.3

Wed 08-24-2022

2. Learning High-Dimensional Regression and  Dynamic Models  [Lec2]

Geometry of Norms  and Approximations [notes];

[SD] Ch 9, Ch 14

[BHK] Chap 12.2,12.3

[A1] (with [latex template]) out today; due by 09-07-2022, 11:59pm

Mon 08-29-2022

3. Learning Theory and Model Selection [Lec3]

PAC learning, Complexities [notes]

Probability, Information and Probabilistic Inequalities [notes]

 

[M] Ch 1.4.7, 1.4.8

[MU] Ch 1-3

[B1] Appendix

Wed 08-31-2022

4. Sampling in High-Dimensional Space-Time 1: Monte Carlo vs. Quasi-Monte Carlo, Relationship to Integration Error via the Koksma-Hlawka (H-K) Inequality [Lec4-part1] [Lec4-part2]

High-Dimensional Sampling, Concentration of Measure [notes]

[MU] Chap 4, 24.2

[BHK] Chap 12.4,12.6

 

Wed 09-07-2022

5. Sampling in High Dimensional  Space-Time 2:    [Lec-part1]

Intro to Optimal Control of Dynamical Systems [notes]

Learning Dynamics,  Lyapunov Stability  and connections to Training Deep Networks [notes]

[SD] Chap 12

[A1] due by tomorrow, midnight.

 

Mon 09-12-2022

6. Statistical Machine Learning 1: Introduction to Markov Chains, PageRank, MCMC [Lec-notes, notes2]

Learning by Random Walks on Graphs  [notes-BHK]

Wed 09-14-2022

7.  Statistical Machine Learning 2: Sampling and Learning  with MCMC Variations [Lec7] (More MCMC & Implementation Notes) 

Bayesian Inference with MCMC  and Variational Inference [notes]

[BHK] Chap 4

[MU] Ch 7, 10

 

[A1 solution] out, so you can learn from it.

[A2] out; due by 09-28-2022, 11:59pm

Mon 09-19-2022

8.  Statistical Machine Learning 3: Bayesian Inference and Generative Models (VAEs and GANs) [notes1]

[notes2]

[SD] Chap 24

[BV] Chap 1-5

Wed 09-21-2022

9. Statistical Machine Learning 4: Transform Sampling, Sampling Non-Linear Probability Distributions [notes]. Generative Adversarial Networks [notes]

[BHK] Chap 2.7

[SD] Chap 23,24

 

Mon 09-26-2022

10.  Statistical Machine Learning 5: Gaussian Processes I [notes]  [notes2]

Learning with Normalizing Flows [notes]

[M] Chap 11

 

Wed 09-28-2022

11. Statistical Machine Learning 6: Gaussian Processes II [notes]

[M]  Ch 2, 5

[A2] due today by 11:59pm. 

Mon 10-03-2022

12. Robust Sample based Bayesian Dynamic Learning using Sparse Gaussian Processes  [notes]

Connections to Variational AutoEncoders (VAEs) [notes]

[M]  Ch 4

[A2 solution] out

[A3] out tomorrow; due by 10-19-2022, 11:59pm

Wed 10-05-2022

 13. Learning Models with Latent Variables / Expectation Maximization [notes]

MCE-VAE-Invariance based Equivariant Clustering [paper]

[M]  Ch 15

 

Mon 10-10-2022

14. Learning SVMs via Continuous Stochastic Gradient Descent Optimization [notes]

Continuous Stochastic Gradient Descent (SGD) -- Simulated Annealing, Fokker-Planck [notes]

[M]  Ch 15

Wed 10-12-2022

15. Learning with SGD Variations: AdaGrad, RMSProp, Adam, ... [notes]

[BHK] Chap 5

Mon 10-17-2022

16. Learning Dynamics with Neural ODEs (NODEs): Adjoint Method for Backprop [notes]

 

[M] Chap 14

Wed 10-19-2022

 17. Learning Dynamics with Stochastic Processes [notes] 

Learning Dynamics with Stochastic Neural ODEs (SNODEs) : Stochastic Adjoint Methods I [notes] 

[A3] due Fri 10/21 by 5:00pm; solution will be posted Friday night.

Mon 10-24-2022

 [MIDTERM] (Hybrid)

 

Wed 10-26-2022

18. Robust Continuous Learning of PDEs using Sparse Gaussian Processes [arxiv]

Diffusion Models with Stochastic Langevin Dynamics [notes]

Mon 10-31-2022

19. RL 1: Learning Dynamics: Kalman Filtering, Machine Learning [notes]

 

Non-convex Projected Gradient Descent [notes-references]

Wed 11-02-2022

20. RL 2: Learning Dynamics with Optimal Control: LQR, iLQR [notes]

See references cited in notes

 

Final PROJECT topics posted [here]

Part 1: First project report due before Nov 21, 2022, 11:59pm

 

Mon 11-07-2022

21. RL 3: Bandit Algorithms, Thompson Sampling [notes]

 

See references cited in notes

[A4] out Nov 7; due by Nov 20, 11:59pm

Wed 11-09-2022

22. RL 4: Markov (Reward, Decision) Processes: MPs, MRPs, MDPs and POMDPs [notes]

See references cited in notes and paper

Mon 11-14-2022

23. Game-Theoretic Learning 1: MARL and Markov Games [notes]

Markov Decision Processes (MDPs) and Markov Games [notes]

See references cited in [notes]

Wed 11-16-2022

24. Games & MARL II [notes]

Game Theoretic Learning 2: Stackelberg Equilibrium [notes]

 

See references cited in [notes]

Part 1 of Project Due before Nov 21, 11:59pm

Mon 11-28-2022

25. Energy-Based Learning: Hopfield Networks, Boltzmann Machines, Restricted Boltzmann Machines [notes]

Actionable Learning [notes] 

 

 [SB]  See Chap 3 

Assignment 5 (optional extra credit): released Nov 28; due by 11:59pm Dec 5, 2022.

[A5 solution template]

Wed 11-30-2022

26. Active Learning 2: Dynamic POMDPs, Longitudinal VAEs [notes]

 

[SB] See Chap 9, 10, 11

Mon 12-05-2022

27. NeuralPMP: Reinforcement Learning with Stochastic Hamiltonian Dynamics and the Pontryagin Maximum Principle [arxiv]

 [Basar] See Lectures 1, 2, 3 

Final Project Report (Part II) due December 9, 11:59pm

Additional Material

 

Robust Sparse Recovery; Alternating Minimization  [notes2]

Non-convex Optimization : Projected Stochastic Policy Gradient [Notes] [Notes] [Notes]

Random Projections, Johnson-Lindenstrauss, Compressive Sensing, Sketching in Space-Time [notes]

Spectral Methods for Learning Dimension Reduction: KPCA, Eigenfaces and Fisherfaces [notes] [notes]; KSVM [notes]; Fisher LDA, KDA [notes]

Statistical Machine Learning: (a) Separating Mixtures of Gaussians [notes]; (b) Expectation Maximization [notes]

Some important Classical Machine Learning Background.

Additional Material

Robustness Guarantees for Bayesian Inference and Gaussian Processes [paper]

Risk Averse No Regret Learning for Convex Games [paper]

Some Theoretical Bounds on Bayesian Optimization and Reinforcement Learning.

Project FAQ

1. How long should the project report be?

Answer: See the directions in the Class Project List. For full points, please address each of the evaluation questions as succinctly as possible. Note that the deadline for the report is December 07, midnight. You will get feedback on your presentation, which should be incorporated into your final report.

Assignments, Exam, Final Project

There will be five take-home bi-weekly assignments, one in-class midterm exam and one take home final project (in lieu of a final exam). The important deadline dates are:

  • Midterm: Monday, October 24, 3:30pm - 5:00pm, BUR 2.220
  • Final Project Written Report, Due: December 07, 11:59pm

Assignments

There will be five written take-home HW assignments and one take-home final project report. Please refer to the above schedule for assignments and final project report due time.

Assignment solutions that are turned in late shall suffer a 10% per day reduction in credit, and a 100% reduction once solutions are posted.
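The late policy above can be sketched as a small helper. This is a hypothetical illustration, not an official grade calculator; the function name, its arguments, and the multiplicative reading of the per-day penalty (10% of the earned score lost per day late) are assumptions.

```python
def late_credit(score: float, days_late: int, solutions_posted: bool) -> float:
    """Apply the syllabus late policy to an assignment score.

    - 10% reduction in credit per day late, capped at 100%.
    - Zero credit once solutions have been posted.
    """
    if solutions_posted:
        return 0.0
    penalty = min(0.10 * days_late, 1.0)  # cap the total reduction at 100%
    return score * (1.0 - penalty)
```

For example, under these assumptions a 90-point assignment turned in two days late would earn 90 * 0.8 = 72 points, and nothing once solutions are up.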

Course Requirements and Grading

Grades will be based on these factors:

  • In-class attendance and participation (5%)
  • HW assignments (60%, with potential for extra credit)

Five assignments. Some assignments may include extra questions for extra points; these will be specified in each assignment sheet.

  • In-class midterm exam (15%) 
  • First Presentation & Report (10%)
  • Final Presentation & Report (15%)  

Students with Disabilities. Students with disabilities may request appropriate academic accommodations from the Division of Diversity and Community Engagement, Services for Students with Disabilities, 471-6259, http://www.utexas.edu/diversity/ddce/ssd.

 

Accommodations for Religious Holidays. By UT Austin policy, you must notify the instructor of your pending absence at least fourteen days prior to the date of observance of a religious holiday. If you must miss a class or an examination in order to observe a religious holiday, you will be given an opportunity to complete the missed work within a reasonable time before or after the absence, provided proper notification is given.

 

Statement on Scholastic Dishonesty. Anyone who violates the rules for the HW assignments or who cheats in in-class tests or the final exam is in danger of receiving an F for the course. Additional penalties may be levied by the Computer Science department, CSEM, and the University. See http://www.cs.utexas.edu/academics/conduct/
