Course Syllabus

This course is cross-listed with CSE392 (#63150).

Instructor:

Professor Chandrajit Bajaj

NOTE: Please do not send messages (questions or concerns) through Canvas, because I rarely check messages on Canvas. All questions related to class should be posted on Piazza or brought to office hours. You can register for Piazza via the registration link, or join via the Piazza tab on the Canvas course page.

Teaching Assistant

Eshan Balachandar

  • Office hours: Thursday 11:00 AM - 12:00 PM, GDC Basement, TA Station 3
  • Contact: eshan@cs.utexas.edu

Note: Please make office-hour reservations at least a day in advance to avoid conflicts.

Course Motivation and Synopsis

This course is on the geometric foundations of modern deep and reinforcement learning. In particular, we shall dive deep into the mathematical, statistical, and computational optimization fundamentals that are the basis of data-driven machine learning models (e.g., classification, clustering, generation, recommendation, prediction, forecasting) and Markov decision processes (single- and multi-player game playing, sequential and repeated forecasting). We shall thus learn how data-efficient and continuous action spaces are harnessed to learn the free-energy Hamiltonian underlying dynamical systems and multi-player games. These latter topics lead to the training of multiple neural networks (agents) that learn cooperatively and in adversarial scenarios to help solve computational problems better.

An initial listing of lecture topics is given in the syllabus below. This is subject to modification, given the background and speed at which we cover ground. Homework exercises will be given roughly bi-weekly. Late assignment submissions lose credit as described in the Late Policy below, and receive no credit once solutions are posted. There will be a mid-term exam in class; its content will be similar to the homework exercises. A list of topics will also be assigned as take-home final projects, to train, cross-validate, and test the best machine-learned decision-making agents. The projects will involve ML programming, an oral presentation, and a written report submitted at the end of the semester. The project will be graded and is in lieu of a final exam.

The course is aimed at junior and senior undergraduate students. Students in the 5-year master's programs, especially in CS, CSEM, ECE, STAT, and MATH, are welcome if they would like to bolster their foundational knowledge. You will need algorithms, data structures, numerical methods, and programming experience (e.g., Python) at the level of a CS senior; mathematics and statistics at the level of CS, Math, Stat, or ECE majors; plus linear algebra, computational geometry, introductory functional analysis, and combinatorial and numerical optimization (CS, ECE, CSEM, Stat, and Math students).

Late Policy

Submissions 1 day past the deadline receive a 25% deduction; 2 days past, a 50% deduction. Assignment solutions are revealed on the 3rd day, so submissions that are 3 or more days late receive a 100% deduction. For example, a solution that would earn 80 points receives 60 points if submitted one day late and 40 points if two days late.
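For concreteness, here is a minimal sketch of how the deduction schedule applies; the function name and scoring interface are illustrative only, not part of any course tooling:

```python
def late_adjusted_score(raw_score: float, days_late: int) -> float:
    """Apply the late policy: 25% deduction after 1 day, 50% after 2 days,
    and no credit from day 3 onward (when solutions are revealed)."""
    deduction = {0: 0.00, 1: 0.25, 2: 0.50}.get(days_late, 1.00)
    return raw_score * (1.0 - deduction)

# An 80-point solution: on time -> 80, 1 day late -> 60, 2 days -> 40, 3+ -> 0.
assert late_adjusted_score(80, 0) == 80.0
assert late_adjusted_score(80, 1) == 60.0
assert late_adjusted_score(80, 2) == 40.0
assert late_adjusted_score(80, 3) == 0.0
```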

Course Material.

  1. [B1] Chandrajit Bajaj, A Mathematical Primer for Computational Data Sciences (frequently updated)
  2. [PML1] Kevin Murphy, Probabilistic Machine Learning: An Introduction
  3. [PML2] Kevin Murphy, Probabilistic Machine Learning: Advanced Topics
  4. [BHK] Avrim Blum, John Hopcroft, and Ravindran Kannan, Foundations of Data Science
  5. [BV] Stephen Boyd and Lieven Vandenberghe, Convex Optimization
  6. [B] Christopher Bishop, Pattern Recognition and Machine Learning
  7. [M] Kevin Murphy, Machine Learning: A Probabilistic Perspective
  8. [SB] Richard Sutton and Andrew Barto, Reinforcement Learning
  9. [SD] Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms
  10. Extra reference materials

COURSE OUTLINE 

Each entry lists the date, lecture topic, assigned reading, and any assignment releases or deadlines.

Mon 01-13-2025
1. Introduction to Data Science, Geometry of Data, High-Dimensional Spaces, Belief Spaces [Lec1]
Reading: [BHK] Ch 1, 2; [PML1] Ch 1; Supplementary Notes [Note1]

Wed 01-15-2025
2. Learning High-Dimensional Linear Regression Models [Lec2]
Geometry of Vector, Matrix, and Functional Norms and Approximations (introductory functional analysis) [notes]
Reading: [SD] Ch 9, Appendix C; [BHK] Ch 12.2, 12.3
Assignment: [A1] out today, with a LaTeX solution template

Fri 01-17-2025
3. Learning Theory and Model Selection [Lec3]
Probability, Information, and Probabilistic Inequalities [notes]
Reading: [MU] Ch 1-3; [B] Ch 1; [PML1] Ch 2, 3, 4

Fri 01-24-2025
Help session for Assignment 1, via Zoom [link]

Mon 01-27-2025
4. Stochastic Machine Learning I: Cross, Conditional, and Relative Entropy [Lec4]
Log-Sum-Exponential Stability [notes]
Entropy-Based Uncertainty [SuppNotes]
Reading: [MU] Ch 4, 24.2; [BHK] Ch 12.4, 12.6

Wed 01-29-2025
5. Bayesian Deep Learning [Notes]
Reading: [M] Ch 3, 4

Fri 01-31-2025
TA Session
Assignment: [A2] released

Mon 02-03-2025
6. Quasi-Monte Carlo Methods, Integration Error, the H-K (Koksma-Hlawka) Inequality [notes]
Statistical Machine Learning using Monte Carlo and Quasi-Monte Carlo [Lec6]
Reading: [M] Ch 23, 24; [PML2] Ch 11

Wed 02-05-2025
7. Probabilistic Distribution Sampling in High-Dimensional Spaces [Lec5]
Concentration of Measure [notes]
Reading: [M] Ch 24; [PML2] Ch 12

Mon 02-10-2025
8. Transforming and Sampling Probability Distributions [lecture notes]
Normalizing Flow Slides [supp notes]
Reading: [BHK] Ch 4; [MU] Ch 7, 10

Wed 02-12-2025
9. Learning Dynamics I: Markov Chain Monte Carlo Sampling [Lec7]
MCMC and Bayesian Inference [Notes]
Learning Dynamics II: Random Walks [notes]
MCMC Demo [link]
Reading: [SD] Ch 24; [BV] Ch 1-5

Mon 02-17-2025
10. Optimization for Machine Learning I [notes]
SVM via Stochastic Gradient Optimization [notes]
Spectral Methods for Learning: KSVM [Supp Notes]
Reading: [BHK] Ch 2.7; [SD] Ch 23, 24

Wed 02-19-2025
11. Variations of Gradient Descent in Machine Learning: AdaGrad, RMSProp, Adam [notes]
Reading: [M] Ch 11
Assignment: [A3] out today

Mon 02-24-2025
12. Optimization for Machine Learning II: Constrained Optimization, KKT Conditions [notes]
Non-Convex Optimization 2: Projected Gradient Descent and Variations [notes]
Reading: [M] Ch 14

Wed 02-26-2025
13. Statistical Machine Learning I: Separating Mixtures of Gaussians via Expectation Maximization [notes]
Reading: [M] Ch 2, 5

Mon 03-03-2025
14. Statistical Machine Learning I (continued): Separating Mixtures of Gaussians via Expectation Maximization [notes]
Connections to MCMC and Variational Inference (VAE) [notes]
Reading: [M] Ch 4

Wed 03-05-2025
15. Johnson-Lindenstrauss and Compressive Sensing [notes]
Compressive Sensing and Optimization [notes]
Robust Sparse Recovery; Alternating Minimization [AMRR]
Reading: [M] Ch 15; [BHK] Ch 5

Mon 03-10-2025
16. Statistical Foundations of Generative Models (VAE, Flows, GANs, Diffusion) Learned from Data [Review]
Reading: [M] Ch 15; [PML2] Ch 20 (also read the introductions of Ch 21-26)

Wed 03-12-2025
17. Multi-Armed Bayesian Bandits [notes]
Reading: [PML2] Ch 34

Mon 03-24-2025
18. Matrix Sampling and Sketching [notes]
Reading: [M] Ch 14

Wed 03-26-2025
Midterm, in class

Mon 03-31-2025
19. Data Clustering with Hamiltonians [notes]
Reading: references cited in the notes; [PML2] Section 12.5.1 (a primer on Hamiltonian mechanics)

Wed 04-02-2025
20. Learning (Gradient Descent) Dynamics with Optimal Control [notes]
Non-Convex Projected Gradient Descent [notes/references]
Reading: [PML2] Ch 35
Assignment: [A4] out today

Mon 04-07-2025
21. Gaussian Process Regression [notes]
Reading: references cited in the notes; [PML1] Section 17.2
Project details released; the video and final report are due May 3

Wed 04-09-2025
22. The Role of Sensors and Optimal Sensor Fusion: Illustrated Kalman Filters [notes]
Reading: references cited in the notes; [PML2] Ch 8, primarily 8.1 and 8.2

Mon 04-14-2025
23. Game Theory Introduction: Strategic Decision Making
Reading: references cited in the notes

Wed 04-16-2025
24. Stochastic Matrix Games
Reading: references cited in [notes]

Mon 04-21-2025
25. Reinforcement Learning I: Optimal Control, Hamilton-Jacobi-Bellman Optimality Principle [notes]
Guided Policy Search [notes]
Reading: [SB] Ch 3

Wed 04-23-2025
26. Reinforcement Learning II: Learning with Trajectory Optimization, iLQR and iLQG [notes]

Mon 04-28-2025
27. Reinforcement Learning III: MDPs, POMDPs, the Bellman Equation, Policy Learning [notes]
Video and final report due May 3

Additional Material

Non-Convex Optimization, Projected Gradient Descent [Notes]
Statistical Machine Learning II: Bayesian Modeling [notes]
Statistical Machine Learning III: Bayesian Inference, Multivariate Gaussians [notes1] [notes]
Spectral Methods in Dimension Reduction: KPCA [notes]
Spectral Methods for Learning: Fisher LDA, KDA [notes]
Connections to Variational AutoEncoders [notes]
Statistical Machine Learning IV: Gaussian Processes [notes]
Stochastic Gradient Descent: Simulated Annealing, Fokker-Planck [notes]
Other Gradient Descent Methods (AdaGrad, RMSProp, Adam, ...) [notes]
Statistical Machine Learning V: Non-Gaussian Processes, Conjugate Priors [notes]
Principled Reinforcement Learning with Hamiltonian-Dynamics-PMP-OCF [notes]
Reward Reshaping with Optimal Control [notes]

Project FAQ

1. How long should the project report be?

Answer: See the directions in the Class Project List. For full points, please address each of the evaluation questions as succinctly as possible. You will get feedback on your presentations, which should also be incorporated into your final report.

Assignments, Exam, Final Project

There will be four take-home bi-weekly assignments,  one in-class midterm exam, and one take-home final project (in lieu of a final exam).  The important deadline dates are:

  • Midterm: March 26th, 3:30pm - 5:00pm, In Class
  • Final Project Written Report, Part 1, Due: April 20th, 11:59pm
  • Final Project Written Report, Part 2, Due: May 1st, 11:59pm

 

Assignments

There will be four written take-home HW assignments and one take-home final project report. Please refer to the schedule above for assignment and final project report due dates.

 

Extra Credit: Extra credit points accumulated on assignments will be applied to offset point deductions on future assignments.

Course Requirements and Grading

Grades will be based on these factors:

  • In-class participation (5%)
  • HW assignments (50%, with the potential for extra credit): four assignments; some may include extra questions for extra points, as specified in each assignment sheet
  • In-class midterm exam (15%)
  • First report (10%)
  • Final presentation video & report (20%)
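As a quick illustration of how these weights combine into a course percentage (a hedged sketch: the component keys and example scores below are hypothetical, not actual rubric values):

```python
# Weighted course grade from the components listed above.
WEIGHTS = {
    "participation": 0.05,       # in-class participation
    "homework": 0.50,            # HW assignments
    "midterm": 0.15,             # in-class midterm exam
    "first_report": 0.10,        # first report
    "final_video_report": 0.20,  # final presentation video & report
}

def course_grade(scores: dict) -> float:
    """Combine per-component percentages (0-100) into a final percentage."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical example scores:
print(course_grade({
    "participation": 100,
    "homework": 92,
    "midterm": 80,
    "first_report": 85,
    "final_video_report": 90,
}))  # -> 89.5
```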

Students with Disabilities. Students with disabilities may request appropriate academic accommodations from the Division of Diversity and Community Engagement, Services for Students with Disabilities, 471-6259, http://www.utexas.edu/diversity/ddce/ssd.

 

Accommodations for Religious Holidays. By UT Austin policy, you must notify the instructor of your pending absence at least fourteen days prior to the date of observance of a religious holiday. If you must miss a class or an examination in order to observe a religious holiday, you will be given an opportunity to complete the missed work within a reasonable time before or after the absence, provided proper notification is given.

 

Statement on Scholastic Dishonesty. Anyone who violates the rules for the HW assignments or who cheats on in-class tests or the final exam is in danger of receiving an F for the course. Additional penalties may be levied by the Computer Science Department, CSEM, and the University. See http://www.cs.utexas.edu/academics/conduct