Fa25 - PREDICTIVE MACHINE LEARNING (55390)

This course is cross-listed with CSE392 (#70025) and M393C (#59290).

Instructor:

Professor Chandrajit Bajaj

  • Lecture hours: Mon, Wed 1:00 - 2:30 pm, ETC 2.132. If online, join via the Zoom panel.
  • Office hours: Tue 1:00 - 3:00 pm, or by appointment (Zoom or POB 2.324)
  • Contact: bajaj@cs.utexas.edu, bajaj@oden.utexas.edu

NOTE: All class-related questions should be posted through Piazza. You can register for Piazza via the Piazza tab on the Canvas course page.

Teaching Assistant

Shubham Bhardwaj

Note: Please try to make reservations at least a day in advance to avoid conflicts.

Course Motivation and Synopsis

The Fall Predictive Machine Learning course will teach you the latest on reinforcement-learned, risk-averse stochastic decision-making processes useful in diverse dynamical environments. These stochastic, machine-learned decision processes are trained, verified, and validated on signals and information filtered from noisy observation data distributions collected from various multi-scale dynamical systems. The principal performance metrics will be online and energy-efficient training, verification, and validation protocols that achieve principled and stable learning for maximal generalizability. The emphasis will be on possibly corrupted data and/or the lack of full information available to the learned stochastic decision-making process. Special emphasis will also be given to the underlying mathematical and statistical-physics principles of free energy and stochastic Hamiltonian dynamics. Students shall thus be exposed to the latest stochastic machine-learning modeling approaches for optimized decision-making, multi-player games involving stochastic dynamical systems, and optimal stochastic control. These latter topics are foundational to training multiple neural networks (agents), both cooperatively and in adversarial scenarios, to optimize the learning process of all the agents.

An initial listing of lecture topics and reference material is given in the syllabus below. This is subject to some modification, depending on the background of the class and the speed at which we cover ground. Homework exercises shall be given roughly bi-weekly. Assignment solutions that are turned in late shall suffer a 10% per day reduction in credit, and a 100% reduction once solutions are posted. There will be an in-class mid-term exam whose content will be similar to the homework exercises. A list of topics will also be assigned as take-home final projects for training scientific machine-learned decision-making agents. The projects will involve modern ML programming, an oral presentation, and a written report submitted at the end of the semester.

This project shall be graded and be in lieu of a final exam.

The course is open to graduate students in all disciplines. Students in the 5-year master's programs, and those in CS, CSEM, ECE, MATH, STAT, PHYS, CHEM, and BIO, are welcome. You will need an undergraduate-level background in the intertwined topics of algorithms, data structures, numerical methods, numerical optimization, functional analysis, algebra, geometry, topology, statistics, and stochastic processes. You will also need programming experience (e.g., Python) at the level of a CS undergraduate senior.

 

Course Reference Material (+ reference papers cited in lectures )

  1. [B1] Chandrajit Bajaj, A Mathematical Primer for Computational Data Sciences (frequently updated)
  2. [PML1] Kevin Murphy, Probabilistic Machine Learning: An Introduction
  3. [PML2] Kevin Murphy, Probabilistic Machine Learning: Advanced Topics
  4. [M1] Peter S. Maybeck, Stochastic Models, Estimation and Control, Volume 1
  5. [M2] Peter S. Maybeck, Stochastic Models, Estimation and Control, Volume 2
  6. [M3] Peter S. Maybeck, Stochastic Models, Estimation and Control, Volume 3
  7. [MU] Michael Mitzenmacher and Eli Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis
  8. [SB] Richard Sutton and Andrew Barto, Reinforcement Learning
  9. [SD] Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms
  10. [Basar] Tamer Basar, Lecture Notes on Non-Cooperative Game Theory
  11. [BHK] Avrim Blum, John Hopcroft, and Ravindran Kannan, Foundations of Data Science
  12. [BV] Stephen Boyd and Lieven Vandenberghe, Convex Optimization
  13. [DSML] Qianxiao Li, Dynamical Systems and Machine Learning
  14. Extra reference materials.

TENTATIVE COURSE OUTLINE (in flux).

(Each entry below lists the date, lecture topic, reading, and any assignments.)

Module 1: Foundations of Stochastic Processes & Dynamical Systems

Mon 08-25-2025
Lecture 1. Foundations of Sequential Learning and Estimation [Lec1] [colab]
Reading: [M1] Ch 1, 2, 3, 4; [DSML] Dynamical Systems and Deep Learning [slides]

 

Wed 08-27-2025
Lecture 2: From Bayesian Thinking to the Kalman Filter. Grounding State Estimation in Bayesian Filtering [Lec2]
Reading: [M1] Ch 3; 2.2 Geometry of Norms and Approximations [notes]
Assignments: [A1] out today, with [latex template] and [style.sty]
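For orientation, here is a minimal numpy sketch of the linear Kalman filter predict/update cycle named in this lecture. It is illustrative only and not course-provided code; the constant-velocity model and noise values below are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = 1e-3 * np.eye(2)                    # process noise covariance
    R = np.array([[0.05]])                  # measurement noise covariance

    x, P = np.array([0.0, 1.0]), np.eye(2)  # prior mean and covariance
    x_true = np.array([0.0, 1.0])

    for _ in range(100):
        # simulate the true system and one noisy measurement
        x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

        # predict: push the belief through the dynamics
        x, P = F @ x, F @ P @ F.T + Q

        # update: correct the belief with the measurement (Bayes' rule for Gaussians)
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("final position estimate:", x[0], " true position:", x_true[0])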

Wed 09-03-2025
Lecture 3: Nonlinear Kalman Filtering: EKF, UKF [lect 3]
Reading: [M1] Ch 3; [PML2] Ch 18; Probability Primer

Mon 09-08-2025
Lecture 4: Mathematical Foundations of Bayesian Filtering - Kushner-Stratonovich and Zakai Equations [lec]
Reading: refer to the references in the lecture
Module 2: Sequential Models & Filtering

 

 

Wed 09-10-2025
Lecture 5: Sequential Monte Carlo Methods - Particle Filter, Sequential Importance Sampling, Rao-Blackwellized Particle Filter [lec]
Reading: [PROBABILITY PRIMER 2 - ADVANCED]
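For orientation, here is a minimal numpy sketch of a bootstrap particle filter (sequential importance sampling with resampling), one of the methods named in this lecture, on a scalar nonlinear model. It is illustrative only and not course-provided code; the model and parameters are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 500, 50                        # number of particles, time steps
    proc_std, obs_std = 1.0, 1.0

    def f(x, t):                          # nonlinear state transition
        return 0.5 * x + 25.0 * x / (1.0 + x**2) + 8.0 * np.cos(1.2 * t)

    def h(x):                             # nonlinear observation map
        return x**2 / 20.0

    # simulate a reference trajectory and its noisy observations
    x_true, ys = 0.0, []
    for t in range(T):
        x_true = f(x_true, t) + proc_std * rng.normal()
        ys.append(h(x_true) + obs_std * rng.normal())

    particles = rng.normal(0.0, 1.0, size=N)
    for t, y in enumerate(ys):
        # propagate particles through the dynamics (proposal = prior)
        particles = f(particles, t) + proc_std * rng.normal(size=N)
        # reweight by the observation likelihood (log-space for stability), then normalize
        logw = -0.5 * ((y - h(particles)) / obs_std) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # multinomial resampling keeps the particle set focused on likely states
        particles = particles[rng.choice(N, size=N, p=w)]

    print("posterior mean at final step:", particles.mean())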

 

 

 

Mon 09-15-2025
Lecture 6. Stochastic Dynamics and Smart Optimization - SGLD [lec] [colab notebook]
Reading: refer to the references in the lecture
Assignments: [A2] released - [A2 pdf] [A2 latex] [tensor.npy]
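For orientation, here is a minimal numpy sketch of stochastic gradient Langevin dynamics (SGLD), the method named in this lecture: a mini-batch gradient step plus injected Gaussian noise, used here to sample the posterior over a 1-D Gaussian mean. It is illustrative only and not the assignment code; the toy model and step size are invented for the example.

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(2.0, 1.0, size=1000)   # observations with unit variance, true mean = 2
    N, batch = len(data), 32
    theta, eps = 0.0, 1e-3                   # parameter and SGLD step size
    samples = []

    for it in range(5000):
        xb = data[rng.choice(N, batch, replace=False)]
        # gradient of log prior N(0, 10^2) plus the rescaled mini-batch log-likelihood gradient
        grad = -theta / 100.0 + (N / batch) * np.sum(xb - theta)
        # SGLD update: half-step along the gradient plus Gaussian noise of variance eps
        theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
        if it > 1000:                        # discard burn-in
            samples.append(theta)

    print("posterior mean estimate:", np.mean(samples))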

 

Wed 09-17-2025
Lecture 7. From Random Walks to Ballistic Exploration - SGHMC [lec] [colab notebook]
Reading: refer to the references in the lecture

 

Mon 09-22-2025
Lecture 8. Dynamic Mode Decomposition - A Mathematical Primer [DMD - Dynamic Mode Decomposition]
Reading: refer to the references in the lecture

Wed 09-24-2025
Lecture 9. The Transport View of Optimization - Why Your Optimizer is Moving Probability Mass [lec]
Reading: refer to the references in the lecture

 

Module 3: Stochastic Optimization & Variational Methods

Assignments: [A2] due Sep 28, midnight

 

Mon 09-29-2025
Lecture 10. MCMC Foundations - From Random Walk to Hamiltonian Flow: The Journey from Discrete Jumps to Continuous Dynamics [lec]
Reading: refer to the references in the lecture

Wed 10-01-2025
Lecture 10 (contd). MCMC Foundations - From Random Walk to Hamiltonian Flow: The Journey from Discrete Jumps to Continuous Dynamics [lec]

Mon 10-06-2025
Lecture 11. Riemannian MCMC - Geometry-Aware Sampling: From Constant to Position-Dependent Metrics in MCMC [lec]
Reading: refer to the references in the lecture

 

Tue 10-07-2025
Assignments: Assignment 3 released - [A3 latex] [A3 pdf]

 

Wed 10-08-2025
Lecture 12: The Density Formulation of Riemannian MCMC - Why Γ(θ) is Inevitable: From Particles to Densities to Geometry [lec]
Reading: refer to the references in the lecture

 

 

Mon 10-13-2025
Lecture 13: From Geometric Sampling to Geometric Learning - Volume-Preserving Transformers and the Universal Role of Geometric Structure [lec]
Reading: refer to the references in the lecture
Assignments: [A1 solutions] [A2 solutions]

 

Module 4: Manifolds, Hamiltonians, & Learning Dynamics

 

 

 

Wed 10-15-2025
Lecture 14: The Unified Learning Theory - Geometric Stochastic Navigation on Manifolds [lec]
Reading: refer to the references in the lecture

 

Mon 10-20-2025
Lecture 15: The Friction Knob - Understanding Your Optimizer: Connections to the Kramers Equation [lec] [colab notebook]
Reading: refer to the references in the lecture

 

Wed 10-22-2025
Review of Lectures 1 - 15
Mock midterm [pdf]
Assignments: [A3 solutions]

 

 

Mon 10-27-2025
Lecture 15 (extra): Low Discrepancy (Importance Sampling) with Applications [lec]

 

 

 

Wed 10-29-2025
MIDTERM
Module 5: Reinforcement Learning & Inverse Problems

 

 

 

Mon 11-03-2025
Lec 16 - From Estimation to Control — Foundations of Optimal Control: The Bellman Principle and Dynamic Programming [lec]
Reading: see references in the lecture
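For orientation, here is a minimal sketch of the Bellman optimality recursion, solved by value iteration on a toy 5-state chain. It is illustrative only and not course material; the MDP (states, actions, rewards, discount) is invented for the example.

    import numpy as np

    n, gamma = 5, 0.9                          # 1-D chain of 5 states, discount factor
    goal = n - 1                               # rightmost state is the absorbing goal
    V = np.zeros(n)

    for sweep in range(100):
        V_new = np.zeros(n)
        for s in range(n):
            if s == goal:                      # value of the absorbing goal stays 0
                continue
            q_values = []
            for step in (-1, +1):              # actions: move left or right
                s_next = min(max(s + step, 0), n - 1)
                reward = 1.0 if s_next == goal else 0.0
                # Bellman optimality: V(s) = max_a [ r(s,a) + gamma * V(s') ]
                q_values.append(reward + gamma * V[s_next])
            V_new[s] = max(q_values)
        if np.max(np.abs(V_new - V)) < 1e-8:   # stop when the value function has converged
            break
        V = V_new

    print("optimal values:", np.round(V, 3))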

 

 

Wed 11-05-2025
Lec 17 - Extensions of Optimal Control: Soft Bellman, Belief-State Control, and Integration [lec]
Reading: see references in the lecture
Assignments: Project template - [pdf] [latex]

 

Mon 11-10-2025
Lec 18 - Estimation and Control — LQR/LQG and EnKF with Control Separation Principle, Riccati and Kalman, and Constrained EnKF+MPC [lec]


 

 

Wed 11-12-2025
Presentations:
1. Thribhuvan Rapolu - Port-Hamiltonian Neural ODE Networks on Lie Groups for Robot Dynamics Learning and Control
2. Zhiyi Chen - SeaThru-NeRF: Neural Radiance Fields in Scattering Media
3. Lang Lin - Swin Transformer V2
Lec 19 - Continuous Control — The Hamilton-Jacobi-Bellman Equation [lec] (Part 1)
Reading: see references in the lecture

 

 

 

 

Mon 11-17-2025
Presentations:
1. Calihan - Wasserstein Hamiltonian Flows
2. Allan Zhou - Distributional Policy Optimization: An Alternate Approach to Continuous Control
3. Genhui Zhang - Parameterized Wasserstein Hamiltonian Flows
Lec 20 - Continuous Control — The Temperature Knob: Soft HJB and Z-Transform Linearization [lec] (Part 2)
Reading: see references in the lecture
Final Project Assignment details [here]

 

Wed 11-19-2025
Presentations:
1. David Swanson - Conditional Neural Processes
2. Guatam Rao - DNBP: Differentiable Nonparametric Belief Propagation
3. Zachary Richey - Simpler Flag Optimization
Lec 21: In-Context Learning I

 

 

Mon 11-24-2025
Lec 22: Hamiltonian Learning with PMP (In-Context Learning II)
Final Project Phase I is due Nov 24

 

Mon 12-01-2025
Presentations:
1. Yang Zhao - GUD: Generation with Unified Diffusion
2. David Bockelman - Motion Code: Robust Time Series Classification and Forecasting via Sparse Variational Stochastic Process Learning
3. Jake Wellington - Multi-Task Learning for Stochastic Interpolants
Lec 23: Hamiltonian Learning with PMP for Inverse Problems (In-Context Learning III)

Wed 12-03-2025
Lec 23: Hamiltonian Learning with PMP for Inverse Problems (In-Context Learning III)

 

Mon 12-08-2025

Addtl. Material

Some important Machine Learning background:
Probability, Information and Probabilistic Inequalities [notes]
Log-Sum-Exponential Stability [notes]
[PML1] Ch 4.1, 4.2, 4.5, 4.7, 6.1, 6.2
[PML2] Ch 3.3, 3.8, 5.1, 5.2
[PML1] Ch 3.2, 5.2

Final Project Phase II is due Dec 12

Addtl. Material

Learning by Random Walks on Graphs [notes-BHK]
The Markov-chain Monte Carlo Interactive Gallery
Wasserstein Gradient Flows and the Fokker-Planck Equation [notes] [not present]
Learning Dynamics with Stochastic Processes [notes]

Important Topics on Bayesian and Riemannian Manifold Optimization and Reinforcement Learning.

 

Project FAQ

1. How long should the project report be?

Answer: See directions in the Project section in assignments. For full points, please address each of the evaluation questions as succinctly as possible. You will get feedback on your presentations, which should also be incorporated into your final report.

Assignments, Exam, Final Project, and Presentation

There will be four take-home bi-weekly assignments, one in-class midterm exam, one take-home final project (in lieu of a final exam), and one presentation based on your project progress. The important deadline dates are:

  • Midterm: Wednesday, October 29, in class.
  • Final Project Written Report, Part 1 (Phase I): Due Monday, November 24, 11:59 pm.
  • Final Project Written Report and Presentation Video (Phase II): Due Friday, December 12, 11:59 pm.

Assignments

There will be four written take-home HW assignments and one take-home final project report. Please refer to the above schedule for assignments and the final project report due time.

Assignment solutions that are turned in late shall suffer a 10% per day reduction in credit and a 100% reduction once solutions are posted.

Course Requirements and Grading

Grades will be based on these factors:

  • In-class attendance and participation (5%)
  • HW assignments (40%, with the potential for extra credit) 

There are four assignments. Some may include extra questions for additional points; these will be specified in each assignment sheet.

  • In-class midterm exam (15%) 
  • First Presentation & Report (10%)
  • Final Presentation & Report (30%)  

Students with Disabilities. Students with disabilities may request appropriate academic accommodations from the Division of Diversity and Community Engagement, Services for Students with Disabilities, 471-6259, http://www.utexas.edu/diversity/ddce/ssd . 

 

Accommodations for Religious Holidays. By UT Austin policy, you must notify the instructor of your pending absence at least fourteen days prior to the date of observance of a religious holiday. If you must miss a class or an examination in order to observe a religious holiday, you will be given an opportunity to complete the missed work within a reasonable time before or after the absence, provided proper notification is given.

 

Statement on Scholastic Dishonesty. Anyone who violates the rules for the HW assignments or who cheats on in-class tests or the final exam is in danger of receiving an F for the course. Additional penalties may be levied by the Computer Science department, CSEM, and the University. See http://www.cs.utexas.edu/academics/conduct

Public Domain This course content is offered under a Public Domain license. Content in this course can be considered under this license unless otherwise noted.