Each student is required to scribe notes for a single lecture. Each scribe will prepare a LaTeX document, written in full prose with figures, that is understandable to a student who may have missed class. Please use this LaTeX template file and style file. All relevant files should be submitted to Vivek (vbagaria@stanford.edu) no later than 72 hours after the lecture.
Please sign up to scribe a specific lecture through this spreadsheet.
Part I: Information measures
Lecture 1: Historical context, introduction to entropy.
Lecture 2: Entropy, mutual information and the chain rule.
Lecture 3: Relative entropy and its nonnegativity.
Part II: Compression
Part III: Communication
Lecture 8: Introduction to communication over noisy channels and channel capacity.
Lecture 9: Channel capacity and the converse to the noisy channel coding theorem.
Lecture 10: Sphere packing view of the coding theorem.
Lecture 11: Achieving capacity via random coding.
Lecture 12: Achieving capacity efficiently: polar codes I (Slides for polar codes).
Lecture 13: Polar codes II.
Lecture 14: Gaussian channel and information measures for continuous variables I.
Lecture 15: Information measures for continuous variables II and entropy maximization.
Part IV: Machine learning and statistics
Lecture 16: Maximum entropy principle, exponential families and Gaussian graphical models.
Lecture 17: The Ising model and the maximum conditional entropy approach to supervised learning.
Lecture 18: Supervised learning II and a decision-theoretic interpretation of MAXENT.
Lecture 19: Information theory, machine learning and statistics: three views of the same coin.