Course Description
This course --- The Math of AI (Course 2 of 2): Deep Learning --- provides rigorous, in-depth coverage of the mathematics of deep learning. It begins with neural network basics. It then delves into deep reinforcement learning, covering Monte Carlo Tree Search, AlphaGo, AlphaZero, and AlphaTensor. Next it covers generative AI, in particular Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models. Finally, it covers generative language models, including Word2Vec, the attention mechanism, the Transformer, Large Language Models (LLMs), and Contrastive Language-Image Pretraining (CLIP).
Course Curriculum
- Neural Networks (Part 1): Linear Regression; Basic Neural Network Structure; A Brief History of Deep Learning. (35:10)
- Neural Networks (Part 2): Activation Functions; Loss Functions; Cross-Entropy; One-Hot Encoding. (33:25)
- Neural Networks (Part 3): Cross-Entropy Loss Explained; Gradient Descent; Stochastic Gradient Descent (SGD). (24:59)
- Neural Networks (Part 4): Backpropagation; Convolutional Neural Networks. (25:33)
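To give a taste of the first module, here is a minimal sketch in NumPy (the function names are illustrative, not taken from the course) combining one-hot encoding, the softmax cross-entropy loss, and a single gradient-descent step on the logits:

```python
import numpy as np

def one_hot(label, num_classes):
    # Encode an integer class label as a one-hot vector.
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def softmax(z):
    # Numerically stable softmax over a vector of logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(target, predicted):
    # Cross-entropy loss between a one-hot target and predicted probabilities.
    return -np.sum(target * np.log(predicted + 1e-12))

# Toy 3-class example: the true class is 1.
target = one_hot(1, 3)
logits = np.array([1.0, 2.0, 0.5])
probs = softmax(logits)
loss = cross_entropy(target, probs)

# One gradient-descent step: for softmax followed by cross-entropy, the
# gradient of the loss with respect to the logits is (probs - target).
lr = 0.1
new_logits = logits - lr * (probs - target)
new_loss = cross_entropy(target, softmax(new_logits))  # strictly smaller than loss
```

The closed-form gradient `probs - target` is exactly the simplification that makes the softmax/cross-entropy pairing so common in classification networks.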
- Deep Reinforcement Learning (Part 1): Introduction; Exploration vs. Exploitation; Monte Carlo Tree Search (MCTS). (25:29)
- Deep Reinforcement Learning (Part 2): Monte Carlo Tree Search (MCTS) Worked Example. (24:18)
- Deep Reinforcement Learning (Part 3): AlphaGo Introduction; SL Network and Rollout Network Training Equations. (24:20)
- Deep Reinforcement Learning (Part 4): More on AlphaGo; RL Network and Value Network Training Equations. (15:45)
- Deep Reinforcement Learning (Part 5): AlphaGo Zero. (17:19)
- Deep Reinforcement Learning (Part 6): AlphaTensor. (27:07)
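The selection step of MCTS is typically driven by an upper-confidence rule (UCT), which balances exploitation of high-value moves against exploration of rarely visited ones. A minimal sketch, with an assumed exploration constant `c = 1.4` (the constant and function name are illustrative):

```python
import math

def uct_score(child_value_sum, child_visits, parent_visits, c=1.4):
    # UCT: mean value (exploitation) plus an exploration bonus that
    # shrinks as a child accumulates visits.
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploitation = child_value_sum / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

# Example: pick among three children of a node visited 10 times.
stats = [(3.0, 5), (1.0, 2), (0.0, 0)]  # (value_sum, visits) per child
scores = [uct_score(v, n, 10) for v, n in stats]
best = max(range(len(scores)), key=scores.__getitem__)  # the unvisited child wins
```

Note that child 1, despite a lower mean value than child 0, outscores it among the visited children because its exploration bonus is larger; this is the exploration/exploitation trade-off the first lecture introduces.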
- Generative Adversarial Networks (Part 1): Discriminator vs Generator; Objective function; and KL Divergence. (23:47)
- Generative Adversarial Networks (Part 2): Jensen-Shannon Metric; Analyzing the GAN Objective. (16:14)
- Variational Autoencoders (Part 1): Encoder-Decoder; Reparametrization Trick; Bayesian Inference. (26:07)
- Variational Autoencoders (Part 2): Likelihood Probability; Evidence Lower Bound (ELBO); VAE Objective. (25:07)
- Diffusion Models (Part 1): Forward & Reverse Processes. (17:40)
- Diffusion Models (Part 2): The LeapFrog Property. (23:02)
- Diffusion Models (Part 3): DDPM Objective via KL Divergence of forward and reverse processes. (18:57)
- Diffusion Models (Part 4): Deriving the DDPM Objective (contd). (13:28)
- Diffusion Models (Part 5): Gaussian Forms. (21:12)
- Diffusion Models (Part 6): Deriving the Mean and Variance of the Forward Process Posterior. (22:52)
- Diffusion Models (Part 7): DDPM Loss Function Assuming Fixed Variance. (16:56)
- Diffusion Models (Part 8): DDPM Simple Loss; DDPM Training and Sampling Pseudocodes. (17:40)
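For reference, two of the central objectives this module derives, written as they commonly appear in the literature:

```latex
% GAN minimax objective (discriminator D vs. generator G):
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\!\big[\log\big(1 - D(G(z))\big)\big]

% VAE evidence lower bound (ELBO) on the log-likelihood:
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\big\|\, p(z)\big)
```

The GAN lectures analyze the minimax objective via the KL and Jensen-Shannon divergences; the VAE lectures derive the ELBO and show how the reparametrization trick makes it trainable by gradient descent.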
All Courses by Dr. Stephen G. Odaibo