Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 168-195). Oxford University Press.

This section is devoted to determining the zero-state response (see my past post) of an LTIC system. We shall assume that the systems discussed in this section are in the zero state unless mentioned otherwise.
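As a quick numerical illustration (mine, not from the book), the zero-state response is the convolution of the input with the system's impulse response. Here I assume a hypothetical first-order system with h(t) = e^(-t)u(t) driven by a unit step:

```python
import numpy as np

# Zero-state response y(t) = x(t) * h(t), approximated on a uniform grid.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
x = np.ones_like(t)                    # unit step input u(t)
h = np.exp(-t)                         # impulse response e^{-t} u(t)
y = np.convolve(x, h)[: len(t)] * dt   # discrete convolution scaled by dt

# Analytically, the zero-state response here is 1 - e^{-t}.
```

The `dt` factor converts the discrete convolution sum into an approximation of the convolution integral.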

Read more »

Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 168-195). Oxford University Press.

Stability is an important system property. Two types of system stability are generally considered: external (BIBO) stability and internal (asymptotic) stability. Let us consider both stability types in turn.
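As a small aside (my sketch, not from the book): BIBO stability requires the impulse response to be absolutely integrable, which can be checked numerically. Here I assume a hypothetical first-order system with h(t) = e^(-t)u(t):

```python
import numpy as np

# BIBO stability check: is the impulse response absolutely integrable?
# Example system (assumed): h(t) = e^{-t} u(t).
dt = 1e-4
t = np.arange(0.0, 50.0, dt)
h = np.exp(-t)
area = np.sum(np.abs(h)) * dt  # Riemann approximation of the integral of |h(t)|
# area is finite (about 1), so this system is BIBO stable
```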

Read more »

Sources:

  1. For MoCo:
    1. MoCo v1 2019 paper
    2. MoCo v2 2020 paper
    3. Paper explained — Momentum Contrast for Unsupervised Visual Representation Learning [MoCo] by Nazim Bendib
  2. For BYOL:
    1. BYOL 2020 paper
    2. Neural Networks Intuitions: 10. BYOL- Paper Explanation by Raghul Asokan
    3. Understanding self-supervised and contrastive learning with "Bootstrap Your Own Latent" (BYOL) by imbue

MoCo and BYOL are both well-known contrastive learning and self-supervised learning frameworks. They introduce some interesting designs:

  1. Using an online network and a slowly updated target network to mitigate the collapse problem in contrastive learning.
  2. Using the same architecture for these two networks while training only the online network. The target network is updated via an EMA (exponential moving average) of the online network's weights.
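The EMA update in point 2 can be sketched in a few lines. This is a minimal illustration with plain numpy vectors standing in for the two networks' parameters (the variable names and momentum value are my assumptions, not from the papers' code):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_online = rng.normal(size=5)       # trainable online-network weights
theta_target = theta_online.copy()      # target starts as a copy of online

def ema_update(theta_target, theta_online, m=0.99):
    # theta_target <- m * theta_target + (1 - m) * theta_online
    # m is the momentum; larger m means the target moves more slowly.
    return m * theta_target + (1 - m) * theta_online

# After a (pretend) gradient step changes the online weights...
theta_online = theta_online + 0.1
theta_target = ema_update(theta_target, theta_online)
```

Because only `theta_online` receives gradients, the target network provides a stable, slowly evolving set of representations to match against.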

This design is heavily used in later research such as DINO and is worth learning about.

You may want to read my post on SimCLR first to get a deeper understanding of contrastive learning.

Read more »

Deep Q-learning, or deep Q-network (DQN), is one of the earliest and most successful algorithms to introduce deep neural networks into RL.

DQN is not Q-learning, at least not the specific Q-learning algorithm introduced in my post, but it shares the core idea of Q-learning.
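The core idea DQN borrows from Q-learning is the bootstrapped TD target. A minimal sketch of computing that target for a batch of transitions (array names and values are my own illustration; a real implementation would get `q_next` from a target network):

```python
import numpy as np

def dqn_target(reward, q_next, done, gamma=0.99):
    # y = r + gamma * max_a' Q(s', a'), with no bootstrap at terminal states.
    # q_next has shape (batch, num_actions); done is 1.0 for terminal s'.
    return reward + gamma * (1.0 - done) * q_next.max(axis=1)

# Batch of 2 transitions; the second ends the episode.
reward = np.array([1.0, 0.0])
q_next = np.array([[0.5, 2.0],
                   [1.0, 3.0]])
done = np.array([0.0, 1.0])
y = dqn_target(reward, q_next, done)
# -> [1 + 0.99 * 2, 0] = [2.98, 0.0]
```

The network is then trained to regress its Q-value predictions toward `y`, which is exactly the Q-learning update expressed as a supervised loss.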

Sources:

  1. Shiyu Zhao. Chapter 8: Value Function Approximation. Mathematical Foundations of Reinforcement Learning.
  2. DQN 2013 paper
  3. Reinforcement Learning Explained Visually (Part 5): Deep Q Networks, step-by-step by Ketan Doshi
  4. My GitHub repo for DQN

Read more »

Sources:

  1. SimCLR v1 2020 paper
  2. SimCLR v2 2020 paper
  3. Contrastive Representation Learning by Lilian
  4. UVA's SimCLR implementation (both PyTorch and JAX versions are implemented)

Read more »