Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 193-195). Oxford University Press.

In this section I will show that, for a system specified by the differential equation \[ Q(D)\, y(t) = P(D)\, x(t) , \]

its transfer function is \(H(s)\), the bilateral Laplace transform of the system's unit impulse response \(h(t)\), and that \(H(s)\) also satisfies \[ \color{teal} {H(s)=\frac{P(s)}{Q(s)}} . \]
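
The second identity follows from the standard eigenfunction argument (sketched here for completeness, not quoted from the book): by the convolution form of the zero-state response, the input \(x(t)=e^{st}\) yields \( y(t)=\int h(\tau)\,e^{s(t-\tau)}\,d\tau = H(s)\,e^{st} \), where \(H(s)\) is the bilateral Laplace transform of \(h(t)\). Substituting this pair into the differential equation and using \(D^k e^{st}=s^k e^{st}\) gives \[ Q(s)\,H(s)\,e^{st}=P(s)\,e^{st} \quad\Longrightarrow\quad H(s)=\frac{P(s)}{Q(s)} . \]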

Read more »

Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 151-162). Oxford University Press.
Read more »

Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 168-195). Oxford University Press.

This section is devoted to the determination of the zero-state response (see my past post) of an LTIC system. We shall assume that the systems discussed in this section are in the zero state unless mentioned otherwise.
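
As a minimal numerical sketch of the idea (the system and input below are my own illustrative choices, not taken from the book): the zero-state response is the convolution \(y(t)=h(t)*x(t)\), which can be approximated on a time grid.

```python
import numpy as np

# Zero-state response as a convolution: y(t) = h(t) * x(t).
# Illustrative choices: h(t) = e^{-t} u(t), x(t) = u(t),
# for which the exact answer is y(t) = 1 - e^{-t}.
dt = 1e-3
t = np.arange(0, 10, dt)
h = np.exp(-t)          # sampled impulse response e^{-t} u(t)
x = np.ones_like(t)     # sampled unit-step input u(t)

# A Riemann-sum approximation of the convolution integral
# needs the dt factor to play the role of d(tau).
y = np.convolve(x, h)[: len(t)] * dt

print(np.max(np.abs(y - (1 - np.exp(-t)))))  # small discretization error
```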

Read more »

Sources:

  1. B. P. Lathi & Roger Green. (2021). Chapter 2: Time-Domain Analysis of Continuous-Time Systems. Signal Processing and Linear Systems (2nd ed., pp. 168-195). Oxford University Press.

Stability is an important system property. Two types of system stability are generally considered: external (BIBO) stability and internal (asymptotic) stability. Let us consider both stability types in turn.
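
As a small numerical illustration (the polynomials here are my own examples, not the book's): internal (asymptotic) stability requires every characteristic root of \(Q(s)\) to lie in the open left half-plane, which is easy to check with a root finder.

```python
import numpy as np

# Asymptotic stability check: all characteristic roots of Q(s)
# must have strictly negative real parts.
def is_asymptotically_stable(q_coeffs):
    """q_coeffs: coefficients of Q(s), highest power first."""
    return bool(np.all(np.roots(q_coeffs).real < 0))

print(is_asymptotically_stable([1, 3, 2]))  # s^2 + 3s + 2: roots -1, -2 -> True
print(is_asymptotically_stable([1, 0, 4]))  # s^2 + 4: roots +/- 2j -> False
```

Roots on the imaginary axis, as in the second example, make the system at best marginally stable, not asymptotically stable.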

Read more »

Sources:

  1. For MoCo:
    1. MoCo v1 2019 paper
    2. MoCo v2 2020 paper
    3. Paper explained — Momentum Contrast for Unsupervised Visual Representation Learning [MoCo] by Nazim Bendib
  2. For BYOL:
    1. BYOL 2020 paper
    2. Neural Networks Intuitions: 10. BYOL- Paper Explanation by Raghul Asokan
    3. Understanding self-supervised and contrastive learning with "Bootstrap Your Own Latent" (BYOL) by imbue

MoCo and BYOL are two well-known self-supervised representation learning frameworks (MoCo is explicitly contrastive, while BYOL works without negative pairs). They introduce some interesting designs:

  1. Using an online network and a separate target network to mitigate the collapse problem in contrastive learning (all inputs mapping to the same representation).
  2. Using the same architecture for the two networks while training only the online network; the target network is updated as an EMA (exponential moving average) of the online network's weights, as sketched below.
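
A minimal sketch of that momentum (EMA) update (`torch.nn.Linear` stands in for the real encoder; MoCo's default momentum is m = 0.999, while BYOL ramps its value toward 1 during training):

```python
import copy
import torch

# The target network starts as a copy of the online network
# and is never trained directly.
online = torch.nn.Linear(128, 64)   # stand-in for the real encoder
target = copy.deepcopy(online)
for p in target.parameters():
    p.requires_grad = False

@torch.no_grad()
def ema_update(online, target, m=0.999):
    # target <- m * target + (1 - m) * online, parameter by parameter
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1 - m)

# Called once per training step, after the online network's optimizer step.
ema_update(online, target)
```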

This design is heavily used in follow-up research such as DINO, so it is worth learning about.

You may want to read my post on SimCLR first to get a deeper understanding of contrastive learning.

Read more »