Existence of the Fourier Series

Sources:

B. P. Lathi & R. Green (2018). Chapter 6: Continuous-Time Signal Analysis: The Fourier Series. Signal Processing and Linear Systems (3rd ed., pp. 612-620). Oxford University Press.

For the existence of the Fourier series, the coefficients \(a_0\), \(a_n\), and \(b_n\) in Eq. (6.8) must be finite. It follows from Eq. (6.8) \[ a_0=\frac{1}{T_0} \int_{T_0} x(t) d t, \quad a_n=\frac{2}{T_0} \int_{T_0} x(t) \cos n \omega_0 t d t, \quad \text { and } \quad b_n=\frac{2}{T_0} \int_{T_0} x(t) \sin n \omega_0 t d t \]

that the existence of these coefficients is guaranteed if \(x(t)\) is absolutely integrable over one period; that is, \[ \int_{T_0}|x(t)| d t<\infty \]
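As a quick numerical check of this condition and of Eq. (6.8), the following is a minimal Python sketch. It assumes the square wave of Fig. 6.6a (value 1 for \(|t|<\pi/2\) and 0 elsewhere over a period \(T_0=2\pi\)); the grid size and the simple Riemann-sum integration are implementation choices, not anything prescribed by the text.

```python
import numpy as np

T0 = 2 * np.pi                    # period of the wave
w0 = 2 * np.pi / T0               # fundamental frequency (here w0 = 1)

# Assumed square wave of Fig. 6.6a: 1 for |t| < pi/2, 0 elsewhere in a period
t = np.linspace(-T0 / 2, T0 / 2, 200000, endpoint=False)
dt = t[1] - t[0]
xt = np.where(np.abs(t) < np.pi / 2, 1.0, 0.0)

# Absolute integrability over one period, Eq. (6.16): a finite number (pi here)
print("int |x| dt =", np.sum(np.abs(xt)) * dt)

# Fourier coefficients of Eq. (6.8) by a simple Riemann sum
a0 = np.sum(xt) * dt / T0
print("a0 =", a0)                                  # -> 0.5
for n in (1, 2, 3):
    an = 2 / T0 * np.sum(xt * np.cos(n * w0 * t)) * dt
    bn = 2 / T0 * np.sum(xt * np.sin(n * w0 * t)) * dt
    print(f"a{n} = {an:.4f}, b{n} = {bn:.4f}")     # a1 ~ 2/pi, a3 ~ -2/(3pi)
```

The computed values match the analytic coefficients of this wave (\(a_1=2/\pi\), \(a_3=-2/3\pi\), all \(b_n=0\) by even symmetry), confirming that absolute integrability delivers finite coefficients.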

However, existence by itself does not tell us about the nature of the convergence or the manner in which the series converges. We shall first discuss the notion of convergence.

Convergence of a Series

The key to many puzzles lies in the nature of the convergence of the Fourier series. Convergence of infinite series is a complex problem. It took mathematicians several decades to understand the convergence aspect of the Fourier series. We shall barely scratch the surface here.

Nothing annoys a student more than a discussion of convergence. "Have we not proved," they ask, "that a periodic signal \(x(t)\) can be expressed as a Fourier series?" Then why spoil the fun with this annoying discussion? All we have shown so far is that a signal represented by the Fourier series in Eq. (6.1) is periodic. We have not proved the converse: that every periodic signal can be expressed as a Fourier series. This issue will be tackled later, in Sec. 6.5-4, where it will be shown that a periodic signal can be represented by a Fourier series, as in Eq. (6.1), where the equality of the two sides of the equation is not in the ordinary sense, but in the mean-square sense (explained later in this discussion). But the astute reader should have been skeptical of the claims of the Fourier series to represent the discontinuous functions in Figs. 6.2a and 6.6a. If \(x(t)\) has a jump discontinuity, say, at \(t=0\), then \(x(0^{+})\), \(x(0)\), and \(x(0^{-})\) are generally different. How could a series consisting of a sum of continuous functions of the smoothest type (sinusoids) add up to one value at \(t=0^{-}\), a different value at \(t=0\), and yet another value at \(t=0^{+}\)? The demand is impossible to satisfy unless the mathematics involved executes some spectacular acrobatics. How does a Fourier series behave under such conditions? Precisely for this reason, the great mathematicians Lagrange and Laplace, two of the judges examining Fourier's paper, were skeptical of Fourier's claims and voted against publication of the paper that later became a classic.

There are also other issues. In any practical application, we can use only a finite number of terms in a series. If a fixed number of terms guarantees convergence within an arbitrarily small error at every value of \(t\), such a series is highly desirable and is called a uniformly convergent series. If a series converges at every value of \(t\), but guaranteeing convergence within a given error requires a different number of terms at different values of \(t\), then the series is still convergent, but less desirable. It goes under the name pointwise convergent series.
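This distinction is easy to observe numerically. The sketch below is a heuristic illustration using the square-wave series of Eq. (6.13), which appears later in this section: for each \(t\) it finds the last harmonic at which the partial sum still strays beyond a fixed error (checked only up to a finite `n_max`, so this is an estimate, not a proof). Far from the jump, a few dozen terms suffice; as \(t\) approaches the jump at \(t=\pi/2\), the required number grows without bound, so the convergence is pointwise rather than uniform.

```python
import numpy as np

def terms_needed(t, x_true, tol=0.01, n_max=4001):
    """Harmonic after which the partial sum of Eq. (6.13) stays within
    tol of x_true at this particular t (verified only up to n_max)."""
    s, last_bad = 0.5, 0                      # start from the dc term
    for n in range(1, n_max, 2):              # odd harmonics only
        s += (2 / np.pi) * ((-1) ** ((n - 1) // 2)) * np.cos(n * t) / n
        if abs(x_true - s) >= tol:
            last_bad = n
    return last_bad + 2

# x(t) = 1 for |t| < pi/2; the nearest jump is at t = pi/2 ~ 1.5708
for t in (0.0, 1.0, 1.4, 1.55):
    print(f"t = {t:.2f}: within 0.01 of x(t) after n = {terms_needed(t, 1.0)}")
```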

Finally, we have the case of a series that refuses to converge at some \(t\), no matter how many terms are added. But the series may converge in the mean; that is, the energy of the difference between \(x(t)\) and the corresponding finite term series approaches zero as the number of terms approaches infinity. \({ }^{\dagger}\) To explain this concept, let us consider representation of a function \(x(t)\) by an infinite series \[ x(t)=\sum_{n=1}^{\infty} z_n(t) \]

Let the partial sum of the first \(N\) terms of the series on the right-hand side be denoted by \(x_N(t)\); that is, \[ x_N(t)=\sum_{n=1}^N z_n(t) \]

\({ }^{\dagger}\) The behavior is called "convergence in the mean" because minimizing the error energy over a certain interval is equivalent to minimizing the mean-square value of the error over the same interval.

If we approximate \(x(t)\) by \(x_N(t)\) (the partial sum of the first \(N\) terms of the series), the error in the approximation is the difference \(x(t)-x_N(t)\). The series converges in the mean to \(x(t)\) in the interval \(\left(0, T_0\right)\) if \[ \int_0^{T_0}\left|x(t)-x_N(t)\right|^2 d t \rightarrow 0 \quad \text { as } \quad N \rightarrow \infty \]

Hence, the energy of the error \(x(t)-x_N(t)\) approaches zero as \(N \rightarrow \infty\). This form of convergence does not require the series to be equal to \(x(t)\) for all \(t\). It just requires the energy of the difference (area under \(\left|x(t)-x_N(t)\right|^2\) ) to vanish as \(N \rightarrow \infty\). Superficially it may appear that if the energy of a signal over an interval is zero, the signal (the error) must be zero everywhere. This is not true. The signal energy can be zero even if there are nonzero values at a finite number of isolated points. This is because although the signal is nonzero at a point (and zero everywhere else), the area under its square is still zero. Thus, a series that converges in the mean to \(x(t)\) need not converge to \(x(t)\) at a finite number of points. This is precisely what happens to the Fourier series when \(x(t)\) has jump discontinuities. This is also what makes Fourier series convergence compatible with the Gibbs phenomenon, to be discussed later in this section.

There is a simple criterion for ensuring that a periodic signal \(x(t)\) has a Fourier series that converges in the mean. The Fourier series for \(x(t)\) converges to \(x(t)\) in the mean if \(x(t)\) has a finite energy over one period, that is, \[ \int_{T_0}|x(t)|^2 d t<\infty \]

Thus, a periodic signal \(x(t)\) with finite energy over one period is guaranteed to have a Fourier series that converges in the mean. In all the examples discussed so far, Eq. (6.17) is satisfied; hence the corresponding Fourier series converges in the mean. Equation (6.17), like Eq. (6.16), also guarantees that the Fourier coefficients are finite.
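Both facts can be checked numerically. The following is a minimal sketch, again assuming the square wave of Fig. 6.6a and its series, Eq. (6.13): the energy over one period is finite (Eq. (6.17)), and the error energy shrinks toward zero as \(N\) grows, even though the pointwise error at the jumps never vanishes.

```python
import numpy as np

T0 = 2 * np.pi
t = np.linspace(0, T0, 200000, endpoint=False)
dt = t[1] - t[0]

# Assumed square wave of Fig. 6.6a: 1 for |t| < pi/2 (mod 2*pi), 0 otherwise
xt = np.where(np.abs(((t + np.pi) % T0) - np.pi) < np.pi / 2, 1.0, 0.0)

# Finite energy over one period, Eq. (6.17): equals pi for this wave
print("signal energy over one period:", np.sum(xt**2) * dt)

def x_N(t, N):
    """Partial sum of Eq. (6.13) through the Nth (odd) harmonic."""
    s = np.full_like(t, 0.5)
    for n in range(1, N + 1, 2):
        s += (2 / np.pi) * ((-1) ** ((n - 1) // 2)) * np.cos(n * t) / n
    return s

# The error energy over one period shrinks as N grows (convergence in the
# mean), even though the pointwise error at the jumps never goes away.
for N in (1, 5, 19, 99):
    err = xt - x_N(t, N)
    print(f"N = {N:3d}: error energy = {np.sum(err**2) * dt:.5f}")
```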

We shall now discuss an alternate set of criteria, due to Dirichlet, for convergence of the Fourier series.

DIRICHLET CONDITIONS

Dirichlet showed that if \(x(t)\) satisfies certain conditions (the Dirichlet conditions), its Fourier series is guaranteed to converge pointwise at all points where \(x(t)\) is continuous. Moreover, at each point of discontinuity, the series converges to the value midway between the two values of \(x(t)\) on either side of the discontinuity. These conditions are:

  1. The function \(x(t)\) must be absolutely integrable; that is, it must satisfy Eq. (6.16).
  2. The function \(x(t)\) must have only a finite number of finite discontinuities in one period.
  3. The function \(x(t)\) must contain only a finite number of maxima and minima in one period.

All practical signals, including those in Exs. 6.1, 6.2, 6.3, and 6.4, satisfy these conditions. Pathological functions such as \(\sin(2\pi/t)\) over \(0<t\leq 1\), which has an infinite number of maxima and minima in a finite interval, violate condition 3 but do not arise in practice.
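The midpoint behavior at a jump is easy to observe with a short sketch, again assuming the square-wave series of Eq. (6.13). At the jump \(t=\pi/2\), where \(x(\pi/2^-)=1\) and \(x(\pi/2^+)=0\), every partial sum already equals the midpoint value 1/2 exactly (for this even wave, \(\cos(n\pi/2)=0\) for all odd \(n\)), while the values a small offset to either side converge toward 1 and 0 as \(N\) grows.

```python
import numpy as np

def x_N(t, N):
    """Partial sum of Eq. (6.13) through the Nth (odd) harmonic at a point t."""
    s = 0.5
    for n in range(1, N + 1, 2):
        s += (2 / np.pi) * ((-1) ** ((n - 1) // 2)) * np.cos(n * t) / n
    return s

tj, eps = np.pi / 2, 0.05          # jump point and a small offset
for N in (5, 19, 99, 499):
    print(f"N = {N:3d}: x_N(tj - eps) = {x_N(tj - eps, N):.4f}, "
          f"x_N(tj) = {x_N(tj, N):.4f}, x_N(tj + eps) = {x_N(tj + eps, N):.4f}")
```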

The Role of Amplitude and Phase Spectra in Waveshaping

The trigonometric Fourier series of a signal \(x(t)\) shows explicitly the sinusoidal components of \(x(t)\). We can synthesize \(x(t)\) by adding the sinusoids in the spectrum of \(x(t)\). Let us synthesize the square-pulse periodic signal \(x(t)\) of Fig. 6.6a by adding successive harmonics in its spectrum step by step and observing the similarity of the resulting signal to \(x(t)\). The Fourier series for this function, as found in Eq. (6.13), is \[ x(t)=\frac{1}{2}+\frac{2}{\pi}\left(\cos t-\frac{1}{3} \cos 3 t+\frac{1}{5} \cos 5 t-\frac{1}{7} \cos 7 t+\cdots\right) \]

We start the synthesis with only the first term in the series \((n=0)\), a constant 1/2 (dc); this is a gross approximation of the square wave, as shown in Fig. 6.8a. In the next step we add the dc \((n=0)\) and the first harmonic (fundamental), which results in the signal shown in Fig. 6.8b. Observe that the synthesized signal somewhat resembles \(x(t)\). It is a smoothed-out version of \(x(t)\). The sharp corners in \(x(t)\) are not reproduced in this signal because sharp corners mean rapid changes, and their reproduction requires rapidly varying (i.e., higher-frequency) components, which are excluded. Figure 6.8c shows the sum of the dc, first, and third harmonics (even harmonics are absent). As we increase the number of harmonics progressively, as shown in Figs. 6.8d (sum up to the fifth harmonic) and 6.8e (sum up to the nineteenth harmonic), the edges of the pulses become sharper and the signal resembles \(x(t)\) more closely.
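This step-by-step synthesis is straightforward to reproduce. The sketch below is a minimal matplotlib illustration of the same progression; the 2-by-2 layout and choice of harmonics mirror Fig. 6.8 only loosely and are assumptions, not a reproduction of the book's figure.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-np.pi, np.pi, 4001)

def x_N(t, N):
    """Sum of dc plus odd harmonics through the Nth, per Eq. (6.13)."""
    s = np.full_like(t, 0.5)
    for n in range(1, N + 1, 2):
        s += (2 / np.pi) * ((-1) ** ((n - 1) // 2)) * np.cos(n * t) / n
    return s

# Target square wave for reference: 1 for |t| < pi/2, 0 elsewhere in the period
square = np.where(np.abs(t) < np.pi / 2, 1.0, 0.0)

fig, axes = plt.subplots(2, 2, figsize=(8, 5), sharex=True, sharey=True)
for ax, N in zip(axes.flat, (1, 3, 5, 19)):
    ax.plot(t, square, "k--", linewidth=0.8)   # ideal square wave
    ax.plot(t, x_N(t, N))                      # partial-sum synthesis
    ax.set_title(f"through harmonic N = {N}")
plt.tight_layout()
plt.show()
```

Running this also previews the Gibbs phenomenon noted earlier: as harmonics are added, the ripples near the edges narrow, but the overshoot next to each jump does not shrink in height.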