© Presses polytechniques et universitaires romandes, 2008
All rights reserved


Signal Processing for Communications

by Paolo Prandoni and Martin Vetterli

Contents

Preface
1 What Is Digital Signal Processing?
 1.1 Some History and Philosophy
  1.1.1 Digital Signal Processing under the Pyramids
  1.1.2 The Hellenic Shift to Analog Processing
  1.1.3 “Gentlemen: calculemus!”
 1.2 Discrete Time
 1.3 Discrete Amplitude
 1.4 Communication Systems
 1.5 How to Read this Book
 1.6 Further Reading
2 Discrete-Time Signals
 2.1 Basic Definitions
  2.1.1 The Discrete-Time Abstraction
  2.1.2 Basic Signals
  2.1.3 Digital Frequency
  2.1.4 Elementary Operators
  2.1.5 The Reproducing Formula
  2.1.6 Energy and Power
 2.2 Classes of Discrete-Time Signals
  2.2.1 Finite-Length Signals
  2.2.2 Infinite-Length Signals
 2.3 Examples
 2.4 Further Reading
 2.5 Exercises
3 Signals and Hilbert Spaces
 3.1 Euclidean Geometry: a Review
 3.2 From Vector Spaces to Hilbert Spaces
  3.2.1 The Recipe for Hilbert Space
  3.2.2 Examples of Hilbert Spaces
  3.2.3 Inner Products and Distances
 3.3 Subspaces, Bases, Projections
  3.3.1 Definitions
  3.3.2 Properties of Orthonormal Bases
  3.3.3 Examples of Bases
 3.4 Signal Spaces Revisited
  3.4.1 Finite-Length Signals
  3.4.2 Periodic Signals
  3.4.3 Infinite Sequences
 3.5 Further Reading
 3.6 Exercises
4 Fourier Analysis
 4.1 Preliminaries
  4.1.1 Complex Exponentials
  4.1.2 Complex Oscillations? Negative Frequencies?
 4.2 The DFT (Discrete Fourier Transform)
  4.2.1 Matrix Form
  4.2.2 Explicit Form
  4.2.3 Physical Interpretation
 4.3 The DFS (Discrete Fourier Series)
 4.4 The DTFT (Discrete-Time Fourier Transform)
  4.4.1 The DTFT as the Limit of a DFS
  4.4.2 The DTFT as a Formal Change of Basis
 4.5 Relationships between Transforms
 4.6 Fourier Transform Properties
  4.6.1 DTFT Properties
  4.6.2 DFS Properties
  4.6.3 DFT Properties
 4.7 Fourier Analysis in Practice
  4.7.1 Plotting Spectral Data
  4.7.2 Computing the Transform: the FFT
  4.7.3 Cosmetics: Zero-Padding
  4.7.4 Spectral Analysis
 4.8 Time-Frequency Analysis
  4.8.1 The Spectrogram
  4.8.2 The Uncertainty Principle
 4.9 Digital Frequency vs. Real Frequency
 4.10 Examples
 4.11 Further Reading
 4.12 Exercises
5 Discrete-Time Filters
 5.1 Linear Time-Invariant Systems
 5.2 Filtering in the Time Domain
  5.2.1 The Convolution Operator
  5.2.2 Properties of the Impulse Response
 5.3 Filtering by Example – Time Domain
  5.3.1 FIR Filtering
  5.3.2 IIR Filtering
 5.4 Filtering in the Frequency Domain
  5.4.1 LTI “Eigenfunctions”
  5.4.2 The Convolution and Modulation Theorems
  5.4.3 Properties of the Frequency Response
 5.5 Filtering by Example – Frequency Domain
 5.6 Ideal Filters
 5.7 Realizable Filters
  5.7.1 Constant-Coefficient Difference Equations
  5.7.2 The Algorithmic Nature of CCDEs
  5.7.3 Filter Analysis and Design
 5.8 Examples
 5.9 Further Reading
 5.10 Exercises
6 The Z-Transform
 6.1 Filter Analysis
  6.1.1 Solving CCDEs
  6.1.2 Causality
  6.1.3 Region of Convergence
  6.1.4 ROC and System Stability
  6.1.5 ROC of Rational Transfer Functions and Filter Stability
 6.2 The Pole-Zero Plot
  6.2.1 Pole-Zero Patterns
  6.2.2 Pole-Zero Cancellation
  6.2.3 Sketching the Transfer Function from the Pole-Zero Plot
 6.3 Filtering by Example – Z-Transform
 6.4 Examples
 6.5 Further Reading
 6.6 Exercises
7 Filter Design
 7.1 Design Fundamentals
  7.1.1 FIR versus IIR
  7.1.2 Filter Specifications and Tradeoffs
 7.2 FIR Filter Design
  7.2.1 FIR Filter Design by Windowing
  7.2.2 Minimax FIR Filter Design
 7.3 IIR Filter Design
  7.3.1 All-Time Classics
 7.4 Filter Structures
  7.4.1 FIR Filter Structures
  7.4.2 IIR Filter Structures
  7.4.3 Some Remarks on Numerical Stability
 7.5 Filtering and Signal Classes
  7.5.1 Filtering of Finite-Length Signals
  7.5.2 Filtering of Periodic Sequences
 7.6 Examples
 7.7 Further Reading
 7.8 Exercises
8 Stochastic Signal Processing
 8.1 Random Variables
 8.2 Random Vectors
 8.3 Random Processes
 8.4 Spectral Representation of Stationary Random Processes
  8.4.1 Power Spectral Density
  8.4.2 PSD of a Stationary Process
  8.4.3 White Noise
 8.5 Stochastic Signal Processing
 8.6 Examples
 8.7 Further Reading
 8.8 Exercises
9 Interpolation and Sampling
 9.1 Preliminaries and Notation
 9.2 Continuous-Time Signals
 9.3 Bandlimited Signals
 9.4 Interpolation
  9.4.1 Local Interpolation
  9.4.2 Polynomial Interpolation
  9.4.3 Sinc Interpolation
 9.5 The Sampling Theorem
 9.6 Aliasing
  9.6.1 Non-Bandlimited Signals
  9.6.2 Aliasing: Intuition
  9.6.3 Aliasing: Proof
  9.6.4 Aliasing: Examples
 9.7 Examples
 9.8 Appendix
 9.9 Further Reading
 9.10 Exercises
10 A/D and D/A Conversions
 10.1 Quantization
  10.1.1 Uniform Scalar Quantization
  10.1.2 Advanced Quantizers
 10.2 A/D Conversion
 10.3 D/A Conversion
 10.4 Examples
 10.5 Further Reading
 10.6 Exercises
11 Multirate Signal Processing
 11.1 Downsampling
  11.1.1 Properties of the Downsampling Operator
  11.1.2 Frequency-Domain Representation
  11.1.3 Examples
  11.1.4 Downsampling and Filtering
 11.2 Upsampling
  11.2.1 Upsampling and Interpolation
 11.3 Rational Sampling Rate Changes
 11.4 Oversampling
  11.4.1 Oversampled A/D Conversion
  11.4.2 Oversampled D/A Conversion
 11.5 Examples
 11.6 Further Reading
 11.7 Exercises
12 Design of a Digital Communication System
 12.1 The Communication Channel
  12.1.1 The AM Radio Channel
  12.1.2 The Telephone Channel
 12.2 Modem Design: The Transmitter
  12.2.1 Digital Modulation and the Bandwidth Constraint
  12.2.2 Signaling Alphabets and the Power Constraint
 12.3 Modem Design: the Receiver
  12.3.1 Hilbert Demodulation
  12.3.2 The Effects of the Channel
 12.4 Adaptive Synchronization
  12.4.1 Carrier Recovery
  12.4.2 Timing Recovery
 12.5 Further Reading
 12.6 Exercises


Preface

The present text evolved from course notes developed over a period of a dozen years teaching undergraduates the basics of signal processing for communications. The students had mostly a background in electrical engineering, computer science or mathematics, and were typically in their third year of studies at Ecole Polytechnique Fédérale de Lausanne (EPFL), with an interest in communication systems. Thus, they had been exposed to signals and systems, linear algebra, elements of analysis (e.g. Fourier series) and some complex analysis, all of this being fairly standard in an undergraduate program in engineering sciences.

The notes having reached a certain maturity, including examples, solved problems and exercises, we decided to turn them into an easy-to-use text on signal processing, with a look at communications as an application. But rather than writing one more book on signal processing, of which many good ones already exist, we deployed the following variations, which we think will make the book appealing as an undergraduate text.

  1. Less formal: Both authors came to signal processing by way of an interest in music and think that signal processing is fun, and should be taught to be fun! Thus, choosing between the intricacies of z-transform inversion through contour integration (how many of us have ever done this after having taken a class in signal processing?) or showing the Karplus-Strong algorithm for synthesizing guitar sounds (which also intuitively illustrates issues of stability along the way), you can guess where our choice fell.

    While mathematical rigor is not the emphasis, we made sure to be precise, and thus the text is not approximate in its use of mathematics. Remember, we think signal processing to be mathematics applied to a fun topic, and not mathematics for its own sake, nor a set of applications without foundations.

  2. More conceptual: We could have said “more abstract”, but this sounds scary (and may seem in contradiction with point 1 above, which of course it is not). Thus, the level of mathematical abstraction is probably higher than in several other texts on signal processing, but it allows us to think at a higher conceptual level, and also to build foundations for more advanced topics. Therefore we introduce vector spaces, Hilbert spaces, signals as vectors, orthonormal bases, the projection theorem, to name a few, which are powerful concepts not usually emphasized in standard texts. Because these are geometrical concepts, they foster understanding without making the text any more complex. Further, this constitutes the foundation of modern signal processing techniques such as time-frequency analysis, filter banks and wavelets, which makes the present text an easy primer for more advanced signal processing books. Of course, we must admit, for the sake of full transparency, that we have been influenced by our research work, but again, this has been fun too!
  3. More application driven: This is an engineering text, which should help the student solve real problems. Both authors are engineers by training and by trade, and while we love mathematics, we like to see their “operational value”. That is, does the result make a difference in an engineering application?

    Certainly, the masterpiece in this regard is C. Shannon’s 1948 foundational paper on “The Mathematical Theory of Communication”. It completely revolutionized the way communication systems are designed and built, and, still today, we mostly live in its legacy. Not surprisingly, one of the key results of signal processing is the sampling theorem for bandlimited functions (often attributed to Shannon, since it appears in the above-mentioned paper), the theorem which single-handedly enabled the digital revolution. To a mathematician, this is a simple corollary to Fourier series, and he/she might suggest many other ways to represent such particular functions. However, the strength of the sampling theorem and its variations (e.g. oversampling or quantization) is that it is an operational theorem, robust, and applicable to actual signal acquisition and reconstruction problems.

    In order to showcase such powerful applications, the last chapter is entirely devoted to developing an end-to-end communication system, namely a modem for communicating digital information (or bits) over an analog channel. This real-world application (which is present in all modern communication devices, from mobile phones to ADSL boxes) nicely brings together many of the concepts and designs studied in the previous chapters.

Being less formal, more abstract and application-driven seems almost like moving simultaneously in several and possibly opposite directions, but we believe we came up with the right balancing act. Ultimately, of course, the readers and students are the judges!

A last and very important issue is the online access to the text and supplementary material. A full HTML version together with the unavoidable errata and other complementary material is available at www.sp4comm.org. A solution manual is available to teachers upon request.

As a closing word, we hope you will enjoy the text, and we welcome your feedback. Let signal processing begin, and be fun!

Martin Vetterli and Paolo Prandoni
Spring 2008, Paris and Grandvaux

Acknowledgements

The current book is the result of several iterations of a yearly signal processing undergraduate class and the authors would like to thank the students in Communication Systems at EPFL who survived the early versions of the manuscript and who greatly contributed with their feedback to improve and refine the text over the years. Invaluable help was also provided by the numerous teaching assistants who not only volunteered constructive criticism but came up with a lot of the exercises which appear at the end of each chapter (and the corresponding solutions). In no particular order: Andrea Ridolfi provided insightful mathematical remarks and also introduced us to the wonders of PsTricks while designing figures. Olivier Roy and Guillermo Barrenetxea have been indefatigable ambassadors between teaching and student bodies, helping shape exercises in a (hopefully) more user-friendly form. Ivana Jovanovic, Florence Bénézit and Patrick Vandewalle gave us a set of beautiful ideas and pointers thanks to their recitations on choice signal processing topics. Luciano Sbaiz always lent an indulgent ear and an insightful answer to all the doubts and worries which plague scientific writers. We would also like to express our personal gratitude to our families and friends for their patience and their constant support; unfortunately, to do so in a proper manner, we should resort to a lyricism which is sternly frowned upon in technical textbooks and therefore we must confine ourselves to a simple “thank you”.


Chapter 1
What Is Digital Signal Processing?

A signal, technically yet generally speaking, is a formal description of a phenomenon evolving over time or space; by signal processing we denote any manual or “mechanical” operation which modifies, analyzes or otherwise manipulates the information contained in a signal. Consider the simple example of ambient temperature: once we have agreed upon a formal model for this physical variable – Celsius degrees, for instance – we can record the evolution of temperature over time in a variety of ways and the resulting data set represents a temperature “signal”. Simple processing operations can then be carried out even just by hand: for example, we can plot the signal on graph paper as in Figure 1.1, or we can compute derived parameters such as the average temperature in a month.



Figure 1.1: Temperature measurements over a month.

Conceptually, it is important to note that signal processing operates on an abstract representation of a physical quantity and not on the quantity itself. At the same time, the type of abstract representation we choose for the physical phenomenon of interest determines the nature of a signal processing unit. A temperature regulation device, for instance, is not a signal processing system as a whole. The device does however contain a signal processing core in the feedback control unit which converts the instantaneous measure of the temperature into an ON/OFF trigger for the heating element. The physical nature of this unit depends on the temperature model: a simple design is that of a mechanical device based on the dilation of a metal sensor; more likely, the temperature signal is a voltage generated by a thermocouple and in this case the matched signal processing unit is an operational amplifier.

Finally, the adjective “digital” derives from digitus, the Latin word for finger: it concisely describes a world view where everything can be ultimately represented as an integer number. Counting, first on one’s fingers and then in one’s head, is the earliest and most fundamental form of abstraction; as children we quickly learn that counting does indeed bring disparate objects (the proverbial “apples and oranges”) into a common modeling paradigm, i.e. their cardinality. Digital signal processing is a flavor of signal processing in which everything including time is described in terms of integer numbers; in other words, the abstract representation of choice is a one-size-fit-all countability. Note that our earlier “thought experiment” about ambient temperature fits this paradigm very naturally: the measuring instants form a countable set (the days in a month) and so do the measures themselves (imagine a finite number of ticks on the thermometer’s scale). In digital signal processing the underlying abstract representation is always the set of natural numbers regardless of the signal’s origins; as a consequence, the physical nature of the processing device will also always remain the same, that is, a general digital (micro)processor. The extraordinary power and success of digital signal processing derives from the inherent universality of its associated “world view”.

1.1 Some History and Philosophy

1.1.1 Digital Signal Processing under the Pyramids

Probably the earliest recorded example of digital signal processing dates back to the 25th century BC. At the time, Egypt was a powerful kingdom reaching over a thousand kilometers south of the Nile’s delta. For all its latitude, the kingdom’s populated area did not extend for more than a few kilometers on either side of the Nile; indeed, the only inhabitable areas in an otherwise desert expanse were the river banks, which were made fertile by the yearly flood of the river. After a flood, the banks would be left covered with a thin layer of nutrient-rich silt capable of supporting a full agricultural cycle. The floods of the Nile, however, were(1) a rather capricious meteorological phenomenon, with scant or absent floods resulting in little or no yield from the land. The pharaohs quickly understood that, in order to preserve stability, they would have to set up a grain buffer with which to compensate for the unreliability of the Nile’s floods and prevent potential unrest in a famished population during “dry” years. As a consequence, studying and predicting the trend of the floods (and therefore the expected agricultural yield) was of paramount importance in order to determine the operating point of a very dynamic taxation and redistribution mechanism. The floods of the Nile were meticulously recorded by an array of measuring stations called “nilometers” and the resulting data set can indeed be considered a full-fledged digital signal defined on a time base of twelve months. The Palermo Stone, shown in the left panel of Figure 1.2, is a faithful record of the data in the form of a table listing the name of the current pharaoh alongside the yearly flood level; a more modern representation of a flood data set is shown on the right of the figure: bar the references to the pharaohs, the two representations are perfectly equivalent. The Nile’s behavior is still an active area of hydrological research today and it would be surprising if the signal processing performed by the ancient Egyptians on their data had been of much help in anticipating droughts. Yet, the Palermo Stone is arguably the first recorded digital signal which is still of relevance today.



Figure 1.2: Representations of flood data for the river Nile: circa 2500 BC (left) and 2000 AD (right).

1.1.2 The Hellenic Shift to Analog Processing

“Digital” representations of the world such as those depicted by the Palermo Stone are adequate for an environment in which quantitative problems are simple: counting cattle, counting bushels of wheat, counting days and so on. As soon as the interaction with the world becomes more complex, so necessarily do the models used to interpret the world itself. Geometry, for instance, is born of the necessity of measuring and subdividing land property. In the act of splitting a certain quantity into parts we can already see the initial difficulties with an integer-based world view;(2) yet, until the Hellenic period, western civilization considered natural numbers and their ratios all that was needed to describe nature in an operational fashion. In the 6th century BC, however, a devastated Pythagoras realized that the side and the diagonal of a square are incommensurable, i.e. that √2 is not a simple fraction. The discovery of what we now call irrational numbers “sealed the deal” on an abstract model of the world that had already appeared in early geometric treatises and which today is called the continuum. Heavily steeped in its geometric roots (i.e. in the infinity of points in a segment), the continuum model postulates that time and space are an uninterrupted flow which can be divided arbitrarily many times into arbitrarily (and infinitely) small pieces. In signal processing parlance, this is known as the “analog” world model and, in this model, integer numbers are considered primitive entities, as rough and awkward as a set of sledgehammers in a watchmaker’s shop.

In the continuum, the infinitely big and the infinitely small dance together in complex patterns which often defy our intuition and which required almost two thousand years to be properly mastered. This is of course not the place to delve deeper into this extremely fascinating epistemological domain; suffice it to say that the apparent incompatibility between the digital and the analog world views appeared right from the start (i.e. from the 5th century BC) in Zeno’s works; we will appreciate later the immense import that this has on signal processing in the context of the sampling theorem.

Zeno’s paradoxes are well known and they underscore this unbridgeable gap between our intuitive, integer-based grasp of the world and a model of the world based on the continuum. Consider for instance the dichotomy paradox; Zeno states that if you try to move along a line from point A to point B you will never in fact be able to reach your destination. The reasoning goes as follows: in order to reach B, you will have to first go through point C, which is located mid-way between A and B; but, even before you reach C, you will have to reach D, which is the midpoint between A and C; and so on ad infinitum. Since there is an infinity of such intermediate points, Zeno argues, moving from A to B requires you to complete an infinite number of tasks, which is humanly impossible. Zeno of course was well aware of the empirical evidence to the contrary but he was brilliantly pointing out the extreme trickery of a model of the world which had not yet formally defined the concept of infinity. The complexity of the intellectual machinery needed to solidly counter Zeno’s argument is such that even today the paradox is food for thought. A first-year calculus student may be tempted to offhandedly dismiss the problem by stating

\sum_{n=1}^{\infty} \frac{1}{2^{n}} = 1
(1.1)

but this is just a void formalism begging the initial question if the underlying notion of the continuum is not explicitly worked out.(3)  In reality Zeno’s paradoxes cannot be “solved”, since they cease to be paradoxes once the continuum model is fully understood.
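
To spell out the limiting step that the formalism hides, note that the partial sums are

\sum_{n=1}^{N} \frac{1}{2^{n}} = 1 - 2^{-N}

so that the equality in (1.1) holds only in the limit for N going to infinity; defining what such a limit means is precisely where the notion of the continuum comes into play.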

1.1.3 “Gentlemen: calculemus!”

The two competing models for the world, digital and analog, coexisted quite peacefully for quite a few centuries, one as the tool of the trade for farmers, merchants, bankers, the other as an intellectual pursuit for mathematicians and astronomers. Slowly but surely, however, the increasing complexity of an expanding world spurred the more practically-oriented minds to pursue science as a means to solve very tangible problems besides describing the motion of the planets. Calculus, brought to its full glory by Newton and Leibnitz in the 17th century, proved to be an incredibly powerful tool when applied to eminently practical concerns such as ballistics, ship routing, mechanical design and so on; such was the faith in the power of the new science that Leibnitz envisioned a not-too-distant future in which all human disputes, including problems of morals and politics, could be worked out with pen and paper: “gentlemen, calculemus”. If only.

As Cauchy unsurpassably explained later, everything in calculus is a limit and therefore everything in calculus is a celebration of the power of the continuum. Still, in order to apply the calculus machinery to the real world, the real world has to be modeled as something calculus understands, namely a function of a real (i.e. continuous) variable. As mentioned before, there are vast domains of research well behaved enough to admit such an analytical representation; astronomy is the first one to come to mind, but so is ballistics, for instance. If we go back to our temperature measurement example, however, we run into the first difficulty of the analytical paradigm: we now need to model our measured temperature as a function of continuous time, which means that the value of the temperature should be available at any given instant and not just once per day. A “temperature function” as in Figure 1.3 is quite puzzling to define if all we have (and if all we can have, in fact) is just a set of empirical measurements reasonably spaced in time. Even in the rare cases in which an analytical model of the phenomenon is available, a second difficulty arises when the practical application of calculus involves the use of functions which are only available in tabulated form. The trigonometric and logarithmic tables are a typical example of how a continuous model needs to be made countable again in order to be put to real use. Algorithmic procedures such as series expansions and numerical integration methods are other ways to bring the analytic results within the realm of the practically computable. These parallel tracks of scientific development, the “Platonic” ideal of analytical results and the slide rule reality of practitioners, have coexisted for centuries and they have found their most durable mutual peace in digital signal processing, as will appear shortly.



Figure 1.3: Temperature “function” in a continuous-time world model.

1.2 Discrete Time

One of the fundamental problems in signal processing is to obtain a permanent record of the signal itself. Think back to the ambient temperature example, or to the floods of the Nile: in both cases a description of the phenomenon was gathered by a naive sampling operation, i.e. by measuring the quantity of interest at regular intervals. This is a very intuitive process and it reflects the very natural act of “looking up the current value and writing it down”. Manually this operation is clearly quite slow but it is conceivable to speed it up mechanically so as to obtain a much larger number of measurements per unit of time. Our measuring machine, however fast, will still never be able to take an infinite number of samples in a finite time interval: we are back in the clutches of Zeno’s paradoxes and one would be tempted to conclude that a true analytical representation of the signal is impossible to obtain.



Figure 1.4: A thermograph.

At the same time, the history of applied science provides us with many examples of recording machines capable of providing an “analog” image of a physical phenomenon. Consider for instance a thermograph: this is a mechanical device in which temperature deflects an ink-tipped metal stylus in contact with a slowly rolling paper-covered cylinder. Thermographs like the one sketched in Figure 1.4 are still currently in use in some simple weather stations and they provide a chart in which a temperature function as in Figure 1.3 is duly plotted. Incidentally, the principle is the same in early sound recording devices: Edison’s phonograph used the deflection of a steel pin connected to a membrane to impress a “continuous-time” sound wave as a groove on a wax cylinder. The problem with these analog recordings is that they are not abstract signals but a conversion of a physical phenomenon into another physical phenomenon: the temperature, for instance, is converted into the amount of ink on paper while the sound pressure wave is converted into the physical depth of the groove. The advent of electronics did not change the concept: an audio tape, for instance, is obtained by converting a pressure wave into an electrical current and then into a magnetic deflection. The fundamental consequence is that, for analog signals, a different signal processing system needs to be designed explicitly for each specific form of recording.



Figure 1.5: Analytical and empirical averages.

Consider for instance the problem of computing the average temperature over a certain time interval. Calculus provides us with the exact answer C̄ if we know the elusive “temperature function” f(t) over an interval [T0, T1] (see Figure 1.5, top panel):

\bar{C} = \frac{1}{T_1 - T_0} \int_{T_0}^{T_1} f(t)\,dt
(1.2)

We can try to reproduce the integration with a “machine” adapted to the particular representation of temperature we have at hand: in the case of the thermograph, for instance, we can use a planimeter as in Figure 1.6, a manual device which computes the area of a drawn surface; in a more modern incarnation in which the temperature signal is given by a thermocouple, we can integrate the voltage with the RC network in Figure 1.7. In both cases, in spite of the simplicity of the problem, we can instantly see the practical complications and the degree of specialization needed to achieve something as simple as an average for an analog signal.



Figure 1.6: The planimeter: a mechanical integrator.

Now consider the case in which all we have is a set of daily measurements c1, c2, …, cD as in Figure 1.1; the “average” temperature of our measurements over D days is simply:

\hat{C} = \frac{1}{D} \sum_{n=1}^{D} c_n
(1.3)

(as shown in the bottom panel of Figure 1.5) and this is an elementary sum of D terms which anyone can carry out by hand and which does not depend on how the measurements have been obtained: wickedly simple! So, obviously, the question is: “How different (if at all) is Ĉ from C̄?” In order to find out we can remark that if we accept the existence of a temperature function f(t) then the measured values cn are samples of the function taken one day apart:

c_n = f(nT_s)

(where Ts is the duration of a day). In this light, the sum (1.3) is just the Riemann approximation to the integral in (1.2) and the question becomes an assessment of how good an approximation that is. Another way to look at the problem is to ask ourselves how much information we are discarding by only keeping samples of a continuous-time function.



Figure 1.7: The RC network: an electrical integrator.

The answer, which we will study in detail in Chapter 9, is that in fact the continuous-time function and the set of samples are perfectly equivalent representations – provided that the underlying physical phenomenon “doesn’t change too fast”. Let us put the proviso aside for the time being and concentrate instead on the good news: first, the analog and the digital world can perfectly coexist; second, we actually possess a constructive way to move between worlds: the sampling theorem, discovered and rediscovered by many at the beginning of the 20th century(4) , tells us that the continuous-time function can be obtained from the samples as

f(t) = \sum_{n=-\infty}^{\infty} c_n \, \frac{\sin\big(\pi(t - nT_s)/T_s\big)}{\pi(t - nT_s)/T_s}
(1.4)

So, in theory, once we have a set of measured values, we can build the continuous-time representation and use the tools of calculus. In reality things are even simpler: if we plug (1.4) into our analytic formula for the average (1.2) we can show that the result is a simple sum like (1.3). So we don’t need to explicitly go “through the looking glass” back to continuous-time: the tools of calculus have a discrete-time equivalent which we can use directly.
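
As a quick numerical illustration of this equivalence, the following minimal sketch (in Python, with an arbitrary smooth “temperature” model chosen purely for the example) compares the analytic average of a slowly varying function with the plain average of one sample per day:

    # Minimal sketch: for a smooth, slowly varying temperature model, the plain
    # average of daily samples is essentially the same as the analytic average.
    # The test function and all numerical values are arbitrary assumptions.
    import math

    T0, T1 = 0.0, 30.0                                      # one month, in days
    f = lambda t: 20 + 5 * math.sin(2 * math.pi * t / 30)   # hypothetical temperature model

    # "Analytic" average (1.2), approximated here by a very fine Riemann sum
    fine = 100000
    C_bar = sum(f(T0 + (k + 0.5) * (T1 - T0) / fine) for k in range(fine)) / fine

    # Average of one sample per day, as in (1.3)
    days = 30
    C_hat = sum(f(n) for n in range(days)) / days

    print(C_bar, C_hat)    # the two averages agree closely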

The equivalence between the discrete and continuous representations only holds for signals which are sufficiently “slow” with respect to how fast we sample them. This makes a lot of sense: we need to make sure that the signal does not do “crazy” things between successive samples; only if it is smooth and well behaved can we afford to have such sampling gaps. Quantitatively, the sampling theorem links the speed at which we need to repeatedly measure the signal to the maximum frequency contained in its spectrum. Spectra are calculated using the Fourier transform which, interestingly enough, was originally devised as a tool to break periodic functions into a countable set of building blocks. Everything comes together.

1.3 Discrete Amplitude

While it appears that the time continuum has been tamed by the sampling theorem, we are nevertheless left with another pesky problem: the precision of our measurements. If we model a phenomenon as an analytical function, not only is the argument (the time domain) a continuous variable but so is the function’s value (the codomain); a practical measurement, however, will never achieve an infinite precision and we have another paradox on our hands. Consider our temperature example once more: we can use a mercury thermometer and decide to write down just the number of degrees; maybe we can be more precise and note the half-degrees as well; with a magnifying glass we could try to record the tenths of a degree – but we would most likely have to stop there. With a more sophisticated thermocouple we could reach a precision of one hundredth of a degree and possibly more but, still, we would have to settle on a maximum number of decimal places. Now, if we know that our measures have a fixed number of digits, the set of all possible measures is actually countable and we have effectively mapped the codomain of our temperature function onto the set of integer numbers. This process is called quantization and it is the method, together with sampling, to obtain a fully digital signal.
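
A minimal sketch of the quantization idea (rounding each measurement to the nearest multiple of a fixed step; the readings below are invented for the example):

    # Minimal sketch of uniform quantization: map real-valued measurements onto a
    # countable set by rounding to the nearest multiple of a fixed step.
    readings = [21.4172, 19.9834, 23.0517, 18.7349]   # hypothetical temperatures, in Celsius

    def quantize(x, step=0.1):
        """Round x to the nearest multiple of the quantization step."""
        return round(x / step) * step

    print([f"{quantize(r):.1f}" for r in readings])   # ['21.4', '20.0', '23.1', '18.7']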

In a way, quantization deals with the problem of the continuum in a much “rougher” way than in the case of time: we simply accept a loss of precision with respect to the ideal model. There is a very good reason for that and it goes under the name of noise. The mechanical recording devices we just saw, such as the thermograph or the phonograph, give the illusion of analytical precision but are in practice subject to severe mechanical limitations. Any analog recording device suffers from the same fate and even if electronic circuits can achieve an excellent performance, in the limit the unavoidable thermal agitation in the components constitutes a noise floor which limits the “equivalent number of digits”. Noise is a fact of nature that cannot be eliminated, hence our acceptance of a finite (i.e. countable) precision.



Figure 1.8: Analog and digital computers.

Noise is not just a problem in measurement but also in processing. Figure 1.8 shows the two archetypal types of analog and digital computing devices; while technological progress may have significantly improved the speed of each, the underlying principles remain the same for both. An analog signal processing system, much like the slide rule, uses the displacement of physical quantities (gears or electric charge) to perform its task; each element in the system, however, acts as a source of noise so that complex or, more importantly, cheap designs introduce imprecisions in the final result (good slide rules used to be very expensive). On the other hand the abacus, working only with integer arithmetic, is a perfectly precise machine(5) even if it’s made with rocks and sticks. Digital signal processing works with countable sequences of integers so that in a digital architecture no processing noise is introduced. A classic example is the problem of reproducing a signal. Before mp3 existed and file sharing became the bootlegging method of choice, people would “make tapes”. When someone bought a vinyl record he would allow his friends to record it on a cassette; however, a “peer-to-peer” dissemination of illegally taped music never really took off because of the “second generation noise”, i.e. because of the ever increasing hiss that would appear in a tape made from another tape. Basically only first generation copies of the purchased vinyl were acceptable quality on home equipment. With digital formats, on the other hand, duplication is really equivalent to copying down a (very long) list of integers and even very cheap equipment can do that without error.

Finally, a short remark on terminology. The amplitude accuracy of a set of samples is entirely dependent on the processing hardware; in current parlance this is indicated by the number of bits per sample of a given representation: compact disks, for instance, use 16 bits per sample while DVDs use 24. Because of its “contingent” nature, quantization is almost always ignored in the core theory of signal processing and all derivations are carried out as if the samples were real numbers; therefore, in order to be precise, we will almost always use the term discrete-time signal processing and leave the label “digital signal processing” (DSP) to the world of actual devices. Neglecting quantization will allow us to obtain very general results but care must be exercised: in practice, actual implementations will have to deal with the effects of finite precision, sometimes with very disruptive consequences.

1.4 Communication Systems

Signals in digital form provide us with a very convenient abstract representation which is both simple and powerful; yet this does not shield us from the need to deal with an “outside” world which is probably best modeled by the analog paradigm. Consider a mundane act such as placing a call on a cell phone, as in Figure 1.9: humans are analog devices after all and they produce analog sound waves; the phone converts these into digital format, does some digital processing and then outputs an analog electromagnetic wave on its antenna. The radio wave travels to the base station, where it is demodulated and converted to digital format to recover the voice signal. The call, as a digital signal, continues through a switch and then is injected into an optical fiber as an analog light wave. The wave travels along the network and then the process is inverted until an analog sound wave is generated by the loudspeaker at the receiver’s end.



Figure 1.9: A prototypical telephone call and the associated transitions from the digital to the analog domain and back; processing in the blocks is done digitally while the links between blocks are over an analog medium.

Communication systems are in general a prime example of sophisticated interplay between the digital and the analog world: while all the processing is undoubtedly best done digitally, signal propagation in a medium (be it the air, the electromagnetic spectrum or an optical fiber) is the domain of differential (rather than difference) equations. And yet, even when digital processing must necessarily hand over control to an analog interface, it does so in a way that leaves no doubt as to who’s boss, so to speak: for, instead of transmitting an analog signal which is the reconstructed “real” function as per (1.4), we always transmit an analog signal which encodes the digital representation of the data. This concept is really at the heart of the “digital revolution” and, just like in the cassette tape example, it has to do with noise.

Imagine an analog voice signal s(t) which is transmitted over a (long) telephone line; a simplified description of the received signal is

sr(t) = αs(t)+ n(t)

where the parameter α, with α < 1, is the attenuation that the signal incurs and where n(t) is the noise introduced by the system. The noise function is obviously unknown (it is often modeled as a Gaussian process, as we will see) and so, once it’s added to the signal, it’s impossible to eliminate it. Because of attenuation, the receiver will include an amplifier with gain G to restore the voice signal to its original level; with G = 1∕α we will have

sa(t) = Gsr(t) = s(t) + Gn(t)

Unfortunately, as it appears, in order to regenerate the analog signal we have also amplified the noise G times; clearly, if G is large (i.e. if there is a lot of attenuation to compensate for) the voice signal ends up buried in noise. The problem is exacerbated if many intermediate amplifiers have to be used in cascade, as is the case in long submarine cables.

Consider now a digital voice signal, that is, a discrete-time signal whose samples have been quantized over, say, 256 levels: each sample can therefore be represented by an 8-bit word and the whole speech signal can be represented as a very long sequence of binary digits. We now build an analog signal as a two-level signal which switches for a few instants between, say, -1 V and +1 V for every “0” and “1” bit in the sequence respectively. The received signal will still be

sr(t) = αs(t)+ n(t)

but, to regenerate it, instead of linear amplification we can use nonlinear thresholding:

s_a(t) = \begin{cases} +1 & \text{if } s_r(t) \geq 0 \\ -1 & \text{if } s_r(t) < 0 \end{cases}

Figure 1.10 clearly shows that as long as the magnitude of the noise is less than α the two-level signal can be regenerated perfectly; furthermore, the regeneration process can be repeated as many times as necessary with no overall degradation.
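
The following minimal sketch contrasts the two regeneration strategies; the bit pattern, attenuation and noise level are arbitrary choices made only to satisfy the condition |noise| < α:

    # Minimal sketch: linear amplification boosts the noise along with the signal,
    # while thresholding regenerates the two-level signal exactly as long as the
    # noise magnitude stays below alpha. All numerical values are arbitrary.
    import random

    random.seed(0)
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    s = [+1.0 if b else -1.0 for b in bits]            # two-level encoding of the bits
    alpha = 0.2                                        # channel attenuation
    noise = [random.uniform(-0.15, 0.15) for _ in s]   # noise with |n(t)| < alpha
    sr = [alpha * x + n for x, n in zip(s, noise)]     # received signal

    # Analog-style regeneration: gain G = 1/alpha amplifies the noise as well
    sa_linear = [x / alpha for x in sr]
    print(max(abs(a - b) for a, b in zip(sa_linear, s)))   # residual noise, amplified five-fold

    # Digital-style regeneration: nonlinear thresholding recovers the bits exactly
    recovered = [1 if x >= 0 else 0 for x in sr]
    print(recovered == bits)                               # True: perfect regeneration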



Figure 1.10: Two-level analog signal encoding a binary sequence: original signal s(t) (light gray) and received signal sr(t) (black) in which both attenuation and noise are visible.

In reality of course things are a little more complicated and, because of the nature of noise, it is impossible to guarantee that some of the bits won’t be corrupted. The answer is to use error correcting codes which, by introducing redundancy in the signal, make the sequence of ones and zeros robust to the presence of errors; a scratched CD can still play flawlessly because of the Reed-Solomon error correcting codes used for the data. Data coding is the core subject of Information Theory and it is behind the stellar performance of modern communication systems; interestingly enough, the most successful codes have emerged from group theory, a branch of mathematics dealing with the properties of closed sets of integer numbers. Integers again! Digital signal processing and information theory have been able to join forces so successfully because they share a common data model (the integer) and therefore they share the same architecture (the processor). Computer code written to implement a digital filter can dovetail seamlessly with code written to implement error correction; linear processing and nonlinear flow control coexist naturally.
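
The error-correction machinery used in practice (Reed-Solomon codes and their relatives) is well beyond the scope of this chapter, but a toy example already conveys how redundancy buys robustness; the sketch below uses a naive three-fold repetition code with majority voting, chosen purely for illustration:

    # Toy illustration of redundancy: a 3x repetition code with majority voting
    # corrects any single flipped bit in each group of three transmitted bits.
    # Real systems use far more efficient codes; the principle is the same.
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(coded):
        out = []
        for i in range(0, len(coded), 3):
            group = coded[i:i + 3]
            out.append(1 if sum(group) >= 2 else 0)
        return out

    message = [1, 0, 1, 1]
    tx = encode(message)
    tx[4] ^= 1                      # the channel flips one transmitted bit
    print(decode(tx) == message)    # True: the error has been corrected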

A simple example of the power unleashed by digital signal processing is the performance of transatlantic cables. The first operational telegraph cable from Europe to North America was laid in 1858 (see Fig. 1.11); it worked for about a month before being irrecoverably damaged.(6)  From then on, new materials and rapid progress in electrotechnics boosted the performance of each subsequent cable; the key events in the timeline of transatlantic communications are shown in Table 1.1. The first transatlantic telephone cable was laid in 1956 and more followed in the next two decades with increasing capacity due to multicore cables and better repeaters; the invention of the echo canceler further improved the number of voice channels for already deployed cables. In 1968 the first experiments in PCM digital telephony were successfully completed and the quantum leap was around the corner: by the end of the 70’s cables were carrying over one order of magnitude more voice channels than in the 60’s. Finally, the deployment of the first fiber optic cable in 1988 opened the door to staggering capacities (and enabled the dramatic growth of the Internet).



Figure 1.11: Laying the first transatlantic cable.

Finally, it’s impossible not to mention the advent of data compression in this brief review of communication landmarks. Again, digital processing allows the coexistence of standard processing with sophisticated decision logic; this enables the implementation of complex data-dependent compression techniques and the inclusion of psychoperceptual models in order to match the compression strategy to the characteristics of the human visual or auditory system. A music format such as mp3 is perhaps the first example to come to mind but, as shown in Table 1.2, all communication domains have been greatly enhanced by the gains in throughput enabled by data compression.


Table 1.1: The main transatlantic cables from 1858 to our day.

Cable     Year   Type    Signaling    Capacity
          1858   Coax    telegraph    a few words per hour
          1866   Coax    telegraph    6-8 words per minute
          1928   Coax    telegraph    2500 characters per minute
TAT-1     1956   Coax    telephone    36 [48 by 1978] voice channels
TAT-3     1963   Coax    telephone    138 [276 by 1986] voice channels
TAT-5     1970   Coax    telephone    845 [2112 by 1993] voice channels
TAT-6     1976   Coax    telephone    4000 [10,000 by 1994] voice channels
TAT-8     1988   Fiber   data         280 Mbit/s (~ 40,000 voice channels)
TAT-14    2000   Fiber   data         640 Gbit/s (~ 9,700,000 voice channels)

Table 1.2: Uncompressed and compressed data rates.

Signal    Uncompressed Rate         Common Rate
Music     4.32 Mbit/s (CD audio)    128 Kbit/s (MP3)
Voice     64 Kbit/s (AM radio)      4.8 Kbit/s (cellphone CELP)
Photos    14 MB (raw)               300 KB (JPEG)
Video     170 Mbit/s (PAL)          700 Kbit/s (DivX)

1.5 How to Read this Book

This book tries to build a largely self-contained development of digital signal processing theory from within discrete time, while the relationship to the analog model of the world is tackled only after all the fundamental “pieces of the puzzle” are already in place. Historically, modern discrete-time processing started to consolidate in the 50’s when mainframe computers became powerful enough to handle the effective simulations of analog electronic networks. By the end of the 70’s the discipline had by all standards reached maturity; so much so, in fact, that the major textbooks on the subject still in use today had basically already appeared by 1975. Because of its ancillary origin with respect to the problems of that day, however, discrete-time signal processing has long been presented as a tributary to much more established disciplines such as Signals and Systems. While historically justifiable, that approach is no longer tenable today for three fundamental reasons: first, the pervasiveness of digital storage for data (from CDs to DVDs to flash drives) implies that most devices today are designed for discrete-time signals to start with; second, the trend in signal processing devices is to move the analog-to-digital and digital-to-analog converters to the very beginning and the very end of the processing chain so that even “classically analog” operations such as modulation and demodulation are now done in discrete-time; third, the availability of numerical packages like Matlab provides a testbed for signal processing experiments (both academically and just for fun) which is far more accessible and widespread than an electronics lab (not to mention affordable).

The idea therefore is to introduce discrete-time signals as a self-standing entity (Chap. 2), much in the natural way of a temperature sequence or a series of flood measurements, and then to remark that the mathematical structures used to describe discrete-time signals are one and the same with the structures used to describe vector spaces (Chap. 3). Equipped with the geometrical intuition afforded to us by the concept of vector space, we can proceed to “dissect” discrete-time signals with the Fourier transform, which turns out to be just a change of basis (Chap. 4). The Fourier transform opens the passage between the time domain and the frequency domain and, thanks to this dual understanding, we are ready to tackle the concept of processing as performed by discrete-time linear systems, also known as filters (Chap. 5). Next comes the very practical task of designing a filter to order, with an eye to the subtleties involved in filter implementation; we will mostly consider FIR filters, which are unique to discrete time (Chaps 6 and 7). After a brief excursion in the realm of stochastic sequences (Chap. 8) we will finally build a bridge between our discrete-time world and the continuous-time models of physics and electronics with the concepts of sampling and interpolation (Chap. 9); and digital signals will be completely accounted for after a study of quantization (Chap. 10). We will finally go back to purely discrete time for the final topic, multirate signal processing (Chap. 11), before putting it all together in the final chapter: the analysis of a commercial voiceband modem (Chap. 12).

1.6 Further Reading

The Bible of digital signal processing was and remains Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999); exceedingly exhaustive, it is a must-have reference. For background in signals and systems, the eponymous Signals and Systems, by Oppenheim, Willsky and Nawab (Prentice Hall, 1997) is a good start.

Most of the historical references mentioned in this introduction can be integrated by simple web searches. Other comprehensive books on digital signal processing include S. K. Mitra’s Digital Signal Processing (McGraw Hill, 2006) and Digital Signal Processing, by J. G. Proakis and D. G. Manolakis (Prentice Hall, 2006). For a fascinating excursus on the origin of calculus, see E. Hairer and G. Wanner, Analysis by its History (Springer-Verlag, 1996). A more than compelling epistemological essay on the continuum is Everything and More, by David Foster Wallace (Norton, 2003), which manages to be both profound and hilarious in an unprecedented way.

Finally, the very prolific literature on current signal processing research is published mainly by the Institute of Electrical and Electronics Engineers (IEEE) in several of its transactions such as IEEE Transactions on Signal Processing, IEEE Transactions on Image Processing and IEEE Transactions on Speech and Audio Processing.


Chapter 2
Discrete-Time Signals

In this Chapter we define more formally the concept of the discrete-time signal and establish an associated basic taxonomy used in the remainder of the book. Historically, discrete-time signals have often been introduced as the discretized version of continuous-time signals, i.e. as the sampled values of analog quantities, such as the voltage at the output of an analog circuit; accordingly, many of the derivations proceeded within the framework of an underlying continuous-time reality. In truth, the discretization of analog signals is only part of the story, and a rather minor one nowadays. Digital signal processing, especially in the context of communication systems, is much more concerned with the synthesis of discrete-time signals rather than with sampling. That is why we choose to introduce discrete-time signals from an abstract and self-contained point of view.

2.1 Basic Definitions

A discrete-time signal is a complex-valued sequence. Remember that a sequence is defined as a complex-valued function of an integer index n, with n ∈ Z; as such, it is a two-sided, infinite collection of values. A sequence can be defined analytically in closed form, as for example:

x[n] = (n  mod 11) - 5
(2.1)

shown as the “triangular” waveform plotted in Figure 2.1; or

x[n] = e^{j\frac{\pi}{20}n}
(2.2)

which is a complex exponential of period 40 samples, plotted in Figure 2.2. An example of a sequence drawn from the real world is

x[n] = the average Dow-Jones index in year n
(2.3)

plotted in Figure 2.3 from year 1900 to 2002. Another example, this time of a random sequence, is

x[n] = the n-th output of a random source U(-1, 1)
(2.4)

a realization of which is plotted in Figure 2.4.



Figure 2.1: Triangular discrete-time wave.



Figure 2.2: Discrete-time complex exponential x[n] = exp(j(π/20)n) (real and imaginary parts).

A few notes are in order:



Figure 2.3: The Dow-Jones industrial index.



Figure 2.4: An example of random signal.

2.1.1 The Discrete-Time Abstraction

While analytical forms of discrete-time signals such as the ones above are useful to illustrate the key points of signal processing and are absolutely necessary in the mathematical abstractions which follow, they are nonetheless just that, abstract examples. How does the notion of a discrete-time signal relate to the world around us? A discrete-time signal, in fact, captures our necessarily limited ability to take repeated accurate measurements of a physical quantity. We might be keeping track of the stock market index at the end of each day to draw a pencil and paper chart; or we might be measuring the voltage level at the output of a microphone 44,100 times per second (obviously not by hand!) to record some music via the computer’s soundcard. In both cases we need “time to write down the value” and are therefore forced to neglect everything that happens between measuring times. This “look and write down” operation is what is normally referred to as sampling. There are real-world phenomena which lend themselves very naturally and very intuitively to a discrete-time representation: the daily Dow-Jones index, for example, solar spots, yearly floods of the Nile, etc. There seems to be no irrecoverable loss in this neglect of intermediate values. But what about music, or radio waves? At this point it is not altogether clear from an intuitive point of view how a sampled measurement of these phenomena entails no loss of information. The mathematical proof of this will be shown in detail when we study the sampling theorem; for the time being let us say that “the proof of the cake is in the eating”: just listen to your favorite CD!

The important point to make here is that, once a real-world signal is converted to a discrete-time representation, the underlying notion of “time between measurements” becomes completely abstract. All we are left with is a sequence of numbers, and all signal processing manipulations, with their intended results, are independent of the way the discrete-time signal is obtained. The power and the beauty of digital signal processing lies in part with its invariance with respect to the underlying physical reality. This is in stark contrast with the world of analog circuits and systems, which have to be realized in a version specific to the physical nature of the input signals.

2.1.2 Basic Signals

The following sequences are fundamental building blocks for the theory of signal processing.

Impulse. The discrete-time impulse (or discrete-time delta function) is potentially the simplest discrete-time signal; it is shown in Figure 2.5(a) and is defined as

\delta[n] = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases}
(2.5)

Unit Step. The discrete-time unit step is shown in Figure 2.5(b) and is defined by the following expression:

u[n] = \begin{cases} 1 & n \geq 0 \\ 0 & n < 0 \end{cases}
(2.6)

The unit step can be obtained via a discrete-time integration of the impulse (see eq. (2.16)).

Exponential Decay. The discrete-time exponential decay is shown in Figure 2.5(c) and is defined as

x[n] = a^{n}\, u[n], \qquad a \in \mathbb{C}, \; |a| < 1
(2.7)

The exponential decay is, as we will see, the free response of a discrete-time first order recursive filter. Exponential sequences are well-behaved only for values of a less than one in magnitude; sequences in which |a| > 1 are unbounded and represent an unstable behavior (their energy and power are both infinite).

Complex Exponential. The discrete-time complex exponential has already been shown in Figure 2.2 and is defined as

x[n] = e^{j(\omega_0 n + \varphi)}
(2.8)

Special cases of the complex exponential are the real-valued discrete-time sinusoidal oscillations:

x[n] = sin(ω0n + φ) (2.9)
x[n] = cos(ω0n + φ) (2.10)
An example of (2.9) is shown in Figure 2.5(d).


Figure 2.5: Basic signals. Impulse (a); unit step (b); decaying exponential (c); real-valued sinusoid (d).
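
A minimal sketch generating finite portions of these basic signals (the index range and the parameter values are arbitrary choices):

    # Minimal sketch: finite portions of the basic discrete-time signals.
    # Index range and parameters (a, omega0, phi) are arbitrary choices.
    import math

    n = range(-8, 32)

    delta  = [1 if k == 0 else 0 for k in n]            # impulse, eq. (2.5)
    step   = [1 if k >= 0 else 0 for k in n]            # unit step, eq. (2.6)
    a = 0.8
    expdec = [a**k if k >= 0 else 0 for k in n]         # exponential decay, eq. (2.7)
    omega0, phi = math.pi / 20, 0.0
    sinus  = [math.sin(omega0 * k + phi) for k in n]    # real sinusoid, eq. (2.9)

    print(delta[:12])
    print(step[:12])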

2.1.3 Digital Frequency

With respect to the oscillatory behavior captured by the complex exponential, a note on the concept of “frequency” is in order. In the continuous-time world (the world of textbook physics, to be clear), where time is measured in seconds, the usual unit of measure for frequency is the Hertz which is equivalent to 1/second. In the discrete-time world, where the index n represents a dimensionless time, “digital” frequency is expressed in radians which is itself a dimensionless quantity.(1) The best way to appreciate this is to consider an algorithm to generate successive samples of a discrete-time sinusoid at a digital frequency ω0:

ω ← 0                      (initialization)
φ ← initial phase value
repeat
    x ← sin(ω + φ)         (compute next value)
    ω ← ω + ω0             (update phase)
until done

At each iteration,(2) the argument of the trigonometric function is incremented by ω0 and a new output sample is produced. With this in mind, it is easy to see that the highest frequency manageable by a discrete-time system is ωmax = 2π; for any frequency larger than this, the inner 2π-periodicity of the trigonometric functions “maps back” the output values to a frequency between 0 and 2π. This can be expressed as an equation:

sin(n(ω + 2kπ) + φ) = sin(nω + φ)
(2.11)

for all values of k ∈ Z. This 2π-equivalence of digital frequencies is a pervasive concept in digital signal processing and it has many important consequences which we will study in detail in the next Chapters.
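
The 2π-equivalence is easy to verify numerically; in the minimal sketch below (the frequency value is an arbitrary choice), a sinusoid at digital frequency ω0 and one at ω0 + 2π produce the same samples up to floating-point error:

    # Minimal sketch: sinusoids at omega0 and at omega0 + 2*pi are sample-by-sample
    # identical (up to floating-point rounding), illustrating eq. (2.11).
    import math

    omega0 = 0.3   # arbitrary digital frequency, in radians
    x1 = [math.sin(omega0 * n) for n in range(20)]
    x2 = [math.sin((omega0 + 2 * math.pi) * n) for n in range(20)]

    print(max(abs(a - b) for a, b in zip(x1, x2)))   # a value on the order of 1e-14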

2.1.4 Elementary Operators

In this Section we present some elementary operations on sequences.

Shift. A sequence x[n], shifted by an integer k is simply:

y[n] = x[n - k]
(2.12)

If k is positive, the signal is shifted “to the right”, meaning that the signal has been delayed; if k is negative, the signal is shifted “to the left”, meaning that the signal has been advanced. The delay operator can be indicated by the following notation:

$$\mathcal{D}_k\{x[n]\} = x[n-k]$$

Scaling. A sequence x[n] scaled by a factor α ∈ C is

y[n] = αx[n]
(2.13)

If α is real, the scaling represents a simple amplification or attenuation of the signal (when |α| > 1 and |α| < 1, respectively). If α is complex, amplification and attenuation are compounded with a phase shift.

Sum. The sum of two sequences x[n] and w[n] is their term-by-term sum:

$$y[n] = x[n] + w[n] \qquad (2.14)$$

Please note that sum and scaling are linear operators. Informally, this means scaling and sum behave “intuitively”:

$$\alpha\big(x[n] + w[n]\big) = \alpha x[n] + \alpha w[n]$$

or

$$\mathcal{D}_k\{x[n] + w[n]\} = x[n-k] + w[n-k]$$

Product. The product of two sequences x[n] and w[n] is their term-by-term product

y[n] = x[n]w[n]
(2.15)

Integration. The discrete-time equivalent of integration is expressed by the following running sum:

$$y[n] = \sum_{k=-\infty}^{n} x[k] \qquad (2.16)$$

Intuitively, integration computes a non-normalized running average of the discrete-time signal.

Differentiation. A discrete-time approximation to differentiation is the first-order difference:(3)

$$y[n] = x[n] - x[n-1] \qquad (2.17)$$

With respect to Section 2.1.2, note how the unit step can be obtained by applying the integration operator to the discrete-time impulse; conversely, the impulse can be obtained by applying the differentiation operator to the unit step.
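This impulse/step relationship is easy to verify numerically; the short sketch below (Python, over a finite index window, which is an unavoidable truncation of the infinite sequences) applies the running-sum and first-difference operators defined above:

    import numpy as np

    n = np.arange(-10, 11)                   # finite window of indices
    delta = (n == 0).astype(float)           # discrete-time impulse
    u = (n >= 0).astype(float)               # unit step

    # integration (running sum) of the impulse yields the unit step
    print(np.array_equal(np.cumsum(delta), u))            # True

    # differentiation (first-order difference) of the step yields the impulse
    diff_u = u - np.concatenate(([0.0], u[:-1]))           # x[n] - x[n-1], assuming u[n] = 0 before the window
    print(np.array_equal(diff_u, delta))                   # True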

2.1.5 The Reproducing Formula

The signal reproducing formula is a simple application of the basic signal and signal properties that we have just seen and it states that

$$x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k] \qquad (2.18)$$

Any signal can be expressed as a linear combination of suitably weighted and shifted impulses. In this case, the weights are nothing but the signal values themselves. While self-evident, this formula will reappear in a variety of fundamental derivations since it captures the “inner structure” of a discrete-time signal.

2.1.6 Energy and Power

We define the energy of a discrete-time signal as

$$E_x = \|x\|_2^2 = \sum_{n=-\infty}^{\infty} \big|x[n]\big|^2 \qquad (2.19)$$

(where the squared-norm notation will be clearer after the next Chapter). This definition is consistent with the idea that, if the values of the sequence represent a time-varying voltage, the above sum would express the total energy (in joules) dissipated over a 1 Ω resistor. Obviously, the energy is finite only if the above sum converges, i.e. if the sequence x[n] is square-summable. A signal with this property is sometimes referred to as a finite-energy signal. For a simple example of the converse, note that a periodic signal which is not identically zero is not square-summable.

We define the power of a signal as the usual ratio of energy over time, taking the limit over the number of samples considered:

$$P_x = \lim_{N\to\infty} \frac{1}{2N} \sum_{n=-N}^{N-1} \big|x[n]\big|^2 \qquad (2.20)$$

Clearly, signals whose energy is finite have zero total power (i.e. their energy dilutes to zero over an infinite time duration). Exponential sequences which are not decaying (i.e. those for which |a| > 1 in (2.7)) possess infinite power (which is consistent with the fact that they describe an unstable behavior). Note, however, that many signals whose energy is infinite do have finite power; this is the case, in particular, for periodic signals (such as sinusoids and combinations thereof). Because of their periodic nature, the above limit simplifies and their power is simply the average energy over one period. Assuming that the period is N samples, we have

$$P_x = \frac{1}{N} \sum_{n=0}^{N-1} \big|x[n]\big|^2 \qquad (2.21)$$
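As a small numerical illustration (a Python sketch; the particular signals and the truncation length are arbitrary assumptions made for the example), the definitions above can be applied to a decaying exponential, whose energy converges, and to a sinusoid, whose power is estimated over one period:

    import numpy as np

    # finite-energy signal: x[n] = 0.9^n u[n], truncated far enough for the tail to be negligible
    n = np.arange(0, 200)
    x = 0.9 ** n
    energy = np.sum(np.abs(x) ** 2)
    print(energy, 1 / (1 - 0.81))      # numerically close to the analytic value 1/(1 - |a|^2)

    # periodic signal: power as average energy over one period, eq. (2.21)
    N = 20                             # period in samples
    s = np.cos(2 * np.pi * np.arange(N) / N)
    power = np.mean(np.abs(s) ** 2)
    print(power)                       # approximately 0.5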

2.2 Classes of Discrete-Time Signals

The examples of discrete-time signals in (2.1) and (2.2) are two-sided, infinite sequences. Of course, in the practice of signal processing, it is impossible to deal with infinite quantities of data: for a processing algorithm to execute in a finite amount of time and to use a finite amount of storage, the input must be of finite length; even for algorithms that operate on the fly, i.e. algorithms that produce an output sample for each new input sample, an implicit limitation on the input data size is imposed by the necessarily limited life span of the processing device.(4) This limitation was all too apparent in our attempts to plot infinite sequences as shown in Figure 2.1 or 2.2: what the diagrams show, in fact, is just a meaningful and representative portion of the signals; as for the rest, the analytical description remains the only reference.

When a discrete-time signal admits no closed-form representation, as is basically always the case with real-world signals, its finite time support arises naturally because of the finite time spent recording the signal: every piece of music has a beginning and an end, and so does every phone conversation. In the case of the sequence representing the Dow Jones index, for instance, we basically cheated since the index did not even exist for years before 1884, and its value tomorrow is certainly not known – so that the signal is not really a sequence, although it can be arbitrarily extended to one. More importantly (and more often), the finiteness of a discrete-time signal is explicitly imposed by design since we are interested in concentrating our processing efforts on a small portion of an otherwise longer signal; in a speech recognition system, for instance, the practice is to cut up a speech signal into small segments and try to identify the phonemes associated with each one of them.(5)

A special case is that of periodic signals; even though these are bona-fide infinite sequences, it is clear that all information about them is contained in just one period. By describing one period (graphically or otherwise), we are, in fact, providing a full description of the sequence. The complete taxonomy of the discrete-time signals used in the book is the subject of the next Sections and is summarized in Table 2.1.

2.2.1 Finite-Length Signals

As we just mentioned, a finite-length discrete-time signal of length N is just a collection of N complex values. To introduce a point that will reappear throughout the book, a finite-length signal of length N is entirely equivalent to a vector in C^N. This equivalence is of immense import since all the tools of linear algebra become readily available for describing and manipulating finite-length signals. We can represent an N-point finite-length signal using the standard vector notation

$$\mathbf{x} = [x_0 \; x_1 \; \ldots \; x_{N-1}]^T$$

Note the transpose operator, which declares x as a column vector; this is the customary practice in the case of complex-valued vectors. Alternatively, we can (and often will) use a notation that mimics the one used for proper sequences:

x[n ],    n = 0, ..., N - 1

Here we must remember that, although we use the notation x[n], x[n] is not defined for values outside its support, i.e. for n < 0 or for n ≥ N. Note that we can always obtain a finite-length signal from an infinite sequence by simply dropping the sequence values outside the indices of interest. Vector and sequence notations are equivalent and will be used interchangeably according to convenience; in general, the vector notation is useful when we want to stress the algorithmic or geometric nature of certain signal processing operations, while the sequence notation is useful in stressing the algebraic structure of signal processing.

Finite-length signals are extremely convenient entities: their energy is always finite and, as a consequence, no stability issues arise in processing. From the computational point of view, they are not only a necessity but often the cornerstone of very efficient algorithmic design (as we will see, for instance, in the case of the FFT); one could say that all “practical” signal processing lives in C^N. It would be extremely awkward, however, to develop the whole theory of signal processing only in terms of finite-length signals; the asymptotic behavior of algorithms and transformations for infinite sequences is also extremely valuable since a stability result proven for a general sequence will hold for all finite-length signals too. Furthermore, infinite sequences derive a great notational flexibility from their function-like definition. We can immediately recognize and understand the expression x[n − k] as a k-point shift of a sequence x[n]; but, in the case of finite-support signals, how are we to define such a shift? We would have to explicitly take into account the finiteness of the signal and the associated “border effects”, i.e. the behavior of operations at the edges of the signal. For this reason, in most derivations which involve finite-length signals, these signals will be embedded into proper sequences, as we will see shortly.

2.2.2 Infinite-Length Signals

Aperiodic Signals. The most general type of discrete-time signal is represented by a generic infinite complex sequence. Although, as previously mentioned, such sequences lie beyond our processing and storage capabilities, they are invaluable as a generalization in the limit. As such, they must be handled with some care when it comes to their properties. We will see shortly that two of the most important properties of infinite sequences concern their summability: this can take the form of either absolute summability (the stronger condition) or square summability (a weaker condition, corresponding to finite energy).



Figure 2.6: Equivalence between a right shift by one of a periodized signal and the circular shift of the original signal. x and x′ are the length-N original signal and its right circular shift by one, respectively.

Periodic Signals. A periodic sequence with period N is one for which

˜x[n] = ˜x[n + kN ],      k ∈ Z
(2.22)

The tilde notation ˜x[n] will be used whenever we need to explicitly stress a periodic behavior. Clearly an N-periodic sequence is completely defined by its N values over a period; that is, a periodic sequence “carries no more information” than a finite-length signal of length N.

Periodic Extensions. Periodic sequences are infinite in length, and yet their information is contained within a finite number of samples. In this sense, periodic sequences represent a first bridge between finite-length signals and infinite sequences. In order to “embed” a finite-length signal x[n], n = 0, …, N − 1 into a sequence, we can take its periodized version:

$$\tilde{x}[n] = x[n \bmod N], \qquad n \in \mathbb{Z} \qquad (2.23)$$

this is called the periodic extension of the finite length signal x[n]. This type of extension is the “natural” one in many contexts, for reasons which will be apparent later when we study the frequency-domain representation of discrete-time signals. Note that now an arbitrary shift of the periodic sequence corresponds to the periodization of a circular shift of the original finite-length signal. A circular shift by k ∈ Z is easily visualized by imagining a shift register; if we are shifting towards the right (k > 0), the values which pop out of the rightmost end of the shift register are pushed back in at the other end.(6) The relationship between the circular shift of a finite-length signal and the linear shift of its periodic extension is depicted in Figure 2.6. Finally, the energy of a periodic extension becomes infinite, while its power is simply the energy of the finite-length original signal scaled by 1∕N.
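The correspondence between a shift of the periodic extension and a circular shift of the original signal is easy to check numerically; the following Python sketch (the length and signal values are arbitrary assumptions for the example) delays the periodized signal by one and compares one period of the result with a circular shift of the original:

    import numpy as np

    x = np.arange(5.0)                       # a length-N signal, N = 5
    N = len(x)

    def x_tilde(n):
        # periodic extension of x, defined for any integer n (eq. 2.23)
        return x[np.mod(n, N)]

    n = np.arange(N)
    delayed = x_tilde(n - 1)                 # one period of the delayed periodic extension
    circular = np.roll(x, 1)                 # right circular shift of the original signal
    print(np.array_equal(delayed, circular)) # True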

Finite-Support Signals. An infinite discrete-time sequence ¯x[n] is said to have finite support if its values are zero for all indices outside of an interval; that is, there exist N and M ∈ Z such that

¯x[n] = 0   for n < M  and n > M + N  - 1

Note that, although x̄[n] is an infinite sequence, the knowledge of M and of the N nonzero values of the sequence completely specifies the entire signal. This suggests another approach to embedding a finite-length signal x[n], n = 0, …, N − 1, into a sequence, i.e.

$$\bar{x}[n] = \begin{cases} x[n] & \text{if } 0 \leq n \leq N-1 \\ 0 & \text{otherwise} \end{cases} \qquad n \in \mathbb{Z} \qquad (2.24)$$

where we have chosen M = 0 (but any other choice of M could be used). Note that, here, in contrast to the periodic extension of x[n], we are actually adding arbitrary information in the form of the zero values outside of the support interval. This is not without consequences, as we will see in the following Chapters. In general, we will use the bar notation x̄[n] for sequences defined as the finite-support extension of a finite-length signal. Note that, now, a shift of the finite-support extension gives rise to a zero-padded shift of the signal within the locations between M and M + N − 1; the dynamics of the shift are shown in Figure 2.7.



Figure 2.7: Relationship between the right shift by one of a finite-support extension and the zero-padded shift of the original signal. x and x′ are the length-N original signal and its zero-padded shift by one, respectively.


Table 2.1: Basic discrete-time signal types.
| Signal Type     | Notation                                         | Energy                     | Power      |
|-----------------|--------------------------------------------------|----------------------------|------------|
| Finite-Length   | x[n], n = 0, 1, ..., N−1;  x ∈ C^N               | Σ_{n=0}^{N−1} |x[n]|²      | undef.     |
| Infinite-Length | x[n], n ∈ Z                                      | eq. (2.19)                 | eq. (2.20) |
| N-Periodic      | x̃[n], n ∈ Z;  x̃[n] = x̃[n + kN]                 | ∞                          | eq. (2.21) |
| Finite-Support  | x̄[n], n ∈ Z;  x̄[n] ≠ 0 for M ≤ n ≤ M + N − 1    | Σ_{n=M}^{M+N−1} |x̄[n]|²   | 0          |


2.3 Examples

Example 2.1: Discrete-time in the Far West

The fact that the “fastest” digital frequency is 2π can be readily appreciated in old western movies. In classic scenarios there is always a sequence showing a stagecoach leaving town: we can see the spoked wagon wheels starting to turn forward faster and faster, then stop, and then start to turn backwards. In fact, each frame in the movie is a snapshot of a spinning disk with increasing angular velocity. The filming process therefore transforms the wheel’s movement into a sequence of discrete-time positions depicting a circular motion with increasing frequency. When the speed of the wheel is such that the time between frames covers a full revolution, the wheel appears to be stationary: this corresponds to the fact that the maximum digital frequency ω = 2π is indistinguishable from the slowest frequency ω = 0. When the per-frame rotation approaches a full revolution, the wheel on film appears to move backwards, which corresponds to a negative digital frequency. This is because a displacement of 2π − α between successive frames is interpreted by the brain as a negative displacement of α: our intuition always privileges the most economical explanation of natural phenomena.

Example 2.2: Building periodic signals

Given a discrete-time signal x[n] and an integer N > 0 we can always formally write

$$\tilde{y}[n] = \sum_{k=-\infty}^{\infty} x[n - kN]$$

The signal ỹ[n], if it exists, is an N-periodic sequence. The periodic signal ỹ[n] is “manufactured” by superimposing infinite copies of the original signal x[n] spaced N samples apart. We can distinguish three cases:

  1. If x[n] is finite-support and N is bigger than the size of the support, then the copies in the sum do not overlap; in the limit, if N is exactly equal to the size of the support, then ỹ[n] corresponds to the periodic extension of x[n] considered as a finite-length signal.
  2. If x[n] is finite-support and N is smaller than the size of the support, then the copies in the sum do overlap; for each n, the value of ỹ[n] is the sum of a finite number of terms.
  3. If x[n] has infinite support, then each value of ỹ[n] is the sum of an infinite number of terms. The existence of ỹ[n] depends on the properties of x[n].


Figure 2.8: Periodization of a simple finite-support signal (support length L = 19); original signal (top panel) and periodized versions with N = 35 > L, N = 19 = L, N = 15 < L respectively.

The first two cases are illustrated in Figure 2.8. In practice, the periodization of short sequences is an effective method to synthesize the sound of string instruments such as a guitar or a piano; used in conjunction with simple filters, the technique is known as the Karplus-Strong algorithm.

As an example of the last type, take for instance the signal x[n] = α^{−n} u[n]. The periodization formula leads to

$$\tilde{y}[n] = \sum_{k=-\infty}^{\infty} \alpha^{-(n-kN)}\, u[n-kN] = \sum_{k=-\infty}^{\lfloor n/N \rfloor} \alpha^{-(n-kN)}$$

since u[n − kN] = 0 for k > ⌊n/N⌋. Now write n = mN + i, with m = ⌊n/N⌋ and i = n mod N. We have

$$\tilde{y}[n] = \sum_{k=-\infty}^{m} \alpha^{-(m-k)N - i} = \alpha^{-i} \sum_{h=0}^{\infty} \alpha^{-hN}$$

which exists and is finite for |α| > 1; for these values of α we have

$$\tilde{y}[n] = \frac{\alpha^{-(n \bmod N)}}{1 - \alpha^{-N}} \qquad (2.25)$$

which is indeed N-periodic. An example is shown in Figure 2.9.
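Equation (2.25) is also easy to check numerically; the sketch below (Python, with the same choices α = 1.1 and N = 40 used in Figure 2.9, and an arbitrary truncation of the infinite sum) compares a truncated periodization sum with the closed form:

    import numpy as np

    alpha, N = 1.1, 40
    n = np.arange(0, 120)                        # a few periods' worth of indices

    # truncated periodization sum of x[n] = alpha^(-n) u[n]:
    # terms with very negative k are vanishingly small, so a finite range suffices
    y = np.zeros(len(n))
    for k in range(-200, 4):
        m = n - k * N
        y += np.where(m >= 0, alpha ** (-m.astype(float)), 0.0)

    # closed form (2.25)
    y_closed = alpha ** (-(n % N).astype(float)) / (1 - alpha ** (-N))
    print(np.allclose(y, y_closed))              # True, up to the truncation error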



Figure 2.9: Periodization of x[n] = 1.1-n u[n] with N = 40; original signal (top panel) and periodized version (bottom panel).

2.4 Further Reading

For more discussion on discrete-time signals, see Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999), in particular Chapter 2.

Other books of interest include: B. Porat, A Course in Digital Signal Processing (Wiley, 1997) and R. L. Allen and D. W. Mills’ Signal Analysis (IEEE Press, 2004).

2.5 Exercises

Exercise 2.1: Review of complex numbers. 

  1. Let $s[n] := \frac{1}{2^n} + j\frac{1}{3^n}$. Compute $\sum_{n=1}^{\infty} s[n]$.
  2. Same question with $s[n] := \left(\frac{-j}{3}\right)^{n}$.
  3. Characterize the set of complex numbers satisfying $z^* = z^{-1}$.
  4. Find 3 complex numbers $\{z_0, z_1, z_2\}$ which satisfy $z_i^3 = 1$, $i = 0, 1, 2$.
  5. What is the value of the infinite product $\prod_{n=1}^{\infty} \exp(j\pi/2^n)$?

Exercise 2.2: Periodic signals.  For each of the following discrete-time signals, state whether the signal is periodic and, if so, specify the period:

  1. $x[n] = e^{jn\pi}$
  2. $x[n] = \cos(n)$
  3. $x[n] = \sqrt{\cos\left(\frac{\pi n}{7}\right)}$
  4. $x[n] = \sum_{k=-\infty}^{\infty} y[n - 100k]$, with $y[n]$ absolutely summable.


Chapter 3
Signals and Hilbert Spaces

In the 17th century, algebra and geometry started to interact in a fruitful synergy which continues to the present day. Descartes’s original idea of translating geometric constructs into algebraic form spurred a new line of attack in mathematics; soon, a series of astonishing results was produced for a number of problems which had long defied geometrical solutions (such as, famously, the trisection of the angle). It also spearheaded the notion of vector space, in which a geometrical point could be represented as an n-tuple of coordinates; this, in turn, readily evolved into the theory of linear algebra. Later, the concept proved useful in the opposite direction: many algebraic problems could benefit from our innate geometrical intuition once they were cast in vector form; from the easy three-dimensional visualization of concepts such as distance and orthogonality, more complex algebraic constructs could be brought within the realm of intuition. The final leap of imagination came with the realization that the concept of vector space could be applied to much more abstract entities such as infinite-dimensional objects and functions. In so doing, however, spatial intuition could be of limited help and for this reason, the notion of vector space had to be formalized in much more rigorous terms; we will see that the definition of Hilbert space is one such formalization.

Most of the signal processing theory developed in this book can be usefully cast in terms of vector notation, and the advantages of this approach are exactly what we have just delineated. First of all, the standard machinery of linear algebra becomes immediately available and applicable; this greatly simplifies the formalism used in the mathematical proofs which will follow and, at the same time, it fosters a good intuition with respect to the underlying principles which are being put in place. Furthermore, the vector notation creates a frame of thought which seamlessly links the more abstract results involving infinite sequences to the algorithmic reality involving finite-length signals. Finally, on the practical side, vector notation is the standard paradigm for numerical analysis packages such as Matlab; signal processing algorithms expressed in vector notation translate to working code with very little effort.

In the previous Chapter, we established the basic notation for the different classes of discrete-time signals which we will encounter time and again in the rest of the book and we hinted at the fact that a tight correspondence can be established between the concept of signal and that of vector space. In this Chapter, we pursue this link further, firstly by reviewing the familiar Euclidean space in finite dimensions and then by extending the concept of vector space to infinite-dimensional Hilbert spaces.

3.1 Euclidean Geometry: a Review

Euclidean geometry is a straightforward formalization of our spatial sensory experience; hence its cornerstone role in developing a basic intuition for vector spaces. Everybody is (or should be) familiar with Euclidean geometry and the natural “physical” spaces like R2 (the plane) and R3 (the three-dimensional space). The notion of distance is clear; orthogonality is intuitive and maps to the idea of a “right angle”. Even a more abstract concept such as that of basis is quite easy to contemplate (the standard coordinate concepts of latitude, longitude and height, which correspond to the three orthogonal axes in R3). Unfortunately, immediate spatial intuition fails us for higher dimensions (i.e. for RN with N > 3), yet the basic concepts introduced for R3 generalize easily to RN so that it is easier to state such concepts for the higher-dimensional case and specialize them with examples for N = 2 or N = 3. These notions, ultimately, will be generalized even further to more abstract types of vector spaces. For the moment, let us review the properties of RN, the N-dimensional Euclidean space.

Vectors and Notation. A point in RN is specified by an N-tuple of coordinates:(1)

$$\mathbf{x} = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{bmatrix} = [x_0 \; x_1 \; \ldots \; x_{N-1}]^T$$

where x_i ∈ R, i = 0, 1, …, N − 1. We call this set of coordinates a vector and the N-tuple will be denoted synthetically by the symbol x; coordinates are usually expressed with respect to a “standard” orthonormal basis.(2) The vector

$$\mathbf{0} = [0 \; 0 \; \ldots \; 0]^T$$

i.e. the null vector, is considered the origin of the coordinate system.

The generic n-th element in vector x is indicated by the subscript x_n. In the following we will often consider a set of M arbitrarily chosen vectors in R^N and this set will be indicated by the notation $\{\mathbf{x}^{(k)}\}_{k=0,\ldots,M-1}$. Each vector in the set is indexed by the superscript (k). The n-th element of the k-th vector in the set is indicated by the notation $x_n^{(k)}$.

Inner Product. The inner product between two vectors x,y ∈ RN is defined as

$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=0}^{N-1} x_n y_n \qquad (3.1)$$

We say that x and y are orthogonal when their inner product is zero:

$$\mathbf{x} \perp \mathbf{y} \;\Longleftrightarrow\; \langle \mathbf{x}, \mathbf{y} \rangle = 0 \qquad (3.2)$$

Norm. The norm of a vector is defined in terms of the inner product as

$$\|\mathbf{x}\|_2 = \sqrt{\sum_{n=0}^{N-1} x_n^2} = \langle \mathbf{x}, \mathbf{x} \rangle^{1/2} \qquad (3.3)$$

It is easy to visualize geometrically that the norm of a vector corresponds to its length, i.e. to the distance between the origin and the point identified by the vector’s coordinates. A remarkable property linking the inner product and the norm is the Cauchy-Schwarz inequality (the proof of which is nontrivial); given x,y ∈ RN we can state that

$$\big|\langle \mathbf{x}, \mathbf{y} \rangle\big| \leq \|\mathbf{x}\|_2\, \|\mathbf{y}\|_2$$

Distance. The concept of norm is used to introduce the notion of Euclidean distance between two vectors x and y:

$$d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_2 = \sqrt{\sum_{n=0}^{N-1} (x_n - y_n)^2} \qquad (3.4)$$



Figure 3.1: Elementary properties of vectors in R2: orthogonality of two vectors x and y (left); difference vector x - y (middle); sum of two orthogonal vectors z = x + y, also known as Pythagorean theorem (right).

From this, we can easily derive the Pythagorean theorem for N dimensions: if two vectors are orthogonal, x y, and we consider the sum vector z = x + y, we have

$$\|\mathbf{z}\|_2^2 = \|\mathbf{x}\|_2^2 + \|\mathbf{y}\|_2^2 \qquad (3.5)$$

The above properties are graphically shown in Figure 3.1 for R2.
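These properties are easy to verify numerically; the following Python sketch (with two arbitrarily chosen orthogonal vectors in R^3, an illustrative assumption) checks orthogonality, the Cauchy-Schwarz inequality and the Pythagorean theorem:

    import numpy as np

    x = np.array([1.0, 2.0, 2.0])
    y = np.array([2.0, 1.0, -2.0])           # chosen so that <x, y> = 0

    print(np.dot(x, y))                                                  # 0.0: x and y are orthogonal
    print(abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y))    # True: Cauchy-Schwarz holds

    z = x + y
    print(np.isclose(np.linalg.norm(z) ** 2,
                     np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2))   # True: Pythagorean theorem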

Bases. Consider a set of M arbitrarily chosen vectors in RN: {x(k)}k=0M-1. Given such a set, a key question is that of completeness: can any vector in RN be written as a linear combination of vectors from the set? In other words, we ask ourselves whether, for any z ∈ RN, we can find a set of M coefficients αk ∈ R such that z can be expressed as

$$\mathbf{z} = \sum_{k=0}^{M-1} \alpha_k \mathbf{x}^{(k)} \qquad (3.6)$$

Clearly, M needs to be greater than or equal to N; but what conditions does a set of vectors {x^{(k)}}_{k=0,…,M−1} need to satisfy so that (3.6) holds for any z ∈ R^N? The set of vectors must span R^N, and it can be shown that this is equivalent to saying that the set must contain at least N linearly independent vectors. In turn, N vectors {y^{(k)}}_{k=0,…,N−1} are linearly independent if the equation

$$\sum_{k=0}^{N-1} \beta_k \mathbf{y}^{(k)} = \mathbf{0} \qquad (3.7)$$

is satisfied only when all the βk’s are zero. A set of N linearly independent vectors for RN is called a basis and, amongst bases, the ones with mutually orthogonal vectors of norm equal to one are called orthonormal bases. For an orthonormal basis {y(k)} we therefore have

$$\big\langle \mathbf{y}^{(k)}, \mathbf{y}^{(h)} \big\rangle = \begin{cases} 1 & \text{if } k = h \\ 0 & \text{otherwise} \end{cases} \qquad (3.8)$$

Figure 3.2 reviews the above concepts in low dimensions.



Figure 3.2: Linear independence and bases: x(0), x(1) and x(2) are coplanar in R3 and, therefore, they do not form a basis; conversely, x(3) and any two of {x(0),x(1),x(2)} are linearly independent.

The standard orthonormal basis for RN is the canonical basis {δ(k)}k=0N-1 with

$$\delta_n^{(k)} = \delta[n-k] = \begin{cases} 1 & \text{if } n = k \\ 0 & \text{otherwise} \end{cases}$$

The orthonormality of such a set is immediately apparent. Another important orthonormal basis for RN is the normalized Fourier basis {w(k)}k=0N-1 for which

$$w_n^{(k)} = \frac{1}{\sqrt{N}}\, e^{-j\frac{2\pi}{N}nk}$$

The orthonormality of the basis will be proved in the next Chapter.
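A quick numerical check of this orthonormality is straightforward (a Python sketch; the size N = 8 is an arbitrary choice, and since the basis vectors are complex the inner products are computed with a conjugated first argument, as is appropriate for complex vectors):

    import numpy as np

    N = 8
    n = np.arange(N)
    # normalized Fourier basis vectors w^(k)_n = (1/sqrt(N)) exp(-j 2*pi*n*k / N)
    W = np.array([np.exp(-2j * np.pi * n * k / N) / np.sqrt(N) for k in range(N)])

    # Gram matrix of pairwise inner products <w^(k), w^(h)>
    G = W.conj() @ W.T
    print(np.allclose(G, np.eye(N)))   # True: the set reproduces (3.8), i.e. it is orthonormal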

3.2 From Vector Spaces to Hilbert Spaces

The purpose of the previous Section was to briefly review the elementary notions and spatial intuitions of Euclidean geometry. A thorough study of vectors in RN and CN is the subject of linear algebra; yet, the idea of vectors, orthogonality and bases is much more general, the basic ingredients being an inner product and the use of a square norm as in (3.3).

While the analogy between vectors in C^N and length-N signals is readily apparent, the question now hinges on how we are to proceed in order to generalize the above concepts to the class of infinite sequences. Intuitively, for instance, we can let N grow to infinity and obtain C^∞ as the Euclidean space for infinite sequences; in this case, however, much care must be exercised with expressions such as (3.1) and (3.3), which can diverge for sequences as simple as x[n] = 1 for all n. In fact, the proper generalization of C^N to an infinite number of dimensions is in the form of a particular vector space called a Hilbert space; the structure of this kind of vector space imposes a set of constraints on its elements so that divergence problems, such as the one we just mentioned, no longer bother us. When we embed infinite sequences into a Hilbert space, these constraints translate to the condition that the corresponding signals have finite energy – which is a mild and reasonable requirement.

Finally, it is important to remember that the notion of Hilbert space is applicable to much more general vector spaces than C^N; for instance, we can easily consider spaces of functions over an interval or over the real line. This generality is actually the cornerstone of a branch of mathematics called functional analysis. While we will not pursue these kinds of generalizations in great depth, we will certainly point out a few of them along the way. The space of square integrable functions, for instance, will turn out to be a marvelous tool a few Chapters from now when, finally, the link between continuous- and discrete-time signals is explored in detail.

3.2.1 The Recipe for Hilbert Space

A word of caution: we are now starting to operate in a world of complete abstraction. Here a vector is an entity per se and, while analogies and examples in terms of Euclidean geometry can be useful visually, they are, by no means, exhaustive. In other words: vectors are no longer just N-tuples of numbers; they can be anything. This said, a Hilbert space can be defined in incremental steps starting from a general notion of vector space and by supplementing this space with two additional features: the existence of an inner product and the property of completeness.

Vector Space. Consider a set of vectors V and a set of scalars S (which can be either R or C for our purposes). A vector space H(V, S) is completely defined by the existence of a vector addition operation and a scalar multiplication operation which satisfy the following properties for any x, y, z ∈ V and any α, β ∈ S: addition is commutative (x + y = y + x) and associative; scalar multiplication is associative (α(βx) = (αβ)x) and distributive with respect to both vector and scalar addition; there exists a null vector 0 such that x + 0 = x; every x possesses an additive inverse −x; and 1 · x = x.

Inner Product Space. What we have so far is the simplest type of vector space; the next ingredient which we consider is the inner product, which is essential to build a notion of distance between elements in a vector space. A vector space with an inner product is called an inner product space. An inner product for H(V, S) is a function from V × V to S which satisfies the following properties for any x, y, z ∈ V: it is distributive with respect to vector addition (⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩); it commutes with scalar multiplication in its second argument (⟨x, αy⟩ = α⟨x, y⟩); it is conjugate-symmetric (⟨x, y⟩ = ⟨y, x⟩*); and the self-product is nonnegative (⟨x, x⟩ ≥ 0), with ⟨x, x⟩ = 0 if and only if x = 0.

From this definition of the inner product, a series of additional definitions and properties can be derived: first of all, orthogonality between two vectors is defined with respect to the inner product, and we say that the non-zero vectors x and y are orthogonal, or x ⊥ y  , if and only if

⟨x,y ⟩ = 0
(3.23)

From the definition of the inner product, we can define the norm of a vector as (from now on we will omit the subscript 2 from the norm symbol):

$$\|\mathbf{x}\| = \langle \mathbf{x}, \mathbf{x} \rangle^{1/2} \qquad (3.24)$$

In turn, the norm satisfies the Cauchy-Schwarz inequality:

$$\big|\langle \mathbf{x}, \mathbf{y} \rangle\big| \leq \|\mathbf{x}\| \cdot \|\mathbf{y}\| \qquad (3.25)$$

with strict equality if and only if x = αy. The norm also satisfies the triangle inequality:

$$\|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\| \qquad (3.26)$$

with strict equality if and only if x = αy and α ∈ R+. For orthogonal vectors, the triangle inequality becomes the famous Pythagorean theorem:

$$\|\mathbf{x} + \mathbf{y}\|^2 = \|\mathbf{x}\|^2 + \|\mathbf{y}\|^2 \qquad \text{for } \mathbf{x} \perp \mathbf{y} \qquad (3.27)$$

Hilbert Space. A vector space H(V, S) equipped with an inner product is called an inner product space. To obtain a Hilbert space, we need completeness. This is a slightly more technical notion, which essentially implies that convergent sequences of vectors in V have a limit that is also in V. To gain intuition, think of the set of rational numbers Q versus the set of real numbers R. The set of rational numbers is incomplete, because there are convergent sequences in Q which converge to irrational numbers. The set of real numbers contains these irrational numbers, and is in that sense the completion of Q. Completeness is usually hard to prove in the case of infinite-dimensional spaces; in the following it will be tacitly assumed and the interested reader can easily find the relevant proofs in advanced analysis textbooks. Finally, we will also only consider separable Hilbert spaces, which are the ones that admit countable orthonormal bases.

3.2.2 Examples of Hilbert Spaces

Finite Euclidean Spaces. The vector space CN, with the “natural” definition for the sum of two vectors z = x + y as

zn = xn + yn
(3.28)

and the definition of the inner product as

$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{n=0}^{N-1} x_n^* y_n \qquad (3.29)$$

is a Hilbert space.

Polynomial Functions. An example of a “functional” Hilbert space is the vector space P_N([0,1]) of polynomial functions on the interval [0,1] with maximum degree N. It is a good exercise to show that P_∞([0,1]), the space of polynomials of arbitrary degree, is not complete; consider for instance the sequence of polynomials

$$p_n(x) = \sum_{k=0}^{n} \frac{x^k}{k!}$$

This sequence converges as $p_n(x) \to e^x \notin P_\infty([0,1])$.

Square Integrable Functions. Another interesting example of a functional Hilbert space is the space of square integrable functions over a finite interval. For instance, L2([−π,π]) is the space of real or complex functions on the interval [−π,π] which have a finite norm. The inner product over L2([−π,π]) is defined as

$$\langle f, g \rangle = \int_{-\pi}^{\pi} f^*(t)\, g(t)\, dt \qquad (3.30)$$

so that the norm of f(t) is

$$\|f\| = \sqrt{\int_{-\pi}^{\pi} \big|f(t)\big|^2\, dt} \qquad (3.31)$$

For f(t) to belong to L2([−π,π]) it must be ‖f‖ < ∞.

3.2.3 Inner Products and Distances

The inner product is a fundamental tool in a vector space since it allows us to introduce a notion of distance between vectors. This is a typical instance in which a geometric construct helps us to generalize a basic idea to much more abstract scenarios. Indeed, take the simple Euclidean space R^N and a given vector x; for any vector y ∈ R^N the inner product ⟨x, y⟩ is the measure of the orthogonal projection of y over x. We know that the orthogonal projection defines the point on x which is closest to y and therefore this indicates how well we can approximate y by a simple scaling of x. To illustrate this, it should be noted that

$$\langle \mathbf{x}, \mathbf{y} \rangle = \|\mathbf{x}\|\, \|\mathbf{y}\| \cos\theta$$

where θ is the angle between the two vectors (you can work out the expression in R2 to easily convince yourself of this; the result generalizes to any other dimension). Clearly, if the vectors are orthogonal, the cosine is zero and no approximation is possible. Since the inner product is dependent on the angular separation between the vectors, it represents a first rough measure of similarity between x and y; in broad terms, it provides a measure of the difference in shape between vectors.

In the context of signal processing, this is particularly relevant since, most of the time, we are interested in the “difference in shape” between signals. As we have said before, discrete-time signals are vectors; the computation of their inner product will assume different names according to the processing context in which we find ourselves: it will be called filtering, when we are trying to approximate or modify a signal, or it will be called correlation, when we are trying to detect one particular signal amongst many. Yet, in all cases, it will still be an inner product, i.e. a qualitative measure of similarity between vectors. In particular, the concept of orthogonality between signals implies that the signals are perfectly distinguishable or, in other words, that their shape is completely different.

The need for a quantitative measure of similarity in some applications calls for the introduction of the Euclidean distance, which is derived from the inner product as

$$d(\mathbf{x}, \mathbf{y}) = \langle \mathbf{x} - \mathbf{y}, \mathbf{x} - \mathbf{y} \rangle^{1/2} = \|\mathbf{x} - \mathbf{y}\| \qquad (3.32)$$

In particular, for CN the Euclidean distance is defined by the expression:

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{n=0}^{N-1} |x_n - y_n|^2} \qquad (3.33)$$

whereas for L2([-π,π]) we have

$$d(x, y) = \sqrt{\int_{-\pi}^{\pi} \big|x(t) - y(t)\big|^2\, dt} \qquad (3.34)$$

In the practice of signal processing, the Euclidean distance is referred to as the root mean square error;(4) this is a global, quantitative goodness-of-fit measure when trying to approximate signal y with x.

Incidentally, there are other types of distance measures which do not rely on a notion of inner product; for example in CN we could define

$$d(\mathbf{x}, \mathbf{y}) = \max_{0 \leq n < N} |x_n - y_n| \qquad (3.35)$$

This distance is based on the supremum norm and is usually indicated by ‖x − y‖_∞; however, it can be shown that there is no inner product from which this norm can be derived and therefore no Hilbert space can be constructed where ‖·‖_∞ is the natural norm. Nonetheless, this norm will reappear later, in the context of optimal filter design.
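As a small illustration of how the two metrics can rank approximations differently (a Python sketch; the reference signal and the two candidate approximations are arbitrary assumptions chosen for contrast), consider an error spread thinly over all samples versus a single large, localized error:

    import numpy as np

    y  = np.zeros(8)                             # reference signal
    a1 = np.full(8, 0.4)                         # moderate error spread over all samples
    a2 = np.concatenate(([1.0], np.zeros(7)))    # one large, localized error

    for a in (a1, a2):
        d2   = np.linalg.norm(y - a)             # Euclidean distance, eq. (3.33)
        dinf = np.max(np.abs(y - a))             # supremum-norm distance, eq. (3.35)
        print(round(float(d2), 3), float(dinf))
    # a1: 1.131 0.4   (worse in the Euclidean sense, better in the sup-norm sense)
    # a2: 1.0   1.0   (the opposite ranking)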

3.3 Subspaces, Bases, Projections

Now that we have defined the properties of Hilbert space, it is only natural to start looking at the consequent inner structure of such a space. The best way to do so is by introducing the concept of basis. You can think of a basis as the “skeleton” of a vector space, i.e. a structure which holds everything together; yet, this skeleton is flexible and we can twist it, stretch it and rotate it in order to highlight some particular structure of the space and facilitate access to particular information that we may be seeking. All this is accomplished by a linear transformation called a change of basis; to anticipate the topic of the next Chapter, we will see shortly that the Fourier transform is an instance of basis change.

Sometimes, we are interested in exploring in more detail a specific subset of a given vector space; this is accomplished via the concept of subspace. A subspace is, as the name implies, a restricted region of the global space, with the additional property that it is closed under the usual vector operations. This implies that, once in a subspace, we can operate freely without ever leaving its confines; just like a full-fledged space, a subspace has its own skeleton (i.e. the basis) and, again, we can exploit the properties of this basis to highlight the features that interest us.

3.3.1 Definitions

Assume H(V,S) is a Hilbert space, with V a vector space and S a set of scalars (i.e. C).

Subspace. A subspace of V is defined as a subset P ⊆ V that satisfies the following properties: P is closed under vector addition (if x, y ∈ P then x + y ∈ P) and P is closed under scalar multiplication (if x ∈ P and α ∈ S then αx ∈ P).

Clearly, V is a subspace of itself.

Span. Given an arbitrary set of M vectors W = {x(m)}m=0,1,,M-1, the span of these vectors is defined as

$$\operatorname{span}(W) = \left\{ \sum_{m=0}^{M-1} \alpha_m \mathbf{x}^{(m)} \right\}, \qquad \alpha_m \in S \qquad (3.38)$$

i.e. the span of W is the set of all possible linear combinations of the vectors in W. The set of vectors W is called linearly independent if the following holds:

$$\sum_{m=0}^{M-1} \alpha_m \mathbf{x}^{(m)} = \mathbf{0} \;\Longleftrightarrow\; \alpha_m = 0 \quad \text{for } m = 0, 1, \ldots, M-1 \qquad (3.39)$$

Basis. A set of K vectors W = {x^{(k)}}_{k=0,1,…,K−1} from a subspace P is a basis for that subspace if the vectors in W are linearly independent and if span(W) = P.

The last statement affirms that any y ∈ P can be written as a linear combination of {x^{(k)}}_{k=0,1,…,K−1} or that, for all y ∈ P, there exist K coefficients α_k such that

$$\mathbf{y} = \sum_{k=0}^{K-1} \alpha_k \mathbf{x}^{(k)} \qquad (3.40)$$

which is equivalently expressed by saying that the set W is complete in P.

Orthogonal/Orthonormal Basis. An orthonormal basis for a subspace P is a set of K basis vectors W = {x(k)}k=0,1,,K-1 for which

$$\big\langle \mathbf{x}^{(i)}, \mathbf{x}^{(j)} \big\rangle = \delta[i - j] \qquad 0 \leq i, j < K \qquad (3.41)$$

which means orthogonality across vectors and unit norm. Sometimes, the set of vectors can be orthogonal but not normal (i.e. the norm of the vectors is not unitary). This is not a problem provided that we remember to include the appropriate normalization factors in the analysis and/or synthesis formulas. Alternatively, a linearly independent set of vectors can be orthonormalized via the Gram-Schmidt procedure, which can be found in any linear algebra textbook.

Among all bases, orthonormal bases are the most “beautiful” in a way, because of their structure and their properties. One of the most important properties for finite-dimensional spaces is the following: in a K-dimensional subspace, any set of K mutually orthogonal nonzero vectors is automatically a basis.

In other words, in finite dimensions, once we find a full set of orthogonal vectors, we are sure that the set spans the space.

3.3.2 Properties of Orthonormal Bases

Let W = {x(k)}k=0,1,,K-1 be an orthonormal basis for a (sub)space P. Then the following properties (all of which are easily verified) hold:

Analysis Formula. The coefficients in the linear combination (3.40) are obtained simply as

$$\alpha_k = \big\langle \mathbf{x}^{(k)}, \mathbf{y} \big\rangle \qquad (3.42)$$

The coefficients {αk} are called the Fourier coefficients(5) of the orthonormal expansion of y with respect to the basis W and (3.42) is called the Fourier analysis formula; conversely, Equation (3.40) is called the synthesis formula.

Parseval’s Identity. For an orthonormal basis, there is a norm conservation property given by Parseval’s identity:

$$\|\mathbf{y}\|^2 = \sum_{k=0}^{K-1} \big|\big\langle \mathbf{x}^{(k)}, \mathbf{y} \big\rangle\big|^2 \qquad (3.43)$$

For physical quantities, the squared norm is dimensionally equivalent to a measure of energy; accordingly, Parseval’s identity is also known as the energy conservation formula.

Bessel’s Inequality. The generalization of Parseval’s equality is Bessel’s inequality. In our subspace P, consider a set of L orthonormal vectors G ⊂ P (a set which is not necessarily a basis, since it may be that L < K), with G = {g^{(l)}}_{l=0,1,…,L−1}; then the norm of any vector y ∈ P is lower bounded as:

$$\|\mathbf{y}\|^2 \geq \sum_{l=0}^{L-1} \big|\big\langle \mathbf{g}^{(l)}, \mathbf{y} \big\rangle\big|^2 \qquad (3.44)$$

and the lower bound is reached for all y if and only if the system G is complete, that is, if it is an orthonormal basis for P.

Best Approximations. Assume P is a subspace of V; if we try to approximate a vector y ∈ V by a linear combination of basis vectors from the subspace P, then we are led to the concepts of least squares approximations and orthogonal projections. First of all, we define the best linear approximation ŷ ∈ P of a general vector y ∈ V to be the approximation which minimizes the norm ‖y − ŷ‖. Such an approximation is easily obtained by projecting y onto an orthonormal basis for P, as shown in Figure 3.3. With W as our usual orthonormal basis for P, the projection is given by

$$\hat{\mathbf{y}} = \sum_{k=0}^{K-1} \big\langle \mathbf{x}^{(k)}, \mathbf{y} \big\rangle\, \mathbf{x}^{(k)} \qquad (3.45)$$

Define the approximation error as the vector d = y − ŷ; it can be easily shown that the error d is orthogonal to the subspace P (and in particular to ŷ), which is why ŷ is called the orthogonal projection of y onto P and why no other vector in P can achieve a smaller error norm.

This approximation with an orthonormal basis has a key property: it can be successively refined. Assume you have the approximation with the first m terms of the orthonormal basis:

$$\hat{\mathbf{y}}_m = \sum_{k=0}^{m-1} \big\langle \mathbf{x}^{(k)}, \mathbf{y} \big\rangle\, \mathbf{x}^{(k)} \qquad (3.47)$$

and now you want to compute the (m + 1)-term approximation. This is simply given by

$$\hat{\mathbf{y}}_{m+1} = \hat{\mathbf{y}}_m + \big\langle \mathbf{x}^{(m)}, \mathbf{y} \big\rangle\, \mathbf{x}^{(m)} \qquad (3.48)$$

While this seems obvious, it is actually a small miracle, since it does not hold for more general, non-orthonormal bases.
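The projection and refinement formulas are easy to exercise numerically; this Python sketch (a minimal example under assumed data: an arbitrary vector in R^3 and the plane spanned by the first two canonical vectors as the subspace) computes the best approximation and then rebuilds it one basis vector at a time:

    import numpy as np

    y = np.array([3.0, -1.0, 2.0])            # vector to approximate
    # orthonormal basis of a 2-dimensional subspace P of R^3 (the (x0, x1) plane)
    basis = [np.array([1.0, 0.0, 0.0]),
             np.array([0.0, 1.0, 0.0])]

    # best approximation (orthogonal projection), eq. (3.45)
    y_hat = sum(np.dot(x, y) * x for x in basis)
    d = y - y_hat
    print(y_hat)                              # [ 3. -1.  0.]
    print([np.dot(d, x) for x in basis])      # [0.0, 0.0]: the error is orthogonal to the subspace

    # successive refinement, eqs. (3.47)-(3.48): add one term at a time
    y_m = np.zeros(3)
    for x in basis:
        y_m = y_m + np.dot(x, y) * x
    print(np.allclose(y_m, y_hat))            # True: same result, built incrementally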



Figure 3.3: Orthogonal projection of the vector y onto the subspace W spanned by {x^(0), x^(1)}, leading to the approximation ŷ. This approximation minimizes the square norm ‖y − ŷ‖² among all approximations belonging to W.

3.3.3 Examples of Bases

Considering the examples of Section 3.2.2, we have the following:

Finite Euclidean Spaces. For the simplest case of Hilbert spaces, namely CN, orthonormal bases are also the most intuitive since they contain exactly N mutually orthogonal vectors of unit norm. The classical example is the canonical basis {δ(k)}k=0N-1 with

$$\delta_n^{(k)} = \delta[n - k] \qquad (3.49)$$

but we will soon study more interesting bases such as the Fourier basis {w(k)}, for which

$$w_n^{(k)} = e^{j\frac{2\pi}{N}nk}$$

In CN, the analysis and synthesis formulas (3.42) and (3.40) take a particularly neat form. For any set {x(k)} of N orthonormal vectors one can indeed arrange the conjugates of the basis vectors(6) as the successive rows of an N × N square matrix M so that each matrix element is the conjugate of the n-th element of the m-th basis vector:

$$M_{mn} = \big( x_n^{(m)} \big)^*$$

M is called a change of basis matrix. Given a vector y, the set of expansion coefficients {α_k}_{k=0,…,N−1} can now be written itself as a vector(7) α ∈ C^N. Therefore, we can rewrite the analysis formula (3.42) in matrix-vector form and we have

$$\boldsymbol{\alpha} = \mathbf{M}\mathbf{y} \qquad (3.50)$$

The reconstruction formula (3.40) for y from the expansion coefficients, becomes, in turn,

$$\mathbf{y} = \mathbf{M}^H \boldsymbol{\alpha} \qquad (3.51)$$

where the superscript H denotes the Hermitian transpose (transposition and conjugation of the matrix). The previous equation shows that y is a linear combination of the columns of M^H, which, in turn, are of course the vectors {x^{(k)}}. In matrix form, the orthonormality of the basis vectors takes the following expressions:

$$\mathbf{M}^H \mathbf{M} = \mathbf{I} \qquad (3.52)$$
$$\mathbf{M} \mathbf{M}^H = \mathbf{I} \qquad (3.53)$$

since the left inverse equals the right inverse for square matrices; this implies that M has orthonormal rows as well as orthonormal columns.

Polynomial Functions. A basis for P_N([0,1]) is the set of monomials {x^k}_{k=0,…,N}. This basis, however, is not an orthonormal basis. It can be transformed into an orthonormal basis by the standard Gram-Schmidt procedure; the basis vectors thus obtained are called Legendre polynomials.

Square Integrable Functions. An orthonormal basis for L2([−π,π]) is the set {(1/√(2π)) e^{jnt}}_{n∈Z}. This is actually the classic Fourier basis for functions on an interval. Please note that, here, as opposed to the previous examples, the number of basis vectors is actually infinite. The orthogonality of these basis vectors is easily verifiable; their completeness, however, is rather hard to prove and this, unfortunately, is very much the rule for all non-trivial, infinite-dimensional basis sets.

3.4 Signal Spaces Revisited

We are now in a position to formalize our intuitions so far, with respect to the equivalence between discrete-time signals and vector spaces, with a particularization for the three main classes of signals that we have introduced in the previous Chapter. Note that in the following, we will liberally interchange the notations x and x[n] to denote a sequence as a vector embedded in its appropriate Hilbert space.

3.4.1 Finite-Length Signals

The correspondence between the class of finite-length, length-N signals and C^N should be so immediate at this point that it does not need further comment. As a reminder, the canonical basis for this class of signals is simply the canonical basis for C^N. The k-th canonical basis vector is often expressed in signal form as

δ[n - k]      n = 0,...,N - 1,  k = 0,...,N - 1

3.4.2 Periodic Signals

As we have seen, N-periodic signals are equivalent to length-N signals. The space of N-periodic sequences is therefore isomorphic to CN. In particular, the sum of two sequences considered as vectors is the standard pointwise sum for the elements:

$$z[n] = x[n] + y[n] \qquad n \in \mathbb{Z} \qquad (3.54)$$

while, for the inner product, we extend the summation over a period only:

$$\big\langle x[n], y[n] \big\rangle = \sum_{n=0}^{N-1} x^*[n]\, y[n] \qquad (3.55)$$

The canonical basis for the space of N-periodic sequences is the canonical basis for C^N, because of the isomorphism; in general, any basis for C^N is also a basis for the space of N-periodic sequences. Sometimes, however, we will also consider an explicitly periodized version of the basis. For the canonical basis, in particular, the periodized basis is composed of N infinite-length vectors {δ̃^{(k)}}_{k=0,…,N−1} with

$$\tilde{\delta}^{(k)}[n] = \sum_{i=-\infty}^{\infty} \delta[n - k - iN]$$

Such a sequence is called a pulse train. Note that here we are abandoning mathematical rigor, since the norm of each of these basis vectors is infinite; yet the pulse train, if handled with care, can be a useful tool in formal derivations.

3.4.3 Infinite Sequences

In the case of infinite sequences, whose “natural” Euclidean space would appear to be C^∞, the situation is rather delicate. While the sum of two sequences can be defined in the usual way, by extending the sum for C^N to C^∞, care must be taken when evaluating the inner product. We have already pointed out that the formula:

$$\big\langle x[n], y[n] \big\rangle = \sum_{n=-\infty}^{\infty} x^*[n]\, y[n] \qquad (3.56)$$

can diverge even for simple constant sequences such as x[n] = y[n] = 1. A way out of this impasse is to restrict ourselves to ℓ2(Z), the space of square summable sequences, for which

$$\|x\|_2^2 = \sum_{n \in \mathbb{Z}} \big|x[n]\big|^2 < \infty \qquad (3.57)$$

This is the space of choice for all the theoretical derivations involving infinite sequences. Note that these sequences are often called “of finite energy”, since the square norm corresponds to the definition of energy as given in (2.19).

The canonical basis for ℓ2(Z) is simply the set {δ^{(k)}}_{k∈Z}; in signal form:

$$\delta^{(k)} = \delta[n - k], \qquad n, k \in \mathbb{Z} \qquad (3.58)$$

This is an infinite set, and actually an infinite set of linearly independent vectors, since

$$\delta[n - k] = \sum_{l \in \mathbb{Z},\, l \neq k} \alpha_l\, \delta[n - l] \qquad (3.59)$$

has no solution for any k. Note that, for an arbitrary signal x[n] the analysis formula gives

$$\alpha_k = \big\langle \delta^{(k)}, x \big\rangle = \big\langle \delta[n-k], x[n] \big\rangle = x[k]$$

so that the reconstruction formula becomes

$$x[n] = \sum_{k=-\infty}^{\infty} \alpha_k\, \delta^{(k)} = \sum_{k=-\infty}^{\infty} x[k]\, \delta[n - k]$$

which is the reproducing formula (2.18). The Fourier basis for ℓ2(Z) will be introduced and discussed at length in the next Chapter.

As a last remark, note that the space of all finite-support signals, which is clearly a subset of ℓ2(Z), does not form a Hilbert space. Clearly, the space is closed under addition and scalar multiplication, and the canonical inner product is well behaved since all sequences have only a finite number of nonzero values. However, the space is not complete; to clarify this, consider the following family of signals:

$$y_k[n] = \begin{cases} 1/n & 1 \leq n \leq k \\ 0 & \text{otherwise} \end{cases}$$

For k growing to infinity, the sequence of signals converges to the limit y[n] = 1/n for n ≥ 1 (and zero elsewhere); while y[n] is indeed in ℓ2(Z), since

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$

y[n] is clearly not a finite-support signal.

3.5 Further Reading

A comprehensive review of linear algebra, which contains all the concepts of Hilbert spaces but in finite dimensions, is the classic by G. Strang, Linear Algebra and Its Applications (Brooks Cole, 2005). For an introduction to Hilbert spaces, there are many mathematics books; we suggest N. Young, An Introduction to Hilbert Space (Cambridge University Press, 1988). As an alternative, a more intuitive and engineering-motivated approach is in the classic book by D. G. Luenberger, Optimization by Vector Space Methods (Wiley, 1969).

3.6 Exercises

Exercise 3.1: Elementary operators.  An operator H is a transformation of a given signal and is indicated by the notation

$$y[n] = H\{x[n]\}$$

For instance, the delay operator D is indicated as

$$D\{x[n]\} = x[n-1]$$

and the one-step difference operator is indicated as

$$V\{x[n]\} = x[n] - D\{x[n]\} = x[n] - x[n-1] \qquad (3.60)$$

A linear operator is one for which the following holds:

$$\begin{cases} H\{\alpha x[n]\} = \alpha H\{x[n]\} \\ H\{x[n] + y[n]\} = H\{x[n]\} + H\{y[n]\} \end{cases}$$

  1. Show that the delay operator D is linear.
  2. Show that the differentiation operator V is linear.
  3. Show that the squaring operator S{x[n]} = x2[n] is not linear.

In C^N, any linear operator on a vector x can be expressed as a matrix-vector multiplication for a suitable N × N matrix A. In C^N, we define the delay operator as the right circular shift of a vector:

$$D\{\mathbf{x}\} = [x_{N-1} \; x_0 \; x_1 \; \ldots \; x_{N-2}]^T$$

Assume N = 4 for convenience; it is easy to see that

$$D\{\mathbf{x}\} = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \mathbf{x} = \mathbf{D}\mathbf{x}$$

so that D is the matrix associated to the delay operator.

  1. Using the same definition for the one-step difference operator as in (3.60), write out the associated matrix for the operator in C4.
  2. Consider the following matrix:
    $$\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}$$

    Which operator do you think it is associated to? What does the operator do?

Exercise 3.2: Bases.  Let {x(k)}k=0,,N-1 be a basis for a subspace S. Prove that any vector z ∈ S is uniquely represented in this basis.
Hint: prove by contradiction.

Exercise 3.3: Vector spaces and signals. 

  1. Show that the set of all ordered n-tuples [a1, a2, …, an] with the natural definition for the sum:
    [a1,a2,...,an]+ [b1,b2,...,bn] = [a1 + b1,a2 + b2,...,an + bn]

    and the multiplication by a scalar:

    α[a1,a2,...,an] = [αa1,αa2,...,αan ]

    form a vector space. Give its dimension and find a basis.

  2. Show that the set of signals of the form y(x) = acos(x) + bsin(x) (for arbitrary a, b), with the usual addition and multiplication by a scalar, form a vector space. Give its dimension and find a basis.
  3. Are the four diagonals of a cube orthogonal?
  4. Express the discrete-time impulse δ[n] in terms of the discrete-time unit step u[n] and conversely.
  5. Show that any function f(t) can be written as the sum of an odd and an even function, i.e. f(t) = fo(t) + fe(t) where fo(-t) = -fo(t) and fe(-t) = fe(t).

Exercise 3.4: The Haar basis.  Consider the following change of basis matrix in C8, with respect to the standard orthonormal basis:

$$\mathbf{H} = \begin{bmatrix}
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}$$

Note the pattern in the first four rows, in the next two, and in the last two.

  1. What is an easy way to prove that the rows in H do indeed form a basis?
    (Hint: it is sufficient to show that they are linearly independent, i.e. that the matrix has full rank…)

The basis described by H is called the Haar basis and it is one of the most celebrated cornerstones of a branch of signal processing called wavelet analysis. To get a feeling for its properties, consider the following set of numerical experiments (you can use Matlab or any other numerical package):

  1. Verify that HH^H is a diagonal matrix, which means that the vectors are orthogonal.
  2. Consider a constant signal
    x = [1  1  …  1]

    and compute its coefficients in the Haar basis.

  3. Consider an alternating signal
    x = [+1  −1  +1  …  +1  −1]

    and compute its coefficients in the Haar basis.


Chapter 4
Fourier Analysis

Fourier theory has a long history, from J. Fourier’s early work on the transmission of heat to recent results on non-harmonic Fourier series. Fourier theory is a branch of harmonic analysis, and in that sense, a topic in pure and applied mathematics. At the same time, because of its usefulness in practical applications, Fourier analysis is a key tool in several engineering branches, and in signal processing in particular.

Why is Fourier analysis so important? To understand this, let us take a short philosophical detour. Interesting signals are time-varying quantities: you can imagine, for instance, the voltage level at the output of a microphone or the measured level of the tide at a particular location; in all cases, the variation of a signal, over time, implies that a transfer of energy is happening somewhere, and ultimately this is what we want to study. Now, a time-varying value which only increases over time is not only a physical impossibility but a recipe for disaster for whatever system is supposed to deal with it; fuses will blow, wires will melt and so on. Oscillations, on the other hand, are nature’s and man’s way of keeping things in motion without trespassing all physical bounds; from Maxwell’s wave equation to the mechanics of the vocal cords, from the motion of an engine to the ebb and flow of the tide, oscillatory behavior is the recurring theme. Sinusoidal oscillations are the purest form of such a constrained motion and, in a nutshell, Fourier’s immense contribution was to show that (at least mathematically) one could express any given phenomenon as the combined output of a number of sinusoidal “generators”.

Sinusoids have another remarkable property which justifies their ubiquitous presence. Indeed, any linear time-invariant transformation of a sinusoid is a sinusoid at the same frequency: we express this by saying that sinusoidal oscillations are eigenfunctions of linear time-invariant systems. This is a formidable tool for the analysis and design of signal processing structures, as we will see in much detail in the context of discrete-time filters.

The purpose of the present Chapter is to introduce and analyze some key results on Fourier series and Fourier transforms in the context of discrete-time signal processing. As we mentioned in the previous Chapter, the Fourier transform of a signal turns out to be a change of basis in an appropriate Hilbert space. While this notion constitutes an extremely useful unifying framework, we also point out the peculiarities of its specialization within the different classes of signals. In particular, for finite-length signals we highlight the eminently algebraic nature of the transform, which leads to efficient computational procedures; for infinite sequences, we will analyze some of its interesting mathematical subtleties.

4.1 Preliminaries

The Fourier transform of a signal is an alternative representation of the data in the signal. While a signal lives in the time domain,(1) its Fourier representation lives in the frequency domain. We can move back and forth at will from one domain to the other using the direct and inverse Fourier operators, since these operators are invertible.

In this Chapter we study three types of Fourier transform which apply to the three main classes of signals that we have seen so far: the Discrete Fourier Transform (DFT) for finite-length signals, the Discrete Fourier Series (DFS) for periodic sequences and the Discrete-Time Fourier Transform (DTFT) for infinite non-periodic sequences.

The frequency representation of a signal (given by a set of coefficients in the case of the DFT and DFS and by a frequency distribution in the case of the DTFT) is called the spectrum.

4.1.1 Complex Exponentials

The basic ingredient of all the Fourier transforms which follow is the discrete-time complex exponential; this is a sequence of the form:

x[n] = A e^{j(ωn + φ)} = A [cos(ωn + φ) + j sin(ωn + φ)]

A complex exponential represents an oscillatory behavior; A ∈ R is the amplitude of the oscillation, ω is its frequency and φ is its initial phase. Note that, actually, a discrete-time complex exponential sequence is not always a periodic sequence; it is periodic only if ω = 2π(M∕N) for some value of M,N ∈ Z. The power of a complex exponential is equal to the average energy over a period, i.e. |A|2, irrespective of frequency.
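
This periodicity condition is easy to check numerically; a small Python sketch (the two frequencies below are arbitrary choices):

import numpy as np

n = np.arange(200)

x = np.exp(1j * (2 * np.pi * 3 / 16) * n)          # omega = 2*pi*(3/16): periodic with period 16
print(np.allclose(x[:16], x[16:32]))               # True

y = np.exp(1j * 1.0 * n)                           # omega = 1 is not a rational multiple of 2*pi
print(np.allclose(y[:16], y[16:32]))               # False: the sequence is not periodic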

4.1.2 Complex Oscillations? Negative Frequencies?

In the introduction, we hinted at the fact that Fourier analysis allows us to decompose a physical phenomenon into oscillatory components. However, it may seem odd that we have chosen to use complex oscillations for the analysis of real-world signals. It may seem even odder that these oscillations can have a negative frequency and that, as we will soon see in the context of the DTFT, the spectrum extends over to the negative axis.

The starting point in answering these legitimate questions is to recall that the use of complex exponentials is essentially a matter of convenience. One could develop a complete theory of frequency analysis for real signals using only the basic trigonometric functions. You may actually have noticed this if you are familiar with the Fourier Series of a real function; yet the notational overhead is undoubtedly heavy since it involves two separate sets of coefficients for the sine and cosine basis functions, plus a distinct term for the zero-order coefficient. The use of complex exponentials elegantly unifies these separate series into a single complex-valued sequence. Yet, one may ask again, what does it mean for the spectrum of a musical sound to be complex? Simply put, the complex nature of the spectrum is a compact way of representing two concurrent pieces of information which uniquely define each spectral component: its amplitude and its phase. These two values form a two-element vector in R^2 but, since R^2 is isomorphic to C, we use complex numbers for their mathematical convenience.

With respect to negative frequencies, one must first of all consider, yet again, a basic complex exponential sequence such as x[n] = exp(jωn). We can visualize its evolution over discrete time as a series of points on the unit circle in the complex plane. At each step, the angle increases by ω, defining a counterclockwise circular motion. It is easy to see that a complex exponential sequence of frequency -ω is just the same series of points, with the difference that the points move clockwise instead; this is illustrated in detail in Figure 4.1. If we decompose a real signal into complex exponentials, we will show that, for any given frequency value, the phases of the positive and negative components are always opposite in sign; as the two oscillations move in opposite directions along the unit circle, their imaginary parts always cancel out exactly, thus returning a purely real signal.(2)


Figure 4.1: Complex exponentials as a series of points on the unit circle; x[n] = e^{jω_0 n} and y[n] = e^{-jω_0 n} for ω_0 = π/5.

The final step in developing a comfortable feeling for complex oscillations comes from the realization that, in the synthesis of discrete-time signals (and especially in the case of communication systems), it is actually more convenient to work with complex-valued signals themselves. Although the transmitted signal of a device like an ADSL box is a real signal, the internal representation of the underlying sequences is complex; therefore complex oscillations become a necessity.

4.2 The DFT (Discrete Fourier Transform)

We now develop a Fourier representation for finite-length signals; to do so, we need to find a set of oscillatory signals of length N which contain a whole number of periods over their support. We start by considering a family of finite-length sinusoidal signals (indexed by an integer k) of the form

w_k[n] = e^{j ω_k n},      n = 0, ..., N - 1
(4.1)

where all the ωk’s are distinct frequencies which fulfill our requirements. To determine these frequency values, note that, in order for wk[n] to contain a whole number of periods over N samples, it must conform to

w_k[N] = w_k[0] = 1

which translates to

(e^{j ω_k})^N = 1

The above equation has N distinct solutions, which are the N roots of unity exp(j2πm/N), m = 0, ..., N - 1; if we define the complex number

W_N = e^{-j 2π/N}

then the family of N signals in (4.1) can be written as

w_k[n] = W_N^{-nk},      n = 0, ..., N - 1
(4.2)

for each value of k = 0, ..., N - 1. We can represent these N signals as a set of vectors {w^(k)}_{k=0,...,N-1} in C^N with

w^(k) = [ 1   W_N^{-k}   W_N^{-2k}   ...   W_N^{-(N-1)k} ]^T
(4.3)

The real and imaginary parts of w^(k) for N = 32 and for some values of k are plotted in Figures 4.2 to 4.5.

We can verify that {w^(k)}_{k=0,...,N-1} is a set of N orthogonal vectors and therefore a basis for C^N; indeed we have (noting that (W_N^{-k})^* = W_N^{k}):

⟨w^(m), w^(n)⟩ = Σ_{i=0}^{N-1} W_N^{(m-n)i} =
    { N                                                   for m = n
    { (1 - W_N^{(m-n)N}) / (1 - W_N^{(m-n)}) = 0           for m ≠ n
(4.4)

since W_N^{iN} = 1 for all i ∈ Z. In more compact notation we can therefore state that

⟨w^(m), w^(n)⟩ = N δ[n - m]
(4.5)

The vectors {w^(k)}_{k=0,...,N-1} are the Fourier basis for C^N, and therefore for the space of length-N signals. It is immediately evident that this basis is not orthonormal, since ∥w^(k)∥² = N, but that it could be made orthonormal simply by scaling the basis vectors by 1/√N. In signal processing practice, however, it is customary to keep the normalization factor explicit in the change of basis formulas; this is mostly due to computational reasons, as we will see later, but, for the sake of consistency with the mainstream literature, we will also follow this convention.
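
The orthogonality relation (4.5) is easily verified numerically; a small Python sketch (N = 8 is an arbitrary choice):

import numpy as np

N = 8
n = np.arange(N)
W_N = np.exp(-2j * np.pi / N)                            # W_N = e^{-j 2pi/N}
basis = np.array([W_N ** (-n * k) for k in range(N)])    # rows are the vectors w^(k)

gram = basis.conj() @ basis.T                            # gram[m, n] = <w^(m), w^(n)>
print(np.allclose(gram, N * np.eye(N)))                  # True: relation (4.5)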

4.2.1 Matrix Form

The Discrete Fourier Transform (DFT) analysis and synthesis formulas can now be easily expressed in the familiar matrix notation as in Section 3.3.3: define an N × N square matrix W by stacking the conjugates of the basis vectors, i.e. W_{nk} = exp(-j(2π/N)nk) = W_N^{nk}; from this we can state, for all vectors x ∈ C^N:

X = W x    (4.6)
x = (1/N) W^H X    (4.7)

(note the normalization factor in the reconstruction formula). Here, X is the set of Fourier coefficients in vector form, whose physical interpretation we will explore shortly. Note that the DFT preserves the energy of the finite-length signal: indeed, Parseval's relation (3.43) becomes

∥x∥ = (1/√N) ∥X∥
(4.8)

(once again, note the explicit normalization factor).
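
A quick numerical check of (4.6)-(4.8), sketched in Python on a random vector:

import numpy as np

N = 16
n = np.arange(N)
W = np.exp(-2j * np.pi / N * np.outer(n, n))       # W[n, k] = W_N^{nk}

x = np.random.randn(N) + 1j * np.random.randn(N)
X = W @ x                                          # analysis formula (4.6)
x_rec = (W.conj().T @ X) / N                       # synthesis formula (4.7)

print(np.allclose(x_rec, x))                       # True: perfect reconstruction
print(np.allclose(np.linalg.norm(x), np.linalg.norm(X) / np.sqrt(N)))   # Parseval, (4.8)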

4.2.2 Explicit Form

It is very common in the literature to explicitly write out the inner products in (4.6) and (4.7); this is both for historical reasons and to underscore the highly structured form of this transformation which, as we will see, is the basis for very efficient computational procedures. In detail, we have the analysis formula

X[k] = Σ_{n=0}^{N-1} x[n] W_N^{nk},      k = 0, ..., N - 1
(4.9)

and the dual synthesis formula

x[n] = (1/N) Σ_{k=0}^{N-1} X[k] W_N^{-nk},      n = 0, ..., N - 1
(4.10)

where we have used the standard convention of "lumping" the normalizing factor (1/N) entirely within the reconstruction sum (4.10).
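
The two sums above translate directly into code; a Python sketch of (4.9) and (4.10), checked against numpy's FFT routine, which follows the same convention:

import numpy as np

def dft(x):
    # direct implementation of the analysis formula (4.9)
    N = len(x)
    return np.array([sum(x[n] * np.exp(-2j * np.pi * n * k / N) for n in range(N))
                     for k in range(N)])

def idft(X):
    # direct implementation of the synthesis formula (4.10)
    N = len(X)
    return np.array([sum(X[k] * np.exp(2j * np.pi * n * k / N) for k in range(N)) / N
                     for n in range(N)])

x = np.random.randn(32)
print(np.allclose(dft(x), np.fft.fft(x)))          # True: numpy uses the same convention
print(np.allclose(idft(dft(x)), x))                # True: perfect reconstruction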


Figure 4.2: Basis vector w^(0) ∈ C^32.


Figure 4.3: Basis vector w^(1) ∈ C^32.


Figure 4.4: Basis vector w^(7) ∈ C^32.


Figure 4.5: Basis vector w^(31) ∈ C^32.

4.2.3 Physical Interpretation

To return to the physical interpretation of the DFT, what we have just obtained is the decomposition of a finite-length signal into a set of N sinusoidal components; the magnitude and initial phase of each oscillator are given by the coefficients X[k] in (4.9) (or, equivalently, by the vector elements X_k in (4.6)).(3) To stress the point again: according to the synthesis formula (4.10), if we run a bank of N complex oscillators, the k-th one at frequency (2π/N)k with amplitude and initial phase given by X[k] (up to the overall 1/N scaling), and sum their outputs, the first N output values of this "machine" are exactly x[n].

If we look at this from the opposite end, each X[k] shows "how much" oscillatory behavior at frequency (2π/N)k is contained in the signal; this is consistent with the fact that an inner product is a measure of similarity. The coefficients X[k] are referred to as the spectrum of the signal. The square magnitude |X[k]|² is a measure (up to a scale factor N) of the signal's energy at the frequency (2π/N)k; the coefficients X[k], therefore, show exactly how the global energy of the original signal is distributed in the frequency domain, while Parseval's equality (4.8) guarantees that the result is consistent. The phase of each Fourier coefficient, indicated by ∠X[k], specifies the initial phase of each oscillator in the reconstruction formula, i.e. the relative alignment of each complex exponential at the onset of the signal. While this does not influence the energy distribution in frequency, it does have a significant effect on the shape of the signal in the discrete-time domain, as we will shortly see in more detail.

Some examples for signals in C^64 are plotted in Figures 4.6 to 4.9. Figure 4.6 shows one of the simplest cases: indeed, the signal x[n] is a sinusoid whose frequency coincides with that of one of the basis vectors (precisely, with that of w^(4)) and, as a consequence of the orthogonality of the basis, only X[4] and X[60] are nonzero (this can be easily verified by decomposing the sinusoid as the sum of two appropriate basis functions). Figure 4.7 shows the same signal, but this time with a phase offset.

The magnitude DFT does not change, but the phase offset appears in the phase of the transform. Figure 4.8 shows the transform of a sinusoid whose frequency does not coincide with any of the basis frequencies. As a consequence, all of the basis vectors are needed to reconstruct the signal. Clearly, the magnitude is larger for frequencies closer to that of the original signal ((2π/64)·6 and (2π/64)·7 in this case); yet, to reconstruct x[n] exactly, we need the contribution of the entire basis. Finally, Figure 4.9 shows the DFT of a step signal. It can be shown (with a few trigonometric manipulations) that the DFT of a step signal is

X[k] = [ sin((π/N)Mk) / sin((π/N)k) ] e^{-j(π/N)(M-1)k}

where N is the length of the signal and M is the length of the step (in Figure 4.9, N = 64 and M = 5, for instance).
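
This closed form is easy to verify numerically; a Python sketch (the coefficient for k = 0 is handled separately, since there the expression must be read as its limit, M):

import numpy as np

N, M = 64, 5
x = np.zeros(N)
x[:M] = 1.0                                        # length-M step, as in Figure 4.9
X_fft = np.fft.fft(x)

k = np.arange(1, N)                                # k = 0 handled separately (value is M)
X_formula = (np.sin(np.pi / N * M * k) / np.sin(np.pi / N * k)
             * np.exp(-1j * np.pi / N * (M - 1) * k))

print(np.isclose(X_fft[0], M))                     # True
print(np.allclose(X_fft[1:], X_formula))           # True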


Figure 4.6: Signal x[n] = cos((π/8)n), n = 0, 1, 2, ..., 63, and its DFT (example).


Figure 4.7: Signal x[n] = cos((π/8)n + π/3), n = 0, 1, 2, ..., 63, and its DFT (example cont'd).


Figure 4.8: Signal x[n] = cos((π/5)n), n = 0, 1, 2, ..., 63, and its DFT (example cont'd).


Figure 4.9: Signal x[n] = Σ_{k=0}^{4} δ[n - k], n = 0, 1, 2, ..., 63, and its DFT (example cont'd).

4.3 The DFS (Discrete Fourier Series)

Consider the reconstruction formula in (4.10); what happens if we let the index n roam outside of the [0, N - 1] interval? Since W_N^{(n+iN)k} = W_N^{nk} for all i ∈ Z, we note that x[n + iN] = x[n]. In other words, the reconstruction formula in (4.10) implicitly defines a periodic sequence of period N. This is the reason why, earlier, we stated that periodic sequences are the natural way to embed a finite-length signal into a sequence: their Fourier representation is formally identical. This is not surprising since a) we have already established a correspondence between C^N and C̃^N and b) we are actually expressing a length-N sequence as a combination of N-periodic basis signals.

The Fourier representation of periodic sequences is called the Discrete Fourier Series (DFS), and its explicit analysis and synthesis formulas are the exact equivalent of (4.9) and (4.10), modified only with respect to the range of the indices. We have already seen that in (4.10), the reconstruction formula, n is now in Z. Symmetrically, due to the N-periodicity of W_N^k, we can let the index k in (4.9) assume any value in Z too; this way, the DFS coefficients become an N-periodic sequence themselves and the DFS becomes a change of basis in C̃^N, using the definition of inner product given in Section 3.4.2 and the formal periodic basis for C̃^N:

X̃[k] = Σ_{n=0}^{N-1} x̃[n] W_N^{nk},      k ∈ Z    (4.11)
x̃[n] = (1/N) Σ_{k=0}^{N-1} X̃[k] W_N^{-nk},      n ∈ Z    (4.12)

4.4 The DTFT (Discrete-Time Fourier Transform)

We now consider a Fourier representation for infinite non-periodic sequences. From a purely mathematical point of view, the Discrete-Time Fourier Transform of a sequence x[n] is defined as

X(e^{jω}) = Σ_{n=-∞}^{∞} x[n] e^{-jωn}
(4.13)

The DTFT is therefore a complex-valued function of the real argument ω, and, as can be easily verified, it is periodic in ω with period 2π. The somewhat odd notation X(e^{jω}) is quite standard in the signal processing literature and offers several advantages; in particular, it makes the 2π-periodicity of the transform in the variable ω immediately apparent.

The DTFT, when it exists, can be inverted via the integral

x[n] = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{jωn} dω
(4.14)

as can be easily verified by substituting (4.13) into (4.14) and using

∫_{-π}^{π} e^{-jω(n-k)} dω = 2π δ[n - k]

In fact, due to the 2π-periodicity of the DTFT, the integral in (4.14) can be computed over any 2π-wide interval on the real line (i.e. between 0 and 2π, for instance). The relation between a sequence x[n] and its DTFT X(e^{jω}) will be indicated in the general case by

x[n]  ←DTFT→  X(e^{jω})

While the DFT and DFS were signal transformations which involved only a finite number of quantities, both the infinite summation and the real-valued argument appearing in the DTFT can create an uneasiness which overshadows the conceptual similarities between the transforms. In the following, we start by defining the mathematical properties of the DTFT and we try to build an intuitive feeling for this Fourier representation, both with respect to its physical interpretation and to its conformity to the "change of basis" framework that we used for the DFT and DFS.

Mathematically, the DTFT is a transform operator which maps discrete-time sequences onto the space of 2π-periodic functions. Clearly, for the DTFT to exist, the sum in (4.13) must converge, i.e. the limit for M →∞ of the partial sum

X_M(e^{jω}) = Σ_{n=-M}^{M} x[n] e^{-jωn}
(4.15)

must exist and be finite. Convergence of the partial sum in (4.15) is very easy to prove for absolutely summable sequences, that is for sequences satisfying

lim_{M→∞} Σ_{n=-M}^{M} |x[n]| < ∞
(4.16)

since, according to the triangle inequality,

|X_M(e^{jω})| ≤ Σ_{n=-M}^{M} |x[n] e^{-jωn}| = Σ_{n=-M}^{M} |x[n]|
(4.17)

For this class of sequences it can also be proved that the convergence of X_M(e^{jω}) to X(e^{jω}) is uniform and that X(e^{jω}) is continuous. While absolute summability is a sufficient condition, it can be shown that the sum in (4.15) is convergent also for all square-summable sequences, i.e. for sequences whose energy is finite; this is very important to us with respect to the discussion in Section 3.4.3 where we defined the Hilbert space ℓ²(Z). In the case of square summability only, however, the convergence of (4.15) is no longer uniform but takes place only in the mean-square sense, i.e.

lim_{M→∞} ∫_{-π}^{π} |X_M(e^{jω}) - X(e^{jω})|² dω = 0
(4.18)

Convergence in the mean-square sense implies that, while the total energy of the error signal becomes zero, the pointwise values of the partial sum may never approach the values of the limit. One manifestation of this odd behavior is called the Gibbs phenomenon, which has important consequences in our approach to filter design, as we will see later. Furthermore, in the case of square-summable sequences, X(e^{jω}) is no longer guaranteed to be continuous.

As an example, consider the sequence:

x[n] = { 1   for -N ≤ n ≤ N
       { 0   otherwise
(4.19)

Its DTFT can be computed as the sum(4)

X(e^{jω}) = Σ_{n=-N}^{N} e^{-jωn}
          = Σ_{n=1}^{N} e^{jωn} + Σ_{n=0}^{N} e^{-jωn}
          = (1 - e^{-jω(N+1)}) / (1 - e^{-jω}) + (1 - e^{jω(N+1)}) / (1 - e^{jω}) - 1
          = e^{jω/2} (1 - e^{-jω(N+1)}) / (e^{jω/2} - e^{-jω/2}) + e^{-jω/2} (1 - e^{jω(N+1)}) / (e^{-jω/2} - e^{jω/2}) - 1
          = (e^{jω(N+1/2)} - e^{-jω(N+1/2)}) / (e^{jω/2} - e^{-jω/2})
          = sin(ω(N + 1/2)) / sin(ω/2)
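
A quick numerical sanity check of this closed form, sketched in Python (the frequency grid below simply avoids ω = 0, where the expression must be read as its limit, 2N + 1):

import numpy as np

N = 4
n = np.arange(-N, N + 1)                           # x[n] = 1 for -N <= n <= N
omega = np.linspace(-np.pi, np.pi, 1000)           # grid chosen so that omega = 0 is excluded

X_direct = np.array([np.sum(np.exp(-1j * w * n)) for w in omega])
X_closed = np.sin(omega * (N + 0.5)) / np.sin(omega / 2)

print(np.allclose(X_direct, X_closed))             # True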

Figure 4.10: The DTFT of the signal in (4.19).

The DTFT of this particular signal turns out to be real (we will see later that this is a consequence of the signal’s symmetry) and it is plotted in Figure 4.10. When, as is very often the case, the DTFT is complex-valued, the usual way to represent it graphically takes the magnitude and the phase separately into account. The DTFT is always a 2π-periodic function and the standard convention is to plot the interval from -π to π. Larger intervals can be considered if the periodicity needs to be made explicit; Figure 4.11, for instance, shows five full periods of the same function.


Figure 4.11: The DTFT of the signal in (4.19), with explicit periodicity.

4.4.1 The DTFT as the Limit of a DFS

A way to gain some intuition about the structure of the DTFT formulas is to consider the DFS of periodic sequences with larger and larger periods. Intuitively, as we look at the structure of the Fourier basis for the DFS, we can see that the number of basis vectors in (4.9) grows with the length N of the period and, consequently, the frequencies of the underlying complex exponentials become “denser” between 0 and 2π. We want to show that, in the limit, we end up with the reconstruction formula of the DTFT.

To do so, let us restrict ourselves to the domain of absolutely summable sequences; for these sequences, we know that the sum in (4.13) exists. Now, given an absolutely summable sequence x[n], we can always build an N-periodic sequence x̃[n] as

x̃[n] = Σ_{i=-∞}^{∞} x[n + iN]
(4.20)

for any value of N (see Example 2.2); this is guaranteed by the fact that the above sum converges for all n ∈ Z (because of the absolute summability of x[n]) so that all values of x̃[n] are finite. Clearly, there is overlap between successive copies of x[n]; the intuition, however, is the following: since in the end we will consider very large values for N and since x[n], because of absolute summability, decays rather fast with n, the resulting overlap of "tails" will be negligible. This can be expressed as

lim_{N→∞} x̃[n] = x[n]

Now consider the DFS of x̃[n]:

X̃[k] = Σ_{n=0}^{N-1} x̃[n] e^{-j(2π/N)nk} = Σ_{i=-∞}^{∞} ( Σ_{n=0}^{N-1} x[n + iN] e^{-j(2π/N)(n+iN)k} )
(4.21)

where in the last term we have used (4.20), interchanged the order of the summation and exploited the fact that exp(-j(2π∕N)(n + iN)k) = exp(-j(2π∕N)nk). We can see that, for every value of i in the outer sum, the argument of the inner sum varies between iN and iN + N - 1, i.e. non-overlapping intervals, so that the double summation can be simplified as

X̃[k] = Σ_{m=-∞}^{∞} x[m] e^{-j(2π/N)mk}
(4.22)

and therefore

X̃[k] = X(e^{jω}) |_{ω = (2π/N)k}
(4.23)

This already gives us a noteworthy piece of intuition: the DFS coefficients for the periodized signal are a discrete set of values of its DTFT (here considered solely as a formal operator) computed at multiples of 2π∕N. As N grows, the spacing between these frequency intervals narrows more and more so that, in the limit, the DFS converges to the DTFT.

To check that this assertion is consistent, we can now write the DFS reconstruction formula using the DFS values given to us by inserting (4.23) in (4.10):

x̃[n] = (1/N) Σ_{k=0}^{N-1} X(e^{j(2π/N)k}) e^{j(2π/N)nk}
(4.24)

By defining Δ = (2π∕N), we can rewrite the above expression as

x̃[n] = (1/2π) Σ_{k=0}^{N-1} X(e^{jkΔ}) e^{jkΔn} Δ
(4.25)

and the summation is easily recognized as the Riemann sum with step Δ approximating the integral of f(ω) = X(e^{jω}) e^{jωn} between 0 and 2π. As N goes to infinity (and therefore x̃[n] → x[n]), we can therefore write

x̃[n] → (1/2π) ∫_0^{2π} X(e^{jω}) e^{jωn} dω
(4.26)

which is indeed the DTFT reconstruction formula (4.14).(5)
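
Relationship (4.23) can also be checked numerically on a concrete absolutely summable sequence; a Python sketch using x[n] = a^n u[n], whose DTFT 1/(1 - a e^{-jω}) appears among the DTFT pairs tabulated later in this Chapter:

import numpy as np

a, N, L = 0.8, 16, 2000                            # L large enough for the tail to be negligible
n = np.arange(L)
x = a ** n                                         # x[n] = a^n u[n], absolutely summable

x_tilde = x.reshape(-1, N).sum(axis=0)             # one period of the periodization (4.20)

X_dfs = np.fft.fft(x_tilde)                        # DFS coefficients of the periodized sequence
omega_k = 2 * np.pi * np.arange(N) / N
X_dtft = 1.0 / (1.0 - a * np.exp(-1j * omega_k))   # DTFT of a^n u[n], sampled at 2*pi*k/N

print(np.allclose(X_dfs, X_dtft))                  # True, up to the truncation of the tail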

4.4.2 The DTFT as a Formal Change of Basis

We now show that, if we are willing to sacrifice mathematical rigor, the DTFT can be cast in the same conceptual framework we used for the DFT and DFS, namely as a basis change in a vector space. The following formulas are to be taken as nothing more than a set of purely symbolic derivations, since the mathematical hypotheses under which the results are well defined are far from obvious and are completely hidden by the formalism. It is only fair to say, however, that the following expressions represent a very handy and intuitive toolbox to grasp the essence of the duality between the discrete-time and the frequency domains and that they can be put to use very effectively to derive quick results when manipulating sequences.

One way of interpreting Equation (4.13) is to see that, for any given value ω_0, the corresponding value of the DTFT is the inner product in ℓ²(Z) of the sequence x[n] with the sequence exp(jω_0 n); formally, at least, we are still performing a projection in a vector space, akin to what we did in C^N:

X(e^{jω}) = ⟨e^{jωn}, x[n]⟩

Here, however, the set of "basis vectors" {exp(jωn)}_{ω∈R} is indexed by the real variable ω and is therefore uncountable. This uncountability is mirrored in the inversion formula (4.14), in which the usual summation is replaced by an integral; in fact, the DTFT operator maps ℓ²(Z) onto L²([-π, π]), which is a space of 2π-periodic, square-integrable functions. This interpretation preserves the physical meaning given to the inner products in (4.13) as a way to measure the frequency content of the signal at a given frequency; in this case the number of oscillators is infinite and their frequency separation becomes infinitesimally small.

To complete the picture of the DTFT as a change of basis, we want to show that, at least formally, the set {exp(jωn)}_{ω∈R} constitutes an orthogonal "basis" for ℓ²(Z).(6) In order to do so, we need to introduce a quirky mathematical entity called the Dirac delta functional; this is defined in an implicit way by the following formula

∫_{-∞}^{∞} δ(t - τ) f(t) dt = f(τ)
(4.27)

where f(t) is an arbitrary integrable function on the real line; in particular

∫_{-∞}^{∞} δ(t) f(t) dt = f(0)
(4.28)

While no ordinary function satisfies the above equation, δ(t) can be interpreted as shorthand for a limiting operation. Consider, for instance, the family of parametric functions(7)

r_k(t) = k rect(kt)
(4.29)

which are plotted in Figure 4.12. For any continuous function f(t) we can write

∫_{-∞}^{∞} r_k(t) f(t) dt = k ∫_{-1/(2k)}^{1/(2k)} f(t) dt = f(γ) |_{γ ∈ [-1/(2k), 1/(2k)]}
(4.30)

where we have used the Mean Value theorem. Now, as k goes to infinity, the integral converges to f(0); hence we can say that the family of functions r_k(t) converges to the Dirac delta. As already stated, the delta cannot be considered as a proper function, so the expression δ(t) outside of an integral sign has no mathematical meaning; it is customary, however, to associate an "idea" of function to the delta, and we can think of it as being undefined for t ≠ 0 and having a value of infinity at t = 0. This interpretation, together with (4.27), defines the so-called sifting property of the Dirac delta; this property allows us to write (outside of the integral sign):

δ(t - τ) f(t) = δ(t - τ) f(τ)
(4.31)


Figure 4.12: The Dirac delta as the limit of a family of rectangular functions.

The physical interpretation of the Dirac delta is related to quantities expressed as continuous distributions for which the most familiar example is probably that of a probability distribution (pdf). These functions represent a value which makes physical sense only over an interval of nonzero measure; the punctual value of a distribution is only an abstraction. The Dirac delta is the operator that extracts this punctual value from a distribution, in a sense capturing the essence of considering smaller and smaller observation intervals.

To see how the Dirac delta applies to our basis expansion, note that equation (4.27) is formally identical to an inner product over the space of functions on the real line; by using the definition of such an inner product we can therefore write

f(t) = ∫_{-∞}^{∞} ⟨δ(s - τ), f(s)⟩ δ(t - τ) dτ
(4.32)

which is, in turn, formally identical to the reconstruction formula of Section 3.4.3. In reality, we are interested in the space of 2π-periodic functions, since that is where DTFTs live; this is easily accomplished by building a 2π-periodic version of the delta as

δ̃(ω) = 2π Σ_{k=-∞}^{∞} δ(ω - 2πk)
(4.33)

where the leading 2π factor is for later convenience. The resulting object is called a pulse train, similarly to what we built for the case of periodic sequences in C̃^N. Using the pulse train and given any 2π-periodic function f(ω), the reconstruction formula (4.32) becomes

f(ω) = (1/2π) ∫_{σ}^{σ+2π} ⟨δ̃(θ - φ), f(θ)⟩ δ̃(ω - φ) dφ
(4.34)

for any σ ∈ R.

Now that we have the delta notation in place, we are ready to start. First of all, we show the formal orthogonality of the basis functions {exp(jωn)}ω∈R. We can write

(1/2π) ∫_{-π}^{π} δ̃(ω - ω_0) e^{jωn} dω = e^{jω_0 n}
(4.35)

The left-hand side of this equation has the exact form of the DTFT reconstruction formula (4.14); hence we have found the fundamental relationship

e^{jω_0 n}  ←DTFT→  δ̃(ω - ω_0)
(4.36)

Now, the DTFT of a complex exponential exp(jσn) is, in our change of basis interpretation, simply the inner product ⟨exp(jωn), exp(jσn)⟩; because of (4.36) we can therefore express this as

⟨e^{jωn}, e^{jσn}⟩ = δ̃(ω - σ)
(4.37)

which is formally equivalent to the orthogonality relation in (4.5).

We now recall for the last time that the delta notation subsumes a limiting operation: the DTFT pair (4.36) should be interpreted as shorthand for the limit of the partial sums

s_k(ω) = Σ_{n=-k}^{k} e^{-jωn}

(where we have chosen ω_0 = 0 for the sake of example). Figure 4.13 plots |s_k(ω)| for increasing values of k (we show only the [-π, π] interval, although of course the functions are 2π-periodic). The family of functions s_k(ω) is exactly equivalent to the family of functions r_k(t) we saw in (4.29); they too become increasingly narrow while keeping a constant area (which turns out to be 2π). That is why we can simply state that s_k(ω) → δ̃(ω).


Figure 4.13: The sum |Σ_{n=-k}^{k} e^{-jωn}| for different values of k.

From (4.36) we can easily obtain other interesting results: by setting ω0 = 0 and by exploiting the linearity of the DTFT operator, we can derive the DTFT of a constant sequence:

α  ←DTFT→  α δ̃(ω)
(4.38)

or, using Euler’s formulas, the DTFTs of sinusoidal functions:

cos(ω_0 n + φ)  ←DTFT→  (1/2) [e^{jφ} δ̃(ω - ω_0) + e^{-jφ} δ̃(ω + ω_0)]    (4.39)
sin(ω_0 n + φ)  ←DTFT→  (-j/2) [e^{jφ} δ̃(ω - ω_0) - e^{-jφ} δ̃(ω + ω_0)]    (4.40)
As we can see from the above examples, we are defining the DTFT for sequences which are not even square-summable; again, these transforms are purely a notational formalism used to capture a behavior, in the limit, as we showed before.

4.5 Relationships between Transforms

We can now show that, thanks to the delta formalism, the DTFT is the most general type of Fourier transform for discrete-time signals. Consider a length-N signal x[n] and its N DFT coefficients X[k]; consider also the sequences obtained from x[n] either by periodization or by building a finite-support sequence. The computation of the DTFTs of these sequences highlights the relationships linking the three types of discrete-time transforms that we have seen so far.

Periodic Sequences. Given a length-N signal x[n], n = 0, ..., N - 1, consider the associated N-periodic sequence x̃[n] = x[n mod N] and its N DFS coefficients X[k]. If we try to write the analysis DTFT formula for x̃[n] we have

X̃(e^{jω}) = Σ_{n=-∞}^{∞} x̃[n] e^{-jωn}
           = Σ_{n=-∞}^{∞} ( (1/N) Σ_{k=0}^{N-1} X[k] e^{j(2π/N)nk} ) e^{-jωn}    (4.41)
           = (1/N) Σ_{k=0}^{N-1} X[k] ( Σ_{n=-∞}^{∞} e^{j(2π/N)nk} e^{-jωn} )    (4.42)
where in (4.41) we have used the DFS reconstruction formula. We now recognize the last term of (4.42) as the DTFT of a complex exponential of frequency (2π/N)k; we can therefore write
X̃(e^{jω}) = (1/N) Σ_{k=0}^{N-1} X[k] δ̃(ω - (2π/N)k)
(4.43)

which is the relationship between the DTFT and the DFS. If we restrict ourselves to the [-π,π] interval, we can see that the DTFT of a periodic sequence is a series of regularly spaced deltas placed at the N roots of unity and whose amplitude is proportional to the DFS coefficients of the sequence. In other words, the DTFT is uniquely determined by the DFS and vice versa.

Finite-Support Sequences. Given a length-N signal x[n], n = 0, ..., N - 1, and its N DFT coefficients X[k], consider the associated finite-support sequence

x̄[n] = { x[n]   for 0 ≤ n < N
       { 0      otherwise

from which we can easily derive the DTFT of x̄ as

X̄(e^{jω}) = Σ_{k=0}^{N-1} X[k] Λ(ω - (2π/N)k)
(4.44)

with

Λ(ω) = (1/N) Σ_{m=0}^{N-1} e^{-jωm}

What the above expression means is that the DTFT of the finite-support sequence x̄[n] is again uniquely defined by the N DFT coefficients of the finite-length signal x[n] and it can be obtained by a type of Lagrangian interpolation. As in the previous case, the values of the DTFT at the roots of unity are equal to the DFT coefficients; note, however, that the transform of a finite-support sequence is very different from the DTFT of a periodized sequence. The latter, in accordance with the definition of the Dirac delta, is defined only in the limit and for a finite set of frequencies; the former is just a (smooth) interpolation of the DFT.

4.6 Fourier Transform Properties

4.6.1 DTFT Properties

The DTFT possesses the following properties.

Symmetries and Structure. The DTFT of a time-reversed sequence is

x[-n]  ←DTFT→  X(e^{-jω})
(4.45)

while, for the complex conjugate of a sequence we have

x*[n]  ←DTFT→  X*(e^{-jω})
(4.46)

For the very important case of a real sequence x[n] ∈ R, property (4.46) implies that the DTFT is conjugate-symmetric:

X(e^{jω}) = X*(e^{-jω})
(4.47)

which leads to the following special symmetries for real signals:

|X(e^{jω})| = |X(e^{-jω})|    (4.48)
∠X(e^{jω}) = -∠X(e^{-jω})    (4.49)
Re{X(e^{jω})} = Re{X(e^{-jω})}    (4.50)
Im{X(e^{jω})} = -Im{X(e^{-jω})}    (4.51)

Finally, if x[n] is real and symmetric, then the DTFT is real:

x[n] ∈ R, x[-n] = x[n]  ⟺  X(e^{jω}) ∈ R
(4.52)

while, for real antisymmetric signals we have that the DTFT is purely imaginary:

x[n] ∈ R, x[-n] = -x[n]  ⟺  Re{X(e^{jω})} = 0
(4.53)

Linearity and Shifts. The DTFT is a linear operator:

αx[n] + βy[n]  ←DTFT→  αX(e^{jω}) + βY(e^{jω})
(4.54)

A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:

x[n - n_0]  ←DTFT→  e^{-jωn_0} X(e^{jω})
(4.55)

while multiplication of the signal by a complex exponential (i.e. signal modulation by a complex “carrier” at frequency ω0) leads to

e^{jω_0 n} x[n]  ←DTFT→  X(e^{j(ω - ω_0)})
(4.56)

which means that the spectrum is shifted by ω0. This last result is known as the modulation theorem.

Energy Conservation. The DTFT satisfies the Plancherel-Parseval equality:

⟨x[n], y[n]⟩ = (1/2π) ⟨X(e^{jω}), Y(e^{jω})⟩
(4.57)

or, using the respective definitions of inner product for ℓ²(Z) and L²([-π, π]):

Σ_{n=-∞}^{∞} x*[n] y[n] = (1/2π) ∫_{-π}^{π} X*(e^{jω}) Y(e^{jω}) dω
(4.58)

(note the explicit normalization factor 1/2π). The above equality specializes into Parseval's theorem as

Σ_{n=-∞}^{∞} |x[n]|² = (1/2π) ∫_{-π}^{π} |X(e^{jω})|² dω
(4.59)

which establishes the conservation of energy property between the time and the frequency domains.

4.6.2 DFS Properties

The DTFT properties we have just seen extend easily to the Fourier Transform of periodic signals. The easiest way to obtain the particularizations which follow is to apply relationship (4.43) to the results of the previous Section.

Symmetries and Structure. The DFS of a time-reversed sequence is

x̃[-n]  ←DFS→  X̃[-k]
(4.60)

while, for the complex conjugate of a sequence we have

x̃*[n]  ←DFS→  X̃*[-k]
(4.61)

For real periodic sequences, the following special symmetries hold (see (4.47)–(4.53)):

X̃[k] = X̃*[-k]    (4.62)
|X̃[k]| = |X̃[-k]|    (4.63)
∠X̃[k] = -∠X̃[-k]    (4.64)
Re{X̃[k]} = Re{X̃[-k]}    (4.65)
Im{X̃[k]} = -Im{X̃[-k]}    (4.66)

Finally, if x̃[n] is real and symmetric, then the DFS is real:

x̃[n] = x̃[-n]  ⟺  X̃[k] ∈ R
(4.67)

while, for real antisymmetric signals, we can state that the DFS is purely imaginary.

Linearity and Shifts. The DFS is a linear operator, since it can be expressed as a matrix-vector product. A shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:

x̃[n - n_0]  ←DFS→  W_N^{kn_0} X̃[k]
(4.68)

while multiplication of the signal by a complex exponential whose frequency is a multiple of 2π/N leads to a shift in frequency:

W_N^{-nk_0} x̃[n]  ←DFS→  X̃[k - k_0]
(4.69)

Energy Conservation. We have already seen the energy conservation property in the context of basis expansion. Here, we simply recall Parseval’s theorem, which states

Σ_{n=0}^{N-1} |x̃[n]|² = (1/N) Σ_{k=0}^{N-1} |X̃[k]|²
(4.70)

4.6.3 DFT Properties

The properties of the DFT are obviously the same as those for the DFS, given the formal equivalence of the transforms. The only detail is how to interpret shifts, index reversal and symmetries for finite, length-N vectors; this is easily solved by considering the fact that the equivalence DFT-DFS translates in the time domain to a homomorphism between a length-N signal and its associated N-periodic extension to an infinite sequence. A shift, for instance, can be applied to the periodized version of the signal and the resulting shifted length N signal is given by the values of the shifted sequence from 0 to N - 1, as previously explained in Section 2.2.2.

Mathematically, this means that all shifts and time reversals of a length-N signal are operated modulo N. Consider a length-N signal:

[ x[0]   x[1]   ...   x[N - 1] ]^T = x[n]
(4.71)

Its time-reversed version is

[ x[0]   x[N - 1]   x[N - 2]   ...   x[2]   x[1] ]^T = x[-n mod N]
(4.72)

as we can easily see by considering the underlying periodic extension (note that x[0] remains in place!). A shift by k corresponds to a circular shift:

[ x[N - k]   x[N - k + 1]   ...   x[N - 1]   x[0]   x[1]   ...   x[N - k - 1] ]^T = x[(n - k) mod N]    (4.73)

Figure 4.14: Examples of finite-length symmetric signals for N = 2, 3, 4, 5. Unconstrained values are drawn in gray.

The concept of symmetry can be reinterpreted as follows: a symmetric signal is equal to its time reversed version; therefore, for a length-N signal to be symmetric, the first member of (4.71) must equal the first member of (4.72), that is

x[k] = x[N - k],      k = 1, 2, ..., ⌊(N - 1)/2⌋
(4.74)

Note that, in the above definition, the index k runs from 1 to ⌊(N - 1)/2⌋; this means that symmetry does not place any constraint on the value of x[0] and, similarly, the value of x[N/2] is also unconstrained for even-length signals. Figure 4.14 shows some examples of symmetric length-N signals for different values of N. Of course the same definition can be used for antisymmetric signals with just a change of sign.

Symmetries and Structure. The symmetries and structure derived for the DFS can be rewritten specifically for the DFT as

x[-n mod N]  ←DFT→  X[-k mod N]    (4.75)
x*[n]  ←DFT→  X*[-k mod N]    (4.76)

The following symmetries hold only for real signals:

X[k] = X*[-k mod N]    (4.77)
|X[k]| = |X[-k mod N]|    (4.78)
∠X[k] = -∠X[-k mod N]    (4.79)
Re{X[k]} = Re{X[-k mod N]}    (4.80)
Im{X[k]} = -Im{X[-k mod N]}    (4.81)
Finally, if x[n] is real and symmetric (using the symmetry definition in (4.74)), then the DFT is real:

x[k] = x[N - k],      k = 1, 2, ..., ⌊(N - 1)/2⌋  ⟺  X[k] ∈ R
(4.82)

while, for real antisymmetric signals we have that the DFT is purely imaginary.

Linearity and Shifts. The DFT is obviously a linear operator. A circular shift in the discrete-time domain leads to multiplication by a phase term in the frequency domain:

x[(n - n_0) mod N]  ←DFT→  W_N^{kn_0} X[k]
(4.83)

while the finite-length equivalent of the modulation theorem states

W_N^{-nk_0} x[n]  ←DFT→  X[(k - k_0) mod N]
(4.84)
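
The circular shift property (4.83) and its dual (4.84) can be verified numerically; a short Python sketch:

import numpy as np

N, n0, k0 = 16, 3, 5
n = np.arange(N)
k = np.arange(N)
x = np.random.randn(N)
X = np.fft.fft(x)

# (4.83): circular shift in time  <->  multiplication by W_N^{k n0} in frequency
x_shift = np.roll(x, n0)                           # x[(n - n0) mod N]
print(np.allclose(np.fft.fft(x_shift), np.exp(-2j * np.pi * k * n0 / N) * X))   # True

# (4.84): modulation by W_N^{-n k0} in time  <->  circular shift in frequency
x_mod = np.exp(2j * np.pi * k0 * n / N) * x
print(np.allclose(np.fft.fft(x_mod), np.roll(X, k0)))                           # True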


---------------------------------------------------------------------
 Discrete-Time Fourier Transform (DTFT)
---------------------------------------------------------------------
 used for:            infinite, two-sided signals (x[n] ∈ ℓ²(Z))

 analysis formula:    X(e^{jω}) = Σ_{n=-∞}^{∞} x[n] e^{-jωn}

 synthesis formula:   x[n] = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{jωn} dω

 symmetries:          x[-n]  ←DTFT→  X(e^{-jω})
                      x*[n]  ←DTFT→  X*(e^{-jω})

 shifts:              x[n - n_0]  ←DTFT→  e^{-jωn_0} X(e^{jω})
                      e^{jω_0 n} x[n]  ←DTFT→  X(e^{j(ω - ω_0)})

 Parseval:            Σ_{n=-∞}^{∞} |x[n]|² = (1/2π) ∫_{-π}^{π} |X(e^{jω})|² dω
---------------------------------------------------------------------
 Some DTFT pairs
---------------------------------------------------------------------
 x[n] = δ[n - k]                  X(e^{jω}) = e^{-jωk}

 x[n] = 1                         X(e^{jω}) = δ̃(ω)

 x[n] = u[n]                      X(e^{jω}) = 1/(1 - e^{-jω}) + (1/2) δ̃(ω)

 x[n] = a^n u[n],  |a| < 1        X(e^{jω}) = 1/(1 - a e^{-jω})

 x[n] = e^{jω_0 n}                X(e^{jω}) = δ̃(ω - ω_0)

 x[n] = cos(ω_0 n + φ)            X(e^{jω}) = (1/2)[e^{jφ} δ̃(ω - ω_0) + e^{-jφ} δ̃(ω + ω_0)]

 x[n] = sin(ω_0 n + φ)            X(e^{jω}) = (-j/2)[e^{jφ} δ̃(ω - ω_0) - e^{-jφ} δ̃(ω + ω_0)]

 x[n] = 1 for 0 ≤ n ≤ N - 1,      X(e^{jω}) = [sin((N/2)ω) / sin(ω/2)] e^{-j((N-1)/2)ω}
        0 otherwise
---------------------------------------------------------------------
---------------------------------------------------------------------
 Discrete Fourier Series (DFS)
---------------------------------------------------------------------
 used for:            periodic signals (x̃[n] ∈ C̃^N)

 analysis formula:    X̃[k] = Σ_{n=0}^{N-1} x̃[n] W_N^{nk},   k = 0, ..., N - 1

 synthesis formula:   x̃[n] = (1/N) Σ_{k=0}^{N-1} X̃[k] W_N^{-nk},   n = 0, ..., N - 1

 symmetries:          x̃[-n]  ←DFS→  X̃[-k]
                      x̃*[n]  ←DFS→  X̃*[-k]

 shifts:              x̃[n - n_0]  ←DFS→  W_N^{kn_0} X̃[k]
                      W_N^{-nk_0} x̃[n]  ←DFS→  X̃[k - k_0]

 Parseval:            Σ_{n=0}^{N-1} |x̃[n]|² = (1/N) Σ_{k=0}^{N-1} |X̃[k]|²
---------------------------------------------------------------------

---------------------------------------------------------------------
 Discrete Fourier Transform (DFT)
---------------------------------------------------------------------
 used for:            finite-support signals (x[n] ∈ C^N)

 analysis formula:    X[k] = Σ_{n=0}^{N-1} x[n] W_N^{nk},   k = 0, ..., N - 1

 synthesis formula:   x[n] = (1/N) Σ_{k=0}^{N-1} X[k] W_N^{-nk},   n = 0, ..., N - 1

 symmetries:          x[-n mod N]  ←DFT→  X[-k mod N]
                      x*[n]  ←DFT→  X*[-k mod N]

 shifts:              x[(n - n_0) mod N]  ←DFT→  W_N^{kn_0} X[k]
                      W_N^{-nk_0} x[n]  ←DFT→  X[(k - k_0) mod N]

 Parseval:            Σ_{n=0}^{N-1} |x[n]|² = (1/N) Σ_{k=0}^{N-1} |X[k]|²
---------------------------------------------------------------------

 Some DFT pairs for length-N signals             (n, k = 0, 1, ..., N - 1)
---------------------------------------------------------------------
 x[n] = δ[n - n_0]                X[k] = e^{-j(2π/N) n_0 k}

 x[n] = 1                         X[k] = N δ[k]

 x[n] = e^{j(2π/N)Ln}             X[k] = N δ[k - L]

 x[n] = cos((2π/N)Ln + φ)         X[k] = (N/2) [e^{jφ} δ[k - L] + e^{-jφ} δ[k - N + L]]

 x[n] = sin((2π/N)Ln + φ)         X[k] = (-jN/2) [e^{jφ} δ[k - L] - e^{-jφ} δ[k - N + L]]

 x[n] = 1 for n ≤ M - 1,          X[k] = [sin((π/N)Mk) / sin((π/N)k)] e^{-j(π/N)(M-1)k}
        0 for M ≤ n ≤ N - 1
---------------------------------------------------------------------


Energy Conservation. Parseval’s theorem for the DFT is (obviously) identical to (4.70):

Σ_{n=0}^{N-1} |x[n]|² = (1/N) Σ_{k=0}^{N-1} |X[k]|²
(4.85)

4.7 Fourier Analysis in Practice

In the previous Sections, we have developed three frequency representations for the three main types of discrete-time signals; the derivation was eminently theoretical and concentrated mostly upon the mathematical properties of the transforms seen as a change of basis in Hilbert space. In the following Sections we will see how to put the Fourier machinery to practical use.

We have seen two fundamental ways to look at a signal: its time-domain representation, in which we consider the values of the signal as a function of discrete time, and its frequency-domain representation, in which we consider its energy and phase content as a function of digital frequency. The information contained in each of the two representations is exactly the same, as guaranteed by the invertibility of the Fourier transform; yet, from an analytical point of view, we can choose to concentrate on one domain or the other according to what we are specifically seeking. Consider for instance a piece of music; such a signal contains two coexisting perceptual features, meter and key. Meter can be determined by looking at the duration patterns of the played notes: its “natural” domain is therefore the time domain. The key, on the other hand, can be determined by looking at the pitch patterns of the played notes: since pitch is related to the frequency content of the sound, the natural domain of this feature is the frequency domain.

We can recall that the DTFT is mostly a theoretical analysis tool; the DTFTs which can be computed exactly (i.e. those in which the sum in (4.13) can be solved in closed form) represent only a small set of sequences; yet, these sequences are highly representative and they will be used over and over to illustrate a prototypical behavior. The DFT,(8) on the other hand, is fundamentally a numerical tool in that it defines a finite set of operations which can be computed in a finite amount of time; in fact, a very efficient algorithmic implementation of the DFT exists under the name of Fast Fourier Transform (FFT) which only requires a number of operations on the order of N(log N) for an N-point data vector. The DFT, as we know, only applies to finite-length signals but this is actually acceptable since, in practice, all measured signals have finite support; in principle, therefore, the DFT suffices for the spectral characterization of real-world sequences. Since the transform of a finite-length signal and its DTFT are related by (4.43) or by (4.44) according to the underlying model for the infinite-length extension, we can always use the DTFT to illustrate the fundamental concepts of spectral analysis for the general case and then particularize the results for finite-length sequences.

4.7.1 Plotting Spectral Data

The first question that we ask ourselves is how to represent spectral data. Since the transform values are complex numbers, it is customary to separately plot their magnitude and their phase; more often than not, we will concentrate on the magnitude only, which is related to the energy distribution of the signal in the frequency domain.(9) For infinite sequences whose DTFT can be computed exactly, the graphical representation of the transform is akin to a standard function graph – again, the interest here is mostly theoretical. Consider now a finite-length signal of length N; its DFT can be computed numerically, and it yields a length-N vector of complex spectral values. These values can be displayed as such (and we obtain a plain DFT plot) or they can be used to obtain the DTFT of the periodic or finite-support extension of the original signal.

Consider for example the length-16 triangular signal x[n] in Figure 4.15; note in passing that the signal is symmetric according to our definition in (4.74), so that its DFT is real. The magnitudes of the DFT coefficients |X[k]| are plotted in Figure 4.16; since x[n] is a real sequence, the set of DFT coefficients is symmetric (again according to (4.74)). The k-th DFT coefficient corresponds to the frequency (2π/N)k and, therefore, the plot's abscissa extends implicitly from 0 to 2π; this is a little different from what we are used to in the case of the DTFT, where we usually consider the [-π, π] interval, but it is customary. Furthermore, the difference is easily eliminated if we consider the sequence of X[k] as being N-periodic (which it is, as we showed in Section 4.3) and plot the values from -N/2 to N/2 for N even, or from -(N - 1)/2 to (N - 1)/2 for N odd.


Figure 4.15: Length-16 signal.


Figure 4.16: Magnitude DFT (or, equivalently, one period of the DFS) of the signal in Figure 4.15.

This can be made explicit by considering the N-periodic extension of x[n] and by using the DFS-DTFT relationship (4.23); the standard way to plot this is as in Figure 4.17. Here the N pulse trains δ̃(ω - (2π/N)k) are represented as lines (or arrows) scaled by the magnitude of the corresponding DFT coefficients. By plotting the representative [-π, π] interval, we can appreciate, in full, the symmetry of the transform's magnitude.

By considering the finite-support extension of x[n] instead, and by plotting the magnitude of its DTFT, we obtain Figure 4.18. The points in the plot can be computed directly from the summation defining the DTFT (which, for finite-support signals, only contains a finite number of terms) by evaluating the sum over a sufficiently fine grid of values for ω in the [-π, π] interval; alternatively, the whole set of points can be obtained in one shot from an FFT with a sufficient amount of zero-padding (this method will be made precise later). Again, the DTFT of a finite-support extension is just a smooth interpolation of the original DFT points and no new information is added.


Figure 4.17: Magnitude DTFT of the periodic extension of the signal in Figure 4.15.


Figure 4.18: Magnitude DTFT of the finite-support extension of the signal in Figure 4.15. The Lagrange interpolation between DFT values is made apparent by the lines in gray.

4.7.2 Computing the Transform: the FFT

The Fast Fourier Transform, or FFT, is not another type of transform but simply the name of an efficient algorithm to compute the DFT. The algorithm, in its different flavors, is so ubiquitous and so important that the acronym FFT is often used liberally to indicate the DFT (or the DFS, which would be more appropriate since the underlying model is that of a periodic signal).

We have already seen in (4.6) that the DFT can be expressed in terms of a matrix-vector multiplication:

X = W x

as such, the computation of the DFT requires a number of operations on the order of N². The FFT algorithm exploits the highly structured nature of W to reduce the number of operations to N log N. In matrix form this is equivalent to decomposing W into the product of a series of matrices with mostly zero or unity elements. The algorithmic details of the FFT can be found in the bibliography; we can mention, however, that the FFT algorithm is particularly efficient for data lengths which are a power of 2 and that, in general, the more prime factors the data length can be decomposed into, the more efficient the FFT implementation.
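
A small, informal comparison between the O(N²) matrix-vector computation and an FFT routine, sketched in Python (the actual timings obviously depend on the machine and on the FFT implementation):

import time
import numpy as np

N = 2048
n = np.arange(N)
W = np.exp(-2j * np.pi / N * np.outer(n, n))       # full N x N DFT matrix: O(N^2) products
x = np.random.randn(N)

t0 = time.perf_counter(); X_mat = W @ x;         t_mat = time.perf_counter() - t0
t0 = time.perf_counter(); X_fft = np.fft.fft(x); t_fft = time.perf_counter() - t0

print(np.allclose(X_mat, X_fft))                   # same result
print(t_mat, t_fft)                                # the FFT is typically much faster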

4.7.3 Cosmetics: Zero-Padding

FFT algorithms are tailored to the specific length of the input signal. When the input signal's length is a large prime number or when only a subset of FFT algorithms is available (when, for instance, all we have is the radix-2 algorithm, which processes input vectors whose lengths are a power of 2), it is customary to extend the length of the signal to match the algorithmic requirements. This is usually achieved by zero-padding, i.e. the length-N data vector is extended to a chosen length M by appending (M - N) zeros to it. Now, the maximum resolution of an N-point DFT, i.e. the separation between frequency components, is 2π/N. By extending the signal to a longer length M, we are indeed reducing the separation between frequency components. One may think that this artificial increase in resolution allows the DFT to show finer details of the input signal's spectrum. It is not so.

The M-point DFT X^(M) of an N-point data vector x, obtained via zero-padding, can be obtained directly from the "canonical" N-point DFT of the vector, X^(N), via a simple matrix multiplication:

X^(M) = M_{M,N} X^(N)
(4.86)

where the M × N matrix M_{M,N} is given by

M_{M,N} = (1/N) W'_M W_N^H

where W_N is the standard DFT matrix and W'_M is the M × N matrix obtained by keeping just the first N columns of the standard DFT matrix W_M. The fundamental meaning of (4.86) is that, by zero-padding, we are adding no information to the spectral representation of a finite-length signal. Details of the spectrum which were not apparent in an N-point DFT are still not apparent in a zero-padded version of the same. It can be shown that (4.86) is a form of Lagrangian interpolation of the original DFT samples; the zero-padded DFT is therefore attractive mostly in a "cosmetic" fashion, since the new points, when plotted, show a smooth curve between the original DFT points (and this is how plots such as the one in Figure 4.18 are obtained).
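
The purely "cosmetic" nature of zero-padding is easy to see numerically: in the zero-padded DFT, the original DFT values reappear unchanged at every (M/N)-th bin and the points in between are simply interpolated. A Python sketch with M = 4N:

import numpy as np

N, M = 16, 64                                      # zero-pad a length-16 vector to length 64
x = np.hanning(N)                                  # any length-N test signal will do
X_N = np.fft.fft(x)                                # "canonical" N-point DFT
X_M = np.fft.fft(x, n=M)                           # M-point DFT of the zero-padded vector

# the original DFT values reappear, untouched, at every (M/N)-th coefficient;
# the points in between are merely interpolated, so no new information is added
print(np.allclose(X_M[:: M // N], X_N))            # True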

4.7.4 Spectral Analysis

The spectrum is a complete, alternative representation of a signal; by analyzing the spectrum, one can obtain, at a glance, the fundamental information required to characterize and classify a signal in the frequency domain.

Magnitude. The magnitude of a signal's spectrum, obtained by the Fourier transform, represents the energy distribution in frequency for the signal. It is customary to broadly classify discrete-time signals into three classes: lowpass signals, whose energy is concentrated around ω = 0; highpass signals, whose energy is concentrated around ω = ±π; and bandpass signals, whose energy is concentrated around intermediate frequencies.

For real-valued signals, the magnitude spectrum is a symmetric function and the above classifications take this symmetry into account. Remember also that all spectra of discrete-time signals are 2π-periodic functions, so that the above definitions are to be interpreted in a 2π-periodic fashion. For once, this is made explicit in Figures 4.19 to 4.21, where the plotting range, instead of the customary [-π, π] interval, is extended from -2π to 2π.

Phase. As we have stated before, the Fourier representation allows us to think of any signal as the sum of the outputs of a (potentially infinite) number of sinusoidal generators. While the magnitude of the spectrum defines the inherent power produced by each of the generators, its phase defines the relative alignment of the generated sinusoids. This alignment determines the shape of the signal in the discrete-time domain. To illustrate this with an example, consider the following 64-periodic signal:(10)

x̃[n] = Σ_{i=0}^{3} [1/(2i + 1)] sin((2π/64)(2i + 1)n + φ_i)
(4.87)

The magnitude of its DFS X̃[k] is independent of the values of φ_i, i = 0, 1, 2, 3, and it is plotted in Figure 4.22. If the phase terms are uniformly zero, i.e. φ_i = 0, i = 0, 1, 2, 3, x̃[n] is the discrete-time periodic signal plotted in Figure 4.23; the alignment of the constituent sinusoids is such that the "square wave" exhibits a rather sharp transition between half-periods and a rather flat behavior over the half-period intervals. In addition, it should be noted that, with a zero phase term, the periodic signal is symmetric and that, therefore, the DFS coefficients are real. Now consider modifying the individual phases so that φ_i = 2πi/3; in other words, we introduce a linear phase term in the constituent sinusoids. While the DFS magnitude remains exactly the same, the resulting time-domain signal is the one depicted in Figure 4.24; the lack of alignment between sinusoids creates a "smearing" of the signal, which no longer resembles a square wave.
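
The two cases can be reproduced in a few lines of Python: the magnitude DFS is identical for both choices of the phases, while the time-domain waveforms differ markedly.

import numpy as np

n = np.arange(64)

def signal(phases):
    # the 64-periodic signal of (4.87): odd harmonics with amplitudes 1/(2i+1)
    return sum(np.sin(2 * np.pi / 64 * (2 * i + 1) * n + phases[i]) / (2 * i + 1)
               for i in range(4))

x_square = signal([0.0, 0.0, 0.0, 0.0])                          # phi_i = 0
x_smeared = signal([2 * np.pi * i / 3 for i in range(4)])        # phi_i = 2*pi*i/3

print(np.allclose(np.abs(np.fft.fft(x_square)),
                  np.abs(np.fft.fft(x_smeared))))   # True: same magnitude spectrum
print(np.max(np.abs(x_square - x_smeared)) > 0.1)   # True: very different waveforms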


Figure 4.22: Magnitude DFS of the signal in (4.87).


Figure 4.23: The signal in (4.87) with φ_i = 0.


Figure 4.24: The signal in (4.87) with φ_i = 2πi/3.

4.8 Time-Frequency Analysis

Recall our example at the beginning of this Chapter, when we considered the time and frequency information contained in a piece of music. We stated that the melodic information is related to the frequency content of the signal; obviously this is only partially true, since the melody is determined not only by the pitch values but also by their duration and order. Now, if we take a global Fourier Transform of the entire musical piece we have a comprehensive representation of the frequency content of the piece: in the resulting spectrum there is information about the frequency of each played note.(11) The time information, however, that is the information pertaining to the order in which the notes are played, is completely hidden by the spectral representation. This makes us wonder whether there exists a time-frequency representation of a signal, in which both time and frequency information are readily apparent.

4.8.1 The Spectrogram

The simplest time-frequency transformation is called the spectrogram. The recipe involves splitting the signal into small consecutive (and possibly overlapping) length-N pieces and computing the DFT of each. What we obtain is the following function of a discrete-time index and of a discrete frequency index:

S[k, m] = Σ_{i=0}^{N-1} x[mM + i] W_N^{ik}
(4.88)

where M, 1 ≤ M ≤ N, controls the overlap between segments. In matrix notation we have

S = W_N [ x[0]        x[M]            x[2M]          ⋯
          x[1]        x[M + 1]        x[2M + 1]      ⋯
          ⋮           ⋮               ⋮
          x[N - 1]    x[M + N - 1]    x[2M + N - 1]   ⋯ ]
(4.89)

The resulting spectrogram is therefore an N × ⌊L/M⌋ matrix, where L is the total length of the signal x[n]. It is usually represented graphically as a plot in which the x-axis is the discrete-time index m, the y-axis is the discrete frequency index k, and the color encodes the magnitude of S[k, m], with darker colors for larger values.
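
A bare-bones spectrogram along the lines of (4.88)-(4.89), sketched in Python on a test chirp (practical implementations usually also apply a tapering window to each segment):

import numpy as np

def spectrogram(x, N=1024, M=512):
    # columns are the DFT magnitudes of consecutive length-N segments, hopping by M samples
    cols = [np.abs(np.fft.fft(x[m:m + N])) for m in range(0, len(x) - N + 1, M)]
    return np.array(cols).T                        # rows: frequency index k, columns: time index m

L = 16000
n = np.arange(L)
x = np.cos(np.pi / 4 * n * n / L)                  # test chirp: frequency increases with time
S = spectrogram(x)
print(S.shape)                                     # (1024, number of analysis windows)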

As an example of the insight we can gain from the spectrogram, consider analyzing the well-known Bolero by Ravel. Figure 4.25 shows the spectrogram of the initial 37 seconds of the piece. In the first 13 seconds the only instrument playing is the snare drum, and the vertical lines in the spectrogram represent, at the same time, the wide frequency content of a percussive instrument and its rhythmic pattern: if we look at the spacing between lines, we can identify the "trademark" drum pattern of Ravel's Bolero. After 13 seconds, the flute starts playing the theme; this is identifiable in the dark horizontal stripes which denote a high energy content around the frequencies which correspond to the pitches of the melody; with further analysis we could even try to identify the exact notes. The clarity of this plot is due to the simple nature of the signal; if we now plot the spectrogram of the last 20 seconds of the piece, we obtain Figure 4.26. Here the orchestra is playing full blast, as indicated by the high energy activity across the whole spectrum; we can only detect the onset of the rhythmic shouts that precede the final chord.



Figure 4.25: Spectrogram representation of the beginning of Ravel’s Bolero. DFT size is 1024 samples, overlap is 512 samples.



Figure 4.26: Spectrogram representation of the end of Ravel’s Bolero.

4.8.2 The Uncertainty Principle

Each of the columns of S represents the “local” spectrum of the signal for a time interval of length N. We can therefore say that the time resolution of the spectrogram is N samples since the value of the signal at time n0 influences the DFT of the N-point window around n0. Seen from another point of view, the time information is “smeared” over an N-point interval. At the same time, the frequency resolution of the spectrogram is 2π∕N (and we cannot increase it by zero-padding, as we have just shown). The conflict is therefore apparent: if we want to increase the frequency resolution we need to take longer windows but in so doing, we lose the time localization of the spectrogram; likewise, if we want to achieve a fine resolution in time, the corresponding spectral information for each “time slice” will be very coarse. It is rather easy to show that the amount of overlap does not change the situation. In practice, we need to choose an optimal tradeoff taking the characteristics of the signal into consideration.

The above problem, described for the case of the spectrogram, is actually a particular instance of a general uncertainty principle for time-frequency analysis. The principle states that, independently of the analysis tools that we put in place, we can never hope to achieve arbitrarily good resolution in both time and frequency since there exists a lower bound greater than zero for the product of the localization measure in time and frequency.

4.9 Digital Frequency vs. Real Frequency

The conceptual representation of discrete-time signals relies on the notion of a dimensionless “time”, indicated by the integer index n. The absence of a physical dimension for time has the happy consequence that all discrete-time signal processing tools become indifferent to the underlying physical nature of the actual signals: stock exchange values or sampled orchestral music are just sequences of numbers. Similarly, we have just derived a frequency representation for signals which is based on the notion of a dimensionless frequency; because of the periodicity of the Fourier basis, all we know is that π is the highest digital frequency that we can represent in this model. Again, the power of generality is (or will soon be) apparent: a digital filter which is designed to remove the upper half of a signal’s spectrum can be used with any type of input sequence, with the same results. This is in stark contrast with the practice of analog signal processing in which a half-band filter (made of capacitors, resistors and other electronic components) must be redesigned for any new class of input signals.

This dimensionless abstraction, however, is not without its drawbacks from the point of view of hands-on intuition; after all, we are all very familiar with signals in the real world for which time is expressed in seconds and frequency is expressed in hertz. We say, for instance, that speech has a bandwidth up to 4 KHz, that the human ear is sensitive to frequencies up to 20 KHz, that a cell phone transmits in the GHz band, and so on. What does “π” mean in these cases? The precise, formal link between real-world signal and discrete-time signal processing is given by the Sampling Theorem, which we will study later. The fundamental idea, however, is that we can remove the abstract nature of a discrete-time signal (and, correspondingly, of a dimensionless frequency) by associating a time duration to the interval between successive discrete-time indices in the sequence.

Let us say that the “real-world” time between indices n and n + 1 in a discrete-time sequence is Ts seconds (where Ts is generally very small); this can correspond to sampling a signal every Ts seconds or to generating a synthetic sequence with a DSP chip whose clock cycle is Ts seconds. Now, recall that the phase increment between successive samples of a generic complex exponential e^{jω0 n} is ω0 radians. The oscillation, therefore, completes a full cycle in n0 = 2π∕ω0 samples. If Ts is the real-world time between samples, the full cycle is completed in n0 Ts seconds and so its “real-world” frequency is f0 = 1∕(n0 Ts) hertz. The relationship between the digital frequency ω0 and the “real” frequency f0 in hertz, as determined by the “clock” period Ts, is therefore

f_0 \;\overset{T_s}{\longleftrightarrow}\; \frac{1}{2\pi}\,\frac{\omega_0}{T_s}    (4.90)

In particular, the highest real frequency which can be represented in the discrete-time system (which corresponds to ω = π) is

F_{max} = \frac{F_s}{2}

where we have used Fs = 1∕Ts; Fs is just the operating frequency of the discrete-time system (also called the sampling frequency or clock frequency). With this notation, the digital frequency ω0 corresponding to a real frequency f0 is

\omega_0 = 2\pi\,\frac{f_0}{F_s}

The compact disc system, for instance, operates at a frequency Fs = 44.1 kHz; the maximum representable frequency for the system is 22.05 kHz (which constitutes the highest-pitched sound which can be encoded on, and reproduced by, a CD).
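The two relationships above are trivial to code; the following C helpers (the function names are chosen here just for illustration) convert between a real-world frequency in hertz and the corresponding digital frequency for a system whose clock period is Ts = 1∕Fs seconds.

    #include <math.h>

    /* omega = 2*pi*f0/Fs: digital frequency (radians/sample) corresponding to
     * a real frequency f0 (in hertz) for a system clocked at Fs samples/second. */
    double digital_frequency(double f0, double Fs) { return 2.0 * M_PI * f0 / Fs; }

    /* f0 = omega*Fs/(2*pi): real frequency (hertz) for a digital frequency omega. */
    double real_frequency(double omega, double Fs) { return omega * Fs / (2.0 * M_PI); }

For the compact disc example, digital_frequency(22050, 44100) evaluates to π, i.e. the highest digital frequency representable by the system.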

4.10 Examples

Example 4.1: The structure of DFT formulas

The DFT and inverse DFT (IDFT) formulas have a high degree of symmetry. Indeed, we can use the DFT algorithm to compute the IDFT with just a little manipulation: this can be useful if we have a “black box” FFT routine and we want to compute an inverse transform.

In the space of length-N signals, indicate the DFT of a signal x as

X = \bigl[\, X[0] \;\; X[1] \;\; \ldots \;\; X[N-1] \,\bigr]^T = \mathrm{DFT}\{x\}

so that we can also write

DFT  {x}[n] = X [n]

Now consider the time-reversed signal

X_r = \bigl[\, X[N-1] \;\; X[N-2] \;\; \ldots \;\; X[1] \;\; X[0] \,\bigr]^T

we can show that

x[n] = \frac{1}{N}\, W_N^{\,n} \cdot \mathrm{DFT}\{X_r\}[n]

so that the inverse DFT can be obtained as the DFT of a time-reversed and scaled version of the original DFT. Indeed, with the change of variable m = (N - 1) - k, we have

\mathrm{DFT}\{X_r\}[n] = \sum_{k=0}^{N-1} X[(N-1)-k]\, e^{-j\frac{2\pi}{N}kn}
                       = \sum_{m=0}^{N-1} X[m]\, e^{-j\frac{2\pi}{N}(N-1-m)n}
                       = \sum_{m=0}^{N-1} X[m]\, e^{j\frac{2\pi}{N}mn}\, e^{j\frac{2\pi}{N}n}\, e^{-j\frac{2\pi}{N}Nn}
                       = e^{j\frac{2\pi}{N}n} \sum_{m=0}^{N-1} X[m]\, e^{j\frac{2\pi}{N}mn} \;=\; e^{j\frac{2\pi}{N}n}\, N\, x[n]
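The identity is easy to verify numerically; the short C program below is a sketch written for this example, using a naive O(N²) DFT in place of a packaged FFT routine: it computes X = DFT{x}, time-reverses it and checks that (1∕N) W_N^n · DFT{X_r}[n] reproduces x[n].

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    #define N 8

    /* Naive forward DFT: out[k] = sum_n in[n] exp(-j 2 pi n k / N). */
    static void dft(const double complex *in, double complex *out)
    {
        for (int k = 0; k < N; k++) {
            out[k] = 0;
            for (int n = 0; n < N; n++)
                out[k] += in[n] * cexp(-2.0 * I * M_PI * n * k / N);
        }
    }

    int main(void)
    {
        double complex x[N], X[N], Xr[N], D[N];

        for (int n = 0; n < N; n++)            /* arbitrary complex test signal */
            x[n] = cos(2 * M_PI * n / N) + 0.5 * I * n;

        dft(x, X);                             /* X = DFT{x}                    */
        for (int n = 0; n < N; n++)            /* time-reversed DFT vector      */
            Xr[n] = X[N - 1 - n];
        dft(Xr, D);                            /* D = DFT{Xr}                   */

        for (int n = 0; n < N; n++) {
            /* x[n] should equal (1/N) W_N^n DFT{Xr}[n], with W_N = exp(-j 2 pi / N). */
            double complex xrec = cexp(-2.0 * I * M_PI * n / N) * D[n] / N;
            printf("%d: (%+.3f, %+.3f) vs (%+.3f, %+.3f)\n",
                   n, creal(x[n]), cimag(x[n]), creal(xrec), cimag(xrec));
        }
        return 0;
    }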

Example 4.2: The DTFT of the step function

In the delta-function formalism, the Fourier transform of the unit signal x[n] = 1 is the pulse train ˜δ (ω). Intuitively, the reasoning goes as follows: the unit signal has the same value over its entire infinite, two-sided support; nothing ever changes, there is not even the minutest glimpse of movement in the signal, ergo its spectrum can only have a nonzero value at the zero frequency. Recall that a frequency of zero is the frequency of dead quiet; the spectral value at ω = 0 is also known as the DC component (for Direct Current), as opposed to a livelier AC (Alternating Current). At the same time, the unit signal has a very large energy, an infinite energy to be precise; imagine it as the voltage at the poles of a battery connected to a light bulb: to keep the light on for all eternity (i.e. over Z) the energy must indeed be infinite. Our delta function captures this duality very effectively, if not rigorously.

Now consider the unit step u[n]; this is a somewhat stranger entity since it still possesses infinite energy and it still is a very quiet signal – except in n = 0. The transition in the origin is akin to flipping a switch in the battery/light bulb circuit above with the switch remaining on for the rest of (positive) eternity. As for the Fourier transform, intuitively we will still have a delta in zero (because of the infinite energy) but also some nonzero values over the entire frequency range because of the “movement” in n = 0. We know that for |a| < 1 it is

a^n u[n] \;\overset{\mathrm{DTFT}}{\longleftrightarrow}\; \frac{1}{1 - a\,e^{-j\omega}}

so that it is tempting to let a → 1 and just say

u[n] \;\overset{\mathrm{DTFT}}{\longleftrightarrow}\; ?? \;\; \frac{1}{1 - e^{-j\omega}}

This is not quite correct; even intuitively, the infinite-energy delta is missing. To see what is wrong, let us try to find the inverse Fourier transform of the above expression; by using the substitution e^{jω} = z and contour integration on the unit circle we have

\hat{u}[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{e^{j\omega n}}{1 - e^{-j\omega}}\, d\omega = \frac{1}{2\pi}\oint_C \frac{z^n}{1 - z^{-1}}\, \frac{dz}{jz}

Since there is a pole on the contour, we need to use Cauchy’s principal value theorem for the indented integration contour shown in Figure 4.27. For n ≥ 0 there are no poles other than the one in z = 1 and we can use the “half-residue” theorem to obtain

\oint_{C'} \frac{z^n}{z-1}\, dz = j\pi\,\bigl[\text{Residue at } z = 1\bigr] = j\pi

so that

\hat{u}[n] = \frac{1}{2} \qquad \text{for } n \ge 0

For n < 0 there is a (multiple) pole in the origin; with the change of variable v = z^{-1} we have

\oint_{C'} \frac{z^n}{z-1}\, dz = \oint_{C''} \frac{v^{-(n+1)}}{1 - v}\, dv

where C′′ is the same contour as C′ but oriented clockwise. Because of this inversion it is

\hat{u}[n] = -\frac{1}{2} \qquad \text{for } n < 0

In conclusion

\mathrm{DTFT}^{-1}\left\{ \frac{1}{1 - e^{-j\omega}} \right\} = \hat{u}[n] = \begin{cases} +1/2 & \text{for } n \ge 0 \\ -1/2 & \text{for } n < 0 \end{cases}

But this is almost good! Indeed,

u[n] = \hat{u}[n] + \frac{1}{2}

so that finally the DTFT of the unit step is

U(e^{j\omega}) = \frac{1}{1 - e^{-j\omega}} + \frac{1}{2}\,\tilde{\delta}(\omega)    (4.91)

and its magnitude is sketched in Figure 4.28.



Figure 4.27: Indented integration contour.



Figure 4.28: Magnitude spectrum for the unit step.

4.11 Further Reading

A nice engineering book on Fourier theory is The Fourier Transform and Its Applications, by R. Bracewell (McGraw-Hill, 1999). A more mathematically oriented textbook is Fourier Analysis, by T. W. Korner (Cambridge University Press, 1989), as is P. Bremaud’s book, Mathematical Principles of Signal Processing (Springer, 2002).

4.12 Exercises

Exercise 4.1: DFT of elementary functions.  Derive the formula for the DFT of the length-N signal x[n] = cos((2π∕N)Ln + φ).

Exercise 4.2: Real DFT.  Compute the DFT of the length-4 signal x[n] = {a,b,c,d}. For which values of a,b,c,d is the DFT real?

Exercise 4.3: Limits.  What is the value of the limit

\lim_{N\to\infty}\; \sum_{n=-N}^{N} \cos(\omega_0 n)

(in a signal processing sense)?

Exercise 4.4: Estimating the DFT graphically.  Consider a length-64 signal x[n] which is the sum of the three sinusoidal signals plotted in the following Figure (the signals are plotted as continuous lines just for clarity). Compute the DFT coefficients X[k], k = 0, 1, …, 63.




Exercise 4.5: The structure of DFT formulas.  Consider a length-N signal x[n], n = 0, …, N − 1; what is the length-N signal y[n] obtained as

y[n] = \mathrm{DFT}\bigl\{ \mathrm{DFT}\{x[n]\} \bigr\}

(i.e. by applying the DFT algorithm twice in a row)?

Exercise 4.6: Two DFTs for the price of one.  When you compute the DFT of a length-N real signal, your data contains N real values while the DFT vector contains N complex values: there is clearly a redundancy of a factor of two in the DFT vector, which is apparent when you consider its Hermitian symmetry (i.e. X[k] = X*[N − k]). You can exploit this fact to compute the DFT of two real length-N signals for the price of one. This is useful if you have a pre-packaged FFT algorithm which you want to apply to real data.

Assume x[n] and y[n] are two length-N real signals. Build a complex signal c[n] = x[n] + jy[n] and compute its DFT C[k], k = 0, 1, …, N − 1. Show that

X[k] = \frac{1}{2}\bigl(C[k] + C^*[N-k]\bigr), \qquad Y[k] = \frac{1}{2j}\bigl(C[k] - C^*[N-k]\bigr), \qquad k = 0, 1, \ldots, N-1

where X[k] and Y [k] are the DFTs of x[n] and y[n], respectively.

Exercise 4.7: The Plancherel-Parseval equality.  Let x[n] and y[n] be two complex-valued sequences and X(e^{jω}) and Y(e^{jω}) their corresponding DTFTs.

  1. Show that
    \bigl\langle x[n],\, y[n] \bigr\rangle = \frac{1}{2\pi}\,\bigl\langle X(e^{j\omega}),\, Y(e^{j\omega}) \bigr\rangle

    where we use the inner products for ℓ2(Z) and L2([−π, π]), respectively.

  2. What is the physical meaning of the above formula when x[n] = y[n]?

Exercise 4.8: Numerical computation of the DFT.  Consider the signal x[n] = cos(2πf0 n). Compute and draw the DFT of the signal in N = 128 points, for

  1. f0 = 21∕128
  2. f0 = 21∕127

You can use any numerical package to do this. Explain the differences that we can see in these two spectra.

Exercise 4.9: DTFT vs. DFT.  Consider the following infinite, non-periodic, discrete-time signal:

x[n] = \begin{cases} 0 & n < 0 \\ 1 & 0 \le n < a \\ 0 & n \ge a \end{cases}

  1. Compute its DTFT X(e^{jω}).

We want to visualize the magnitude of X(e^{jω}) using a numerical package (for instance, Matlab). Most numerical packages cannot handle functions of a continuous variable such as X(e^{jω}); therefore we need to consider only a finite number of points for the spectrum.

  2. Plot 10,000 points of one period of |X(e^{jω})| (from 0 to 2π) for a = 20.

The DTFT is mostly a theoretical analysis tool and, in many cases, we will compute the DFT instead. Moreover, for obvious reasons, numerical computation programs such as Matlab only compute the DFT. Recall that in Matlab we use the Fast Fourier Transform (FFT), an efficient algorithm to compute the DFT.

  3. Generate a finite sequence x1[n] of length N = 30 such that x1[n] = x[n] for n = 1, …, N. Compute its DFT and plot its magnitude. Compare it with the plot obtained in point 2.
  4. Repeat now for different values of N = 50, 100, 1000. What can you conclude?


Chapter 5
Discrete-Time Filters

The previous Chapters gave us a thorough overview of both the nature of discrete-time signals and the tools used in analyzing their properties. In the next few Chapters, we will study the fundamental building block of any digital signal processing system, that is, the linear filter. In the discrete-time world, filters are nothing but procedures which store and mathematically manipulate the numerical samples appearing at their input and their output; in other words, any discrete-time filter can be described procedurally in the form of an algorithm. In the special case of linear and time-invariant filters, such an algorithm can be concisely described mathematically by a constant-coefficient difference equation.

5.1 Linear Time-Invariant Systems

In its most general form, a discrete-time system can be described as a black box accepting a number of discrete-time sequences as inputs and producing another number of discrete-time sequences at its output.

In this book we are interested in studying the class of linear time-invariant (LTI) discrete-time systems with a single input and a single output; a system of this type is referred to as a filter. A linear time-invariant system H can thus be viewed as an operator which transforms an input sequence into an output sequence:

y[n] = H\{x[n]\}



Figure 5.1: A single-input, single-output discrete-time system (black-box view).

Linearity is expressed by the equivalence

H\bigl\{\alpha x_1[n] + \beta x_2[n]\bigr\} = \alpha H\bigl\{x_1[n]\bigr\} + \beta H\bigl\{x_2[n]\bigr\}    (5.1)

for any two sequences x1[n] and x2[n] and any two scalars α,β ∈ C. Time-invariance is expressed by

y[n] = H\bigl\{x[n]\bigr\} \;\Longleftrightarrow\; H\bigl\{x[n-n_0]\bigr\} = y[n-n_0]    (5.2)

Linearity and time-invariance are very reasonable and “natural” requirements for a signal processing system. Imagine a recording system: linearity implies that a signal obtained by recording a violin and a piano playing together is the same as the sum of the signals obtained recording the violin and the piano separately (but in the same recording room). Multi-track recordings in music production are an application of this concept. Time invariance basically means that the system’s behavior is independent of the time the system is turned on. Again, to use a musical example, this means that a given digital recording played back by a digital player will sound the same, regardless of when it is played.

Yet, simple as these properties are, linearity and time-invariance taken together have an incredibly powerful consequence on a system’s behavior. Indeed, a linear time-invariant system turns out to be completely characterized by its response to the input x[n] = δ[n]. The sequence h[n] = H{δ[n]} is called the impulse response of the system and h[n] is all we need to know to determine the system’s output for any other input sequence. To see this, recall that for any sequence we can always write the canonical orthonormal expansion (i.e. the reproducing formula in (2.18))

x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n-k]

and therefore, if we let H{δ[n]} = h[n], we can apply (5.1) and (5.2) to obtain

y[n] = H\bigl\{x[n]\bigr\} = \sum_{k=-\infty}^{\infty} x[k]\, h[n-k]    (5.3)

5.2 Filtering in the Time Domain

The summation in (5.3) is called the convolution of sequences x[n] and h[n] and is denoted by the operator “*” so that (5.3) can be shorthanded to

y[n] = x[n] * h[n]

This is the general expression for a filtering operation in the discrete-time domain. To indicate a specific value of the convolution at a given time index n0, we may use the notation y[n0] = (x * h)[n0].
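For finite-support sequences the convolution sum can be computed directly; the C sketch below (the function name and calling convention are chosen for this example, and samples outside the given supports are taken to be zero) implements (5.3) for a length-Lx input and a length-Lh impulse response.

    /* Direct convolution y[n] = sum_k h[k] x[n-k] for finite-support sequences;
     * x has Lx samples, h has Lh samples, and y must have room for Lx + Lh - 1
     * output samples. */
    void convolve(const double *x, int Lx, const double *h, int Lh, double *y)
    {
        for (int n = 0; n < Lx + Lh - 1; n++) {
            y[n] = 0.0;
            for (int k = 0; k < Lh; k++)
                if (n - k >= 0 && n - k < Lx)
                    y[n] += h[k] * x[n - k];
        }
    }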

5.2.1 The Convolution Operator

Clearly, for the convolution of two sequences to exist, the sum in (5.3) must be finite and this is always the case if both sequences are absolutely summable. As in the case of the DTFT, absolute summability is just a sufficient condition and the sum (5.3) can be well defined in certain other cases as well.

Basic Properties. The convolution operator is easily shown to be linear and time-invariant (which is rather intuitive seeing as it describes the behavior of an LTI system):

x[n] * \bigl(\alpha\, y[n] + \beta\, w[n]\bigr) = \alpha\, x[n]*y[n] + \beta\, x[n]*w[n]    (5.4)

w[n] = x[n]*y[n] \;\Longleftrightarrow\; x[n]*y[n-k] = w[n-k]    (5.5)

The convolution is also commutative:

x[n]*y[n] = y[n]*x[n]    (5.6)

which is easily shown via a change of variable in (5.3). Finally, in the case of square summable sequences, it can be shown that the convolution is associative:

\bigl(x[n]*h[n]\bigr)*w[n] = x[n]*\bigl(h[n]*w[n]\bigr)    (5.7)

This last property describes the effect of connecting two filters H and W in cascade and it states that the resulting effect is that of a single filter whose impulse response is the convolution of the two original impulse responses. As a corollary, because of the commutative property, the order of the two filters in the cascade is completely irrelevant. More generally, a sequence of filtering operations can be performed in any order.

Please note that associativity does not necessarily hold for sequences which are not square-summable. A classic counterexample is the following: consider the three sequences

x[n] = u[n], the unit step;
y[n] = δ[n] − δ[n − 1], the first-difference operator;
w[n] = 1, a constant signal;

where clearly x[n], w[n] ∉ ℓ2(Z). It is easy to verify that

x[n] * (y[n] * w[n]) = 0
(x[n] * y[n]) * w[n] = 1

Convolution and Inner Product. It is immediately obvious that, for two sequences x[n] and h[n], we can write:

x[n]*h[n] = \bigl\langle h^*[n-k],\, x[k] \bigr\rangle

that is, the value at index n of the convolution of two sequences is the inner product (in 2(Z)) of the first sequence – conjugated,(1) time-reversed and re-centered at n – with the input sequence. The above expression describes the output of a filtering operation as a series of “localized” inner products; filtering, therefore, measures the time-localized similarity (in the inner product sense, i.e. in the sense of the correlation) between the input sequence and a prototype sequence (the time-reversed impulse response).

In general, the convolution operator for a signal is defined with respect to the inner product of its underlying Hilbert space. For the space of N-periodic sequences, for instance, the convolution is defined as

\tilde{x}[n] * \tilde{y}[n] = \sum_{k=0}^{N-1} \tilde{x}[k]\,\tilde{y}[n-k]    (5.8)
                            = \sum_{k=0}^{N-1} \tilde{x}[n-k]\,\tilde{y}[k]    (5.9)

which is consistent with the inner product definition in (3.55). We will also consider the convolution of DTFTs. In this case, since we are in the space of 2π-periodic functions of a real variable, the convolution is defined as

X(e^{j\omega}) * Y(e^{j\omega}) = \frac{1}{2\pi}\,\bigl\langle X^*(e^{j(\omega-\sigma)}),\, Y(e^{j\sigma}) \bigr\rangle    (5.10)
                                = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j(\omega-\sigma)})\, Y(e^{j\sigma})\, d\sigma    (5.11)
                                = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j\sigma})\, Y(e^{j(\omega-\sigma)})\, d\sigma    (5.12)

which is consistent with the inner product definition in (??).

5.2.2 Properties of the Impulse Response

As we said, an LTI system is completely described by its impulse response, i.e. by h[n] = H{δ[n]}.

FIR vs IIR. Since the impulse response is defined as the transformation of the discrete-time delta and since the delta is an infinite-length signal, the impulse response is always an infinite-length signal, i.e. a sequence. The nonzero values of the impulse response are usually called taps. Two distinct cases are possible: if only a finite number of taps are nonzero, the filter is called a Finite Impulse Response (FIR) filter; if the number of nonzero taps is infinite, the filter is called an Infinite Impulse Response (IIR) filter.

Note that in the case of FIR filters, the convolution operator entails only a finite number of sums and products; if h[n] = 0 for n < N and n ≥ M, we can invoke commutativity and rewrite (5.3) as

y[n] = \sum_{k=N}^{M-1} h[k]\, x[n-k]

Thus, convolution sums involving a finite-support impulse response are always well defined.

Causality. A system is called causal if its output does not depend on future values of the input. In practice, a causal system is the only type of “real-time” system we can actually implement, since knowledge of the future is normally not an option in real life. Yet, noncausal filters maintain a practical interest since in some application (usually called “batch processing”) we may have access to the entirety of a discrete-time signal, which has been previously stored on some form of memory support.(2) A filter whose output depends exclusively on future values of the input is called anticausal.

For an LTI system, causality implies that the associated impulse response is zero for negative indices; this is the only way to remove all “future” terms in the convolution sum (5.3). Similarly, for anticausal systems, the impulse response must be zero for all positive indices. Clearly, between the strict causal and anticausal extremes, we can have intermediate cases: consider for example a filter F whose impulse response is zero for n < -M with M ∈ N+. This filter is technically noncausal, but only in a “finite” way. If we consider the pure delay filter D, whose impulse response is

d[n] = δ[n - 1]

we can easily see that F can be made strictly causal by cascading M delays in front of it. Clearly, an FIR filter is always causal up to a delay.

Stability. A system is called bounded-input bounded-output stable (BIBO stable) if its output is bounded for all bounded input sequences. Again, stability is a very natural requirement for a filter, since it states that the output will not “blow up” when the input is reasonable. Linearity and time-invariance do not guarantee stability (as anyone who has ever used a hands-free phone has certainly experienced).

A bounded sequence x[n] is one for which it is possible to find a finite value L ∈ R+ so that |x[n]| < L for all n. A necessary and sufficient condition for an LTI system H to be BIBO stable is that its impulse response h[n] be absolutely summable. The sufficiency of the condition is proved as follows: if |x[n]| < L for all n, then we have

\bigl|y[n]\bigr| = \bigl|h[n]*x[n]\bigr| = \Bigl|\sum_{k=-\infty}^{\infty} h[k]\, x[n-k]\Bigr| \le \sum_{k=-\infty}^{\infty} \bigl|h[k]\, x[n-k]\bigr| \le L \sum_{k=-\infty}^{\infty} \bigl|h[k]\bigr|

and the last term is finite if h[n] is absolutely summable. Conversely, assume that h[n] is not absolutely summable and consider the signal

x[n] = \operatorname{sign}\bigl(h[-n]\bigr)

x[n] is clearly bounded, since it takes values only in {−1, 0, +1}, and yet

y[0] = (h * x)[0] = \sum_{k=-\infty}^{\infty} h[k]\, x[-k] = \sum_{k=-\infty}^{\infty} \bigl|h[k]\bigr| = \infty

Note that in the case of FIR filters, the convolution sum only involves a finite number of terms. As a consequence, FIR filters are always stable.

5.3 Filtering by Example – Time Domain

So far, we have described a filter from a very abstract point of view, and we have shown that a filtering operation corresponds to a convolution with a defining sequence called the impulse response. We now take a diametrically opposite standpoint: we introduce a very practical problem and arrive at a solution which defines an LTI system. Once we recognize that the solution is indeed a discrete-time filter, we will be able to make use of the theoretical results of the previous Sections in order to analyze its properties.

Consider a sequence like the one in Figure 5.2; we are clearly in the presence of a “smooth” signal corrupted by noise, which appears as little wiggles in the plot. Our goal is the removal of the noise, i.e. to smooth out the signal, in order to improve its readability.



Figure 5.2: Noisy signal.

5.3.1 FIR Filtering

An intuitive and basic approach to remove noise from data is to replace each point of the sequence x[n] by a local average, which can be obtained by taking the average of the sample at n and its N - 1 predecessors. Each point of the “de-noised” sequence can therefore be computed as

y[n] = \frac{1}{N}\sum_{k=0}^{N-1} x[n-k]    (5.13)

This is easily recognized as a convolution sum, and we can obtain the impulse response of the associated filter by letting x[n] = δ[n]; it is easy to see that

h[n] = \frac{1}{N}\sum_{k=0}^{N-1} \delta[n-k] = \begin{cases} 1/N & \text{for } 0 \le n < N \\ 0 & \text{for } n < 0 \text{ and } n \ge N \end{cases}    (5.14)

The impulse response, as it turns out, is a finite-support sequence, so the filter that we have just built is an FIR filter; this particular filter goes under the name of Moving Average (MA) filter. The “smoothing power” of this filter depends on the number of samples we take into account in the average or, in other words, on the length N of its impulse response. The filtered version of the original sequence for small and large values of N is plotted in Figures 5.3 and 5.4 respectively. Intuitively we can see that as N grows, more and more wiggles are removed. We will soon see how to handle the “smoothing power” of a filter in a precise, quantitative way. A general characteristic of FIR filters that should be noticed immediately is that the value of the output does not depend on values of the input which are more than N steps away; FIR filters are therefore finite-memory filters. Another aspect that we can mention at this point concerns the delay introduced by the filter: each output value is the average of a window of N input values whose representative sample is the one falling in the middle; thus, there is a delay of approximately N∕2 samples between input and output, and the delay grows with N.
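A direct C implementation of the moving average (a sketch with an illustrative name; input samples before the start of the signal are taken as zero) makes the computational cost of (5.13) explicit: each output sample requires N − 1 additions and one division.

    /* Length-N causal moving average, (5.13): each output sample is the mean of
     * the current input sample and its N-1 predecessors (zero before the start). */
    void moving_average(const double *x, int len, int N, double *y)
    {
        for (int n = 0; n < len; n++) {
            double acc = 0.0;
            for (int k = 0; k < N; k++)
                if (n - k >= 0)
                    acc += x[n - k];
            y[n] = acc / N;
        }
    }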



Figure 5.3: Moving averages for small values of N.



Figure 5.4: Moving averages for large values of N.

5.3.2 IIR Filtering

The moving average filter that we built in the previous Section has an obvious drawback; the more we want to smooth the signal, the more points we need to consider and, therefore, the more computations we have to perform to obtain the filtered value. Consider now the formula for the output of a length-M moving average filter:

y_M[n] = \frac{1}{M}\sum_{k=0}^{M-1} x[n-k]    (5.15)

We can easily see that

y_M[n] = \frac{M-1}{M}\, y_{M-1}[n-1] + \frac{1}{M}\, x[n] = \lambda\, y_{M-1}[n-1] + (1-\lambda)\, x[n]

where we have defined λ = (M − 1)∕M. Now, as M grows larger, we can safely assume that if we compute the average over M − 1 or over M points, the result is basically the same: in other words, for M large, we can say that y_{M−1}[n] ≈ y_M[n]. This suggests a new way to compute the smoothed version of a sequence in a recursive fashion:

y[n] = \lambda\, y[n-1] + (1-\lambda)\, x[n]    (5.16)

This no longer looks like a convolution sum; it is, instead, an instance of a constant coefficient difference equation. We might wonder whether the transformation realized by (5.16) is still linear and time-invariant and, in this case, what its impulse response is. The first problem that we face in addressing this question stems from the recursive nature of (5.16): each new output value depends on the previous output value. We need to somehow define a starting value for y[n] or, in system theory parlance, we need to set the initial conditions. The choice which guarantees that the system defined by (5.16) is linear and time-invariant corresponds to the requirement that the system response to a sequence identically zero, be zero for all n; this requirement is also known as zero initial conditions, since it corresponds to setting y[n] = 0 for n < N0 where N0 is some time in the past.



Figure 5.5: Moving averages for different values of λ.



Figure 5.6: Leaky integrator outputs for values of λ close to one.

The linearity of (5.16) can now be proved in the following way : assume that the output sequence for the system defined by (5.16) is y[n] when the input is x[n]. It is immediately obvious that y1[n] = αy[n] satisfies (5.16) for an input equal to αx[n]. All we need to prove is that this is the only solution. Assume this is not the case and call y2[n] the other solution; we have

y1[n] = λy1[n - 1] + (1 - λ)(αx[n])
y2[n] = λy2[n - 1] + (1 - λ)(αx[n])
We can now subtract the second equation from the first. What we find is that the sequence y1[n] - y2[n] is the system’s response to the zero sequence, and therefore is zero for all n. Linearity with respect to the sum and time invariance can be proven in exactly the same way.

Now that we know that (5.16) defines an LTI system, we can try to compute its impulse response. Assuming zero initial conditions and x[n] = δ[n], we have

y[n] = 0 \qquad \text{for } n < 0
y[0] = 1 - \lambda
y[1] = (1 - \lambda)\lambda
y[2] = (1 - \lambda)\lambda^2
\;\;\vdots
y[n] = (1 - \lambda)\lambda^n    (5.17)

so that the impulse response (shown in Figure 5.7) is

h[n] = (1-\lambda)\,\lambda^n\, u[n]    (5.18)

The impulse response clearly defines an IIR filter and therefore the immediate question is whether the filter is stable. Since a sufficient condition for stability is that the impulse response is absolutely summable, we have

\sum_{n=-\infty}^{\infty} \bigl|h[n]\bigr| = \lim_{n\to\infty} |1-\lambda|\,\frac{1 - |\lambda|^{n+1}}{1 - |\lambda|}    (5.19)

We can see that the above limit is finite for |λ| < 1 and so the system is BIBO stable for these values. The value of λ (which is, as we will see, the pole of the system) determines the smoothing power of the filter (Fig. 5.5). As λ → 1, the input is smoothed more and more, as can be seen in Figure 5.6, at a constant computational cost. The system implemented by (5.16) is often called a leaky integrator, in the sense that it approximates the behavior of an integrator with a leakage (or forgetting) factor λ. The delay introduced by the leaky integrator is more difficult to analyze than for the moving average but, again, it grows with the smoothing power of the filter; we will soon see how to proceed in order to quantify the delay introduced by IIR filters.



Figure 5.7: Impulse response (portion) of the leaky integrator for λ = 0.9.

As we can infer from this simple analysis, IIR filters are much more delicate entities than FIR filters; in the next Chapters we will also discover that their design is much less straightforward and offers less flexibility. This is why, in practice, FIR filters are the filters of choice. IIR filters, however, and especially the simplest ones such as the leaky integrator, are extremely attractive when computational power is a scarce resource.
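The operational simplicity of the leaky integrator is apparent in code; the C sketch below (illustrative name, zero initial conditions) implements the recursion (5.16) with a single state variable, so that the cost per output sample does not depend on the smoothing power set by λ.

    /* Leaky integrator, (5.16): y[n] = lambda*y[n-1] + (1-lambda)*x[n], with zero
     * initial conditions (y[n] = 0 for n < 0).  BIBO stable for |lambda| < 1.   */
    void leaky_integrator(const double *x, int len, double lambda, double *y)
    {
        double state = 0.0;                 /* holds y[n-1]; zero initial condition */
        for (int n = 0; n < len; n++) {
            state = lambda * state + (1.0 - lambda) * x[n];
            y[n] = state;
        }
    }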

5.4 Filtering in the Frequency Domain

The above examples have introduced the notion of filtering in an operational and intuitive way. In order to make more precise statements on the characteristics of a discrete-time filter we need to move to the frequency domain. What does a filtering operation translate to in the frequency domain? The fundamental result of this Section is the convolution theorem for discrete-time signals: a convolution in the discrete-time domain is equivalent to a multiplication of Fourier transforms in the frequency domain. This result opens up a very fruitful perspective on filtering and filter design, together with alternative approaches to the implementation of filtering devices, as we will see momentarily.

5.4.1 LTI “Eigenfunctions”

Consider the case of a complex exponential sequence of frequency ω0 as the input to a linear time-invariant system H; we have

H\bigl\{e^{j\omega_0 n}\bigr\} = \sum_{k=-\infty}^{\infty} e^{j\omega_0 k}\, h[n-k] = \sum_{k=-\infty}^{\infty} h[k]\, e^{j\omega_0(n-k)} = e^{j\omega_0 n}\sum_{k=-\infty}^{\infty} h[k]\, e^{-j\omega_0 k} = H(e^{j\omega_0})\, e^{j\omega_0 n}    (5.20)

where H(e^{jω0}) (i.e. the DTFT of h[n] at ω = ω0) is called the frequency response of the filter at frequency ω0. The above result states the fundamental fact that complex exponentials are eigensequences(3) of linear time-invariant systems. We notice the following two properties: first, a filtered complex exponential remains a complex exponential at the same frequency ω0; second, the entire effect of the filter on the oscillation is captured by the single complex value H(e^{jω0}), whose magnitude scales the amplitude of the oscillation and whose phase offsets its initial phase.

5.4.2 The Convolution and Modulation Theorems

Consider two sequences x[n] and h[n], both absolutely summable. The discrete-time Fourier transform of the convolution y[n] = x[n] * h[n] is

Y(e^{j\omega}) = X(e^{j\omega})\, H(e^{j\omega})    (5.21)

The proof is as follows: if we take the DTFT of the convolution sum, we have

Y(e^{j\omega}) = \sum_{n=-\infty}^{\infty}\sum_{k=-\infty}^{\infty} x[k]\, h[n-k]\, e^{-j\omega n}

and by interchanging the order of summation (which can be done because of the absolute summability of both sequences) and by splitting the complex exponential, we obtain

Y(e^{j\omega}) = \sum_{k=-\infty}^{\infty} x[k]\, e^{-j\omega k} \sum_{n=-\infty}^{\infty} h[n-k]\, e^{-j\omega(n-k)}

from which the result immediately follows after a change of variable. Before discussing the implications of the theorem, we now state and prove its dual, which is called the modulation theorem.

Consider now the discrete-time sequences x[n] and w[n], both absolutely summable, with discrete-time Fourier transforms X(e^{jω}) and W(e^{jω}). The discrete-time Fourier transform of the product y[n] = x[n]w[n] is

Y(e^{j\omega}) = X(e^{j\omega}) * W(e^{j\omega})    (5.22)

where the DTFT convolution is the convolution operator for 2π-periodic functions defined in (5.12). This is easily proven as follows: we apply the DTFT inversion formula to the convolution of the two spectra:

\frac{1}{2\pi}\int_{-\pi}^{\pi} (X*Y)(e^{j\omega})\, e^{j\omega n}\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j(\omega-\sigma)})\, Y(e^{j\sigma})\, e^{j\omega n}\, d\sigma\, d\omega

and we split the last integral to obtain

\left( \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j(\omega-\sigma)})\, e^{j(\omega-\sigma)n}\, d\omega \right) \left( \frac{1}{2\pi}\int_{-\pi}^{\pi} Y(e^{j\sigma})\, e^{j\sigma n}\, d\sigma \right) = x[n]\, y[n]

These fundamental results are summarized in Table 5.1.


Table 5.1: The convolution and modulation theorems.

Time Domain          Frequency Domain
x[n] * y[n]          X(e^{jω}) Y(e^{jω})
x[n] y[n]            X(e^{jω}) * Y(e^{jω})



5.4.3 Properties of the Frequency Response

Since an LTI system is completely characterized by its impulse response, it is also uniquely characterized by its frequency response. The frequency response provides us with a different perspective on the properties of a given filter, which are embedded in the magnitude and the phase of the response.

Just as the impulse response completely characterizes a filter in the discrete-time domain, its Fourier transform, called the filter’s frequency response, completely characterizes the filter in the frequency domain; the magnitude and the phase of the frequency response each control different features of the system’s behavior.

Magnitude. The most powerful intuition arising from the convolution theorem is obtained by considering the magnitude of the spectra involved in a filtering operation. Recall that a Fourier spectrum represents the energy distribution of a signal in frequency; by appropriately “shaping” the magnitude spectrum of a filter’s impulse response we can easily boost, attenuate, and even completely eliminate, a given part of the frequency content in the filtered input sequence. According to the way the magnitude spectrum is affected by the filter, we can classify filters into three broad categories (here as before we assume that the impulse response is real, and therefore the associated magnitude spectrum is symmetric; in addition, the 2π periodicity of the spectrum is implicitly understood): lowpass filters, which let through the frequency content below a cutoff frequency and attenuate or eliminate the content above it; highpass filters, which conversely let through only the content above the cutoff frequency; and bandpass filters, which let through only the content in a band of frequencies around a center frequency.

The frequency interval (or intervals) for which the magnitude of the frequency response is zero (or practically negligible) is called the stopband. Conversely, the frequency interval (or intervals) for which the magnitude is non-negligible is called the passband.

Phase. The phase response of a filter has an equally important effect on the output signal, even though its impact is less intuitive.

By and large, the phase response acts as a generalized delay. Consider Equation (5.20) once more; we can see that a single sinusoidal oscillation undergoes a phase shift equal to the phase of the impulse response’s Fourier transform. A phase offset for a sinusoid is equivalent to a delay in the time domain. This is immediately obvious in the case of a trigonometric function defined on the real line since we can always write

\cos(\omega t + \varphi) = \cos\bigl(\omega(t - t_0)\bigr), \qquad t_0 = -\frac{\varphi}{\omega}

For discrete-time sinusoids, it is not always possible to express the phase offset in terms of an integer number of samples (exactly for the same reasons for which a discrete- time sinusoid is not always periodic in its index n); yet the effect is the same, in that a phase offset corresponds to an implicit delay of the sinusoid. When the phase offset for a complex exponential is not an integer multiple of its frequency, we say that we are in the presence of a fractional delay. Now, since each sinusoidal component of the input signal may be delayed by an arbitrary amount, the output signal will be composed of sinusoids whose relative alignment may be very different from the original. Phase alignment determines the shape of the signal in the time domain, as we have seen in Section 4.7.4. A filter with unit magnitude across the spectrum, which does not affect the amplitude of the sinusoidal components, but whose phase response is not linear, can completely change the shape of a filtered signal.(5)

Linear Phase. A very important type of phase response is linear phase:

\angle H(e^{j\omega}) = -\omega d    (5.23)

Consider a simple system which just delays its input, i.e. y[n] = x[n − D] with D ∈ Z; this is obviously an LTI system with impulse response h[n] = δ[n − D] and frequency response H(e^{jω}) = e^{−jωD}. This means that, if the value d in (5.23) is an integer and the magnitude response is identically one, (5.23) describes a pure delay system; since the magnitude is constant and equal to one, this is an example of an allpass filter. If d is not an integer, (5.23), together with a unit magnitude, still defines an allpass delay system for which the delay is fractional, and we should interpret its effect as explained in the previous Section. In particular, if we think of the original signal in terms of its Fourier reconstruction formula, the fractionally delayed output is obtained by stepping forward the initial phase of all oscillators by a non-integer multiple of the frequency. In the discrete-time domain, we have a signal which takes values “between” the original samples but, since the relative phase of any one oscillator with respect to the others has remained the same as in the original signal, the shape of the signal in the time domain is unchanged.

For a general filter with linear phase we can always write

H(e^{j\omega}) = \bigl|H(e^{j\omega})\bigr|\, e^{-j\omega d}

In other words, the net effect of the linear phase filter is that of a cascade of two systems: a zero-phase filter which affects only the spectral magnitude of the input and therefore introduces no phase distortion, followed by a (possibly fractional) delay system (which, again, introduces just a delay but no phase distortion).

Group Delay. When a filter does not have linear phase, it is important to quantify the phase distortion, both in amount and in location. Nonlinear phase is not always a problem; if a filter’s phase is nonlinear just in the stopband, for instance, the actual phase distortion is negligible. The concept of group delay is a measure of the nonlinearity of the phase; the idea is to express the phase response around any given frequency ω0 using a first-order Taylor approximation. Define φ(ω) = ∡H(e^{jω}) and approximate φ(ω) around ω0 as φ(ω0 + τ) ≈ φ(ω0) + τφ′(ω0); we can write

H\bigl(e^{j(\omega_0+\tau)}\bigr) = \bigl|H\bigl(e^{j(\omega_0+\tau)}\bigr)\bigr|\, e^{j\varphi(\omega_0+\tau)} \approx \bigl|H\bigl(e^{j(\omega_0+\tau)}\bigr)\bigr|\, e^{j\varphi(\omega_0)}\, e^{j\varphi'(\omega_0)\tau}    (5.24)

so that, approximately, the frequency response of the filter is linear phase for at least a group of frequencies around a given ω0. The delay for this group of frequencies is the negative of the derivative of the phase, from which the definition of group delay is

\operatorname{grd}\bigl\{H(e^{j\omega})\bigr\} = -\varphi'(\omega) = -\frac{d\,\angle H(e^{j\omega})}{d\omega}    (5.25)

For truly linear phase systems, the group delay is a constant. Deviations from a constant value quantify the amount of phase distortion introduced by a filter in terms of the (possibly non-integer) number of samples by which a frequency component is delayed.
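As a quick check of (5.25), consider the pure delay system of the previous Section, H(e^{jω}) = e^{-jωD}, whose phase is φ(ω) = −ωD:

\operatorname{grd}\bigl\{H(e^{j\omega})\bigr\} = -\varphi'(\omega) = -\frac{d}{d\omega}\,(-\omega D) = D

for all ω: every frequency component is delayed by the same number of samples, which is precisely the behavior expected of a truly linear phase system.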

5.5 Filtering by Example – Frequency Domain

Now that we know what to look for in a filter, we can revisit the “empirical” de-noising filters introduced in Section 5.3. Both filters are realizable, in the sense that they can be implemented with practical and efficient algorithms, as we will study in the next Chapters. Their frequency responses allow us to qualify and quantify precisely the smoothing properties which we previously described in an intuitive fashion.

Moving Average. The frequency response of the moving average filter (Sect. 5.3.1) can be shown to be

H(e^{j\omega}) = \frac{1}{N}\,\frac{\sin(\omega N/2)}{\sin(\omega/2)}\, e^{-j\frac{N-1}{2}\omega}    (5.26)

In the above expression, it is easy to separate the magnitude and the phase, which are plotted in Figure 5.8. The group delay for the filter is the constant (N − 1)∕2, which means that the filter delays its output by (N − 1)∕2 samples (i.e. there is a fractional delay for N even). This formalizes the intuition that the “representative sample” for an averaging window of N samples is the sample in the middle. If N is even, this does not correspond to an actual sample but to a “ghost” sample in the middle.



Figure 5.8: Magnitude and phase response of the moving average filter for N = 8.

Leaky Integrator. The frequency response of the leaky integrator in Section 5.3.2 is

H(e^{j\omega}) = \frac{1 - \lambda}{1 - \lambda\, e^{-j\omega}}    (5.27)

Magnitude and phase are, respectively,

\bigl|H(e^{j\omega})\bigr|^2 = \frac{(1-\lambda)^2}{1 + \lambda^2 - 2\lambda\cos\omega}    (5.28)

\angle H(e^{j\omega}) = -\arctan\left[ \frac{\lambda\sin\omega}{1 - \lambda\cos\omega} \right]    (5.29)

and they are plotted in Figure 5.9. The group delay, also plotted in Figure 5.9, is obtained by differentiating the phase response:

\operatorname{grd}\bigl\{H(e^{j\omega})\bigr\} = \frac{\lambda\cos\omega - \lambda^2}{1 + \lambda^2 - 2\lambda\cos\omega}    (5.30)

The group delay indicates that, for the frequencies for which the magnitude is not very small, the delay increases with the smoothing power of the filter.

Note that, according to the classification in Section 5.4.3, both the moving average and the leaky integrator are lowpass filters.



Figure 5.9: Magnitude and phase response of the leaky integrator for λ = 0.9.

5.6 Ideal Filters

The frequency characterization introduced in Section 5.4.3 immediately leads to questions such as “What is the best lowpass filter?” or “Can I have a highpass filter with zero delay?” It turns out that the answers to such questions are given by ideal filters. Ideal filters are what the (Platonic) name suggests: theoretical abstractions which capture the essence of the basic filtering operation but which are not realizable in practice. In a way, they are the “gold standard” of filter design.

Ideal Lowpass. The ideal lowpass filter is a filter which “kills” all frequency content above a cutoff frequency ωc and leaves all frequency content below ωc untouched; it is defined in the frequency domain as

H_{lp}(e^{j\omega}) = \begin{cases} 1 & |\omega| \le \omega_c \\ 0 & \omega_c < |\omega| \le \pi \end{cases}    (5.31)

and clearly, the filter has zero phase delay. The ideal lowpass can also be defined in terms of its bandwidth ωb = 2ωc. The DTFT inversion formula gives the corresponding impulse response:

h_{lp}[n] = \frac{\sin(\omega_c n)}{\pi n}    (5.32)

The impulse response is a symmetric infinite sequence and the filter is therefore IIR; unfortunately, however, it can be proved that no realizable system (i.e. no algorithm with a finite number of operations per output sample) can exactly implement the above impulse response. More bad news: the decay of the impulse response is slow, going to zero only as 1∕n, and it is not absolutely summable; this means that any FIR approximation of the ideal lowpass obtained by truncating h[n] needs a lot of samples to achieve some accuracy and that, in any case, convergence to the ideal frequency response is only in the mean square sense. An immediate consequence of these facts is that, when designing realizable filters, we will take an entirely different approach.
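To make the last two points concrete, the short C sketch below (illustrative name; no windowing, which is a refinement discussed in the context of filter design) builds a (2K+1)-tap FIR approximation of the ideal lowpass by simply truncating (5.32); the resulting filter is noncausal and must be delayed by K samples to be made causal, and its frequency response converges to (5.31) only in the mean square sense.

    #include <math.h>

    /* Truncated-sinc approximation of the ideal lowpass with cutoff wc (rad/sample):
     * h[n] = sin(wc n)/(pi n) for -K <= n <= K, stored in h[0..2K]; the value at
     * n = 0 is wc/pi. */
    void truncated_sinc(double wc, int K, double *h)
    {
        for (int n = -K; n <= K; n++)
            h[n + K] = (n == 0) ? wc / M_PI : sin(wc * n) / (M_PI * n);
    }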



Figure 5.10: Frequency response of the ideal lowpass filter, ωc = π∕3.



Figure 5.11: Impulse response (portion) of the ideal lowpass filter, ωc = π∕3.

Despite these practical difficulties, the ideal lowpass and its associated DTFT pair are so important as a theoretical paradigm that two special function names are used to denote the above expressions. These are defined as follows:

\mathrm{rect}(x) = \begin{cases} 1 & |x| \le 1/2 \\ 0 & |x| > 1/2 \end{cases}    (5.33)

\mathrm{sinc}(x) = \begin{cases} \dfrac{\sin(\pi x)}{\pi x} & x \ne 0 \\ 1 & x = 0 \end{cases}    (5.34)
Note that the sinc function is zero for all integer values of the argument except zero. With this notation, and with respect to the bandwidth of the filter, the ideal lowpass filter’s frequency response between -π and π becomes
H_{lp}(e^{j\omega}) = \mathrm{rect}\!\left(\frac{\omega}{\omega_b}\right)    (5.35)

(obviously 2π-periodized over all R). Its impulse response in terms of bandwidth becomes

h_{lp}[n] = \frac{\omega_b}{2\pi}\,\mathrm{sinc}\!\left(\frac{\omega_b}{2\pi}\, n\right)    (5.36)

or, in terms of cutoff frequency,

h_{lp}[n] = \frac{\omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{\omega_c}{\pi}\, n\right)    (5.37)

The DTFT pair:

\frac{\omega_b}{2\pi}\,\mathrm{sinc}\!\left(\frac{\omega_b}{2\pi}\, n\right) \;\overset{\mathrm{DTFT}}{\longleftrightarrow}\; \mathrm{rect}\!\left(\frac{\omega}{\omega_b}\right)    (5.38)

constitutes one of the fundamental relationships of digital signal processing. Note that as ωb → 2π, we re-obtain the well-known DTFT pair δ[n] ↔ 1, while as ωb → 0 we can re-normalize by (2π∕ωb) to obtain 1 ↔ δ̃(ω).

Ideal Highpass. The ideal highpass filter with cutoff frequency ωc is the complementary filter to the ideal lowpass, in the sense that it eliminates all frequency content below the cutoff frequency. Its frequency response is

H_{hp}(e^{j\omega}) = \begin{cases} 0 & |\omega| \le \omega_c \\ 1 & \omega_c < |\omega| \le \pi \end{cases}    (5.39)

where the 2π-periodicity is, as usual, implicitly assumed. From the relation H_{hp}(e^{jω}) = 1 − rect(ω∕(2ωc)) the impulse response is easily obtained as

h_{hp}[n] = \delta[n] - \frac{\omega_c}{\pi}\,\mathrm{sinc}\!\left(\frac{\omega_c}{\pi}\, n\right)

Ideal Bandpass. The ideal bandpass filter with center frequency ω0 and bandwidth ωb, with ωb∕2 < ω0, is defined in the frequency domain between −π and π as

H_{bp}(e^{j\omega}) = \begin{cases} 1 & \omega_0 - \omega_b/2 \le \omega \le \omega_0 + \omega_b/2 \\ 1 & -\omega_0 - \omega_b/2 \le \omega \le -\omega_0 + \omega_b/2 \\ 0 & \text{elsewhere} \end{cases}    (5.40)

where the 2π-periodicity is, as usual, implicitly assumed. It is left as an exercise to prove that the impulse response is

h_{bp}[n] = 2\cos(\omega_0 n)\,\frac{\omega_b}{2\pi}\,\mathrm{sinc}\!\left(\frac{\omega_b}{2\pi}\, n\right)    (5.41)



Figure 5.12: Frequency and phase response of the Hilbert filter.



Figure 5.13: Impulse response (portion) of the Hilbert filter.

Hilbert Filter. The Hilbert filter is defined in the frequency domain as

H(e^{j\omega}) = \begin{cases} -j & 0 \le \omega < \pi \\ +j & -\pi \le \omega < 0 \end{cases}    (5.42)

where the 2π-periodicity is, as usual, implicitly assumed. Its impulse response is easily computed as

h[n] = \frac{2\sin^2(\pi n/2)}{\pi n} = \begin{cases} 0 & \text{for } n \text{ even} \\ \dfrac{2}{\pi n} & \text{for } n \text{ odd} \end{cases}    (5.43)

Clearly |H(e^{jω})| = 1, so this filter is allpass. It introduces a phase shift of ±π∕2 in the input signal (−π∕2 at positive frequencies, +π∕2 at negative frequencies) so that, for instance,

h[n] * \cos(\omega_0 n) = \sin(\omega_0 n)    (5.44)

as one can verify from (4.39) and (4.40). More generally, the Hilbert filter is used in communication systems to build efficient demodulation schemes, as we will see later. The fundamental concept is the following: consider a real signal x[n] and its DTFT X(e^{jω}); consider also the signal processed by the Hilbert filter, y[n] = h[n] * x[n]. Define the spectrum A(e^{jω}) as

A(e^{j\omega}) = \begin{cases} X(e^{j\omega}) & \text{for } 0 \le \omega < \pi \\ 0 & \text{for } -\pi \le \omega < 0 \end{cases}

i.e. A(e^{jω}) is the positive-frequency part of the spectrum of x[n]. Since x[n] is real, its DTFT has the symmetry X(e^{jω}) = X*(e^{−jω}) and therefore we can write

X(e^{j\omega}) = A^*(e^{-j\omega}) + A(e^{j\omega})

By separating the real and imaginary parts we can always write A(e^{jω}) = A_R(e^{jω}) + jA_I(e^{jω}) and so

X(e^{j\omega}) = A_R(e^{-j\omega}) - jA_I(e^{-j\omega}) + A_R(e^{j\omega}) + jA_I(e^{j\omega})

For the filtered signal, we know that Y(e^{jω}) = H(e^{jω})X(e^{jω}) and therefore

Y(e^{j\omega}) = jA_R(e^{-j\omega}) + A_I(e^{-j\omega}) - jA_R(e^{j\omega}) + A_I(e^{j\omega})

It is, thus, easy to see that

x[n] + jy[n] \;\overset{\mathrm{DTFT}}{\longleftrightarrow}\; 2A(e^{j\omega})    (5.45)

i.e. the spectrum of the signal a[n] = x[n] + jy[n] contains only the positive-frequency components of the original signal x[n]. The signal a[n] is called the analytic signal associated to x[n].

5.7 Realizable Filters

Contrary to ideal filters, realizable filters are LTI systems which can be implemented in practice; this means that there exists an algorithm which computes every output sample with a finite number of operations and using a finite amount of memory storage. Note that the impulse response of a realizable filter need not be finite-support; while FIR filters are clearly realizable we have seen at least one example of realizable IIR filter (i.e. the leaky integrator).

5.7.1 Constant-Coefficient Difference Equations

Let us consider (informally) the possible mathematical description of an LTI system, seen as a “machine” which takes one input sample at a time and produces a corresponding output sample. Linearity in the input-output relationship implies that the description can involve only linear operations, i.e. sums and multiplications by scalars. Time invariance implies that the scalars be constants. Finally, realizability implies that, inside the above mentioned “machine”, there can be only a finite number of adders and multipliers (and, correspondingly, a finite number of memory cells). Such a mathematical relationship goes under the name of constant-coefficient difference equation (CCDE).

In its most general form, a constant-coefficient difference equation defines a relationship between an input signal x[n] and an output signal y[n] as

\sum_{k=0}^{N-1} a_k\, y[n-k] = \sum_{k=0}^{M-1} b_k\, x[n-k]    (5.46)

In the rest of this book we restrict ourselves to the case in which all the coefficients ak and bk are real. Usually, a0 = 1, so that the above equation can easily be rearranged as

y[n] = \sum_{k=0}^{M-1} b_k\, x[n-k] - \sum_{k=1}^{N-1} a_k\, y[n-k]    (5.47)

Clearly, the above relation defines each output sample y[n] as a linear combination of past and present input values and past output values. However, it is easy to see that if a_{N−1} ≠ 0 we can, for instance, rearrange (5.46) as

y[n-N+1] = \sum_{k=0}^{M-1} b'_k\, x[n-k] - \sum_{k=0}^{N-2} a'_k\, y[n-k]

where a′_k = a_k∕a_{N−1} and b′_k = b_k∕a_{N−1}. With the change of variable m = n − N + 1, this becomes

y[m] = \sum_{k=N-M}^{N-1} b'_k\, x[m+k] - \sum_{k=1}^{N-1} a'_k\, y[m+k]    (5.48)

which shows that the difference equation can also be computed in another way, namely by expressing y[m] as a linear combination of future values of input and output. It is rather intuitive that the first approach defines a causal behavior, while the second approach is anticausal.

5.7.2 The Algorithmic Nature of CCDEs

Contrary to the differential equations used in the characterization of continuous-time systems, difference equations can be used directly to translate the transformation operated by the system into an explicit algorithmic form. To see this, and to gain a lot of insight into the properties of difference equations, it may be useful to consider a possible implementation of the system in (5.47), shown as a C code sample in Figure 5.14.


  
      extern double a[N];     // The a's coefficients (a[0] = 1)
      extern double b[M];     // The b's coefficients

      static double x[M];     // Delay line for x
      static double y[N];     // Delay line for y

      double GetOutput(double input)
      {
        int k;
        double out;

        // Shift delay line for x:
        for (k = M-1; k > 0; k--)
          x[k] = x[k-1];

        // Store the new input value x[n]:
        x[0] = input;

        // Shift delay line for y:
        for (k = N-1; k > 0; k--)
          y[k] = y[k-1];

        // Compute the new output value y[n] as in (5.47):
        out = 0;
        for (k = 0; k < M; k++)
          out += b[k] * x[k];
        for (k = 1; k < N; k++)
          out -= a[k] * y[k];

        // Store y[n] in the delay line and return it:
        return (y[0] = out);
      }
  

Figure 5.14: C code implementation of a generic CCDE.

It is easy to verify that the function implements (5.47) exactly, that it uses only a finite number of operations per output sample (of the order of M + N multiplications and additions) and that it requires only a finite amount of memory, namely the two delay lines holding the M most recent input values and the N most recent output values.

If we try to compile and execute the code, however, we immediately run into an initialization problem: the first time (actually, the first max(N,M - 1) times) we call the function, the delay lines which hold past values of x[n] and y[n] will contain undefined values. Most likely, the compiler will notice this condition and will print a warning message signaling that the static arrays have not been properly initialized. We are back to the problem of setting the initial conditions of the system. The choice which guarantees linearity and time invariance is called the zero initial conditions and corresponds to setting the delay lines to zero before starting the algorithm. This choice implies that the system response to the zero sequence is the zero sequence and, in this way, linearity and time invariance can be proven as in Section 5.3.2.

5.7.3 Filter Analysis and Design

CCDEs provide a powerful operational view of filtering; in very simple cases, such as in Section 5.3.2 or in the case of FIR filters, the impulse response (and therefore the frequency response) can be obtained directly from the filter’s equation. This is not the general case, however, and to analyze a generic realizable filter from its CCDE, we need to be able to easily derive the transfer function from the CCDE. Similarly, in order to design a realizable filter which meets a set of requirements, we need to devise a procedure which “tunes” the coefficients in the CCDE until the frequency response is satisfactory while preserving stability; in order to do this, again, we need a convenient tool to link the CCDE to the magnitude and phase response. This tool will be introduced in the next Chapter, and goes under the name of z-transform.

5.8 Examples

Example 5.1: Radio transmission

AM radio was one of the first forms of telecommunication and remains to this day a ubiquitous broadcast method due to the ease with which a robust receiver can be assembled. From the hardware point of view, an AM transmitter uses a transducer (i.e. a microphone) to convert sound into an electric signal, and then modulates this signal into a frequency band which corresponds to a region of the electromagnetic spectrum in which propagation is well-behaved (see also Section 12.1.1). An AM receiver simply performs the reverse steps. Here we can neglect the physics of transducers and of antennas and concentrate on an idealized digital AM transmitter.



Figure 5.15: AM modulation; original baseband signal (top panel); modulated bandpass signal (middle panel); bandpass signal with explicit spectral repetitions (bottom panel).

Modulation. Suppose x[n] is a real, discrete-time signal representing voice or music. Acoustic signals are a type of lowpass (or baseband) signal; while good for our ears (which are baseband receivers) baseband signals are not suitable for direct electromagnetic transmission since propagation in the baseband is poor and since occupancy of the same band would preclude the existence of multiple radio channels. We need to use modulation in order to shift a baseband signal in frequency and transform it into a bandpass signal prior to transmission. Modulation is accomplished by multiplying the baseband signal by an oscillatory carrier at a given center frequency; note that modulation is not a time-invariant operation. Consider the signal

y[n] = Re{ x[n] e^{jωc n} }

where ωc is the carrier frequency. This corresponds to a cosine modulation since

y[n] = x[n]cos(ωcn)

and (see (4.56)):

Y(e^{jω}) = (1/2) [ X(e^{j(ω-ωc)}) + X(e^{j(ω+ωc)}) ]

The complex signal c[n] = x[n] e^{jωc n} is called the complex bandpass signal and, while not transmissible in practice, it is a very useful intermediate representation of the modulated signal especially in the case of Hilbert demodulation.

Assume that the baseband signal has spectral support [-ωb/2, ωb/2] (i.e. assume that its energy is zero for |ω| > ωb/2); a common way to express this concept is to say that the bandwidth of the baseband signal is ωb. What is the maximum carrier frequency ωc that we can use to create a bandpass signal? If we look at the effect of modulation on the spectrum and we take into account its 2π-periodicity as in Figure 5.15 we can see that if we choose too large a modulation frequency the positive passband overlaps with the negative passband of the first repetition. Intuitively, we are trying to modulate too fast and we are falling back into the folding of frequencies larger than 2π which we have seen in Example 2.1. In our case the maximum frequency of the modulated signal is ωc + ωb/2. To avoid overlap with the first repetition of the spectrum, we must guarantee that:

ωc + ωb/2 < π
(5.49)

which limits the maximum carrier frequency to ωc < π - ωb/2.
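As a small C sketch of the modulation step (the signal length and the carrier value are illustrative assumptions, and ωc must satisfy (5.49)):

    #include <math.h>

    #define LEN 1024

    /* AM modulation: y[n] = x[n] cos(wc n) */
    void am_modulate(const double x[LEN], double y[LEN], double wc)
    {
        for (int n = 0; n < LEN; n++)
            y[n] = x[n] * cos(wc * n);
    }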

Demodulation. An AM receiver must undo the modulation process; again, assume we’re entirely in the discrete-time domain. The first step is to isolate the channel of interest by using a sharp bandpass filter centered on the modulation frequency ωc (see also Exercise 5.7). Neglecting the impairments introduced by the transmission (noise and distortion) we can initially assume that after filtering the receiver possesses an exact copy of the modulated signal y[n]. The original signal x[n] can be retrieved in several ways; an elegant scheme, for instance, is Hilbert demodulation. The idea behind Hilbert demodulation is to reconstruct the complex bandpass signal c[n] from y[n] as c[n] = y[n] + j(h[n] * y[n]) where h[n] is the impulse response of the Hilbert filter as given in (5.43). Once this is done, we multiply c[n] by the complex exponential e^{-jωc n} and take the real part. This demodulation scheme will be studied in more detail in Section 12.3.1.

A more classic scheme involves multiplying y[n] by a sinusoidal carrier at the same frequency as the carrier and filtering the result with a lowpass filter with cutoff at ωb/2. After the multiplication, the signal is

u[n] = y[n] cos(ωc n) = x[n] cos²(ωc n) = (1/2) x[n] + (1/2) x[n] cos(2ωc n)

and the corresponding spectrum is therefore

U(e^{jω}) = (1/2) X(e^{jω}) + (1/4) [ X(e^{j(ω+2ωc)}) + X(e^{j(ω-2ωc)}) ]



Figure 5.16: Spectrum of the AM demodulated signal u[n], with explicit repetitions (the light gray spectral components are the sidebands from the repetitions at ±4π); the dashed lowpass filter response selects the baseband signal.

This spectrum is shown in Figure 5.16, with explicit repetitions; note that if the maximum frequency condition in (5.49) is satisfied, the components at twice the carrier frequency that may leak into the [-π,π] interval from the neighboring spectral repetitions do not overlap with the baseband. From the figure, if we choose

H(e^{jω}) = { 2   |ω| < ωb/2
            { 0   otherwise

then X̂(e^{jω}) = H(e^{jω}) U(e^{jω}) = X(e^{jω}). The component at ω = 2ωc is filtered out and thus the spectrum of the demodulated signal is equal to the spectrum of the original signal. Of course, in practice, the ideal lowpass is replaced by a realizable IIR or FIR filter with adequate properties.
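The coherent demodulator can be sketched in C as follows; the ideal lowpass with gain 2 and cutoff ωb/2 is approximated here by a truncated-sinc FIR, and the signal length and filter length are illustrative assumptions.

    #include <math.h>

    #define LEN   1024
    #define NTAPS 101                       /* FIR approximation of the lowpass */

    void am_demodulate(const double y[LEN], double xhat[LEN],
                       double wc, double wb)
    {
        static const double PI = 3.141592653589793;
        double h[NTAPS], u[LEN];
        const int C = (NTAPS - 1) / 2;      /* center tap of the lowpass */

        /* truncated ideal lowpass, cutoff wb/2, passband gain 2 */
        for (int n = -C; n <= C; n++)
            h[n + C] = (n == 0) ? wb / PI : 2.0 * sin(0.5 * wb * n) / (PI * n);

        /* u[n] = y[n] cos(wc n) = x[n]/2 + x[n] cos(2 wc n)/2 */
        for (int n = 0; n < LEN; n++)
            u[n] = y[n] * cos(wc * n);

        /* FIR filtering with the zero-centered lowpass (edges zero-padded) */
        for (int n = 0; n < LEN; n++) {
            double acc = 0.0;
            for (int k = 0; k < NTAPS; k++) {
                int m = n + C - k;
                if (m >= 0 && m < LEN)
                    acc += h[k] * u[m];
            }
            xhat[n] = acc;
        }
    }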



Figure 5.17: Schematics for a galena radio receiver (from Gernsback’s book “Radio For All”, 1922).

Finally, for the fun of it, we can look at a “digital galena” demodulator. Galena receivers (whose general structure is shown in Figure 5.17) are the simplest (and oldest) type of radio receiver: the antenna and the tuning coil form a variable LC bandpass filter to select the band while a galena crystal, touched by a thin metal wire called the “cat’s whisker”, acts as a rectifying nonlinearity. A pair of high-impedance headphones is connected between the cat’s whisker and ground; the mechanical inertia of the headphones acts as a simple lowpass filter which completes the radio receiver. In a digital simulation of a galena receiver, the antenna and coil are replaced by our sharp digital bandpass filter, at the output of which we find y[n]. The rectified signal at the cat’s whisker can be modeled as yr[n] = |y[n]|; since yr[n] is positive, the integration realized by the crude lowpass in the headphone can reveal the baseband envelope and eliminate most of the high frequency content. The process is best understood in the time domain and is illustrated in Figure 5.18. Note that, spectrally, the qualitative effect of the nonlinearity is indeed to bring the bandpass component back to baseband; as for most nonlinear processing, however, no simple analytical form for the baseband spectrum is available.
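A time-domain sketch of the “digital galena” detector follows; the leaky integrator stands in for the crude mechanical lowpass of the headphones, and its coefficient is an arbitrary illustrative value.

    #include <math.h>

    #define LEN 1024

    void galena_detect(const double y[LEN], double env[LEN])
    {
        const double lambda = 0.95;     /* smoothing factor of the leaky integrator */
        double state = 0.0;

        for (int n = 0; n < LEN; n++) {
            double yr = fabs(y[n]);                       /* yr[n] = |y[n]|    */
            state = lambda * state + (1.0 - lambda) * yr; /* crude lowpass     */
            env[n] = state;                               /* baseband envelope */
        }
    }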

Example 5.2: Can IIRs see the future?

If we look at the bottom panel of Figure 5.9 we can notice that the group delay is negative for frequencies above approximately π∕7. Does that mean that we can look into the future?
(Hint: no.)



Figure 5.18: Galena demodulation in the time domain; original signal x[n] (top panel); modulated bandpass signal y[n] (second panel); rectified signal yr[n] (third panel); lowpass-filtered envelope at the headphones (bottom panel).

To see what we mean, consider the effect of group delay on a narrowband signal x[n] centered at ω0; a narrowband signal can be easily constructed by modulating a baseband signal s[n] (i.e. a signal so that S(e^{jω}) = 0 for |ω| > ωb and ωb very small). Set

x[n] = s[n]cos(ω0n)

and consider a real-valued filter H(e^{jω}) such that, for τ small, |H(e^{j(ω0+τ)})| ≈ 1 and the antisymmetric phase response is

∡H(e^{j(ω0+τ)}) = θ - gd τ,        ∡H(e^{-j(ω0+τ)}) = -θ + gd τ

clearly the group delay of the filter around ω0 is gd. If we filter the narrowband signal x[n] with H(e^{jω}), we can write the DTFT of the output for 0 ≤ ω < π as

Y(e^{jω}) = X(e^{jω}) e^{j(θ - gd(ω - ω0))}

since, even though the approximation for H(e^{jω}) holds only in a small neighborhood of ω0, X(e^{jω}) is zero everywhere else so “we don’t care”. If we write out the expression for the full spectrum we have

Y(e^{jω}) = (1/2) [ S(e^{j(ω-ω0)}) e^{j(θ - gd(ω-ω0))} + S(e^{j(ω+ω0)}) e^{-j(θ + gd(ω+ω0))} ] = (1/2) [ S(e^{j(ω-ω0)}) e^{jφ} + S(e^{j(ω+ω0)}) e^{-jφ} ] e^{-jgd ω}

where we have put φ = θ + gdω0. We can recognize by inspection that the first term is simply s[n] modulated by a cosine with a phase offset of φ; the trailing linear phase term is just a global delay. If we assume gd is an integer we can therefore write

y[n] = s[n- gd]cos(nω0 + θ)
(5.50)

so that the effect of the group delay is to delay the narrowband envelope by exactly gd samples. The analysis still holds even if gd is not an integer, as we will see in Chapter 9 when we deal with fractional delays.

Now, if gd is negative, (5.50) seems to imply that the envelope s[n] is advanced in time so that a filter with negative group delay is able to produce a copy of the input before the input is even applied; we would have a time machine which can look into the future! Clearly there must be something wrong, but the problem cannot be with the filter, since the leaky integrator is an example of a perfectly realizable filter with negative group delay in the stopband. In fact, the inconsistency lies with the hypothesis of having a perfectly narrowband signal: just like the impulse response of an ideal filter is necessarily an infinite two-sided sequence, so any perfectly narrowband signal cannot have an identifiable “beginning”. When we think of “applying” the input to the filter, we are implicitly assuming a one-sided (or, more likely, a finite-support) signal and this signal has nonzero spectral components at all frequencies. The net effect of these is that the overall delay for the signal will always be nonnegative.

5.9 Further Reading

Discrete-time filters are covered in all signal processing books, e.g. a good review is given in Discrete-Time Signal Processing, by A. V. Oppenheim and R. W. Schafer (Prentice-Hall, last edition in 1999).

5.10 Exercises

Exercise 5.1: Linearity and time-invariance – I.  Consider the transformation H{x[n]} = nx[n]. Does H define an LTI system?

Exercise 5.2: Linearity and time-invariance – II.  Consider a discrete-time system H{⋅}. When the input is the signal x[n] = cos((2π∕5)n), the output is H{x[n]} = sin((π∕2)n). Can the system be linear and time invariant? Explain.

Exercise 5.3: Finite-support convolution.  Consider the finite-support signal h[n] defined as

h[n] = { 1   for |n| ≤ M
       { 0   otherwise

  1. Compute the signal x[n] = h[n] * h[n] for M = 2 and sketch the result.
  2. Compute the DTFT of x[n], X(e^{jω}), and sketch its value over the interval [0, π].
  3. Give a qualitative description of how |X(e^{jω})| changes as M grows.
  4. Compute the signal y[n] = x[n] * h[n] for M = 2 and sketch the result. For a general M, what is the behavior of the sequence y[n]? (E.g. is it linear in n? Is it quadratic?)
  5. Compute Y(e^{jω}) and sketch its value.

Exercise 5.4: Convolution – I.  Let x[n] be a discrete-time sequence defined as

x[n] = { M - n   0 ≤ n ≤ M
       { M + n   -M ≤ n ≤ 0
       { 0       otherwise

for some odd integer M.

  1. Show that x[n] can be expressed as the convolution of two discrete-time sequences x1[n] and x2[n].
  2. Using the previous results, compute the DTFT of x[n].

Exercise 5.5: Convolution – II.  Consider the following discrete-time signals:

x[n] = cos(1.5 n)
y[n] = (1/5) sinc(n/5)

Compute the convolution (x[n])² * y[n].

Exercise 5.6: System properties.  Consider the following input-output relations and, for each of the underlying systems, determine whether the system is linear, time invariant, BIBO stable, causal or anti-causal. Characterize the eventual LTI systems by their impulse response.

  1. y[n] = x[-n].
  2. y[n] = e^{-jωn} x[n].
  3. y[n] = Σ_{k=n-n0}^{n+n0} x[k].
  4. y[n] = ny[n - 1] + x[n], such that if x[n] = 0 for n < n0, then y[n] = 0 for n < n0.

Exercise 5.7: Ideal filters.  Derive the impulse response of a bandpass filter with center frequency ω0 and passband ωb:

Hbp(e^{jω}) = { 1   ω0 - ωb/2 ≤ ω ≤ ω0 + ωb/2
              { 1   -ω0 - ωb/2 ≤ ω ≤ -ω0 + ωb/2
              { 0   elsewhere

(Hint: consider the following ingredients: a cosine of frequency ω0, a lowpass filter of bandwidth ωb and the modulation theorem.)

Exercise 5.8: Zero-phase filtering.  Consider an operator R which turns a sequence into its time-reversed version:

R{x[n]} = x[-n]

  1. The operator is clearly linear. Show that it is not time-invariant.

Suppose you have an LTI filter H with impulse response h[n] and you perform the following sequence of operations, in the following order:

s[n] = H{x[n]}
r[n] = R{s[n]}
w[n] = H{r[n]}
y[n] = R{w[n]}
  1. Show that the input-output relation between x[n] and y[n] is an LTI transformation.
  2. Give the frequency response of the equivalent filter realized by the series of transformations and show that it has zero phase.

Exercise 5.9: Nonlinear signal processing.  Consider the system H implementing the input-output relation y[n] = H{x[n]} = x2[n].

  1. Prove by example that the system is nonlinear.
  2. Prove that the system is time-invariant.

Now consider the following cascade system:


[Figure: cascade of the two systems, H followed by G]


where G is the following ideal highpass filter:

G(e^{jω}) = { 0   for |ω| < π/2
            { 2   otherwise

(as per usual, G(e^{jω}) is 2π-periodic, i.e. prolonged by periodicity outside of [-π, π]). The output of the cascade is therefore v[n] = G{H{x[n]}}.

  1. Compute v[n] when x[n] = cos(ω0n) for ω0 = 3π∕8. How would you describe the transformation operated by the cascade on the input?
  2. Compute v[n] as before, now with ω0 = 7π/8.

Exercise 5.10: Analytic signals and modulation.  In this exercise we explore a modulation-demodulation scheme commonly used in data transmission systems. Consider two real sequences x[n] and y[n], which represent two data streams that we want to transmit. Assume that their spectrum is of lowpass type, i.e. X(e^{jω}) = Y(e^{jω}) = 0 for |ω| > ωc. Consider further the following derived signal:

c[n] = x[n ]+ jy[n ]

and the modulated signal:

r[n] = c[n] e^{jω0 n},      ωc < ω0 < π - ωc

  1. Set ωc = π/6, ω0 = π/2 and sketch |R(e^{jω})| for whatever shapes you choose for X(e^{jω}), Y(e^{jω}). Verify from your plot that r[n] is an analytic signal.

The signal r[n] is called a complex bandpass signal. Of course it cannot be transmitted as such, since it is complex. The transmitted signal is, instead,

s[n] = Re{ r[n] }

This modulated signal is an example of Quadrature Amplitude Modulation (QAM).

  1. Write out the expression for s[n] in terms of x[n],y[n]. Now you can see the reason behind the term QAM, since we are modulating with two carriers in quadrature (i.e. out of phase by 90 degrees).

Now we want to recover x[n] and y[n] from s[n]. To do so, follow these steps:

  1. Show that s[n] + j(h[n] * s[n]) = r[n], where h[n] is the Hilbert filter. In other words, we have recovered the analytic signal r[n] from its real part only.
  2. Once you have r[n], show how to extract x[n] and y[n].


Chapter 6
The Z-Transform

Mathematically, the z-transform is a mapping between complex sequences and analytical functions on the complex plane. Given a discrete-time signal x[n], the z-transform of x[n] is formally defined as the complex function of a complex variable z ∈ C

X(z) = Z{x[n]} = Σ_{n=-∞}^{∞} x[n] z^{-n}
(6.1)

Contrary to the Fourier transform (as well as to other well-known transforms such as the Laplace transform or the wavelet transform), the z-transform is not an analysis tool per se, in that it does not offer a new physical insight on the nature of signals and systems. The z-transform, however, derives its status as a fundamental tool in digital signal processing from two key features:

Probably the best approach to the z-transform is to consider it as a clever mathematical transformation which facilitates the manipulation of complex sequences; for discrete-time filters, the z-transform bridges the algorithmic side (i.e. the CCDE) to the analytical side (i.e. the spectral properties) in an extremely elegant, convenient and ultimately beautiful way.

6.1 Filter Analysis

To see the usefulness of the z-transform in the context of the analysis and the design of realizable filters, it is sufficient to consider the following two formal properties of the z-transform operator: linearity, i.e. Z{αx[n] + βy[n]} = αX(z) + βY(z), and the time-shift property, i.e. Z{x[n - k]} = z^{-k} X(z).

In the above, we have conveniently ignored all convergence issues for the z-transform; these will be addressed shortly but, for the time being, let us just make use of the formalism as it stands.

6.1.1 Solving CCDEs

Consider the generic filter CCDE (Constant-Coefficient Difference Equation) in (5.46):

y[n] = Σ_{k=0}^{M-1} bk x[n-k] - Σ_{k=1}^{N-1} ak y[n-k]

If we apply the z-transform operator to both sides and exploit the linearity and time-shifting properties, we have

Y(z) = Σ_{k=0}^{M-1} bk z^{-k} X(z) - Σ_{k=1}^{N-1} ak z^{-k} Y(z)
(6.2)

Y(z) = [ Σ_{k=0}^{M-1} bk z^{-k} / ( 1 + Σ_{k=1}^{N-1} ak z^{-k} ) ] X(z)
(6.3)

Y(z) = H (z)X (z)
(6.4)

H(z) is called the transfer function of the LTI filter described by the CCDE. The following properties hold:

6.1.2 Causality

As we saw in Section 5.7.1, a CCDE can be rearranged to express either a causal or a noncausal realization of a filter. This ambiguity is reflected in the z-transform and can be made explicit by the following example. Consider the sequences

x1[n] = u[n] (6.6)
x2[n] = δ[n] - u[-n] (6.7)
where u[n] is the unit step. For the first sequence we have
X1(z) = Σ_{n=0}^{∞} z^{-n} = 1 / (1 - z^{-1})
(6.8)

(again, let us neglect convergence issues for the moment). For the second sequence we have

X2(z) = -Σ_{n=1}^{∞} z^{n} = 1 - 1/(1 - z) = 1 / (1 - z^{-1})
(6.9)

so that, at least formally, X1(z) = X2(z). In other words, the z-transform is not an invertible operator or, more precisely, it is invertible up to a causality specification. If we look more in detail, the sum in (6.8) converges only for |z| > 1 while the sum in (6.9) converges only for |z| < 1. This is actually a general fact: the values for which a z-transform exists define the causality or anticausality of the underlying sequence.

6.1.3 Region of Convergence

We are now ready to address the convergence issues that we have put aside so far. For any given sequence x[n], the set of points of the complex plane for which the sum Σ_n x[n] z^{-n} exists and is finite is called the region of convergence (ROC) of the z-transform. In order to study the properties of this region, it is useful to split the sum in (6.1) as

X(z) = Σ_{n=-N}^{-1} x[n] z^{-n} + Σ_{n=0}^{M} x[n] z^{-n}
(6.10)

X(z) = Σ_{n=1}^{N} x[-n] z^{n} + Σ_{n=0}^{M} x[n] z^{-n}
(6.11)

X (z) = Xa (z )+ Xc(z)
(6.12)

where N, M ≥ 0 and where both N and M can be infinity. Now, for X(z0) to exist and be finite, both power series Xa(z) and Xc(z) must converge in z0; since they are power series, when they do converge, they converge absolutely. As a consequence, for all practical purposes, we define the ROC in terms of absolute convergence:(1)

z ∈ ROC{X(z)}  ⟺  Σ_{n=-∞}^{∞} |x[n] z^{-n}| < ∞
(6.13)

Then the following properties are easily derived: the ROC of a causal sequence extends outward from a circle centered at the origin; the ROC of an anticausal sequence is the interior of a circle centered at the origin; and the ROC of a two-sided sequence, when it exists, is a ring-shaped (annular) region.



Figure 6.1: ROC shapes (hatched area): (a) causal sequence; (b) anticausal sequence.

6.1.4 ROC and System Stability

The z-transform provides us with a quick and easy way to test the stability of a linear system. Recall from Section 5.2.2 that a necessary and sufficient condition for an LTI system to be BIBO stable is the absolute summability of its impulse response. This is equivalent to saying that a system is BIBO stable if and only if the z-transform of its impulse response is absolutely convergent in |z| = 1. In other words, a system is BIBO stable if the ROC of its transfer function includes the unit circle.

6.1.5 ROC of Rational Transfer Functions
and Filter Stability

For rational transfer functions, the analysis of the ROC is quite simple; indeed, the only “trouble spots” for convergence are the values for which the denominator of (6.3) is zero. These values are called the poles of the transfer function and clearly they must lie outside of the ROC. As a consequence, we have an extremely quick and practical rule to determine the stability of a realizable filter.

Consider a causal filter: its ROC extends outward from the circle defined by its largest-magnitude pole p0, so that, in order for the ROC to include the unit circle (i.e. in order for the filter to be stable), all the poles must lie inside the unit circle.

For an anticausal system the procedure is symmetrical; once the smallest-magnitude pole p0 is known, the ROC will be a disk of radius |p0| and therefore, in order to have stability, all the poles will have to be outside of the unit circle.
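As a small illustration of the rule (not from the text), for a causal second-order section with denominator 1 + a1 z^{-1} + a2 z^{-2} the test reduces to computing the magnitudes of the two poles; the sketch below assumes real coefficients.

    #include <math.h>
    #include <stdbool.h>

    /* Causal stability test for the denominator 1 + a1 z^-1 + a2 z^-2:
       the poles are the roots of z^2 + a1 z + a2 and must all lie
       strictly inside the unit circle. */
    bool biquad_is_stable(double a1, double a2)
    {
        double disc = a1 * a1 - 4.0 * a2;
        if (disc < 0.0)
            return a2 < 1.0;   /* complex pair: |p|^2 = a2 (product of roots) */
        double r = sqrt(disc);
        double p1 = (-a1 + r) / 2.0;
        double p2 = (-a1 - r) / 2.0;
        return fabs(p1) < 1.0 && fabs(p2) < 1.0;
    }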

6.2 The Pole-Zero Plot

The rational transfer function derived in (6.3) can be written out explicitly in terms of the CCDEs coefficients, as follows:

H(z) = ( b0 + b1 z^{-1} + ··· + b_{M-1} z^{-(M-1)} ) / ( 1 + a1 z^{-1} + ··· + a_{N-1} z^{-(N-1)} )
(6.14)

The transfer function is the ratio of two polynomials in z-1 where the degree of the numerator polynomial is M - 1 and that of the denominator polynomial is N - 1. As a consequence, the transfer function can be rewritten in factored form as

H(z) = b0 [ Π_{n=1}^{M-1} (1 - zn z^{-1}) ] / [ Π_{n=1}^{N-1} (1 - pn z^{-1}) ]
(6.15)

where the zn are the M - 1 complex roots of the numerator polynomial and are called the zeros of the system; the pn are the N - 1 complex roots of the denominator polynomial and, as we have seen, they are called the poles of the system. Both poles and zeros can have arbitrary multiplicity. Clearly, if zi = pk for some i and k (i.e. if a pole and a zero coincide) the corresponding first-order factors cancel each other out and the degrees of numerator and denominator are both decreased by one. In general, it is assumed that such factors have already been removed and that the numerator and denominator polynomials of a given rational transfer function are coprime.

The poles and the zeros of a filter are usually represented graphically on the complex plane as crosses and dots, respectively. This allows for a quick visual assessment of stability which, for a causal system, consists of checking whether all the crosses are inside the unit circle (or, for anticausal systems, outside).

6.2.1 Pole-Zero Patterns

The pole-zero plot can exhibit distinctive patterns according to the properties of the filter.

Real-Valued Filters. If the filter coefficients are real-valued (and this is the only case that we consider in this textbook) both the numerator and denominator polynomials are going to have real-valued coefficients. We can now recall a fundamental result from complex algebra: the roots of a polynomial with real-valued coefficients are either real or they occur in complex-conjugate pairs. So, if z0 is a complex zero of the system, z0* is a zero as well. Similarly, if p0 is a complex pole, so is p0*. The pole-zero plot will therefore show a symmetry around the real axis (Fig. 6.2a).



Figure 6.2: Examples of pole-zero patterns: (a) real-valued IIR filter (note the symmetry around the x-axis); (b) linear phase FIR (each zero appears with its reciprocal).

Linear-Phase FIR Filters. First of all, note that the pole-zero plot for an FIR filter is actually just a zero plot, since FIR’s have no poles.(2) A particularly important case is that of linear phase FIR filters; as we will see in detail in Section 7.2.2, linear phase imposes some symmetry constraints on the CCDE coefficients (which, of course, coincide with the filter taps). These constraints have a remarkable consequence: if z0 is a (complex) zero of the system, 1∕z0 is a zero as well. Since we consider real-valued FIR filters exclusively, the presence of a complex zero in z0 implies the existence of three other zeros, namely in 1∕z0, z0* and 1∕z0* (Fig. 6.2b). See also the discussion in Section 7.2.2

6.2.2 Pole-Zero Cancellation

We have seen in Section 5.2.1 that the effect of a cascade of two or more filters is that of a single filter whose impulse response is the convolution of all of the filters’ impulse responses. By the convolution theorem, this means that the overall transfer function of a cascade of K filters Hi, i = 1, ..., K, is simply the product of the single transfer functions Hi(z):

H(z) = Π_{i=1}^{K} Hi(z)

If all filters are realizable, we can consider the factored form of each Hi(z) as in (6.15). In the product of transfer functions, it may happen that some of the poles of a given Hi(z) coincide with the zeros of another transfer function, which leads to a pole-zero cancellation in the overall transfer function. This is a method that can be used (at least theoretically) to stabilize an otherwise unstable filter. If one of the poles of the system (assuming causality) lies outside of the unit circle, this pole can be compensated by cascading an appropriate first- or second-order FIR section to the original filter. In practical realizations, care must be taken to make sure that the cancellation is not jeopardized by numerical precision problems.

6.2.3 Sketching the Transfer Function
from the Pole-Zero Plot

The pole-zero plot represents a convenient starting point in order to estimate the shape of the magnitude of a filter’s transfer function. The basic idea is to consider the absolute value of H(z), which is a three-dimensional plot (|H(z)| being a real function of a complex variable). To see what happens to |H(z)| it is useful to imagine a “rubber sheet” laid over the complex plane; then, at each pole the sheet is pushed up towards infinity while at each zero it is pinned down to touch the plane, so that the shape of |H(z)| is that of a very lopsided “circus tent”. The magnitude of the transfer function is just the height of this circus tent measured around the unit circle.

In practice, to sketch a transfer function (in magnitude) given the pole-zero plot, we proceed as follows. Let us start by considering the upper half of the unit circle, which maps to the [0, π] interval of the ω axis in the DTFT plot; for real-valued filters, the magnitude response is an even function and, therefore, the [-π, 0] interval need not be considered explicitly. Then (a small numerical sketch of the procedure is given after the list):

  1. Check for zeros on the unit circle; these correspond to points on the frequency axis in which the magnitude response is exactly zero.
  2. Draw a line from the origin of the complex plane to each pole and each zero. The point of intersection of each line with the unit circle gives the location of a local extremum for the magnitude response.
  3. The effect of each pole and each zero is made stronger by their proximity to the unit circle.
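The geometric recipe can also be checked numerically: from the factored form (6.15), the magnitude response at a frequency ω is |b0| times the product of the distances from e^{jω} to the zeros, divided by the product of the distances to the poles. The pole and zero values in the following sketch are arbitrary placeholders.

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.141592653589793;
        const double complex zeros[] = { -1.0 };    /* illustrative zero  */
        const double complex poles[] = {  0.65 };   /* illustrative pole  */
        const int NZ = 1, NP = 1;
        const double b0 = 1.0;

        for (int i = 0; i <= 16; i++) {
            double w = PI * i / 16.0;
            double complex ejw = cexp(I * w);
            double num = 1.0, den = 1.0;
            for (int k = 0; k < NZ; k++) num *= cabs(ejw - zeros[k]);
            for (int k = 0; k < NP; k++) den *= cabs(ejw - poles[k]);
            printf("w = %5.3f   |H| = %f\n", w, b0 * num / den);
        }
        return 0;
    }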


Figure 6.3: Sketch of the magnitude response for the pole-zero plot of Figure 6.2(a).



Figure 6.4: Sketch of the magnitude response for the pole-zero plot of Figure 6.2(b).

As an example, the magnitude responses of the pole-zero plots in Figure 6.2 are displayed in Figures 6.3 and 6.4.

6.3 Filtering by Example – Z-Transform

We will quickly revisit the examples of the previous chapter to show the versatility of the z-transform.



Figure 6.5: Pole-zero plots and ROC: (a) moving average filter with N = 8; (b) leaky integrator with λ = 0.65.

Moving Average. From the impulse response in (5.14), the transfer function of the moving average filter is

H(z) = (1/N) Σ_{k=0}^{N-1} z^{-k} = (1/N) (1 - z^{-N}) / (1 - z^{-1})
(6.16)

from which the frequency response (5.26) is easily derived by setting z = e^{jω}. It is easy to see that the zeros of the filter are at all the N-th roots of unity except for z = 1, where the numerator and denominator in (6.16) cancel each other out. A factored representation of the transfer function for the moving average is therefore

H(z) = (1/N) Π_{k=1}^{N-1} (1 - W_N^k z^{-1})
(6.17)

and the pole-zero plot (for N = 8) is shown in Figure 6.5(a). There being no poles, the filter is unconditionally stable.
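A quick numerical check (using illustrative values only) confirms that the closed form (6.16) vanishes at the nonzero roots of unity, which are the zeros shown in Figure 6.5(a):

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double PI = 3.141592653589793;
        const int N = 8;                            /* as in Figure 6.5(a) */

        for (int k = 0; k < N; k++) {
            double w = 2.0 * PI * k / N;            /* k-th root of unity  */
            double complex zi = cexp(-I * w);       /* z^-1 on |z| = 1     */
            double complex H = (k == 0)
                ? 1.0                               /* limit value at z = 1 */
                : (1.0 - cpow(zi, N)) / (1.0 - zi) / N;
            printf("w = %5.3f   |H| = %e\n", w, cabs(H));
        }
        return 0;
    }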

Leaky Integrator. From the CCDE for the leaky integrator (5.16) we immediately have

Y(z) = λ z^{-1} Y(z) + (1 - λ) X(z)
(6.18)

from which

H(z) = (1 - λ) / (1 - λ z^{-1})
(6.19)

The transfer function therefore has a single real pole at z = λ; for a causal realization, this implies that the ROC is the region of the complex plane extending outward from the circle of radius |λ|. The causal filter is stable if the pole lies inside the unit circle, i.e. if |λ| < 1. An example of pole-zero plot together with the associated ROC is shown in Figure 6.5(b) for the (stable) case of λ = 0.65.
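The stability condition can also be read off a time-domain experiment; in the following sketch (with arbitrary illustrative values of λ) the impulse response of the causal leaky integrator decays when the pole is inside the unit circle and diverges when it is outside:

    #include <stdio.h>

    static void impulse_response(double lambda, int len)
    {
        double y = 0.0;
        for (int n = 0; n < len; n++) {
            double x = (n == 0) ? 1.0 : 0.0;        /* unit impulse          */
            y = lambda * y + (1.0 - lambda) * x;    /* leaky integrator CCDE */
            printf("lambda = %4.2f   h[%2d] = %g\n", lambda, n, y);
        }
    }

    int main(void)
    {
        impulse_response(0.65, 10);    /* pole inside the unit circle: decays    */
        impulse_response(1.50, 10);    /* pole outside the unit circle: diverges */
        return 0;
    }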

6.4 Examples

Example 6.1: Transform of periodic functions

The z-transform converges without fuss for infinite-energy sequences which the Fourier transform has some difficulties dealing with. For instance, the z-transform manages to “bring down” the unit step because of the vanishing power of z-n for |z| > 1 and n large and this is the case for all one-sided sequences which grow no more than exponentially. However, if |z-n|→ 0 for n →∞ then necessarily |z-n|→∞ for n →-∞ and this may pose a problem for the convergence of the z-transform in the case of two-sided sequences. In particular, the z-transform does not converge in the case of periodic signals since only one side of the repeating pattern is “brought down” while the other is amplified limitlessly. We can circumvent this impasse by “killing” half of the periodic signal with a unit step. Take for instance the one-sided cosine:

x[n] = cos(ω0n)u [n]

its z-transform can be derived as

X(z) = Σ_{n=-∞}^{∞} z^{-n} cos(ω0 n) u[n]
     = Σ_{n=0}^{∞} z^{-n} cos(ω0 n)
     = (1/2) Σ_{n=0}^{∞} e^{jω0 n} z^{-n} + (1/2) Σ_{n=0}^{∞} e^{-jω0 n} z^{-n}
     = (1/2) [ 1/(1 - e^{jω0} z^{-1}) + 1/(1 - e^{-jω0} z^{-1}) ]
     = (1 - cos(ω0) z^{-1}) / (1 - 2cos(ω0) z^{-1} + z^{-2})
Similar results can be obtained for signals such as x[n] = sin(ω0n)u[n] or x[n] = αn cos(ω0n)u[n].
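The closed form can be verified numerically for any point with |z| > 1 by comparing it with a partial sum of the defining series; the test frequency and test point below are arbitrary.

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double w0 = 0.8;                       /* test frequency      */
        const double complex z = 1.2 + 0.3 * I;      /* test point, |z| > 1 */
        const double complex zi = 1.0 / z;

        /* closed form (1 - cos(w0) z^-1) / (1 - 2 cos(w0) z^-1 + z^-2) */
        double complex closed =
            (1.0 - cos(w0) * zi) / (1.0 - 2.0 * cos(w0) * zi + zi * zi);

        /* partial sum of sum_{n>=0} cos(w0 n) z^-n */
        double complex partial = 0.0;
        for (int n = 0; n < 2000; n++)
            partial += cos(w0 * n) * cpow(zi, n);

        printf("closed form : %f %+f i\n", creal(closed), cimag(closed));
        printf("partial sum : %f %+f i\n", creal(partial), cimag(partial));
        return 0;
    }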

Example 6.2: The impossibility of ideal filters

The z-transform of an FIR impulse response can be expressed as a simple polynomial P(z) of degree L - 1 where L is the number of nonzero taps of the filter (we can neglect leading factors of the form z-N). The fundamental theorem of algebra states that such a polynomial has at most L - 1 roots; as a consequence, the frequency response of an FIR filter can never be identically zero over a frequency interval since, if it were, its z-transform would have an infinite number of roots. Similarly, by considering the polynomial P(z) - C, we can prove that the frequency response can never be constant C over an interval which proves the impossibility of achieving ideal (i.e. “brickwall”) responses with an FIR filter. The argument can be easily extended to rational transfer functions, confirming the impossibility of a realizable filter whose characteristic is piecewise perfectly flat.

6.5 Further Reading

The z-transform is closely linked to the solution of linear, constant coefficient difference equations. For a more complete treatment, see, for example, R. Vich, Z Transform Theory and Applications (Springer, 1987), or A. J. Jerri, Linear Difference Equations with Discrete Transforms Method (Kluwer, 1996).

6.6 Exercises

Exercise 6.1: Interleaving.  Consider two two-sided sequences h[n] and g[n] and consider a third sequence x[n] which is built by interleaving the values of h[n] and g[n]:

x[n] = ..., h[-3], g[-3], h[-2], g[-2], h[-1], g[-1], h[0], g[0], h[1], g[1], h[2], g[2], h[3], g[3], ...

with x[0] = h[0].
  1. Express the z-transform of x[n] in terms of the z-transforms of h[n] and g[n].
  2. Assume that the ROC of H(z) is 0.64 < |z| < 4 and that the ROC of G(z) is 0.25 < |z| < 9. What is the ROC of X(z)?

Exercise 6.2: Properties of the z-transform.  Let x[n] be a discrete-time sequence and X(z) be its corresponding z-transform with appropriate ROC.

  1. Prove that the following relation holds:
nx[n]  ⟷  -z (d/dz) X(z)

  2. Using (a), show that
(n + 1) α^n u[n]  ⟷  1 / (1 - αz^{-1})²,      |z| > |α|

  3. Suppose that the above expression corresponds to the impulse response of an LTI system. What can you say about the causality of such a system? What about its stability?
  4. Let α = 0.8: what is the spectral behavior of the corresponding filter? What if α = -0.8?

Exercise 6.3: Stability.  Consider a causal discrete system represented by the following difference equation:

y[n]- 3.25y[n - 1]+ 0.75 y[n- 2] = x[n- 1]+ 3x [n - 2]

  1. Compute the transfer function and check the stability of this system both analytically and graphically.
  2. If the input signal is x[n] = δ[n] - 3δ[n - 1], compute the z-transform of the output signal and discuss the stability.
  3. Take an arbitrary input signal that does not cancel the unstable pole of the transfer function and repeat (b).

Exercise 6.4: Pole-zero plot and stability – I.  Consider a causal LTI system with the following transfer function:

H(z) = (3 + 4.5 z^{-1}) / (1 + 1.5 z^{-1}) - 2 / (1 - 0.5 z^{-1})

Sketch the pole-zero plot of the transfer function and specify its region of convergence. Is the system stable?

Exercise 6.5: Pole-zero plot and stability – II.  Consider the transfer function of an anticausal LTI system

H(z) = (1 - z^{-1}) - 1 / (1 - 0.5 z^{-1})

Sketch the pole-zero plot of the transfer function and specify its region of convergence. Is the system stable?

Exercise 6.6: Pole-zero plot and magnitude response.  In the following pole-zero plots, multiple zeros at the same location are indicated by the multiplicity number shown to the upper right of the zero. Sketch the magnitude of each frequency response and determine the type of filter.


[Figure: pole-zero plots for Exercise 6.6]


Exercise 6.7: z-transform and magnitude response.  Consider a causal LTI system described by the following transfer function:

H(z) = ( 1/6 + (1/2) z^{-1} + (1/2) z^{-2} + (1/6) z^{-3} ) / ( 1 + (1/3) z^{-2} )

  1. Sketch the magnitude response H(e^{jω}) from the z-transform. You can use a numerical package to find the poles and the zeros of the transfer function. What type of filter is H(z)?
  2. Sketch the pole-zero plot. Is the system stable?

Now fire up your favorite numerical package (or write some C code) and consider the following length-128 input signal:

x[n] = { 0   n ∈ [1 ... 50]
       { 1   n ∈ [51 ... 128]

  1. Plot the magnitude of its DTFT |X[k]|.
  2. We want to filter x[n] with H(z) to obtain y[n]: Compute and plot y[n] using the Matlab function filter. Plot also the magnitude of its DTFT |Y [k]|.
  3. Explain qualitatively the form of y[n].

Exercise 6.8: DFT and z-transform.  It is immediately obvious that the DTFT of a sequence x[n] is simply its z-transform evaluated on the unit circle, i.e. for z = e^{jω}. Equivalently, for a finite-length signal x, the DFT is simply the z-transform of the finite support extension of the signal evaluated at z = W_N^{-k} for k = 0, ..., N - 1:

X[k] = Σ_{n=0}^{N-1} x[n] z^{-n} |_{z = W_N^{-k}} = Σ_{n=0}^{N-1} x[n] W_N^{nk}

By taking advantage of this fact, derive a simple expression for the DFT of the time-reversed signal

xr = [ x[N-1]  x[N-2]  ...  x[1]  x[0] ]^T

Exercise 6.9: A CCDE.  Consider an LTI system described by the following constant-coefficient difference equation:

y[n - 1]+ 0.25y[n- 2 ] = x[n]

  1. Write the transfer function of the system.
  2. Plot its poles and zeros, and show the ROC.
  3. Compute the impulse response of the system.

Exercise 6.10: Inverse transform.  Write out the discrete-time signal x[n] whose z-transform is

X(z) = (1 + 2z)(1 + 3z^{-1})

Exercise 6.11: Signal transforms.  Consider a discrete-time signal x[n] whose z-transform is

X (z) = 1 + z-1 + z-3 + z-4

  1. Compute the DTFT of x[n], X(e^{jω}). Your final result should be in the form of a real function of ω times a pure phase term.
  2. Sketch the magnitude of X(e^{jω}) as accurately as you can.

Consider now the length-4 signal y[n]:

y[n] = x[n],     n = 0,1,2,3

  1. Compute, for y[n], the four DFT coefficients Y [k], k = 0,1,2,3.


Chapter 7
Filter Design

In discrete-time signal processing, filter design is the art of turning a set of requirements into a well-defined numerical algorithm. The requirements, or specifications, are usually formulated in terms of the filter’s frequency response; the design problem is solved by finding the appropriate coefficients for a suitable difference equation which implements the filter and by specifying the filter’s architecture. Since realizable filters are described by rational transfer functions, filter design can usually be cast in terms of a polynomial optimization procedure for a given error measure. Additional design choices include the computational cost of the designed filters, i.e. the number of mathematical operations and storage necessary to compute each output sample. Finally, the structure of the difference equation defines an explicit operational procedure for computing the filter’s output values; by arranging the terms of the equation in different ways, we can arrive at different algorithmic structures for the implementation of digital filters.

7.1 Design Fundamentals

As we have seen, a realizable filter is described by a rational transfer function; designing a filter corresponds to determining the coefficients of the polynomials in the transfer function with respect to the desired filter characteristics. For an FIR filter of length M, there are M coefficients that need to be determined, and they correspond directly to the filter’s impulse response. Similarly, for an IIR filter with a numerator of degree M - 1 and a denominator of degree N - 1, there are M + N - 1 coefficients to determine (since we always assume a0 = 1). The main questions are the following:

As is apparent, real-world filters are designed with a variety of practical requirements in mind, most of which are conflicting. One such requirement, for instance, is to obtain a low “computational price” for the filtering operation; this cost is obviously proportional to the number of coefficients in the filter, but it also depends heavily on the underlying hardware architecture. The tradeoffs between disparate requirements such as cost, precision or numerical stability are very subtle and not altogether obvious; the art of the digital filter designer, although probably less dazzling than the art of the analog filter designer, is to determine the best design strategy for a given practical problem.

7.1.1 FIR versus IIR

Filter design has a long and noble history in the analog domain: a linear electronic network can be described in terms of a differential equation linking, for instance, the voltage as a function of time at the input of the network to the voltage at the output. The arrangement of the capacitors, inductances and resistors in the network determine the form of the differential equation, while their values determine its coefficients. A fundamental difference between an analog filter and a digital filter is that the transformation from input to output is almost always considered instantaneous (i.e. the propagation effects along the circuit are neglected). In digital filters, on the other hand, the delay is always explicit and is actually the fundamental building block in a processing system. Because of the physical properties of capacitors, which are ubiquitous in analog filters, the transfer function of an analog filter (expressed in terms of its Laplace transform) is “similar” to the transfer function of an IIR filter, in the sense that it contains both poles and zeros. In a sense, IIR filters can be considered as the discrete-time counterpart of classic analog filters. FIR filters, on the other hand, are the flagship of digital signal processing; while one could conceive of an analog equivalent to an FIR, its realization would require the use of analog delay lines, which are costly and impractical components to manufacture. In a digital signal processing scenario, on the other hand, the designer can freely choose between two lines of attack with respect to a filtering problem, IIR or FIR, and therefore it is important to highlight advantages and disadvantages of each.

FIR Filters. The main advantages of FIR filters can be summarized as follows:

While their disadvantages are mainly:

IIR Filters. IIR filters are often an afterthought in the context of digital signal processing in the sense that they are designed by mimicking established design procedures in the analog domain; their appeal lies mostly in their compact formulation: for a given computational cost, i.e for a given number of operations per input sample, they can offer a much better magnitude response than an equivalent FIR filter. Furthermore, there are a few fundamental processing tasks (such as DC removal, as we will see later) which are the natural domain of IIR filters. The drawbacks of IIR filters, however, mirror in the negative the advantages of FIR’s. The main advantages of IIR filters can be summarized as follows:

While their disadvantages are mainly:

For these reasons, in this book, we will concentrate mostly on the FIR design problem and we will consider IIR filters only in conjunction with some specific processing tasks which are often encountered in practice.

7.1.2 Filter Specifications and Tradeoffs

A set of filter specifications represents a set of guidelines for the design of a realizable filter. Generally, the specifications are formulated in the frequency domain and are cast in the form of boundaries for the magnitude of the frequency response; less frequently, the specifications will take the phase response into account as well.

A set of filter specifications is best illustrated by example: suppose our goal is to design a half-band lowpass filter, i.e. a lowpass filter with cutoff frequency π∕2. The filter will possess a passband, i.e. a frequency range over which the input signal is unaffected, and a stopband, i.e. a frequency range where the input signal is annihilated. In order to turn these requirements into specifications the following practical issues must be taken into account:



Figure 7.1: Typical lowpass filter specifications.

These specifications can be represented graphically as in Figure 7.1; note that, since we are dealing with real-valued filter coefficients, it is sufficient to specify the desired frequency response only over the [0, π] interval, the magnitude response being symmetric. The filter design problem consists now in finding the minimum size FIR or IIR filter which fulfills the required specifications. As an example, Figure 7.2 shows an IIR filter which does not fulfill the specifications since the stopband error is above the maximum error at the beginning of the stopband. Similarly, Figure 7.3 shows an FIR filter which breaks the specifications in the passband. Finally, Figure 7.4 shows a monotonic IIR filter which matches and exceeds the specifications (i.e. the error is always smaller than the maximum error).



Figure 7.2: Example of monotonic filter which does not satisfy the specifications.



Figure 7.3: Example of FIR filter which does not satisfy the specifications.



Figure 7.4: Example of monotonic filter which satisfies (and exceeds) the specifications.

7.2 FIR Filter Design

In this section we will explore two fundamental strategies for FIR filter design, the window method and the minimax (or Parks-McClellan) method. Both methods seek to minimize the error between a desired (and often ideal) filter transfer function and the transfer function of the designed filter; they differ in the error measure which is used in the minimization. The window method is completely straightforward and it is often used for quick designs. The minimax method, on the other hand, is the procedure of choice for accurate, optimal filters. Both methods will be illustrated with respect to the design of a lowpass filter.

7.2.1 FIR Filter Design by Windowing

We have already seen in Section 5.6 that if there are no constraints (not even realizability) the best lowpass filter with cutoff frequency ωc is the ideal lowpass. The impulse response is therefore the inverse Fourier transform of the desired transfer function:

h[n] = (1/2π) ∫_{-π}^{π} H(e^{jω}) e^{jωn} dω = (1/2π) ∫_{-ωc}^{ωc} e^{jωn} dω = (1/(2πjn)) [ e^{jωc n} - e^{-jωc n} ] = sin(ωc n)/(πn) = (ωc/π) sinc(ωc n/π)

The resulting filter, as we saw, is an ideal filter and it cannot be represented by a rational transfer function with a finite number of coefficients.

Impulse Response Truncation. A simple idea to obtain a realizable filter is to take a finite number of samples from the ideal impulse response and use them as coefficients of a (possibly rather long) FIR filter:(2)

ĥ[n] = { h[n]   -N ≤ n ≤ N
       { 0      otherwise
(7.1)

This is a (2N + 1)-tap FIR obtained by truncating an ideal impulse response (Figs 5.10 and 5.11). Note that the filter is noncausal, but that it can be made causal by using an N-tap delay; it is usually easier to design FIR’s by considering a noncausal version first, especially if the resulting impulse response is symmetric (or antisymmetric) around n = 0. Although this approximation was derived in a sort of “intuitive” way, it actually satisfies a very precise approximation criterion, namely the minimization of the mean square error (MSE) between the original and approximated filters. Denote by E2 this error, that is

E2 = ∫_{-π}^{π} |H(e^{jω}) - Ĥ(e^{jω})|² dω

We can apply Parseval’s theorem (see (4.59)) to obtain the equivalent expression in the discrete-time domain:

E2 = 2π Σ_{n∈Z} |h[n] - ĥ[n]|²

If we now recall that ĥ[n] = 0 for |n| > N, we have

E2 = 2π [ Σ_{n=-N}^{N} |h[n] - ĥ[n]|² + Σ_{n=N+1}^{∞} |h[n]|² + Σ_{n=-∞}^{-N-1} |h[n]|² ]

Obviously the last two terms are nonnegative and independent of ĥ[n]. Therefore, the minimization of E2 with respect to ĥ[n] is equivalent to the minimization of the first term only, and this is easily obtained by letting

ĥ[n] = h[n],     for n = -N, ..., N

In spite of the attractiveness of such a simple and intuitive solution, there is a major drawback. If we consider the frequency response of the approximated filter, we have

Ĥ(e^{jω}) = Σ_{n=-N}^{N} h[n] e^{-jωn}

which means that Ĥ(e^{jω}) is an approximation of H(e^{jω}) obtained by using only 2N + 1 Fourier coefficients. Since H(e^{jω}) has a jump discontinuity in ωc, Ĥ(e^{jω}) incurs the well-known Gibbs phenomenon around ωc. The Gibbs phenomenon states that, when approximating a discontinuous function with a finite number of Fourier coefficients, the maximum error in an interval around the jump discontinuity is actually independent of the number of terms in the approximation and is always equal to roughly 9% of the jump. In other words, we have no control over the maximum error in the magnitude response. This is apparent in Figure 7.5 where |Ĥ(e^{jω})| is plotted for increasing values of N; the maximum error does not decrease with increasing N and, therefore, there are no means to meet a set of specifications which require less than 9% error in either stopband or passband.



Figure 7.5: Gibbs phenomenon in lowpass approximation; magnitude of the approximated lowpass filter for N = 4 (light gray), N = 10 (dark gray) and N = 50 (black).

The Rectangular Window. Another way to look at the resulting approximation is to express ĥ[n] as

ĥ[n] = h[n] w[n]
(7.2)

with

w[n] = rect(n/N) = { 1   -N ≤ n ≤ N
                   { 0   otherwise
(7.3)

w[n] is called a rectangular window of length (2N + 1) taps, which in this case is centered at n = 0.

We know from the modulation theorem in (5.22) that the Fourier transform of (7.2) is the convolution (in the space of 2π-periodic functions) of the Fourier transforms of h[n] and w[n]:

Ĥ(e^{jω}) = (1/2π) ∫_{-π}^{π} H(e^{jθ}) W(e^{j(ω-θ)}) dθ

It is easy to compute W(e^{jω}) as

W(e^{jω}) = Σ_{n=-N}^{N} e^{-jωn} = sin(ω(N + 1/2)) / sin(ω/2)
(7.4)

An example of W(e^{jω}) for N = 6 is shown in Figure 7.6. By analyzing the form of W(e^{jω}) for arbitrary N, we can determine that the mainlobe becomes narrower as N grows (its width is inversely proportional to the window length, Δ = 4π/(2N + 1) here), while the amplitude of the sidelobes relative to the mainlobe does not decrease with N.



Figure 7.6: Fourier transform of the rectangular window for N = 6.

In order to understand the shape of the approximated filter, let us go back to the lowpass filter example and try to visualize the effect of the convolution in the Fourier transform domain. First of all, since all functions are 2π-periodic, everything happens circularly, i.e. what “goes out” on the right of the [-π, π] interval “pops” immediately up on the left. The value at ω0 of Ĥ(e^{jω}) is the integral of the product between H(e^{jω}) and a version of W(e^{jω}) circularly shifted by ω0. Since H(e^{jω}) is zero except over [-ωc, ωc], where it is one, this value is actually:

Ĥ(e^{jω0}) = (1/2π) ∫_{-ωc}^{ωc} W(e^{j(ω0-θ)}) dθ

When ω0 is such that the first right sidelobe of W(e^{jω}) is outside of the [-ωc, ωc] interval, then the integral reaches its maximum value, since the sidelobe is negative and it is the largest. The maximum value is dependent on the shape of the window (a rectangle in this case) but not on its length. Hence the Gibbs phenomenon.

To recap, the windowing operation on the ideal impulse response, i.e. the circular convolution of the ideal frequency response with W(e^{jω}), produces two main effects: the sharp transition of the ideal response is smeared into a transition band whose width depends on the width of the window’s mainlobe, and oscillatory ripples, whose size depends on the window’s sidelobes, appear in both passband and stopband.

The sharpness of the transition band and the size of the ripples are dependent on the shape of the window’s Fourier transform; indeed, by carefully designing the shape of the windowing sequence we can trade mainlobe width for sidelobe amplitude and obtain a more controlled behavior in the frequency response of the approximation filter (although the maximum error can never be arbitrarily reduced).

Other Windows. In general, the recipe for filter design by windowing involves two steps: the analytical derivation of an ideal impulse response followed by a suitable windowing to obtain an FIR filter. The ideal impulse response h[n] is obtained from the desired frequency response H(e) by the usual DTFT inversion formula

h[n] = (1/2π) ∫_{-π}^{π} H(e^{jω}) e^{jωn} dω

While the analytical evaluation of the above integral may be difficult or impossible in the general case, for frequency responses H(e^{jω}) which are piecewise linear, the computation of h[n] can be carried out in an exact (if nontrivial) way; the result will be a linear combination of modulated sinc and sinc-squared sequences.(3) The FIR approximation is then obtained by applying a finite-length window w[n] to the ideal impulse response as in (7.2). The shape of the window can of course be more sophisticated than the simple rectangular window we have just encountered and, in fact, a hefty body of literature is devoted to the design of the “best” possible window. In general, a window should be designed with the following goals in mind: it should be as short as possible (so as to yield a cheap filter); its Fourier transform should have a narrow mainlobe (so as to obtain a sharp transition band); and its sidelobes should be as small as possible (so as to keep the passband and stopband ripples under control).

It is clear that the first two requirements are openly in conflict; indeed, the width of the main lobe Δ is inversely proportional to the length of the window (we have seen, for instance, that for the rectangular window Δ = 4π∕M, with M, the length of the filter). The second and third requirements are also in conflict, although the relationship between mainlobe width and sidelobe amplitude is not straightforward and can be considered a design parameter. In the frequency response, reduction of the sidelobe amplitude implies that the Gibbs phenomenon is decreased, but at the “price” of an enlargement of the filter’s transition band. While a rigorous proof of this fact is beyond the scope of this book, consider the simple example of a triangular window (with N odd):

wt[n] = { (N - |n|)/N   |n| < N
        { 0             otherwise
(7.5)

It is easy to verify that wt[n] = w[n] * w[n], with w[n] = rect(2n∕(N - 1)) (i.e. the triangle can be obtained as the convolution of a half-support rectangle with itself) so that, as a consequence of the convolution theorem, we have

Wt(e^{jω}) = W(e^{jω}) W(e^{jω}) = [ sin(ωN/2) / sin(ω/2) ]²
(7.6)

The net result is that the amplitude of the sidelobes is quadratically reduced but the width of the mainlobe Δ is roughly doubled with respect to an equivalent-length rectangular window; this is displayed in Figure 7.7 for a 17-point window (values are normalized so that both frequency responses are equal at ω = 0). Filters designed with a triangular window therefore exhibit a much wider transition band.



Figure 7.7: Fourier transform of the 17-point rectangular window (gray) vs. an equal-length triangular window (black).



Figure 7.8: Hamming window (N = 32 ).



Figure 7.9: Blackman window (N = 32 ).

Other commonly used windows admit a simple parametric closed form representation; the most important are the Hamming window (Fig. 7.8):

w[n] = 0.54 - 0.46 cos(2π(n + N)/(2N)),      |n| ≤ N - 1

and the Blackman window (Fig. 7.9):

w[n] = 0.42 - 0.5 cos(2π(n + N)/(2N)) + 0.08 cos(4π(n + N)/(2N)),      |n| ≤ N - 1
The magnitude response of both windows is plotted in Figure 7.10 (on a log scale so as to enhance the difference in sidelobe amplitude); again, we can remark the tradeoff between mainlobe width (translating to a wider transition band in the designed filter) and sidelobe amplitude (influencing the maximum error in passband and stopband).


Figure 7.10: Magnitude response (dB scale) of the 17-point rectangular (light gray), Hamming (dark gray) and Blackman (black) windows.
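To make the windowing recipe concrete, here is a minimal C sketch of a half-band lowpass design: the ideal impulse response (ωc/π) sinc(ωc n/π) is truncated to 2N + 1 taps and multiplied by a Hamming window of the form given above; the cutoff and the filter length are illustrative choices.

    #include <math.h>
    #include <stdio.h>

    #define N 16                                     /* half-length: 2N+1 taps */

    int main(void)
    {
        const double PI = 3.141592653589793;
        const double wc = PI / 2.0;                  /* half-band cutoff */
        double h[2 * N + 1];

        for (int n = -N; n <= N; n++) {
            /* ideal lowpass impulse response */
            double ideal = (n == 0) ? wc / PI : sin(wc * n) / (PI * n);
            /* Hamming window centered at n = 0 */
            double w = 0.54 - 0.46 * cos(2.0 * PI * (n + N) / (2.0 * N));
            h[n + N] = ideal * w;
        }

        for (int k = 0; k < 2 * N + 1; k++)
            printf("h[%3d] = %f\n", k - N, h[k]);
        return 0;
    }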

Limitations of the Window Method. Lack of total control on passband and stopband error is the main limitation inherent to the window method; this said, the method remains a fundamental staple of practical signal processing as it yields perfectly usable filters via a quick, flexible and simple procedure. The error characteristic of a window-designed filter can be particularly aggravating in sensitive applications such as audio processing, where the peak in the stopband error can introduce unacceptable artifacts. In order to improve on the filter performance, we need to completely revise our design approach. A more suitable optimization criterion may, for instance, be the minimax criterion, where we aim to explicitly minimize the maximum approximation error over the entire frequency support; this is thoroughly analyzed in the next section. We can already say, however, that while the minimum square error is an integral criterion, the minimax is a pointwise criterion; or, mathematically, that the MSE and the minimax are respectively L2([-π,π])- and L∞([-π,π])-norm minimizations for the error function E(ω) = Ĥ(e^{jω}) - H(e^{jω}). Figure 7.11 illustrates the typical result of applying both criteria to the ideal lowpass problem. As can be seen, the minimum square and minimax solutions are very different.



Figure 7.11: Error shapes in passband for MSE and minimax optimization methods.

7.2.2 Minimax FIR Filter Design

As we saw in the opening example, FIR filter design by windowing minimizes the overall mean square error between the desired frequency response and the actual response of the filter. Since this might lead to a very large error at frequencies near the transition band, we now consider a different approach, namely the design by minimax optimization. This technique minimizes the maximum allowable error in the filter’s magnitude response, both in the passband and in the stopband. Optimality in the minimax sense requires therefore the explicit stating of a set of tolerances in the prototypical frequency response, in the form of design specifications as seen in Section 7.1.2. Before tackling the design procedure itself, we will need a series of intermediate results.

Generalized Linear Phase. In Section 5.4.3, we introduced the concept of linear phase; a filter with linear phase response is particularly desirable since the phase response translates to just a time delay (possibly fractional) and we can concentrate on the magnitude response only. We also introduced the notion of group delay and showed that linear phase corresponds to constant group delay. Clearly, the converse is not true: a frequency response of the type

H(e^{jω}) = |H(e^{jω})| e^{-jωd + jα}

has constant group delay but differs from a linear phase system by a constant phase factor e^{jα}. We will call this type of phase response generalized linear phase. Important cases are those for which α = 0 (strictly linear phase) and α = π/2 (generalized linear phase used in differentiators).

FIR Filter Types. Consider a causal, M-tap FIR filter with impulse response h[n], n = 0, 1, ..., M - 1; in the following, we are interested in filters whose impulse response is symmetric or antisymmetric around the “midpoint”. If the number of taps is odd, the midpoint of the impulse response coincides with the center tap h[(M - 1)/2]; if the number of taps is even, on the other hand, the midpoint is still at (M - 1)/2 but this value does not coincide with a tap since it is located “right in between” taps h[M/2 - 1] and h[M/2]. Symmetric and antisymmetric FIR filters are important since their frequency response has generalized linear phase. The delay introduced by these filters is equal to (M - 1)/2 samples; clearly, this is an integer delay if M is odd, and it is fractional (half a sample more) if M is even. There are four different possibilities for linear phase FIR impulse responses, which are listed here with their corresponding generalized linear phase parameters:


Type       Nb. of Taps   Symmetry        Delay        Phase
Type I     odd           symmetric       integer      α = 0
Type II    even          symmetric       fractional   α = 0
Type III   odd           antisymmetric   integer      α = π/2
Type IV    even          antisymmetric   fractional   α = π/2

The generalized linear phase of (anti)symmetric FIRs is easily shown. Consider for instance a Type I filter, and define C = (M - 1)/2, the location of the center tap; we can compute the transfer function of the shifted impulse response hd[n] = h[n + C], which is now symmetric around zero, i.e. hd[-n] = hd[n]:

Hd(z) = Σ_{n=-C}^{C} hd[n] z^{-n} = hd[0] + Σ_{n=-C}^{-1} hd[n] z^{-n} + Σ_{n=1}^{C} hd[n] z^{-n} = hd[0] + Σ_{n=1}^{C} hd[n] (z^n + z^{-n})
(7.7)

By undoing the time shift we obtain the original Type I transfer function:

H(z) = z^{-(M-1)/2} Hd(z)
(7.8)

On the unit circle (7.7) becomes

Hd(e^{jω}) = hd[0] + Σ_{n=1}^{C} hd[n] (e^{jωn} + e^{-jωn}) = hd[0] + 2 Σ_{n=1}^{C} hd[n] cos nω
(7.9)

which is a real function; the original Type I frequency response is obtained from (7.8):

H(e^{jω}) = [ h[(M-1)/2] + 2 Σ_{n=(M+1)/2}^{M-1} h[n] cos((n - (M-1)/2) ω) ] e^{-jω(M-1)/2}

which is clearly linear phase with delay d = (M - 1)/2 and α = 0. The generalized linear phase of the other three FIR types can be shown in exactly the same way.
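
This property is also easy to check numerically; the following minimal Matlab sketch (the symmetric coefficients are arbitrary and serve only as an example) removes the delay of (M - 1)/2 samples from the frequency response of a Type I filter and verifies that what remains is purely real:

% Minimal sketch: verify the generalized linear phase of a symmetric FIR.
% The coefficient values are arbitrary; any h[n] = h[M-1-n] will do.
h = [1 -2 3 5 3 -2 1];               % Type I: M = 7, symmetric
M = length(h);
[H, w] = freqz(h, 1, 512);           % frequency response over [0, pi)
Hd = H .* exp(1j*w*(M-1)/2);         % undo the delay of (M-1)/2 samples
max(abs(imag(Hd)))                   % ~0: Hd(e^{jw}) is real, i.e. H is linear phase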

Zero Locations. The symmetric structures of the four types of FIR filters impose some constraints on the locations of the zeros of the transfer function. Consider again a Type I filter; from (7.7) it is easy to see that Hd(z^{-1}) = Hd(z); by using (7.8) we therefore have

{ H(z) = z^{-(M-1)/2} Hd(z)
{ H(z^{-1}) = z^{(M-1)/2} Hd(z^{-1}) = z^{(M-1)/2} Hd(z)

which leads to

H(z^{-1}) = z^{M-1} H(z)
(7.10)

It is easy to show that the above relation is also valid for Type II filters, while for Type III and Type IV (antisymmetric filters) we have

H(z^{-1}) = -z^{M-1} H(z)
(7.11)

These relations mean that if z0 is a zero of a linear phase FIR, then so is z0^{-1}. This result, coupled with the usual fact that the complex zeros of a real-coefficient filter come in conjugate pairs, implies that if z0 is a zero of H(z), then z0*, z0^{-1} and (z0*)^{-1} are zeros as well; in other words, zeros which lie off the unit circle and off the real axis appear in quadruplets.

Consider now equation (7.10) again; if we set z = -1,

H(-1) = (-1)^{M-1} H(-1)
(7.12)

for Type II filters, M - 1 is an odd number, which leads to the conclusion that H(-1) = 0; in other words, Type II filters must have a zero at ω = π. Similar results can be demonstrated for the other filter types, and they are summarized below:


Filter Type   Relation                     Constraint on Zeros
Type I        H(z^{-1}) = z^{M-1} H(z)     no constraints
Type II       H(z^{-1}) = z^{M-1} H(z)     zero at z = -1 (i.e. ω = π)
Type III      H(z^{-1}) = -z^{M-1} H(z)    zeros at z = ±1 (i.e. at ω = 0 and ω = π)
Type IV       H(z^{-1}) = -z^{M-1} H(z)    zero at z = 1 (i.e. ω = 0)

These constraints are important in the choice of the filter type for a given set of specifications. Type II and Type III filters are not suitable for the design of highpass filters, for instance; similarly, Type III and Type IV filters are not suitable for the design of lowpass filters.
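
These constraints are also easy to verify numerically; in the following minimal Matlab sketch the coefficients are arbitrary and serve only to illustrate the zero patterns:

% Minimal sketch: zero patterns of linear phase FIRs (arbitrary coefficients).
hI  = [1 -2 3 -2 1];             % Type I:  odd length, symmetric
hII = [1 3 5 5 3 1];             % Type II: even length, symmetric
zI  = roots(hI)                  % if z0 is a zero, so are z0*, 1/z0 and 1/z0*
zII = roots(hII);
any(abs(zII + 1) < 1e-6)         % true: a Type II filter has a structural zero at z = -1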

Chebyshev Polynomials. Chebyshev polynomials are a family of orthogonal polynomials {Tk(x)}, k ∈ ℕ, which have, amongst others, the following interesting property:

cos nω = Tn(cos ω)
(7.13)

in other words, the cosine of an integer multiple of an angle ω can be expressed as a polynomial in the variable cosω. The first few Chebyshev polynomials are

T0(x) = 1
T1(x) = x
T2(x) = 2x^2 - 1
T3(x) = 4x^3 - 3x
T4(x) = 8x^4 - 8x^2 + 1
and, in general, they can be derived from the recursion formula:
T_{k+1}(x) = 2x Tk(x) - T_{k-1}(x)
(7.14)

From the above table it is easy to see that we can write, for instance,

cos(3ω) = 4 cos^3 ω - 3 cos ω
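
The recursion (7.14) is straightforward to implement; here is a minimal Matlab sketch (the test angle is arbitrary) which generates the first few polynomials and checks the identity (7.13):

% Minimal sketch: build Chebyshev polynomials via the recursion (7.14)
% and check the identity (7.13) at an arbitrary angle.
N = 5;
T = cell(N+1, 1);
T{1} = 1;                                  % T0(x) = 1  (coefficients, highest power first)
T{2} = [1 0];                              % T1(x) = x
for k = 2:N
    T{k+1} = [2*T{k} 0] - [0 0 T{k-1}];    % T_{k} = 2x*T_{k-1} - T_{k-2}
end
omega = 0.7; n = 3;
cos(n*omega) - polyval(T{n+1}, cos(omega)) % ~0, i.e. cos(n*omega) = Tn(cos(omega))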

The interest in Chebyshev polynomials comes from the fact that the zero-centered frequency response of a linear phase FIR can be expressed as a linear combination of cosine functions, as we have seen in detail for Type I filters in (7.9). By using Chebyshev polynomials we can rewrite such a response as just one big polynomial in the variable cosω. Let us consider an explicit example for a length-7 Type I filter with nonzero coefficients h[n] = [d c b a b c d]; we can state that

Hd(e^{jω}) = a + 2b cos ω + 2c cos 2ω + 2d cos 3ω

and by using the first four Chebyshev polynomials we can write

Hd(e^{jω}) = a + 2b cos ω + 2c (2 cos^2 ω - 1) + 2d (4 cos^3 ω - 3 cos ω)
           = (a - 2c) + (2b - 6d) cos ω + 4c cos^2 ω + 8d cos^3 ω    (7.15)
In this case, Hd(e^{jω}) turns out to be a third-degree polynomial in the variable cos ω. This is the case for any Type I filter, for which we can always write
Hd(e^{jω}) = Σ_{k=0}^{(M-1)/2} ck cos^k ω
(7.16)

Hd(e^{jω}) = P(x)|_{x = cos ω}
(7.17)

where P(x) is a polynomial of degree (M - 1)/2 whose coefficients ck are derived as linear combinations of the original filter coefficients, as illustrated in (7.15). For the other types of linear phase FIR, a similar representation can be obtained after a few trigonometric manipulations. The general expression is

Hd(e^{jω}) = f(ω) Σ_{k=0}^{L} ck cos kω
(7.18)

Hd(e^{jω}) = f(ω) P(x)|_{x = cos ω}
(7.19)

where the ck are still linear combinations of the original filter coefficients and where f(ω) is a weighting trigonometric function. Both f(ω) and the polynomial degree L vary as a function of the filter type.(4) In the following sections, however, we will concentrate only on the design of Type I filters, so these details will be overlooked; in practice, since the design is always carried out using numerical packages, the appropriate formulation for the filter expression is taken care of automatically.
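
As an illustration of (7.15)-(7.17), the following minimal Matlab sketch (with arbitrary values for the coefficients a, b, c, d of the length-7 example) verifies that the cosine-sum form of Hd(e^{jω}) and its polynomial form P(cos ω) coincide:

% Minimal sketch: the length-7 Type I example, h[n] = [d c b a b c d],
% rewritten as a polynomial in cos(w) as in (7.15). The values are arbitrary.
a = 1; b = 0.6; c = 0.3; d = 0.1;
w = linspace(0, pi, 256);
Hcos  = a + 2*b*cos(w) + 2*c*cos(2*w) + 2*d*cos(3*w);   % cosine-sum form
p = [8*d, 4*c, 2*b - 6*d, a - 2*c];                     % P(x), highest power first
Hpoly = polyval(p, cos(w));                             % P(x) evaluated at x = cos(w)
max(abs(Hcos - Hpoly))                                  % ~0: the two forms coincide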

Polynomial Optimization. Going back to the filter design problem, we stipulate that the FIR filters are (generalized) linear phase, so we can concentrate on the real frequency response of the zero-centered filter, which is represented by the trigonometric polynomial (7.19). Moreover, since the impulse response is real and symmetric, the aforementioned real frequency response is also symmetric around ω = 0. The filter design procedure can thus be carried out only for values of ω over the interval [0, π], with the other half of the spectrum obtained by symmetry. For these values of ω, the variable x = cos ω spans the interval [-1, 1] and the mapping is invertible. Therefore, the filter design problem becomes a problem of polynomial approximation over intervals.

To illustrate the procedure by example, consider once more the set of filter specifications in Figure 7.1 and suppose we decide to use a Type I filter. Recall that we required the prototype filter to be lowpass, with a transition band from ωp = 0.4π to ωs = 0.6π; we further stated that the tolerances for the realized filter’s magnitude must not exceed 10% in the passband and 1% in the stopband. This implies that the maximum magnitude error between the prototype filter and the FIR filter response H(e^{jω}) must not exceed δp = 0.1 in the interval [0, ωp] and must not exceed δs = 0.01 in the interval [ωs, π]. We can formulate this as follows: the frequency response of the desired filter is

HD(e^{jω}) = { 1   ω ∈ [0, ωp]
             { 0   ω ∈ [ωs, π]

(note that HD(e^{jω}) is not specified in the transition band). Since the tolerances on passband and stopband are different, they can be expressed in terms of a weighting function HW(ω) such that the tolerance on the error is constant over the two bands:

HW(ω) = { 1             ω ∈ [0, ωp]
        { δp/δs = 10    ω ∈ [ωs, π]
(7.20)

With this notation, the filter specifications amount to the following:

max_{ω ∈ [0,ωp] ∪ [ωs,π]} { HW(ω) |Hd(e^{jω}) - HD(e^{jω})| } ≤ δp = 0.1
(7.21)

and the question now is to find the coefficients for h[n] (their number M and their values) which minimize the above error. Note that we leave the transition band unconstrained (i.e. it does not affect the minimization of the error).

The next step is to use (7.19) to reformulate the above expression as a polynomial optimization problem. To do so, we replace the frequency response Hd(e^{jω}) with its polynomial equivalent and set x = cos ω; the passband interval [0, ωp] and the stopband interval [ωs, π] are mapped onto the following intervals for x:

Ip = [cos ωp, 1]
Is = [-1, cos ωs]
respectively; similarly, the desired response becomes:
D(x) = { 1   x ∈ Ip
       { 0   x ∈ Is
(7.22)

and the weighting function becomes:

W(x) = { 1        x ∈ Ip
       { δp/δs    x ∈ Is
(7.23)

The new set of specifications is shown in Figure 7.12. Within this polynomial formulation, the optimization problem becomes:

max_{x ∈ Ip ∪ Is} { W(x) |P(x) - D(x)| } = max { |E(x)| } ≤ δp
(7.24)

where P(x) is the polynomial representation of the FIR frequency response as in (7.19).



Figure 7.12: Filter specifications as in Figure 7.1 formulated here in terms of polynomial approximation, i.e. for x = cos ω, ω ∈ [0, π].

Alternation Theorem. The optimization problem stated by (7.24) can be solved by using the following theorem:

Theorem 7.1 Consider a set {Ik} of closed, disjoint intervals on the real axis and their union I = ∪k Ik. Consider further a weighting function W(x), positive over I, a desired function D(x), continuous over I, and a polynomial P(x) of degree L.

Consider now the approximation error function

E (x) = W (x )[D (x)- P (x)]

and the associated maximum approximation error over the set of closed intervals

Emax = max_{x ∈ I} { |E(x)| }

Then P(x) is the unique order-L polynomial which minimizes Emax if and only if there exist at least L + 2 successive values x_i in I such that |E(x_i)| = Emax and

E(x_i) = -E(x_{i+1})

In other words, the error function must have at least L + 2 alternations between its maximum and minimum values. Such a function is called equiripple.

Going back to our lowpass filter example, assume we are trying to design a 9-tap optimal filter. The theorem tells us that if we find a polynomial P(x) of degree 4 such that the error function (7.24) behaves over Ip and Is as shown in Figure 7.13 (i.e. with 6 alternations), then that polynomial is the unique optimal solution. Note that the extremal points (i.e. the values of the error function at the edges of the optimization intervals) do count in the number of alternations since the intervals Ik are closed.



Figure 7.13: Approximation error function E(x) for a 9-tap lowpass prototype; alternations are marked by a dot.

The above theorem may seem a bit far-fetched since it does not tell us how to find the coefficients but only gives us a test to verify their optimality. This test, however, is at the core of an iterative algorithm which refines the polynomial from an initial guess to the point where the optimality condition is met. Before considering the optimization procedure in more detail, we state, without formal proof, three consequences of the alternation theorem as it applies to the design of Type I lowpass filters:

Optimization Procedure. Finally, by putting all the elements together, we are ready to state an algorithmic optimization procedure for the design of optimal minimax FIR filters; this procedure is usually called the Parks-McClellan algorithm. Remember, we are trying to determine a polynomial P(x) such that the approximation error in (7.24) is equiripple; for this, we need to determine both the degree of the polynomial and its coefficients. For a given degree L, for which the resulting filter has 2L + 1 taps, the polynomial coefficients are found by an iterative procedure which successively refines an initial guess for the L + 2 alternation points x_i until the error is equiripple.(5) After the iteration has converged, we need to check that the corresponding Emax satisfies the upper bound imposed by the specifications; when this is not the case, the degree of the polynomial (and therefore the length of the filter) must be increased and the procedure restarted. Once the conditions on the error are satisfied, the filter coefficients can be obtained by inverting the Chebyshev expansion.

As a final note, an initial guess for the number of taps can be obtained using the empirical formula by Kaiser; for an M-tap FIR h[n], n = 0, …, M - 1:

M ≃ (-10 log10(δp δs) - 13) / (2.324 Ω) + 1

where δp is the passband tolerance, δs is the stopband tolerance and Ω = ωs - ωp is the width of the transition band.
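
For the running set of specifications, Kaiser’s estimate takes a couple of Matlab lines (a minimal sketch):

% Minimal sketch: Kaiser's estimate for the running specifications.
dp = 0.1; ds = 0.01;                             % passband and stopband tolerances
Omega = 0.6*pi - 0.4*pi;                         % width of the transition band
M = (-10*log10(dp*ds) - 13)/(2.324*Omega) + 1    % ~12.6, rounded up to 13 taps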

The Final Design. We now summarize the design steps for the specifications in Figure 7.1. We use a Type I FIR. We start by using Kaiser’s formula to obtain an estimate of the number of taps: since δp δs = 10^{-3} and Ω = 0.2π, we obtain M ≈ 12.6, which we round up to 13 taps. At this point we can use any numerical package for filter design to run the Parks-McClellan algorithm. In Matlab this would be

[h, err] = remez(12, [0 0.4 0.6 1], [1 1 0 0], [1 10]);

The resulting frequency response is plotted in Figure 7.14; please note that we are plotting the frequency response of the zero-centered filter hd[n], which is a real function of ω. We can verify that the filter indeed displays the minimum number of alternations required by the theorem, (M - 1)/2 + 2 = 8, by looking at an enlarged picture of the passband and the stopband, as in Figure 7.15.



Figure 7.14: An optimal 13-tap Type I filter which does not meet the error specifications.



Figure 7.15: Details of passband and stopband of the frequency response in Figure 7.14.

The maximum error as returned by Matlab is however 0.102, which is larger than what our specifications called for, i.e. 0.1. We are thus forced to increase the number of taps; since we are using a Type I filter, the next choice is M = 15. Again, the error turns out to be larger than 0.1, since in this case we have Emax = 0.1006. The next choice, M = 17, finally yields an error Emax = 0.05, which meets the specifications with a margin of a factor of two. It is the designer’s choice to decide whether the computational gains of a shorter filter (M = 15) outweigh the small excess error. The impulse response and the frequency response of the 17-tap filter are plotted in Figure 7.16 and Figure 7.17. Figure 7.18 shows the zero locations for the filter; note the typical linear-phase zero pattern as well as the zeros on the unit circle in the stopband.
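
The trial-and-error procedure just described can be automated with a short loop; this is only a sketch, using the same remez call as above (in recent Matlab releases the equivalent routine is called firpm, with the same arguments):

% Minimal sketch: increase the (odd) number of taps until the equiripple
% error meets the specification Emax <= 0.1.
M = 13;
while true
    [h, err] = remez(M-1, [0 0.4 0.6 1], [1 1 0 0], [1 10]);
    if err <= 0.1, break; end
    M = M + 2;                 % keep M odd so that the filter remains Type I
end
M, err                         % with these specifications: M = 17, err ~ 0.05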



Figure 7.16: Impulse response of the 17-tap filter meeting the specifications.

Other Types of Filters. The Parks-McClellan optimal FIR design procedure can be made to work for arbitrary filter types as well, such as highpass and bandpass, but also for more sophisticated frequency responses. The constraints imposed by the zero locations, as summarized in the table above, determine the type of filter to use; once the desired response HD(e^{jω}) is expressed as a trigonometric function, the optimization algorithm can take its course. For arbitrary frequency responses, however, the fact that the transition bands are left unconstrained may lead to unacceptable peaks which render the filter useless. In these cases, visual inspection of the obtained response is mandatory, and experimentation with different filter lengths and tolerances may improve the final result.



Figure 7.17: Frequency response of the 17-tap filter meeting the specifications.



Figure 7.18: Pole-zero plot for the equiripple FIR in Figure 7.17.

7.3 IIR Filter Design

As we mentioned earlier, no optimal procedure exists for the design of IIR filters. The fundamental reason is that the optimization of the coefficients of a rational transfer function is a highly nonlinear problem and no satisfactory algorithm has yet been developed for the task. This, coupled with the impossibility of obtaining an IIR with linear phase response,(6) makes the design of IIR filters a much less formalized art. Many IIR design techniques are described in the literature and their origin is usually in tried-and-true analog filter design methods. In the early days of digital signal processing, engineers would own voluminous books with exhaustive lists of capacitance and inductance values to be used for a given set of (analog) filter specifications. The idea behind most digital IIR filter design techniques was to be able to make use of that body of knowledge and to devise formulas which would translate the analog design into a digital one. The most common such method is known as the bilinear transformation. Today, the formal step through an analog prototype has become unnecessary and numerical tools such as Matlab provide a variety of routines to design an IIR filter.

Here we concentrate only on some basic IIR filters which are very simple and which are commonly used in practice.

7.3.1 All-Time Classics

There are a few applications in which simple IIR structures are the design of choice. These filters are so simple and so well behaved that they are a fundamental tool in the arsenal of any signal processing engineer.

DC Removal and Mean Estimation. The DC component of a signal is its mean value; a signal with zero mean is also called an AC signal. This nomenclature comes from electrical circuit parlance: DC is shorthand for direct current, while AC stands for alternating current;(7) you might be familiar with these terms in relation to the current provided by a battery (constant and hence DC) and the current available from a mains socket (alternating at 50 or 60 Hz and therefore AC).

For a given sequence x[n], one can always write

x[n] = xAC[n] + xDC

where xDC is the mean of the sequence values. Please note the following:

In most signal processing applications, where the input signal comes from an acquisition device (such as a sampler, a soundcard and so on), it is important to remove the DC component; this is because the DC offset is often a random offset caused by ground mismatches between the acquisition device and the associated hardware. In order to eliminate the DC component we need to first estimate it, i.e. we need to estimate the mean of the signal.

For finite-length signals, computation of the mean is straightforward since it involves a finite number of operations. In most cases, however, we do not want to wait until the end of the signal before we try to remove its mean; what we need is a way to perform DC removal on line. The approach is therefore to obtain, at each instant, an estimate of the DC component from the past signal values, with the assumption that the estimate converges to the real mean of the signal. In order to obtain such an estimate, i.e. in order to obtain the average value of the past input samples, both approaches detailed in Section 5.3 are of course valid (i.e. the Moving Average and the Leaky Integrator filters). We have seen, however, that the leaky integrator provides a superior cost/benefit tradeoff and therefore the output of a leaky integrator with λ very close to one (usually within 10^{-3} of one) is the estimate of choice for the DC component of a signal. The closer λ is to one, the more accurate the estimation; the speed of convergence of the estimate, however, becomes slower and slower as λ → 1. This can easily be seen from the group delay at ω = 0, which is

grd{H(e^{jω})}|_{ω=0} = λ / (1 - λ)
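
In Matlab, on-line DC estimation with a leaky integrator takes a single call to filter; a minimal sketch follows (the test signal and the value of λ are arbitrary):

% Minimal sketch: on-line DC estimation with a leaky integrator.
% The test signal (DC value 0.5 plus a sinusoid) and lambda are arbitrary.
lambda = 0.999;
n = (0:9999)';
x = 0.5 + sin(2*pi*0.01*n);
dc = filter(1-lambda, [1 -lambda], x);    % H(z) = (1-lambda)/(1 - lambda z^{-1})
xac = x - dc;                             % DC-removed (AC) signal
dc(end)                                   % ~0.5 once the estimate has converged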

Resonator Filter. Let us look again at how the leaky integrator works. Consider its z-transform:

H(z) = (1 - λ) / (1 - λ z^{-1})

and notice that what we really want the filter to do is to extract the zero-frequency component (i.e. the frequency component that does not oscillate, that is, the DC component). To do so, we placed a pole near z = 1, which of course corresponds to z = e^{jω} for ω = 0. Since the magnitude response of the filter exhibits a peak near a pole, and since the peak is higher the closer the pole is to the unit circle, we are in fact amplifying the zero-frequency component; this is apparent from the plot of the filter’s frequency response in Figure 5.9. The numerator, 1 - λ, is chosen such that the magnitude of the filter at ω = 0 is one; the net result is that the zero-frequency component passes unmodified while all the other frequencies are attenuated. The value of a filter’s magnitude at a given frequency is often called the gain.



Figure 7.19: Pole-zero plots for the leaky integrator and the simple resonator.

The very same approach can now be used to extract a signal component at any frequency. We will use a pole whose magnitude is still close to one (i.e. a pole near the unit circle) but whose phase is that of the frequency we want to extract. We will then choose a numerator so that the magnitude is unity at the frequency of interest. The one extra detail is that, since we want a real-valued filter, we must place a complex-conjugate pole as well. The resulting filter is called a resonator and a typical pole-zero plot is shown in Figure 7.19. The z-transform of a resonator at frequency ω0 is therefore determined by the pole p = λe^{jω0} and by its conjugate:

H(z) = G0 / [ (1 - p z^{-1})(1 - p* z^{-1}) ] = G0 / [ 1 - (2λ cos ω0) z^{-1} + λ^2 z^{-2} ]
(7.25)

The numerator value G0 is computed so that the filter’s gain at ±ω0 is one; since in this case |H(e^{jω0})| = |H(e^{-jω0})|, we have

G0 = (1 - λ) √(1 + λ^2 - 2λ cos 2ω0)

The magnitude and phase of a resonator with λ = 0.9 and ω0 = π∕3 are shown in Figure 7.20.
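
A minimal Matlab sketch which builds the resonator (7.25) for these values and checks that the gain at the resonance frequency is indeed one:

% Minimal sketch: resonator (7.25) with lambda = 0.9, omega0 = pi/3.
lambda = 0.9; w0 = pi/3;
a  = [1, -2*lambda*cos(w0), lambda^2];                  % poles at lambda*exp(+/-j*w0)
G0 = (1-lambda)*sqrt(1 + lambda^2 - 2*lambda*cos(2*w0));
z0 = exp(1j*w0);                                        % point on the unit circle at omega0
H0 = G0/(1 - 2*lambda*cos(w0)/z0 + lambda^2/z0^2);      % H(e^{j*omega0})
abs(H0)                                                 % ~1: unit gain at the resonance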

A simple variant on the basic resonator can be obtained by considering the fact that the resonator is just a bandpass filter with a very narrow passband. As for all bandpass filters, we can therefore place zeros at z = ±1 and sharpen its midband frequency response. The corresponding z-transform is now

H(z) = G1 (1 - z^{-2}) / [ 1 - (2λ cos ω0) z^{-1} + λ^2 z^{-2} ]

with

G1 = G0 / √(2 (1 - cos 2ω0))

The corresponding magnitude response is shown in Figure 7.21.



Figure 7.20: Frequency response of the simple resonator.



Figure 7.21: Frequency response of the modified resonator.

7.4 Filter Structures

We have seen in Section 5.7.2 a practical implementation of a constant-coefficient difference equation (written in C). That was just one particular way of translating Equation (5.46) into a numerical procedure; in this section we explore other alternatives for both FIR and IIR filters and introduce the concept of computational efficiency for filters.

The cost of a numerical filter is dependent on the number of operations per output sample and on the storage (memory) required in the implementation. If we consider a generic CCDE, it is easy to see that the basic building blocks which make up the recipe for a realizable filter are just three: the adder (which sums two signal values), the multiplier (which scales a signal value by a constant) and the unit delay (a z^{-1} element, which stores a signal value for one sample interval).

By properly combining these elements and by exploiting the different possible decompositions of a filter’s rational transfer function, we can arrive at a variety of different working implementations of a filter. To study the possibilities at hand, instead of relying on a specific programming language, we will use self-explanatory block diagrams.

Cascade Forms. Recall that a rational transfer function H(z) can always be written out as follows:

H(z) = b0 [ ∏_{n=1}^{M-1} (1 - zn z^{-1}) ] / [ ∏_{n=1}^{N-1} (1 - pn z^{-1}) ]
(7.26)

where the zn are the M - 1 (complex) roots of the numerator polynomial and the pn are the N - 1 (complex) roots of the denominator polynomial. Since the coefficients of the CCDE are assumed to be real, complex roots for both polynomials always appear in complex-conjugate pairs. A pair of first-order terms with complex-conjugate roots can be combined into a second-order term with real coefficients:

(1 - a z^{-1})(1 - a* z^{-1}) = 1 - 2 Re{a} z^{-1} + |a|^2 z^{-2}
(7.27)

As a consequence, the transfer function can be factored into the product of first- and second-order terms in which the coefficients are all strictly real; namely:

H(z) = b0 [ ∏_{n=1}^{Mr} (1 - zn z^{-1}) ∏_{n=1}^{Mc} (1 - 2 Re{zn} z^{-1} + |zn|^2 z^{-2}) ] / [ ∏_{n=1}^{Nr} (1 - pn z^{-1}) ∏_{n=1}^{Nc} (1 - 2 Re{pn} z^{-1} + |pn|^2 z^{-2}) ]
(7.28)

where Mr is the number of real zeros, Mc is the number of complex-conjugate zero pairs and Mr + 2Mc = M - 1 (and, equivalently, for the poles, Nr + 2Nc = N - 1). From this representation of the transfer function we can obtain an alternative structure for a filter; recall that if we apply a series of filters in sequence, the overall transfer function is the product of the single transfer functions. Working backwards, we can interpret (7.28) as the cascade of smaller sections. The resulting structure is called a cascade and it is particularly important for IIR filters, as we will see later.
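
A minimal Matlab sketch of this factorization, for an arbitrary third-order example (one real root and one complex-conjugate pair in both numerator and denominator):

% Minimal sketch: split an arbitrary 3rd-order transfer function into
% real-coefficient sections by pairing complex-conjugate roots, as in (7.28).
b = [1 0.5 0.3 0.2];  a = [1 -0.9 0.6 -0.1];
z = cplxpair(roots(b));              % zeros: conjugate pair first, real root last
p = cplxpair(roots(a));              % poles: same ordering
n1 = real(poly(z(1:2))); d1 = real(poly(p(1:2)));   % second-order section
n2 = real(poly(z(3)));   d2 = real(poly(p(3)));     % first-order section
conv(n1, n2)                         % equals b (up to the overall gain b(1))
conv(d1, d2)                         % equals a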

Parallel Forms. Another interesting rewrite of the transfer function is based on a partial fraction expansion of the type:

H(z) = Σ_n Dn z^{-n} + Σ_n An / (1 - pn z^{-1}) + Σ_n (Bn + Cn z^{-1}) / [ (1 - pn z^{-1})(1 - pn* z^{-1}) ]
(7.29)

where the multiplicity of the three types of terms, as well as the relative coefficients, depends (in a non-trivial way) on the original filter coefficients. This generates a parallel structure of filters whose outputs are summed together. The first branch corresponds to the first sum and it is an FIR filter; a further set of branches is associated with each term of the second sum, each one of them a first-order IIR; the last set of branches is a collection of second-order sections, one for each term of the third sum.
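
In Matlab, the partial fraction expansion (7.29) can be obtained with residuez; a minimal sketch, reusing the arbitrary third-order example above:

% Minimal sketch: partial fraction expansion (7.29) via residuez.
b = [1 0.5 0.3 0.2];  a = [1 -0.9 0.6 -0.1];
[r, p, k] = residuez(b, a);
% Each real pole p(i) gives a first-order branch r(i)/(1 - p(i) z^{-1});
% complex-conjugate pole pairs are combined into real second-order branches;
% the entries of k form the FIR branch.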

7.4.1 FIR Filter Structures

In an FIR transfer function all the denominator coefficients an other than a0 are zero; we have therefore:

H(z) = b0 + b1 z^{-1} + ⋅⋅⋅ + b_{M-1} z^{-(M-1)}

where, of course, the coefficients correspond to the nonzero values of the impulse response h[n], i.e. bn = h[n]. Using the constitutive elements outlined above, we can immediately draw a block diagram of an FIR filter as in Figure 7.22. In practice, however, additions are distributed as shown in Figure 7.23; this kind of implementation is called a transversal filter. Further ad-hoc optimizations for FIR structures can be obtained in the case of symmetric and antisymmetric linear phase filters; these are considered in the exercises.
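
A transversal filter is readily translated into code; here is a minimal Matlab sketch (the function name is arbitrary) which computes the output one sample at a time using a tapped delay line; b is assumed to be a row vector of taps and x the input signal:

% Minimal sketch of a transversal FIR filter.
function y = fir_transversal(b, x)
    M = length(b);
    d = zeros(M-1, 1);                   % tapped delay line (the filter state)
    y = zeros(size(x));
    for n = 1:length(x)
        y(n) = b(1)*x(n) + b(2:end)*d;   % multiply-accumulate along the taps
        d = [x(n); d(1:end-1)];          % shift the delay line by one sample
    end
end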



Figure 7.22: Direct FIR implementation.



Figure 7.23: Transversal FIR implementation.

7.4.2 IIR Filter Structures

For an IIR filter, all the an and bn in (5.46) are nonzero. One possible implementation based on the direct form of the transfer function is given in Figure 7.24. This implementation is called Direct Form I and it can immediately be seen that the C-code implementation in Section 5.7.2 realizes a Direct Form I algorithm. Here, for simplicity, we have assumed N = M but obviously we can set some an or bn to zero if this is not the case.

Since a cascade of LTI systems can be reordered without changing the overall transfer function, we can invert the order of computation to turn the Direct Form I structure into the structure shown in Figure 7.25 (shown for a second-order section); we can then combine the parallel delays together to obtain the structure in Figure 7.26. This implementation is called Direct Form II; its obvious advantage is the reduced number of required delay elements (and hence of memory storage).



Figure 7.24: Direct Form implementation of an IIR filter.



Figure 7.25: Direct form I with inverted order.



Figure 7.26: Direct Form II implementation of a second-order section.

The second order filter

H(z) = (1 + b1 z^{-1} + b2 z^{-2}) / (1 - a1 z^{-1} - a2 z^{-2})

which gives rise to the second order section displayed in Figure 7.26, is particularly important in the case of cascade realizations. Consider the factored form of H(z) as in (7.28): if we combine the complex conjugate poles and zeros, and group the real poles and zeros in pairs, we can create a modular structure composed of second order sections. For instance, Figure 7.27 represents a 4th order system. Odd order systems can be obtained by setting some of the an or bn to zero.
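
A minimal Matlab sketch of the Direct Form II second-order section (the function name is arbitrary); note that the same two delay variables serve both the recursive and the feedforward parts, as in Figure 7.26:

% Minimal sketch of a Direct Form II second-order section.
function y = df2_section(b1, b2, a1, a2, x)
    w1 = 0; w2 = 0;                      % the two delay elements
    y = zeros(size(x));
    for n = 1:length(x)
        w0   = x(n) + a1*w1 + a2*w2;     % denominator: 1 - a1 z^{-1} - a2 z^{-2}
        y(n) = w0   + b1*w1 + b2*w2;     % numerator:   1 + b1 z^{-1} + b2 z^{-2}
        w2 = w1;  w1 = w0;               % shift the shared delay line
    end
end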



Figure 7.27: 4th order IIR: cascade implementation.

7.4.3 Some Remarks on Numerical Stability

A very important issue with digital filters is their numerical behavior for a given implementation. Two key questions are: what happens when the filter coefficients are quantized to a finite precision, and what happens when the internal computations (products and sums) are themselves carried out in finite-precision arithmetic?

One important difference is that, in the first case, the system is at least guaranteed to be linear; in the second case, however, we can have non-linear effects such as overflows and limit cycles.

Precision and computational issues are very hard to analyze. Here, we will just note that the direct form implementation is more sensitive to