Our emphasis is on frequency-domain specification of the desired filter characteristics since this is most common in practice. Our objective is to give a general picture of the wide range of possibilities available for digital filter design while also giving sufficient details about some of the techniques.
The chapter concludes with some remarks on the choice between the two classes of digital filters. The main point of the discussion is that the choice is not always clear-cut and may depend on a multitude of factors that may be difficult to evaluate. However, it should be clear that digital filters are characterized by great flexibility in design and implementation; this flexibility makes it possible to implement rather sophisticated signal processing schemes that in many cases would be difficult, if not impossible, to implement by analog means.
Design of FIR Filters Using Windows :
The most straightforward approach to FIR filter design is to obtain a finite-length impulse response by truncating an infinite-duration impulse response sequence. If we suppose that Hd(e^jw) is an ideal desired frequency response, then

    Hd(e^jw) = Σ (n = -∞ to ∞) hd(n) e^(-jwn)    ............ (1)

where hd(n) is the corresponding impulse-response sequence, i.e.,

    hd(n) = (1/2π) ∫ (-π to π) Hd(e^jw) e^(jwn) dw    ............ (2)
In general, Hd(e^jw) for a frequency-selective filter may be piecewise constant, with discontinuities at the boundaries between bands. In such cases the sequence hd(n) is of infinite duration, and it must be truncated to obtain a finite-duration impulse response. Eqs. (1) and (2) can be thought of as a Fourier series representation of the periodic frequency response Hd(e^jw), with the sequence hd(n) playing the role of the "Fourier coefficients". The most familiar concept from this theory is the Gibbs phenomenon. In the following discussion we shall see how this nonuniform-convergence phenomenon manifests itself in the design of FIR filters.
If hd(n) has infinite duration, one way to obtain a finite-duration causal impulse response is simply to truncate hd(n), i.e., define

    h(n) = hd(n),  0 ≤ n ≤ N-1
         = 0,      otherwise
In general, we can represent h(n) as the product of the desired impulse response and a finite-duration "window" w(n); i.e.,

    h(n) = hd(n) w(n)

where, for the simple truncation above,

    w(n) = 1,  0 ≤ n ≤ N-1
         = 0,  otherwise
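The truncation above can be sketched in a few lines of NumPy. This is only an illustrative sketch (the function name and the choices N = 33 and wc = π/2 are ours, not from the text); the ideal response is shifted by (N-1)/2 samples before truncation so that the result is causal and symmetric:

```python
import numpy as np

def truncated_ideal_lowpass(N, wc):
    """hd(n) for an ideal lowpass filter with cutoff wc, delayed by
    (N-1)/2 samples and truncated to 0 <= n <= N-1 (rectangular window)."""
    m = np.arange(N) - (N - 1) / 2.0
    # sin(wc*m)/(pi*m), written via np.sinc so the m == 0 sample is handled cleanly
    return (wc / np.pi) * np.sinc(wc * m / np.pi)

h = truncated_ideal_lowpass(33, np.pi / 2)
```

Because hd(n) is symmetric about (N-1)/2, the truncated h(n) is symmetric as well, which is what later yields linear phase.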
Using the complex convolution theorem, we see that

    H(e^jw) = (1/2π) ∫ (-π to π) Hd(e^jθ) W(e^(j(w-θ))) dθ

That is, H(e^jw) is the periodic continuous convolution of the desired frequency response with the Fourier transform of the window. Thus the frequency response H(e^jw) will be a "smeared" version of the desired response Hd(e^jw). The choice of window is governed by the desire to have w(n) as short as possible in duration, so as to minimize computation in the implementation of the filter, while having W(e^jw) as narrow as possible in frequency, so as to faithfully reproduce the desired frequency response.
However, for the rectangular window the sidelobes are not insignificant; in fact, as N increases, the peak amplitudes of the main lobe and of the sidelobes grow in such a manner that the area under each lobe remains constant while the width of each lobe decreases with N. Consequently, as W(e^(j(w-θ))) slides past a discontinuity of Hd(e^jθ) with increasing w, the integral of W(e^(j(w-θ))) Hd(e^jθ) varies in an oscillatory manner as each lobe of W(e^(j(w-θ))) moves past the discontinuity.
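This oscillatory behavior is easy to observe numerically. The sketch below (our own check, with illustrative lengths N = 33 and N = 129) evaluates |H(e^jw)| for a truncated ideal lowpass filter on a dense grid; the peak overshoot near the discontinuity stays at roughly 9% regardless of N, which is the Gibbs phenomenon:

```python
import numpy as np

def truncated_lowpass(N, wc=np.pi / 2):
    # Ideal lowpass response, delayed by (N-1)/2 and truncated to length N
    m = np.arange(N) - (N - 1) / 2.0
    return (wc / np.pi) * np.sinc(wc * m / np.pi)

def peak_of_response(h, num=4096):
    # Maximum of |H(e^jw)| over a dense frequency grid on [0, pi]
    w = np.linspace(0, np.pi, num)
    H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
    return np.abs(H).max()

for N in (33, 129):
    print(N, peak_of_response(truncated_lowpass(N)))
```

Increasing N narrows the region over which the oscillation occurs but does not reduce the size of the overshoot.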
In the theory of Fourier series it is well known that this nonuniform convergence, the Gibbs phenomenon, can be moderated through the use of a less abrupt truncation of the Fourier series. By tapering the window smoothly to zero at each end, the height of the sidelobes can be diminished; however, this is achieved at the expense of a wider main lobe and thus a wider transition at the discontinuity. Some commonly used windows are shown below:
Rectangular : w(n) = 1,  0 ≤ n ≤ N-1

Bartlett :    w(n) = 2n/(N-1),      0 ≤ n ≤ (N-1)/2
              w(n) = 2 - 2n/(N-1),  (N-1)/2 ≤ n ≤ N-1

Hanning :     w(n) = (1/2)[1 - cos(2πn/(N-1))],  0 ≤ n ≤ N-1

Hamming :     w(n) = 0.54 - 0.46 cos(2πn/(N-1)),  0 ≤ n ≤ N-1

Blackman :    w(n) = 0.42 - 0.5 cos(2πn/(N-1)) + 0.08 cos(4πn/(N-1)),  0 ≤ n ≤ N-1
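The five windows listed can be generated directly from these formulas. Below is a small NumPy helper (the function name is ours; for the Bartlett, Hanning, Hamming, and Blackman cases the formulas coincide with NumPy's built-in window routines, which can serve as a cross-check):

```python
import numpy as np

def window(name, N):
    """Length-N window w(n), 0 <= n <= N-1, from the formulas above."""
    n = np.arange(N)
    if name == "rectangular":
        return np.ones(N)
    if name == "bartlett":      # triangular: rises to 1 at (N-1)/2, back to 0
        return np.where(n <= (N - 1) / 2, 2 * n / (N - 1), 2 - 2 * n / (N - 1))
    if name == "hanning":
        return 0.5 * (1 - np.cos(2 * np.pi * n / (N - 1)))
    if name == "hamming":
        return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))
    if name == "blackman":
        return (0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
                + 0.08 * np.cos(4 * np.pi * n / (N - 1)))
    raise ValueError(f"unknown window: {name}")
```

A windowed design is then simply h = hd * window("hamming", N), with hd the shifted ideal impulse response.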
Since these windows are all symmetric, the phase is linear. The rectangular window has the narrowest main lobe and thus, for a given length N, should yield the sharpest transition of H(e^jw) at a discontinuity of Hd(e^jw). However, the first sidelobe is only about 13 dB below the main peak, resulting in oscillations of H(e^jw) of considerable size at a discontinuity of Hd(e^jw). By tapering the window smoothly to zero, the sidelobes are greatly reduced; however, the price paid is a much wider main lobe and thus wider transitions at discontinuities of Hd(e^jw).
Window        Peak sidelobe amplitude (dB)   Transition width of main lobe   Minimum stopband attenuation (dB)
Rectangular   -13                            4π/N                            -21
Bartlett      -25                            8π/N                            -25
Hanning       -31                            8π/N                            -44
Hamming       -41                            8π/N                            -53
Blackman      -57                            12π/N                           -74
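The sidelobe figures in the table can be spot-checked numerically by examining the magnitude of a heavily zero-padded FFT of each window. This is our own rough measurement procedure, not the text's; for finite N the measured numbers only approximate the asymptotic table entries:

```python
import numpy as np

def peak_sidelobe_db(w):
    """Peak sidelobe level in dB relative to the main-lobe peak,
    estimated from a heavily zero-padded FFT of the window w(n)."""
    W = np.abs(np.fft.fft(w, 64 * len(w)))
    W = W / W.max()
    i = 1
    while W[i + 1] < W[i]:      # walk down the main lobe to its first null
        i += 1
    return 20 * np.log10(W[i:len(W) // 2].max())

N = 51
n = np.arange(N)
rect = np.ones(N)
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))
print(peak_sidelobe_db(rect))     # near the table's -13 dB
print(peak_sidelobe_db(hamming))  # much lower (the table lists -41 dB)
```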
A limitation of the procedure is that it is somewhat difficult to determine in advance the type of window and the duration N required to meet a prescribed frequency-response specification; however, a very simple computer program can make such a determination by a trial-and-error procedure. Thus, design of digital filters using windows is often a convenient and satisfactory approach.
COMPUTER-AIDED DESIGN OF FIR FILTERS :
There are several iterative design procedures for FIR filters which, in general, yield better filters than the window method, at the expense of greater complexity in the design procedure.
FREQUENCY - SAMPLING DESIGN :
First proposed by Gold and Jordan, the frequency-sampling method rests on the fact that a finite-duration sequence can be represented by its discrete Fourier transform. Thus an FIR filter has a representation in terms of the "frequency samples"

    H(k) = H(z) | z = e^(j(2π/N)k) ,   k = 0, 1, ..., N-1

and H(z) can be represented in terms of the samples H(k) by the expression

    H(z) = ((1 - z^-N)/N) Σ (k = 0 to N-1) H(k) / (1 - e^(j(2π/N)k) z^-1)
If we let z = e^jw, then the frequency response has the representation

    H(e^jw) = ((1 - e^(-jwN))/N) Σ (k = 0 to N-1) H(k) / (1 - e^(j(2π/N)k) e^(-jw))

            = e^(-jw(N-1)/2) Σ (k = 0 to N-1) H(k) e^(jπk(1 - 1/N)) · sin[N(w - (2π/N)k)/2] / (N sin[(w - (2π/N)k)/2])
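For N odd and a linear-phase choice of the sample phases, the interpolation above reduces to a simple real formula for h(n), which the following sketch implements (our own compact form; the sample phases e^(-jπk(N-1)/N) are chosen so that the e^(-jw(N-1)/2) linear-phase factor appears):

```python
import numpy as np

def freq_sampling_fir(A):
    """Linear-phase FIR filter from frequency samples.
    A[k] = desired |H(e^{j 2 pi k / N})| for k = 0 .. (N-1)/2, with
    N = 2*len(A) - 1 odd; the remaining samples follow from conjugate symmetry."""
    A = np.asarray(A, dtype=float)
    N = 2 * len(A) - 1
    n = np.arange(N)[:, None]
    k = np.arange(1, len(A))
    # h(n) = (1/N) [A(0) + 2 sum_k A(k) cos(2 pi k (n - (N-1)/2) / N)]
    return (A[0] + 2 * np.cos(2 * np.pi * (n - (N - 1) / 2) * k / N) @ A[1:]) / N

def freq_resp(h, w):
    # H(e^jw) evaluated at the frequencies in w
    return np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h

# Ideal lowpass, N = 33: ones at k = 0..8, zeros at k = 9..16 (cutoff near pi/2)
h = freq_sampling_fir([1.0] * 9 + [0.0] * 8)
wk = 2 * np.pi * np.arange(17) / 33
```

By construction |H(e^(j2πk/33))| reproduces the specified samples exactly; between the samples the response is free to ripple, which is exactly the behavior discussed next.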
Consider the approximation of an ideal lowpass filter with cutoff frequency wc = π/2. The figure shows the desired frequency response Hd(e^jw) and the samples H(k) for N = 33. As can be seen, the magnitude of the frequency response is specified at multiples of 2π/33 radians, with the cutoff frequency wc = π/2 lying between w = 16π/33 and 18π/33. If we evaluate the frequency response corresponding to such a filter, we obtain the rather disappointing curve shown in the figure, which plots 20 log10 |H(e^jw)|, with the fixed sample points indicated by the heavy dots in the passband and with the zero samples in the stopband corresponding to points of infinite attenuation. We note that there is a smooth transition between 16π/33 and 18π/33; however, the minimum stopband attenuation is somewhat less than 20 dB. This filter would be unsatisfactory for most purposes. One way to improve the stopband attenuation is to widen the transition band. This can easily be done in this case by allowing a sample at the boundary between passband and stopband to take on a value different from either 1 or 0, as depicted in the figure. The transition band is now about twice as wide, but the minimum stopband attenuation is considerably greater.
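This transition-sample experiment is easy to reproduce. In the sketch below (our own code; the transition value 0.39 is an illustrative choice, not an optimized table value) the stopband is measured on a dense grid starting just beyond the widened transition, so both designs are compared over the same band:

```python
import numpy as np

def fs_design(A):
    # Linear-phase FIR from magnitude samples A[k], k = 0..(N-1)/2, N = 2*len(A)-1
    A = np.asarray(A, dtype=float)
    N = 2 * len(A) - 1
    n = np.arange(N)[:, None]
    k = np.arange(1, len(A))
    return (A[0] + 2 * np.cos(2 * np.pi * (n - (N - 1) / 2) * k / N) @ A[1:]) / N

def stopband_peak_db(h, ws, num=2000):
    # Peak of 20 log10 |H(e^jw)| over [ws, pi]
    w = np.linspace(ws, np.pi, num)
    H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
    return 20 * np.log10(np.abs(H).max())

abrupt = [1.0] * 9 + [0.0] * 8              # hard 1-to-0 step at the band edge
softened = [1.0] * 9 + [0.39] + [0.0] * 7   # one transition sample at k = 9
ws = 2 * np.pi * 10.5 / 33                  # stopband taken beyond the transition
print(stopband_peak_db(fs_design(abrupt), ws))
print(stopband_peak_db(fs_design(softened), ws))  # substantially lower
```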
H(e^jw) is a linear function of the parameters H(k). Thus linear optimization techniques can be used to vary these parameters so as to give the best approximation to the desired filter. A simple gradient-search technique can be used to choose the value of the transition sample H1 such that the maximum error in either the passband or the stopband is minimized. If further improvement is required, we can broaden the transition region further by allowing a second sample to differ from 1 or 0. If N is held fixed, this results in a transition region twice as wide, but greater attenuation can be achieved; designs with up to four variable transition samples have been obtained in this way. Alternatively, if we double N, the transition width remains the same while two transition samples are allowed to vary.
Frequency-sampling designs are particularly attractive for narrow-band frequency-selective filters, where only a few of the samples of the frequency response are nonzero. In such cases a frequency-sampling realization may be considerably more efficient than either direct convolution or fast convolution using the DFT. In general, even if more than a few samples are nonzero, the frequency-sampling design method yields excellent results. However, the method lacks flexibility in specifying the passband and stopband cutoff frequencies, since the placement of the one, zero, and transition samples is constrained to integer multiples of 2π/N. By making N large, samples can be placed arbitrarily close to any given frequency; however, this is an inefficient approach.
For this reason, particularly if the filter is not to be realized using the frequency-sampling structure, other algorithmic design techniques with more attractive features have been developed for general frequency-selective filter design. As noted, in the case of frequency-selective filters designed by the frequency-sampling technique, there is an undesirable limitation on the choice of cutoff frequencies. Furthermore, the approximation error tends to be largest around the transition region and smaller in regions that are remote from the region in which the transition samples are located. It seems intuitively reasonable that if the approximation error were spread out uniformly in frequency, a given design specification could be met with a lower-order filter than if the approximation just meets the specification at one frequency and far exceeds it at others.
Equiripple Approximation for FIR Filters :
Suppose that we wish to design a lowpass filter that approximates 1 in the band 0 ≤ |w| ≤ wp with maximum error δ1, and approximates zero in the band ws ≤ |w| ≤ π with maximum error δ2. Design algorithms have been developed in which some of these parameters are fixed and an iterative procedure is used to obtain optimum adjustments of the remaining parameters. Two rather distinct approaches have been developed. Herrmann and Schuessler, and later Hofstetter, developed procedures in which M, δ1, and δ2 are fixed and wp and ws are variable. Parks and McClellan, and Rabiner, developed procedures in which M, wp, and ws are fixed and δ1 and δ2 are variable.
HERRMANN AND SCHUESSLER APPROACH :
The number of maxima and minima in the frequency range 0 ≤ w ≤ π is an important parameter of equiripple approximations. To examine the dependence of this parameter on the length of the impulse response, we use the fact that cos(wk) can be expressed as a sum of powers of cos(w) to write

    H(e^jw) = Σ (k = 0 to M) a_k (cos w)^k

where the a_k are constants related to the values of the unit-sample response. There can be at most M-1 local maxima and minima in the open interval 0 < w < π. In addition, H(e^jw) will always have either a maximum or a minimum at w = 0 and at w = π. Thus there will be at most M+1 local extrema in the closed interval 0 ≤ w ≤ π.
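This extrema count is easy to verify numerically. The check below (our own, with arbitrary random coefficients a_k) evaluates H(e^jw) on a fine grid and counts interior sign changes of the slope; by the argument above the count can never exceed M-1:

```python
import numpy as np

M = 6
rng = np.random.default_rng(1)
a = rng.standard_normal(M + 1)          # arbitrary coefficients a_0 .. a_M

w = np.linspace(0.0, np.pi, 20001)
H = np.polyval(a[::-1], np.cos(w))      # sum_k a_k (cos w)^k

dH = np.diff(H)
# A sign change of the slope marks a local maximum or minimum
interior_extrema = int(np.sum(np.sign(dH[1:]) * np.sign(dH[:-1]) < 0))
print(interior_extrema)                 # never more than M - 1 = 5
```

The bound holds because H, viewed as a polynomial in x = cos w, has degree M, so its derivative has at most M-1 zeros in (-1, 1); the endpoints w = 0 and w = π always contribute additional extrema.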
Using this fact, Herrmann and Schuessler showed how to write a set of equations that guarantees the equiripple behavior. The parameters M, δ1, and δ2 are fixed, and wp and ws are left free to vary, being defined by the equations

    H(e^jwp) = 1 - δ1
    H(e^jws) = δ2

For example, we can write

    H(e^j0)  = 1 + δ1        H(e^jπ)   = δ2
    H(e^jw1) = 1 - δ1        H'(e^jw1) = 0
    H(e^jw2) = 1 + δ1        H'(e^jw2) = 0
    H(e^jw3) = -δ2           H'(e^jw3) = 0
    H(e^jw4) = δ2            H'(e^jw4) = 0
    H(e^jw5) = -δ2           H'(e^jw5) = 0
In this example there are three extrema in the passband and four extrema in the stopband. Thus we must have

    M + 1 = 4 + 3 = 7

i.e., M = 6, or N = 13. There are M+1 = 7 unknown coefficients and five unknown frequencies w1, w2, ..., w5 at which the extrema occur, making a total of 12 unknowns to be determined as the solution of the above set of 12 equations. In general we can have Np extrema in the passband and Ns extrema in the stopband, where

    Np + Ns = M + 1

and we can write 2M equations relating the M+1 filter coefficients and the M-1 frequencies at which extrema occur. These equations are unfortunately nonlinear and must be solved by an iterative procedure. This approach has been satisfactory only for rather low values of M (on the order of 30). For given values of M, δ1, and δ2, there are only M different equiripple filters, corresponding to the fact that only M different values of Np (or Ns) can be chosen. That is, for fixed values of M, δ1, and δ2, only M different choices of wp are available.
HOFSTETTER, OPPENHEIM & SIEGEL :
Instead of writing a set of nonlinear equations, Hofstetter et al. used an iterative technique for producing a trigonometric polynomial that has extrema of the desired values. The procedure begins by choosing Np and Ns and then estimating the frequencies at which the extrema occur. Standard Lagrange interpolation methods are then used to compute a polynomial that takes on the prescribed extremal values (1 ± δ1 in the passband and ±δ2 in the stopband) at the estimated frequencies. If the maxima and minima of this polynomial have the prescribed values, the procedure is terminated; otherwise a new polynomial is computed, with the new estimates of the extremum frequencies being the extrema of the previous polynomial. However, there remains the problem of lack of precise control over the passband and stopband edges. To formalize the approximation problem in this case, let us define an approximation error function
    E(w) = W(w) [Hd(e^jw) - H(e^jw)]
where E(w) is evaluated over the passband and stopbands of the desired
filter, and W(w) is a weighting function.
For example, suppose that we wish to obtain an approximation as in the figure, where M, wp, and ws are fixed. In this case,

    Hd(e^jw) = 1,  0 ≤ w ≤ wp
             = 0,  ws ≤ w ≤ π

    W(w) = 1/K,  0 ≤ w ≤ wp
         = 1,    ws ≤ w ≤ π

This choice of W(w) specifies a relationship between the relative sizes of the passband and stopband approximation errors; that is, K should be equal to the desired ratio δ1/δ2.
PARKS & McCLELLAN FORMULATION :
Parks and McClellan reformulated a theorem of approximation theory in terms of the above filter-design problem, thereby obtaining the following theorem.
ALTERNATION THEOREM : Let F be any closed subset of the closed interval 0 ≤ w ≤ π. In order that H(e^jw) be the unique best approximation on F to Hd(e^jw), it is necessary and sufficient that the error function E(w) exhibit on F at least M+2 "alternations":

    E(wi) = -E(wi-1) = ±||E|| = max over w in F of |E(w)|

with w0 < w1 < w2 < ... < w(M+1) and each wi contained in F.
H(e^jw) can have at most M-1 local maxima and minima in the open interval 0 < w < π, and likewise in the combined open intervals 0 < w < wp and ws < w < π. Furthermore, by definition of the passband and the stopband, H(e^jw) is constrained such that

    H(e^jwp) = 1 - δ1
    H(e^jws) = δ2
Also, we recall that H(e^jw) will always have a local maximum or minimum at w = 0 and at w = π; thus there can be at most M+3 frequencies at which the error curve attains its maximum. Therefore, the unique best approximation to the desired lowpass response has either M+2 or M+3 alternations of the error function. Four different possible frequency-response curves for a value of M = 7 are shown in the figure: the case where the maximum error is attained at both w = 0 and w = π, so that there are M+3 alternations; the cases where the maximum error is attained only at w = π or only at w = 0, in which there are only M+2 alternations; and the case where there are only M+2 alternations even though the maximum error is attained at both w = 0 and w = π. According to the alternation theorem, all of these filters are optimum approximations for their prescribed passband and stopband cutoff frequencies. Filters with M+3 alternations were called "extra ripple" filters by Parks and McClellan, this terminology being motivated by the fact that such filters have more than the minimum number (M+2) of alternations of the error required for optimality.

Parks and McClellan also presented an iterative procedure for obtaining optimum filters. This procedure is similar to the Hofstetter algorithm except that in this case K, M, wp, and ws are the fixed parameters and δ2 is allowed to vary. The procedure begins by estimating the M+2 frequencies {wi}, i = 0, 1, ..., M+1, at which the error function E(w) attains its maximum value. These frequencies must lie in the regions 0 ≤ w ≤ wp and ws ≤ w ≤ π. Assuming that these estimated frequencies are the desired error-extremum frequencies, we can compute the magnitude of the peak error, which we call ρ, from the resulting set of equations. This set of equations can be solved for all the M+2 unknowns {h(n)} and ρ; however, it is more efficient to solve only for ρ. Then a trigonometric polynomial is determined that takes on the values 1 ± Kρ at the frequencies {wi} in the passband (0 ≤ wi ≤ wp) and ±ρ at those in the stopband (ws ≤ wi ≤ π). As in the Hofstetter algorithm, the new estimates of the extremum frequencies are taken to be the peaks of this interpolating polynomial, found by searching over a finely divided set of points in the passband and stopband. In this case wp and ws are again known to be extremum frequencies of the error function. In addition, there are M-1 local minima and maxima in the open intervals 0 < w < wp and ws < w < π. The remaining extremum can be at either w = 0 or w = π; if there is a peak of the error function at both 0 and π, the frequency at which the greater error occurs is taken as the new estimate of extremum frequency. The cycle (computation of ρ, fitting of a polynomial to the assumed error peaks, and location of the actual error peaks) is repeated until ρ does not change from its previous value; this limiting value of ρ is then the desired minimum of δ2.
The resulting filter has the minimum δ2 (with δ1 = Kδ2) for the prescribed transition band (ws - wp). If given values of δ1 and δ2 are desired, the above algorithm can be employed iteratively to determine a filter with the prescribed values of δ1 and δ2, by fixing wp and varying ws until the desired δ1 and δ2 are obtained. A summary of results of this type is given in the figure, where Δw = ws - wp is plotted against wp for fixed values of M, δ1, and δ2 (K = 1). This curve shows that as wp increases, the transition width Δw attains local minima. Thus the filters obtained by the Herrmann-Schuessler and Hofstetter approaches are special cases of the approximations obtained by the Parks-McClellan algorithm.
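The Parks-McClellan algorithm is widely available in standard libraries; SciPy's signal.remez is one implementation. The sketch below (the band edges and length are illustrative choices of ours, assuming SciPy is available) designs a length-33 equiripple lowpass filter and inspects its response:

```python
import numpy as np
from scipy.signal import freqz, remez

# Passband [0, 0.2], stopband [0.3, 0.5] in cycles/sample (sampling rate = 1)
h = remez(33, [0.0, 0.2, 0.3, 0.5], [1.0, 0.0])

w, H = freqz(h, worN=2048)          # w in rad/sample over [0, pi)
mag = np.abs(H)
passband = mag[w <= 2 * np.pi * 0.2]
stopband = mag[w >= 2 * np.pi * 0.3]
print(passband.min(), passband.max())   # ripples tightly around 1
print(20 * np.log10(stopband.max()))    # minimum stopband attenuation, in dB
```

The resulting h is symmetric, so the filter has exactly linear phase; the equal-weight call above makes δ1 = δ2, and a per-band weight argument plays the role of the weighting W(w) discussed earlier.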
Rabiner has discussed a formulation of the equiripple design problem that is equivalent to the Parks-McClellan formulation but is specially structured to be amenable to solution by linear programming techniques. The essence of this approach is to note that H(e^jw) can be expressed as a linear combination of cosine functions. The linear programming solution is computationally slower but is more flexible in that it allows time-domain constraints as well as frequency-domain constraints.
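The linear-programming formulation can be sketched directly: on a dense grid, minimize δ subject to W(w)|Hd(w) - Σ a_k cos(kw)| ≤ δ, which is linear in the unknowns (a_0, ..., a_M, δ). This is our own illustrative setup (M, the band edges, and K are arbitrary), using SciPy's general-purpose linprog rather than any filter-specific routine:

```python
import numpy as np
from scipy.optimize import linprog

M, K = 10, 1.0
wp, ws = 0.4 * np.pi, 0.55 * np.pi
w = np.concatenate([np.linspace(0, wp, 120), np.linspace(ws, np.pi, 120)])
Hd = np.where(w <= wp, 1.0, 0.0)
W = np.where(w <= wp, 1.0 / K, 1.0)

C = np.cos(np.outer(w, np.arange(M + 1)))       # C[i, k] = cos(k * w_i)
ones = np.ones((len(w), 1))
# Variables x = [a_0 .. a_M, delta]; encode +/- W*(Hd - C a) <= delta
A_ub = np.block([[-W[:, None] * C, -ones],
                 [ W[:, None] * C, -ones]])
b_ub = np.concatenate([-W * Hd, W * Hd])
c = np.zeros(M + 2)
c[-1] = 1.0                                     # minimize delta
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 1) + [(0, None)])
delta = res.x[-1]
print(res.success, delta)
```

Because the constraints are linear in the a_k, time-domain conditions (for example, bounds on step-response overshoot) could be appended as further constraint rows, which is the flexibility noted above.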
COMPARISON OF IIR AND FIR DIGITAL FILTERS :
What type of system is best, IIR or FIR? Why give so many design methods? Which method yields the best results? The answer to these questions is that no single type of filter and no single design method is best for all circumstances.
The choice between an FIR and an IIR filter depends upon the relative weight that one attaches to the advantages and disadvantages of each type of filter. IIR filters, for example, have the advantage that a variety of frequency-selective filters can be designed using closed-form design formulas. That is, once the problem has been specified in terms appropriate for a given kind of filter (e.g., Butterworth, Chebyshev, or elliptic), the coefficients (or poles and zeros) of the desired digital filter are obtained by straightforward substitution into a set of design equations. This kind of simplicity in the design procedure is attractive if only a few filters are to be designed or if limited computational facilities are available.
In the case of FIR filters, closed-form design equations do not exist. Although the window method can be applied in a rather straightforward manner, some iteration may be necessary to meet a prescribed specification. Most of the other FIR design methods are iterative procedures requiring rather powerful computational facilities for their implementation. In contrast, it is often possible to design frequency-selective IIR digital filters using only a hand calculator and tables of analog filter design parameters. However, the price paid for this simplicity of the design process is a loss of flexibility in the attainable filter response. The closed-form IIR designs are primarily limited to lowpass, bandpass, and highpass filters. Furthermore, these designs generally disregard the phase response of the filter. Thus, although we might obtain an elliptic lowpass filter with excellent amplitude-response characteristics by a relatively simple computational procedure, its phase response will be very nonlinear (especially at the band edge).
In contrast, FIR filters can have precisely linear phase. Also, the window method and most of the algorithmic methods afford the possibility of approximating rather arbitrary frequency-response characteristics with little more difficulty than is encountered in the design of lowpass filters. In addition, the design problem for FIR filters is much more under control than the IIR design problem, because there is an optimality theorem for FIR filters that is meaningful in a wide range of practical situations. Finally, there are questions of economics in implementing a digital filter. Concerns of this type are usually measured in terms of hardware complexity or computational speed, both of which are more or less directly related to the order of the filter required to meet a given specification. If we put aside phase considerations, it is generally true that a given amplitude-response specification will be met most efficiently with an IIR filter. However, in many cases the linear phase available with an FIR filter may be well worth the extra cost, and in some cases one need not sacrifice efficiency in choosing an FIR filter.
Thus a multitude of tradeoffs must be considered in designing a digital filter. Clearly, the final choice will most often be made on the basis of engineering judgment on such questions as the formulation of specifications, the method of implementation, and the computational facilities available for carrying out the design.