Decimating.

A wideband SDR (software defined radio) may look for narrowband signals in a digital data stream of very high bandwidth. Conventionally (in the time domain) the frequency of the desired signal is shifted to (near) zero, where it is obtained as a complex valued signal (I and Q) which can represent signals with positive as well as negative frequencies (above and below the frequency that would give DC voltages for I and Q). As an example, consider a signal sampled at 96 kHz. It could contain all frequencies from -48 kHz to +48 kHz, but in real life it would contain less because the source from which it was obtained needs a filter that suppresses signals above 48 kHz. In order to decimate by 4 to 24 kHz we would first have to perform complex multiplication with a sine and a cosine of a frequency near our desired frequency to move it to near zero. Then we would have to run the signal through a digital low pass filter that attenuates frequencies above (24+X)/2 kHz sufficiently while leaving frequencies below (24-X)/2 kHz unaffected. In case X=8, the output signal would be sampled 24/16 = 150% faster than necessary, so it would be oversampled by 50%. Once the filter is applied there is no need to keep all the samples. One can simply use every fourth sample because there is no new information in the three one would skip. (In real life only every fourth sample would actually be computed by the filter algorithm.)

Another possibility is to do the decimation in the frequency domain. Assume the same signal sampled at 96 kHz. In case we need the power spectrum anyway, we can just pick one quarter of the frequency bins around the desired signal and perform the back transformation with a transform that is four times smaller and only contains the desired frequency range.

Decimation involves various compromises between processing delay, oversampling and dynamic range. For decimating in the time domain, all compromises lie in the design of the low pass filter. It is all well known.
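The time-domain procedure above (mix to near zero, lowpass filter, keep every fourth sample) can be sketched as follows. This is a minimal illustration, not Linrad's actual code: the test-signal frequency, filter length and cutoff are assumptions chosen only to show the three steps.

```python
import numpy as np

fs = 96_000                       # input sampling rate (Hz)
n = 4096
t = np.arange(n) / fs
f_desired = 10_000                # assumed frequency of the wanted signal (Hz)

x = np.exp(2j * np.pi * f_desired * t)        # complex (I/Q) test input

# 1) complex mix (multiplication by cosine and sine): shift f_desired to zero
mixed = x * np.exp(-2j * np.pi * f_desired * t)

# 2) digital lowpass FIR (windowed sinc); cutoff placed between the
#    passband and stopband edges discussed above (an assumption)
taps = 127
k = np.arange(taps) - (taps - 1) / 2
cutoff = 12_000                   # Hz
h = np.sinc(2 * cutoff / fs * k) * np.hamming(taps)
h /= h.sum()                      # unity gain at DC
filtered = np.convolve(mixed, h, mode="same")

# 3) keep every fourth sample: 96 kHz -> 24 kHz
decimated = filtered[::4]
```

In a real implementation only every fourth output of the convolution would be computed, as the text notes; computing them all here just keeps the sketch short.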
When decimating in the frequency domain one is actually doing the same thing, but since it is done at the other side of a linear transformation (the FFT) it is not at all obvious.

The FFT and the Top Hat filter.

The FFT of size N gives the same result as a bank of N (complex) Top Hat filters where each filter is preceded by a frequency mixer that shifts the frequency by k*F, where k goes from 1-N/2 to N/2 in steps of 1 and F is the fundamental, the frequency that has exactly a single period in the time spanned by N samples. The Top Hat filter is the simplest FIR filter. All filter coefficients are 1.0. It is simply an integrator, an "integrate and dump" filter. The bare (unwindowed) FFT has of course the same frequency response as the Top Hat filter. To get high dynamic range spectra one has to use windowing. There is some information here: Sliding FFT and DSP Filtering. One would typically use simple window functions with large FFTs in SDR receivers because the desired response would be as narrow as possible, which calls for a window that is as wide as possible. That means that the window function would always have the same sign, so different windows just differ in how flat they are at the center and in what way they fall off to zero towards the ends.

Frequency domain decimation filters.

To understand the problem with picking a subset of the FFT bins from a large FFT in order to resample and reject aliases, consider a single very strong signal at some distance from where we look for a weak signal. The very strong signal inevitably has a width that spans several FFT bins as a consequence of the requirement of a window function. In case ALL of those bins are included in the subset, there is no problem; the very strong signal will be present unaffected in the resampled output after reverse transforming the subset. In case all the bins that contain the very strong signal fall outside the subset, the very strong signal becomes completely eliminated.
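The equivalence between one FFT bin and a frequency mixer followed by a Top Hat filter, as described above, is easy to verify numerically. A minimal check (bin index and FFT size are arbitrary choices):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

k = 5                                                 # any bin index
mixer = np.exp(-2j * np.pi * k * np.arange(N) / N)    # shift frequency k*F to zero
bin_k = np.sum(x * mixer)     # Top Hat filter: all coefficients 1.0, integrate and dump

print(np.allclose(bin_k, np.fft.fft(x)[k]))           # True
```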
In case the very strong signal falls partly inside and partly outside the subset, back transformation would produce segments of something similar to the very strong signal, but those segments would not have quite the correct amplitude and phase. The error would vary with time and subsequent transforms would not fit to each other. As a result, transients would occur periodically and they would constitute spurs that limit the dynamic range. In the above example, decimating from 96 kHz to 24 kHz, one would apply a window function to pick the bins near the center with their full amplitude while picking bins further out with gradually falling amplitudes. If the rate at which the window falls off is small relative to the bin bandwidth of the full FFT, all the bins corresponding to the very strong signal would be treated nearly equally and then the errors would be small; the signal would just become attenuated by the nearly constant amplitude over its width in the window function.

There is some more on the decimation problem here: Linrad: New Possibilities for the Communications Experimenter, Part 3 (396080 bytes, PDF file). Linux and the Linrad software package. QEX May/June 2003, p 36-43.

Decimation filters in Linrad.

Up until Linrad-02.55 the decimation filter was a flat center region over 12.5% of the range with a parabolic falloff to reach zero at the full width. Figure 1 shows how the parabolic window behaves for a strong signal that sweeps across a desired weak signal. The weak signal is on/off keyed at a level 100 dB below the very strong signal. The input was generated by the internal signal generator in Linrad as an ideal 16 bit signal.
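The windowed bin selection described above can be sketched as follows: pick the quarter of the bins around the signal, weight them with a window that is flat in the middle and falls gradually to zero, and back transform with a four times smaller FFT. The window here follows the flat-12.5%-plus-parabolic-falloff description; the exact formula and the bin placement around zero frequency are assumptions for illustration.

```python
import numpy as np

N = 1024
M = N // 4                              # decimate by 4
x = np.random.default_rng(1).standard_normal(N) + 0j
X = np.fft.fftshift(np.fft.fft(x))      # spectrum with zero frequency in the middle

sel = X[N//2 - M//2 : N//2 + M//2]      # quarter of the bins around the signal

# window: flat over the central 12.5%, parabolic falloff to zero at full width
u = np.abs(np.linspace(-1.0, 1.0, M, endpoint=False))
w = np.where(u < 0.125, 1.0, 1.0 - ((u - 0.125) / 0.875) ** 2)

y = np.fft.ifft(np.fft.ifftshift(sel * w))   # resampled output at fs/4
```

A strong signal whose bins all receive nearly the same weight passes through merely attenuated; one that straddles the falloff region gets bins weighted unequally, which is what produces the periodic transients discussed above.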
Fig 1. The parabolic decimation window used in Linrad up to version Linrad-02.55. 
Fig 2. The Gaussian error function used for the decimation window in Linrad-02.56 and later. 
Figures 1 and 2 show a poor dynamic range.
That is on purpose in order to make the differences clearly visible.
The soundcard sampling speed is 8 kHz in complex format (I and Q) for
a total bandwidth of 8 kHz.
The main spectrum and waterfall display about 50% of the spectrum.
The strong signal sweeps from the frequency of the desired signal
downwards by about 2.2 kHz.
The parabolic window has a constant second derivative and produces spurs of constant amplitude over its entire falloff. The dynamic range is limited to about 85 dB when the strong signal is anywhere between 250 and 1000 Hz away from the desired signal. The window using the Gaussian error function erf(x) has a longer flat region and falls faster towards zero once it begins. The spurs produced by the error function have a sharp maximum when the interfering signal is about 700 Hz away from the desired signal, where the dynamic range drops to about 75 dB. There are two ways to improve the dynamic range. One can use a longer transform at 8 kHz to make the signal narrower in the main spectrum. Figure 2 uses an fft1 size of 256 with transforms that span a time of 32 ms. The window is sin^3 with an associated bandwidth of 75 Hz. Figure 3 shows what happens when the fft1 size is doubled for a bin bandwidth of 37 Hz. 
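The qualitative difference between the two windows can be sketched numerically. The parabolic shape follows the description above; the steepness and center of the erf-based falloff are assumed values chosen only to show the behavior discussed (longer flat region, then a faster fall), not Linrad's actual parameters.

```python
import math
import numpy as np

M = 256
u = np.abs(np.linspace(-1.0, 1.0, M))   # normalized position within the bin subset

# parabolic: flat central 12.5%, constant second derivative in the falloff
parabolic = np.where(u < 0.125, 1.0, 1.0 - ((u - 0.125) / 0.875) ** 2)

# erf-based: stays flat longer, then falls faster (center 0.55 and
# steepness 6.0 are assumptions for illustration)
erf_win = np.array([0.5 * (1.0 - math.erf(6.0 * (v - 0.55))) for v in u])
```

Plotting the two curves shows the trade-off: near the center the erf window treats neighboring bins more equally than the parabola, but once its falloff begins it reaches zero much faster.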
Fig 3. The fft1 size is here twice as large compared to figure 2. Spurs drop by about 20 dB. 
The doubled size of fft1 improves the dynamic range by 20 dB but there
is an important disadvantage and that is the processing delay caused
by fft1 which has increased to 64 ms.
Already 32 ms is a bit too much for use in a transceiver with QSK.
The other way to improve the dynamic range is to reduce the second derivative of the resampling window. Figures 1 to 3 use the parameter "First mixer bandwidth reduction in powers of two" with a value of 2 for a timf3 sampling rate of 2 kHz. Figure 4 shows the effect of doubling the timf3 sampling rate with the same fft1 parameters as in figure 2. 
Fig 4. The timf3 sampling rate is here twice as large compared to figure 2. Spurs drop by about 20 dB. 
The dynamic range is a function of the fft1 bin bandwidth in relation
to the width of the sampling window falloff region which in turn
is proportional to the timf3 sampling rate.
Not unexpectedly, the dynamic range improvement over figure 2 is the same in
figures 3 and 4.
In order to have a short time delay in fft1 one would like to set the fft1 bandwidth to e.g. 300 Hz for high speed CW with QSK. The associated fft1 time delay would then be 8 ms. With a timf3 bandwidth of 8 kHz one would expect something similar to figure 2. Figure 5 shows a signal with a sample rate of 128 kHz and with such settings. The dynamic range is indeed similar to what can be found in figure 2. 
Fig 5. Here the fft1 bin bandwidth is 300Hz with a timf3 sampling speed of 8 kHz. 
Figures 6 and 7 show the same fft1 parameters as figure 5,
but they have 16 and 32 kHz for the timf3 sampling rate respectively.

Fig 6. timf3 speed is 16 kHz. Otherwise as figure 5. 
Fig 7. timf3 speed is 32 kHz. Otherwise as figure 5. 
The keyed weak signal is 100 dB below the strong signal.
It is obvious from figures 6 and 7 that the false signals are
100 dB and 120 dB respectively below the strong signal at the worst
frequency separation.
Better spur suppression can be obtained by making the timf3 sampling
speed even larger, but there is a maximum:
it must be no more than half the input sampling rate.
With a much narrower bin bandwidth in the first FFT the spur
suppression is no issue, but then processing delays become
longer.
Conclusions.

Decimation in the frequency domain has the same characteristics as decimation in the time domain. In case a large decimation ratio is desired, it is necessary to use long FIR filters to provide a high suppression of false signals. In the frequency domain that means large FFT sizes. If one is willing to accept a small decimation ratio, only four in figure 7, an FFT size of 1024 is sufficient to place the false signals at -120 dB. For N times larger resampling ratios, the size of the original FFT has to be N times larger to keep the spurs at the same level. In situations where fast response is not first priority one often uses very large transforms because the high resolution one then can get in waterfall graphs is desirable. Then resampling spurs are not an issue.