HEARING CW IN NOISE.

by Chuck MacCluer, W8MQV.

maccluer@math.msu.edu

(a synopsis of a paper to be presented at the 1996 EME Conference in Baltimore, August 1996.)


The detection problem.

Our task in EME communication is to hear Morse code deeply buried in noise. We know in advance that the signal is a (say) 500 Hz tone, and we wish to detect its presence or absence: is there a dit being sent at this moment? This is the `detection' problem.

The optimal filter for detecting the presence of a known signal in white noise is the `matched' filter. A matched filter in effect plays the received signal against an uncorrupted copy of the expected signal --- the incoming noisy audio is convolved with a clean, locally generated 500 Hz template. Among linear filters, this strategy yields the best possible improvement in signal-to-noise ratio.
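To make the idea concrete, here is a minimal matched-filter sketch in Python/NumPy. This is my illustration only, not the DSK code; the sample rate, tone frequency, dit length, and signal level are arbitrary choices.

    # Minimal matched-filter sketch (illustration only, not the DSK code).
    # The template is one dit of clean 500 Hz tone; convolving the received
    # audio with the time-reversed template (i.e., cross-correlating with
    # the template) maximizes the output S/N in white noise.
    import numpy as np

    fs = 8000                                  # sample rate, Hz (assumed)
    f0 = 500                                   # expected tone frequency, Hz
    dit = 0.1                                  # assumed dit length, s
    t = np.arange(int(fs * dit)) / fs
    template = np.sin(2 * np.pi * f0 * t)
    template /= np.linalg.norm(template)       # unit-energy template

    def matched_filter(rx):
        """Cross-correlate received audio with the clean tone template."""
        return np.convolve(rx, template[::-1], mode="same")

    # Example: one weak dit buried in a second of white noise.
    rng = np.random.default_rng(0)
    rx = rng.normal(0.0, 1.0, fs)
    rx[2000:2000 + len(t)] += 0.3 * np.sin(2 * np.pi * f0 * t)
    out = matched_filter(rx)
    print("matched-filter peak near sample", int(np.argmax(np.abs(out))))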

I have implemented a matched filter using the powerful but low-cost Texas Instruments TMS320C5X DSP Starter Kit, referred to as the DSK. Texas Instruments sells the DSK for $99, but it can be found discounted 40%. The board is powered by any 9 VAC wall transformer rated at 250 mA, and has a DB9 serial port and two RCA audio in/out jacks. The levels at the analog ports are such that the DSK can be plugged directly between the receiver's audio output and headphones. I will send the code for the matched filter by e-mail upon request; a reprint of my 1995 Central States article on the matched filter is available for a manila SASE. Active EME operators such as VE7BQH have found this filter very effective.

But a fundamental question remains. Since the bulk of the signal processing in EME is done by the human ear and brain, what preprocessing would assist them the most? Possibly the spectral distribution left by a matched filter's noise reduction is not the best one to present to the ear. This question launched me on a modest review of the psychoacoustic literature, to discover which abilities of human hearing could be exploited to improve EME reception.

The psychoacoustic results given below are drawn primarily from three sources:

	David M. Green, "An Introduction to Hearing" (1976)
	William A. Yost, "Fundamentals of Hearing" (1994)
	W.M. Hartmann, "Signals, Sounds and Sensation" (1996)

These books are in turn compilations of the experimental findings of hundreds of researchers.

Depth of hearing

Although it is not directly germane to our goals, it is astonishing to learn of the sensitivity of the human ear. Our ears can detect a sound pressure of 0.0002 dyne/cm^2, which researchers define as 0 dB SPL. Allow me to put this in perspective. Many experiments have shown that, at all observable levels, the displacement of the ear drum depends linearly on the sinusoidal sound pressure. Extrapolating down to 0 dB SPL, one obtains the following astonishing statement:

The ear drum can detect displacements of less than one hundredth of the diameter of a hydrogen molecule.

In fact, W. Bialek (1983) argues that "the ear is not a classical system," that it can hear at the quantum noise level. This implies that the ear achieves "perfect biological amplification."

What we should carry away from all this is the certain belief that the brain/ear is an extremely powerful signal processing system.

Tones buried in white noise

There have been many studies of how deeply we can hear a sinusoidal tone buried in white noise. This is directly relevant to EME. These studies show that our ability to hear a sinusoid in white noise

1. Is independent of level. Adjusting the audio gain has little effect.

2. Is independent of the bandwidth of the noise. The ear rejects noise outside its critical bandwidth (typically 50 Hz) centered at the tone. More on this below.

3. Is dependent on the duration of the tone. A dit shorter than 0.1 second requires a higher amplitude to be heard; for durations between 10 and 100 ms, the detectable signal power is inversely proportional to duration (see the sketch following this list). Sending faster than 12 WPM is therefore counterproductive. On the other hand, dits longer than 100 ms give no significant improvement in detectability.

4. Is dependent on the pitch of the tone. A higher pitch requires a higher S/N. As the pitch rises from 400 Hz to 4000 Hz, the required S/N rises and the critical bandwidth widens, both roughly exponentially.

5. Is more or less the same for all (unimpaired) individuals. When absolute magnitudes are adjusted, the required S/N appears constant across individuals. This gainsays the `gifted ear' hypothesis.

6. Is dependent on the purity of the tone. Roughness degrades detectability.
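
As a concrete illustration of item 3, the following Python sketch (my own, assuming the standard timing in which the dit length in seconds is 1.2/WPM, so that 12 WPM gives a 100 ms dit) estimates the extra signal power the inverse power--duration law demands at higher sending speeds.

    # Detection penalty vs. sending speed (illustration only).
    # Assumes standard CW timing, dit length (s) = 1.2 / WPM, and the
    # masking result quoted in item 3: for 10-100 ms dits the detectable
    # signal power is inversely proportional to duration.
    import math

    def dit_length_s(wpm):
        return 1.2 / wpm                       # 12 WPM -> 0.1 s dit

    def extra_power_db(wpm):
        """Extra power needed relative to a 100 ms (12 WPM) dit."""
        t = min(dit_length_s(wpm), 0.1)        # no gain claimed beyond 100 ms
        return 10 * math.log10(0.1 / t)

    for wpm in (10, 12, 15, 20, 30):
        print(f"{wpm:2d} WPM: dit {1000 * dit_length_s(wpm):5.1f} ms, "
              f"penalty {extra_power_db(wpm):3.1f} dB")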

All of the above generalizations are the yield of `masking' studies. A single tone of, say, 500 Hz is partially masked by a band of white noise, and subjects are asked to decide on the presence or absence of the 500 Hz signal. By varying the signal strength as well as the center frequency and bandwidth of the imposed noise, researchers can determine the required S/N and map the width and shape of the ear/brain's built-in filter. For example, the ear's filter skirt is much steeper on the low-frequency side. Moreover, the filter's Q is constant for frequencies above 500 Hz but decreases as the frequency drops below 500 Hz.

The filter's (critical) bandwidth varies from about 50 Hz in the 400--500 Hz range to about 200 Hz at 2000 Hz. The ear ignores noise that falls outside its critical bandwidth, and if the noise bandwidth is reduced below the critical bandwidth, no improvement is noted. Thus narrow CW filters serve only to reduce fatigue and, on the downside, to correlate the nearby noise into ringing at the signal frequency. For unlike the psychoacoustic experiments, where the signal and noise are generated independently and then mixed, our EME signals and noise must pass through the same filters.
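
To see why an overly narrow filter rings, here is a small sketch (my own illustration; it requires NumPy and SciPy, and the filter type and order are arbitrary choices). A bandpass filter of bandwidth B rings for roughly 1/B seconds, so noise emerging from a 25--50 Hz filter centered at 500 Hz comes out as short bursts of 500 Hz tone that can mimic keying.

    # Ringing time of narrow bandpass filters (illustration only).
    # The impulse response of a filter of bandwidth B lasts on the order
    # of 1/B seconds, so a very narrow CW filter turns nearby noise into
    # bursts of tone at the filter's center frequency.
    import numpy as np
    from scipy import signal

    fs = 8000                                  # sample rate, Hz (assumed)
    impulse = np.r_[1.0, np.zeros(fs - 1)]     # one-second test impulse
    for bw in (500, 100, 50, 25):              # filter bandwidths, Hz
        sos = signal.butter(4, [500 - bw / 2, 500 + bw / 2],
                            btype="bandpass", fs=fs, output="sos")
        h = signal.sosfilt(sos, impulse)       # impulse response
        env = np.abs(signal.hilbert(h))
        ring_ms = 1000 * np.count_nonzero(env > env.max() / 10) / fs
        print(f"{bw:3d} Hz filter: rings for roughly {ring_ms:5.1f} ms")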

Binaural phenomena

Survival pressures have evidently forced the brain to evolve powerful signal-processing algorithms that exploit our binaural hearing. Phase information greatly improves the detectability of a sinusoid in noise. Table 1 below is a synopsis of experimental results in which the signal and/or noise were fed in phase, or exactly out of phase, to the right and left ears.


	    Table 1. Improvement in detection of a tone in white noise


	  signal                noise           dB improvement
	  ____________________________________________________

	  in phase             in phase               0
	  one ear only         one ear only           0
	  one ear only         uncorrelated           0
	  180 phase            180 phase              0
	  180 phase            uncorrelated           3 
	  in phase             uncorrelated           4
	  one ear only         180 phase              6
	  one ear only         in phase               9
	  in phase             180 phase             13
	  180 phase            in phase              15

(Conditions: the signal is a 500 Hz sinusoid of 100 ms duration; the noise is at least 20 dB above the audible threshold.)

So, for example, if the signal is fed to the two ears 180 degrees out of phase while the noise is fed in phase to both, you can expect a miraculous 15 dB improvement in your ability to extract the signal from the noise.

Unfortunately all this is wishful thinking. If we could separate out the signal from the noise in order to reverse its phase to one ear, we would have the signal in hand --- no further processing would be necessary.

Airline pilots exploit the 3 dB configuration of Table 1 by feeding their receiver's audio out of phase to each ear to suppress cockpit noise.

An optimal EME receiver

All this psychoacoustic evidence points to an optimal EME receiver architecture. As I argue in these proceedings, mixing of any sort imposes wideband 1/f noise on the resulting detected audio that cannot be masked by low-noise preamplifiers. To minimize receiver-induced noise and to obtain the purest audio, a direct-conversion phasing receiver such as KK7B's single-board miniR2 should be employed, with the injection frequency supplied by a variable crystal oscillator-multiplier chain. The audio from the direct-conversion receiver is buffered, split, and sent to two Texas Instruments DSK boards. The outputs of the two DSKs are sent to a two-channel headphone mixer with gain and left/right pan controls. Op-amp inverters could also be provided, but inversion is more easily done in software. With the twin DSKs, two distinct algorithms can run concurrently and be fed in various mixes to both ears.
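
As a sketch of how the presentation stage might look in software (my own illustration; the function and parameter names are hypothetical, not the author's DSK routines), the two processed streams can each be given gain, pan, and polarity before being summed into the left and right headphone channels:

    # Combine two processed audio streams into left/right headphone
    # channels with per-stream gain, pan, and optional polarity inversion
    # (a software version of the mixer/inverter described above).
    import numpy as np

    def present(a, b, gain_a=1.0, gain_b=1.0, pan_a=0.0, pan_b=0.0,
                invert_a_right=False):
        """Return (left, right).  pan: -1 = left ear only, 0 = both ears,
        +1 = right ear only.  invert_a_right feeds stream `a` 180 degrees
        out of phase to the right ear (the antiphase condition of Table 1)."""
        la, ra = (1 - pan_a) / 2, (1 + pan_a) / 2
        lb, rb = (1 - pan_b) / 2, (1 + pan_b) / 2
        if invert_a_right:
            ra = -ra
        left = gain_a * la * a + gain_b * lb * b
        right = gain_a * ra * a + gain_b * rb * b
        return left, right

    # Example: matched-filter audio in phase to both ears, plus the
    # unprocessed audio at low level in the left ear only.
    match_out = np.zeros(8000)                 # placeholder for MATCH output
    thru_out = np.zeros(8000)                  # placeholder for THRU output
    left, right = present(match_out, thru_out, gain_b=0.25, pan_b=-1.0)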

I have noticed the following phenomenon: with my matched filter MATCH running on one DSK and sent in phase to both ears, but with the unprocessed signal (via my straight-through routine THRU on the other DSK) added at low level in one or both ears, I experience an improvement in detectability. Perhaps this is an example of the exciting new technique of `comodulation masking release'. In this approach, the amplitude structure of the noise surrounding the signal is used to modulate a noise band at a removed frequency; when this second noise is mixed with the given S+N, a significant improvement in detectability is observed. Please inform the EME community if your own experiments yield results. As yet I have reached no conclusion about whether the matched filter is the best preprocessor for exploiting the special abilities of the human ear.
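
For those who wish to experiment, here is a rough sketch of comodulation-masking-release style preprocessing as I read the description above. It is my own illustration, requiring NumPy and SciPy; the 1500 Hz flanking band, the 100 Hz bandwidths, and the mixing level are arbitrary choices.

    # Comodulation-masking-release style preprocessing (rough sketch).
    # The amplitude envelope of the noise band around the 500 Hz signal
    # modulates an independent noise band at a removed frequency, and the
    # result is mixed back in with the original signal-plus-noise.
    import numpy as np
    from scipy import signal

    def cmr_preprocess(rx, fs=8000, f_sig=500, f_flank=1500, bw=100, level=0.5):
        # 1. Envelope of the band surrounding the signal frequency.
        sos_sig = signal.butter(4, [f_sig - bw / 2, f_sig + bw / 2],
                                btype="bandpass", fs=fs, output="sos")
        env = np.abs(signal.hilbert(signal.sosfilt(sos_sig, rx)))

        # 2. Independent noise in a band well removed from the signal.
        rng = np.random.default_rng()
        sos_flank = signal.butter(4, [f_flank - bw / 2, f_flank + bw / 2],
                                  btype="bandpass", fs=fs, output="sos")
        flank = signal.sosfilt(sos_flank, rng.normal(0.0, 1.0, len(rx)))

        # 3. Modulate the flanking noise with that envelope and mix it in.
        flank *= env / (env.max() + 1e-12)
        return rx + level * flank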

