Here are some questions I've been asked:

1. Are you running Linux as a subdirectory on a drive partitioned/formatted for Microsoft? Or do you have a separate hard drive you have partitioned/formatted under Linux?

 
2. Is RedHat 6.1 the only version that will work?

3. What is the delay like through the software?  Can you tune in a signal in real time, or is it like SPECTRAN/HAMVIEW, where you have to click on the signal with a mouse before you can do any form of tuning?  In other words, is the time delay such that it is virtually impossible to tune in a signal without first placing the filter right on the signal?  (While there is some time delay through DSP Blaster, in essence it is in real time.)

4.  Do I need a supercomputer to run this?

5.  The main things stopping me ( and many others I suspect) are:
        a.  Lack of RX hardware to convert down to audio. What we really need is a kit of some sort to get people started.

        b. What kind of mixer is the TUF-1, and where can I get it?

        c.  Why did you use a first IF of 40.455 MHz and a second LO of 40.470 MHz?

6.  What sound cards are you using?

7.  What about transmitting?  How do you do that?

8.  How good is the noise blanker?

9.  This is not really a question, but:  I really really wish all of this could be done with standard equipment.

10. I also have no experience with Linux (nor really wanted any until now!)

11.  I have no idea how I would transfer the files from my Windows PC to the Linux PC. I do have an ethernet connection between them and software for doing that in Windows 98, but Linux?

12.  As far as polarization goes, what about cable lengths and preamp gain and phase matching, can these be compensated for in the software?

13.  I notice there are now fft3 avgn and fft2 avgn boxes on the screen.  Is this something new with dsp13?

14.  I assume fft2avgn is just the number of averages of fft2 to be displayed; it won't go above 3.

15.  How does one decide where to set selective limiter S/N, and what is its purpose?  Does it define the level above which something is considered a potential  noise pulse?

16.  I do not know what the 'default output mode' means, but when I set it to '1' the computer locks up.

17.  How do I get the computer to use just the SoundBlaster PCI 64 card for both input and output, when I have both Delta44 and PCI cards installed?

18.  Why, when I use the SoundBlaster soundcard for both input and output, can I only put the cursor in the main spectral window at one place, 1500 Hz?

19. When I compile dsp I get screen after screen of error messages that look like 'gcc -u -Wimplicit -Wreturn-type -Wunused -Wcomment -Wunititialized -Wparentheses -c -o pol_graph.o polgraph.c  gcc -odsp -lvga -lvgal -ui.o' .....Why?

20.   I get occasional DA Sync errors.

21.  I only get one Noise blanker box beneath the high resolution (FFT2) window to change from - to A or M.  Why?

22.  I am not sure if the vertical scale can be altered by the user in FFT2 window.

23.  How do I change the filter width and the shape factor?

24.  Can you change the beat note of the audio that is heard in the phones (i.e. the BFO frequency)?

25.  How do I get 'soundon' to run automatically when the computer boots?

26.  I need some simple way of getting ossmix set up correctly and automatically (saving or restoring the settings).

27.  How can I find out what ossmix commands my sound card(s) use?

28.  How can I find out what audio and related devices are running under Linux?

 
1. Are you running Linux as a subdirectory on a drive partitioned/formatted for Microsoft? Or do you have a separate hard drive you have partitioned/formatted under Linux?

I am running it on a hard drive I share with Windows ME [subsequently changed to Windows 98].  I have it set up so that if I want to boot into Linux I put in a Linux boot floppy; if I want to boot into Windows, I don't.  I could have done a dual boot entirely from the hard drive, but I had concerns because I couldn't find anywhere that Windows ME and Linux liked each other, and because there were issues related to the fact that my hard drive is rather large (80 GB), and that Linux might cause problems if I booted from it.  More knowledge on my part might have resolved these issues, but it's no big deal to have a floppy by the drive and to push it in when I want Linux.

2. Is RedHat 6.1 the only version that will work?

Leif has tried several versions of Linux, I believe.  I am running RedHat 7.2 now; previously I had it running on RedHat 6.2 and 7.0.  I think he mentions some of this on his website.  I've been happy with RedHat, but then I'm a newbie...they have a LOT of support pages, and a lot of books have been written about their software versions, so that's important to me!!

3. What is the delay like through the software?  Can you tune in a signal in real time, or is it like SPECTRAN/HAMVIEW, where you have to click on the signal with a mouse before you can do any form of tuning?  In other words, is the time delay such that it is virtually impossible to tune in a signal without first placing the filter right on the signal?  (While there is some time delay through DSP Blaster, in essence it is in real time.)

All tuning is done by clicking on peaks observed.  There is no 'tuning' in the conventional sense.  This is actually what I wanted, and to me it is preferable to any knob or keyboard tuning.  ON THE WATERFALL YOU SEE SIGNALS THAT MAY WELL BE BELOW ANYTHING YOU CAN HEAR WITH ANY REALTIME METHOD (as you know).  You click on them, and if there is an audible signal there, you hear it.  If you are slightly off frequency with your first click, the high resolution display will tell you, and you can click there to refine your tuning.  On the high resolution screen you see even weaker signals that may not have been visible above, and you get better frequency resolution, so that some signals pop out from behind others.  To me all other modes of tuning are now surpassed.

There may or may not be a significant delay; it depends on your processor and the parameters you have chosen.  I started out with no perceivable delay, but have since set the parameters so there is a several-second delay.  The delay makes NO DIFFERENCE when you point and click and have the info repeated, as we do with EME.  To me it is far more important to have maximal processing power, and this is what I am trying to do.  The reason the delay is a problem with HAMVIEW or SPECTRAN, for example, is that there is some mechanical tuning of the rig involved, and as you know this makes any significant delay intolerable.  It is not a problem with point-and-click tuning, where I think it is irrelevant for our (EME) purposes.  For other uses, like contesting, with 3-second exchanges, it would be a problem.  But for those purposes you don't need all the processing, and Leif has it set up so that you can use different parameters for CW, weak signal CW, SSB, etc.  So you would just set it up so that on those modes you had no (appreciable) delay.  By the way, he has not fully implemented those other modes yet, as his interest is also EME...lucky for us.

4.  Do I need a supercomputer to run this?

Leif has some data on his site on using it with a Pentium 60.  He indicates it should work on a 486, depending upon the bandwidth to be processed.  Graphics requirements are modest, I think.  The reason I got the Delta44 card is that it can sample at up to 96 KHz, so that one could easily have a spectral bandwidth of 48 KHz (or more than 90 KHz with quadrature mixing), and thus see most of the important part of the EME band (e.g. 144.015 to 144.105) on one screen!  For lower bandwidths a SoundBlaster would do.  I use a SoundBlaster for output.  I set the FFT parameters within Linrad in a very computation-intensive way, as I knew my computer could handle it (Pentium 4, 1.4 GHz), but in a way that would be far too calculation-intensive for a slow machine.  But before I got this machine I was running on a Pentium Pro 200 with reasonable results.  How fast a machine do you need?  I don't know, but Linrad indicates that my machine is idling 92.4% of the time.  In other words, only 7.6% of its processing power is being used by the program.  Leif has a very nice page that discusses timing / computation intensity issues and gives some examples for various hardware combinations.  It is at: www.nitehawk.com/sm5bsz/linuxdsp/fft1time/fft1time.htm   I have also run the program successfully (with different parameters) on my Pentium Pro 200 using a SoundBlaster AWE32 ISA soundcard, and on the 1.4 GHz Pentium 4 using a SoundBlaster PCI 64 for both input and output, as well as with the Delta44 for input and the SoundBlaster PCI 64 as output.


5.  The main things stopping me (and many others I suspect) are:
        a.  Lack of RX hardware to convert down to audio. What we really need is a kit of some sort to get people started.

As of September 2002 there are several options.  You can get The Time Machine from Expanded Spectrum Systems, or get some superb hardware from Svenska Antennspecialisten AB, or build your own.  See here, and page down a little bit, to see more on these options.

The hardware I am using is very simple.  As of 10/4/2001 it consists of a TUF-1H mixer followed by a crystal filter, followed by a BFR90, followed by another mixer, followed by a simple ultra-low-noise op-amp audio amplifier; times two, for two channels.  The LO's are surplus frequency synthesizers.  It may not be ideal, but it's enough to start playing and experimenting and having fun.  It's really cool to hear EME (even if it's just the big guns) using homebrew stuff.  John K3PGP got me started on all of this when he sent me a homebrew receiver in early 2000, and I've modified it several times since then based on suggestions from Leif, SM5BSZ, and built another for the other polarity channel.  Total construction time for me: about 2 hours, tops.  Plus a bunch of time making the computer control work, but that is not necessary; I could have controlled the receiver frequency manually.  The computer stuff also started with John, although I ended up using different hardware that required a new program written here.  Nothing fancy, just hacker's qbasic...

Total parts for one channel:

2 TUF-1H mixers from Minicircuits, or equivalent
1 BFR90 from Radio Shack On Line
1 AD797AN ultra-low noise op amp ordered direct from the Analog Devices OnLine Store www.analog.com
13 1/4 watt resistors
10 capacitors

plus the frequency synthesizers: one for each LO, with a splitter to send each LO to each receive channel.
Here is a link to the schematic.

Here are several links to pages on Leif's Mirror on this CD that address the hardware issues. hware.htm discusses general hardware issues and shows a block diagram of a dual conversion receiver front end for the Linux PC Receiver. iqmixer.htm shows the design of the dual quadrature I/Q mixer Leif uses as the mixer before the audio board. optiq.htm discusses some of the details of designing and building an optimized 144 MHz front end for the Linux PC receiver.

 

 
b. What kind of mixer is the TUF-1 and where can I get it?

Well I actually ended up using a TUF-1H mixer. The TUF-1 and TUF-1H are close relatives. The TUF-1 is a diode mixer made by minicircuits. Go to http://www.minicircuits.com for the corporate page,
http://www.minicircuits.com/cgi-bin/mixer?model=TUF-1&pix=b02.png&bv=4 for the part reference.
It's available from Downeast Microwave www.downeastmicrowave.com "TUF-1 500MHz mixer, small package 4.50".

The TUF-1H costs $12.00 from Downeast Microwave www.downeastmicrowave.com
Its specs can be found at  http://www.minicircuits.com/cgi-bin/mixer?model=TUF-1H&pix=b02.png&bv=4

 

c.  Why did you use a first IF of 40.455 MHz and a second LO of 40.470 MHz?

The xtal filter can be at whatever reasonable frequency you have available.  I used a 24 KHz filter at 40.455 MHz, as that is what I had available.  That dictated my 1st IF frequency.  If you have a filter for a particular frequency, make that your IF.  My LO of 40.470 MHz with my first IF of 40.455 MHz gives output centered at 15 KHz.  This is the input to the sound card.  With the 24 KHz filter, output to the card is 3-27 KHz.

Original signal:   144.000 MHz
LO1:               184.455 MHz
1st IF:             40.455 MHz
2nd LO:             40.470 MHz
Output:              0.015 MHz, i.e. 15 KHz  =====> soundcard

Exactly what you do isn't important.  I have since changed the first IF to 10.72 MHz because I got some nice filters for that frequency.
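If you want to check a frequency plan of your own before building anything, here is a minimal shell sketch (using bc; the numbers are just the ones from the table above, and the variable names are mine) that reproduces the arithmetic:

    #!/bin/sh
    # Minimal sketch of the frequency plan arithmetic above (high-side first LO).
    RF=144.000        # signal frequency, MHz
    LO1=184.455       # first LO, MHz
    LO2=40.470        # second LO, MHz
    IF1=$(echo "$LO1 - $RF" | bc)     # first IF: 40.455 MHz
    AF=$(echo "$LO2 - $IF1" | bc)     # audio output: .015 MHz, i.e. 15 KHz
    echo "1st IF = $IF1 MHz, soundcard center = $AF MHz"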

I think to build ALL of the hardware Leif has would be very difficult for most of us.  But implementing this is not, and building all of Leif's hardware is not necessary to do it.

6.  What sound cards are you using?

 
For input I am using the M Audio Delta44.  I got this because it samples at up to 96 KHz, giving me up to 48 KHz of usable spectrum.  I got it from Sweetwater Sound, at www.sweetwatersound.com; it was $294.97 with shipping.  I didn't shop around, as I have gotten a fair amount of audio equipment from them and have always had superb service from them.  The Delta44 is not essential to the project.  If you are content with 22 KHz of spectrum, a standard SoundBlaster should do (see immediately below).  I use a standard SoundBlaster PCI 64 for output to my headphones/speakers; SoundBlasters/clones are cheap.  I have also run the system using the PCI 64 for both input and output, using a SoundBlaster AWE 32 as both input and output, and with the Delta 44 as input and the AWE 32 as output.

If you want to sample a 24 KHz swath of spectrum, the Nyquist rule says you need to sample at 48 KHz (unless you do quadrature mixing).  If you want to do 48 KHz, you need to sample at 96 KHz.  Otherwise you get aliasing.  So I needed a card that would do 96 KHz sampling to get 48 KHz without aliasing.  I am currently only doing 24 KHz, so I could pretty well use a regular soundcard, as they will go up to 44 KHz sampling.  I started off using just one SoundBlaster AWE 32 (an ISA card) for both input and output, because that is what I had, and it worked fine.  Then I got more fancy, as just discussed.
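The rule of thumb in shell form (a trivial sketch; the bandwidth value is just an example):

    # Nyquist: a real-sampled (non-quadrature) bandwidth of BW needs a sample rate of at least 2*BW.
    BW_KHZ=48                                        # desired spectrum width, KHz
    echo "minimum sample rate: $((2 * BW_KHZ)) KHz"  # prints 96 KHz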

7.  What about transmitting?  How do you do that?

To me the transmitter is just an add-on.  I have separate transmit and receive feedlines all the way.  All that is necessary is a function to transmit at just milliwatts to zero-beat the receiver.  This is just a relay in the transverter to disable the finals and briefly fire up the low-level stages of the transmitter.  All of the transmit and receive sequencing is already external to both units here, as I suspect it is for you and most EME'ers.  To go to EME-power transmit with this receiver, the sequencers do what they always do, and an extra set of relays (most sequencers have a spare or two; if not, one is easy to add) removes the 12 volts from the homebrew receiver.  To go to receive, put the 12 volts back.  There is nothing to 'settle down', as the LO oscillators are not switched off and are always running right on frequency ;)  Instant T/R switching [sequenced, of course].

8.  How good is the noise blanker?

I think it is very good, but still not perfect.  I have occasional 40 dB (i.e. 10,000 times) increases in the noise floor here.  It doesn't stop those, at least not yet.  But I'm not optimized yet.  As I type this I am watching the display on the receiver, and there is something making the noise level in the main spectrum display rise by 20 dB intermittently.  I see this here on all of my receivers, in spite of bandpass filters limiting input to 144-146 MHz.  The noise level in the high resolution and DSP-filter displays is not budging at all from the lower baseline value.  This suggests that the noise blanker is doing something good; the later stages in the receiver don't seem to see the noise level rise at all!!
I tend to keep Linrad running when I am home (and even when I'm not home) to monitor 144 MHz.  This morning the noise level was a bit higher than usual, so I decided to take some png and wav file recordings with Linrad in SSB mode (to better show the noise), centered on the beacon at 144.283.
The top portion of the waterfall display and the main spectrum immediately below it both show the current signal conditions.  The main spectrum shows the signal BEFORE noise blanking, while the waterfall and the high resolution spectrum show the signal AFTER noise blanking has taken place.
Please note that my software is NOT optimally calibrated, as I've had trouble getting the pulser I built to work properly.  Nevertheless, you can see and hear how the dumb and smart noise blankers very effectively get rid of noise here, even in this suboptimal setting.  The performance of the smart blanker is affected by the lack of optimal calibration.
Of course, going to weak signal CW mode and using the 20 Hz filter I usually use totally eliminates this level of noise even without the blankers, but that is another strength of Linrad.
I usually have no idea what the noise level really is unless I look at the main spectrum in Linrad, as the program so effectively eliminates the noise that I don't hear it or see it on the high resolution display, where my eyes are usually focused.

The files are at:
http://w3sz.com/noblankers.png
http://w3sz.com/noblankers.wav

http://w3sz.com/bothblankers.png
http://w3sz.com/bothblankers.wav

http://w3sz.com/smartblanker.png
http://w3sz.com/smartblanker.wav

http://w3sz.com/dumbblanker.png
http://w3sz.com/dumbblanker.wav

Any questions or comments are welcome.  I will try again this afternoon to do a better job with the pulser for calibrating, and if I get a better result I will replace the files on the website and post another message after repeating the experiment when high-noise conditions return.

9.  This is not really a question, but:  I really really wish all of this could be done with standard equipment.

It can be done with very standard equipment from a computer standpoint (as noted just above), and the receiver hardware is nothing hard.  It's simpler than building a preamp (which I have never done).  I have the same worry that people will be afraid to try, and I don't understand it.  Compared to the way-out projects EME'ers do as a matter of course, this is reasonably tame from a user's point of view.  Is it the fear of something new?  Fear of computers?  I don't know.  I've gotten enough email that I know there is interest.  And there has been a LOT of interest on the reflector.

10. I also have no experience with Linux (nor really wanted any until now!)

I am pretty much computer illiterate, or at least I was till I started this... If I can do it, anyone doing EME can do it!!


11.  I have no idea how I would transfer the files from my Windows PC to the Linux PC. I do have an ethernet connection between them and software for doing that in Windows 98, but Linux?

Ah...what I do is have both Linux and Windows on the computer, on the same hard drive.  My Linux machine is also a Windows machine.  Linux is invisible to Windows, but Linux can see, read, and write to the Windows part of the disk.  So just put any file you want to be able to get at in Linux on the hard drive while you are in Windows, and you can fetch it from Linux.  Or, to get a file from the Linux world to the Windows world, write it from Linux to the Windows part of the hard drive, and you can then get the file when you are back in Windows just as if you had written it in Windows.  Linux comes up on the network here, and I can get out to the internet no problem.  In fact, I am editing this page in Linux, using HTML files I originally wrote and stored in the Windows part of the machine, and am uploading them to qsl.net via gFTP, a Linux FTP utility that came with RedHat Linux 7.2.
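If you have never done this before, here is a minimal sketch of what it looks like from the Linux side, assuming the Windows partition is FAT32 and lives at /dev/hda1 (check fdisk -l for your actual layout; the file names below are made up for illustration):

    # As root: create a mount point and mount the Windows FAT32 partition read/write.
    mkdir -p /mnt/windows
    mount -t vfat /dev/hda1 /mnt/windows

    # The Windows files are now ordinary Linux files:
    cp /mnt/windows/eme/notes.htm ~/          # hypothetical file, Windows to Linux
    cp ~/results.txt /mnt/windows/eme/        # and back the other way

Many distributions can also be set up to mount the Windows partition automatically at boot via /etc/fstab.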

12.  As far as polarization goes, what about cable lengths and preamp gain and phase matching, can these be compensated for in the software?

The answer is yes!  The phase delay can really just be 0-360 degrees regardless of the line lengths, amplifiers, etc., and should be constant for the small range of frequencies we use, as should gain differences between the amplifier chains.  The software sorts all of that out.  I've finally gotten around to running the separate H and V lines into my receiver hardware that feeds Leif's receiver.  I've had separate receiver feed chains for a couple of years, but until now I switched between them just before the receiver, because I didn't have a dual-input receiver until Leif built his.  So to go to dual polarity on Leif's receiver I just removed the relay and ran a second 5-foot piece of coax to the second channel of the receiver.

It is really phenomenal...just clicking the mouse changes the receive phase angle, and I can really affect the perceived signal strength of the W3CCX beacon by rotating the phase with the mouse.  Leif's program gives a green output signal on the screen and a red one, as you have seen.  I believe green is the strength of the received horizontal component and red the vertical, but it may well be that, as usual, I have it wrong and it is more complicated than that.  At zero degrees the green signal is maximal and I hear a loud signal in the headphones from the beacon.  When I dial in 90 degrees the red signal is maximal and the signal in the headphones is nulled!  I can just click to get any angle in between, with the expected results.  Leif notes that:

When you turn the angle (let us call it v) from 0 to 90 degrees you form
two new signals A and B from the old ones H and V:

A=cos(v)*H + sin(v)*V
B=cos(v)*V - sin(v)*H

Before the operation one of the signals is phase shifted anywhere between
-90 and +90 degrees as specified by the line in the green bar.

The angle of the line controls the relative amplitudes while the green bar
controls the relative phases before the two signals are combined to
a new orthogonal pair.
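To make the arithmetic concrete, here is a tiny sketch (illustrative numbers only, treating H and V as instantaneous sample values and leaving out the separate phase-shift step Leif mentions) that evaluates the two formulas for a 30 degree rotation:

    # Evaluate A = cos(v)*H + sin(v)*V and B = cos(v)*V - sin(v)*H
    awk 'BEGIN {
        v = 30 * 3.14159265 / 180     # rotation angle in radians (30 degrees)
        H = 1.0; V = 0.5              # illustrative instantaneous H and V samples
        A = cos(v)*H + sin(v)*V
        B = cos(v)*V - sin(v)*H
        printf "A = %.3f   B = %.3f\n", A, B
    }'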

Click here to see some screens with the two channels (H and V) with me listening to W3CCX at 144.283.  I did H, V, and, just for grins, Circular Right screens.  I don't know if I have the baseline phase correction for cable length differences, etc., set right.  We'll see what people think...

Note from W3SZ:  As Leif has pointed out before, moving the bar along the green line also shifts between linear, elliptical, and circular polarity (L and R).

Note from 2002:  Leif now has fully automated polarization angle working, and so I just let Linrad set the received polarization angle for me.

 
13.  I notice there are now fft3 avgn and fft2 avgn boxes on the screen.  Is this something new?

 
Leif explains:  It is new, at least fft3 avgn.  I do not remember when I added it for fft2.  Up to 00-12 the baseband spectrum (fft3) was averaged for a certain time; I do not remember how long any more, but somewhere between 2 and 5 seconds.  Now you can set how many times to average.  It affects the display only, and may be fun if you look for really weak and extremely stable signals.  You may average at 0.01 Hz bandwidth for 24 hours on the VLF bands if you like.

14.  I assume fft2avgn is just the number of averages of fft2 to be displayed; it won't go above 3.

 
Leif explains:  This depends on how much buffer you have allowed.  All the data needed to recalculate the average at another frequency is stored for 'second fft average time' seconds.  It is in the form of both transforms, correlations, and powers, and occupies a lot of memory.  The purpose of it all is to allow an immediate switch and AFC lock to a different station.


15.  How does one decide where to set selective limiter S/N, and what is its purpose?  Does it define the level above which something is considered a potential  noise pulse?

Leif explains:  It is to decide how strong a signal can be before it is considered a strong signal and routed outside the noise blanker.

16.  I do not know what the 'default output mode' means, but when I set it to '1' the computer locks up.

Leif explains:  I do not know either yet.

There will be several modes, like:
Normal filtered output
Dynamic range filtered output
Stereo with different phase and/or frequency response for the ears
Coherent processing with I in one ear and Q in the other, or I in phase between ears and Q out of phase
Coherent processing with I for both ears
Coherent processing with expanded dynamic range
Coherent processing with repeat averaging enhancement (hopefully this will be very powerful!)
Maybe more.
Each one will have a number, and setup decides which one to use for default.


17.  How do I get the computer to use just the SoundBlaster PCI 64 card for both input and output, when I have both Delta44 and PCI cards installed?

By setting the device numbers when you run the A/D and D/A setup routine (option "U" on the initial Linrad screen).  This routine uses the audio device numbers that are displayed when you type cat /dev/sndstat to set the input and output devices.  I keep several directories for Linrad on the disk, with different setup parameters (specifying different sets of input/output hardware, among other things) in each directory.  When I want to use the PCI 64 for both input and output I go to that directory to run Linrad.  The way OSS numbers the audio devices (at least on my machine), numbers 0 to 4 are assigned to the Delta44 and 5 and 6 to the SoundBlaster PCI 64 card; Mixer 0 is for the Delta44, and Mixer 1 for the PCI 64.
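For what it's worth, here is roughly what that looks like at the command line.  The directory name is a hypothetical example, and the executable is called dsp in these versions (see question 19):

    soundon                  # start the OSS driver first (from your OSS directory if it is not on your path)
    cat /dev/sndstat         # note which audio device and mixer numbers belong to which card
    cd ~/linrad-pci64        # hypothetical directory holding the parameter files that
                             # select the PCI 64 for both input and output
    ./dsp                    # run Linrad; use option "U" to change the A/D and D/A setup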

 
18.  Why, when I use the SoundBlaster soundcard for both input and output, can I only put the cursor in the main spectral window at one place, 1500 Hz?

Leif explains:  This happens because the hires graph covers a wider frequency range than fft1.  Increase second fft bandwidth N (was it 0 or 1, perhaps?  Try 4, for example; that works here at W3SZ).


19. When I compile dsp I get screen after screen of error messages that look like 'gcc -u -Wimplicit -Wreturn-type -Wunused -Wcomment -Wunititialized -Wparentheses -c -o pol_graph.o polgraph.c  gcc -odsp -lvga -lvgal -ui.o' .....Why?

 
Leif explains:  These are not error messages.  It is simply the command line from the Makefile when compiling (with gcc) the file ui.c.  I ask for options like -Wimplicit to get a warning message if I use implicit declarations, etc.

20.   I get occasional DA Sync errors.

Leif explains:  Yes.  The two boards run from different X-tals with frequencies that may have a non-integer ratio.  The routines to adjust the resampling ratio to fit are far from perfect at this stage.

21.  I only get one Noise blanker box below the high resolution screen to change from - to A or M.  I thought there were two noiseblankers.  Why don't I see both?

It is because you have not calibrated dsp with a pulser, and the 'smart' noise blanker will work only when you have done so.  Right now you just have the 'dumb' NB available to you.  After you calibrate with a pulser you will have two: one dumb, one smart.  They work together to better reduce the noise.

22.  I am not sure if the vertical scale can be altered by the user in the FFT2 (high resolution) window.

This is an excellent question.  It can only be altered by setting the baseline level and the gain of the waterfall display.  Unfortunately, sometimes what you want for a good FFT2 spectrum window makes for a bad waterfall, and vice versa.

23.  How do I change the filter width and the shape factor?

Click near the bottom of the filter bandwidth curve (in the filter window) to adjust the width, and near the top to adjust the shape.

24.  Can you change the beat note of the audio that is heard in the phones (i.e. the BFO frequency)?

Yes, by moving the red bar on the right side of the noise blanker window.

25.  How do I get 'soundon' to run automatically when the computer boots?

I do this semiautomatically with a batch file, so I just type ./s after the computer boots.  It can be made FULLY automatic on boot, but then if a problem develops with the OSS driver you can never get into Linux, as I discovered.  The 4Front Technologies site tells you how to get OSS to start on boot.
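In case it helps, here is a minimal sketch of what such an './s' file might contain, assuming OSS is installed in /usr/lib/oss as in question 27 (make it executable with chmod +x s):

    #!/bin/sh
    # s: start the OSS sound driver (run once after each boot)
    cd /usr/lib/oss
    ./soundon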

26.  I need some simple way of getting ossmix set up correctly and automatically (saving or restoring the settings).

I do this with a batch file also.  I then just type ./r, and it sets the ossmix parameters and runs Linrad.
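Here is a sketch of the kind of './r' file I mean.  The control names and levels are placeholders (use whatever ./ossmix reports for your own cards, per questions 27 and 28), and the Linrad directory is whatever you use:

    #!/bin/sh
    # r: restore the mixer settings, then start Linrad
    cd /usr/lib/oss
    ./ossmix -d0 some.control 75       # placeholder control name and level for mixer 0
    ./ossmix -d1 another.control 50    # placeholder for mixer 1
    cd ~/linrad                        # wherever your Linrad (dsp) directory lives
    ./dsp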

27.  How can I find out what ossmix commands my sound card(s) use?

While in the oss directory (I use /usr/lib/oss), just type ./ossmix to get the commands for the default soundcard mixer (d0).  Typing ./ossmix -d0 would have the same effect.  Typing ./ossmix -d1 would show the commands for device d1, etc.  Note that the device numbers are those of the device mixers as given by cat /dev/sndstat (see question 28); the audio device numbers are not the same.
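In command form, that is simply:

    cd /usr/lib/oss     # or wherever your OSS installation lives
    ./ossmix            # lists the controls for the default mixer (same as ./ossmix -d0)
    ./ossmix -d1        # lists the controls for mixer 1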

28.  How can I find out what audio and related devices are running under Linux?

To see what devices you have, type 'cat /dev/sndstat'.  There are several classes of devices: audio devices, synth devices, midi devices, timers, and mixers.  You must already have OSS running (i.e., have typed soundon) for this to work.

 


Copyright © 1997-2007 Roger Rehr W3SZ. All Rights Reserved.

Brought to you by the folks at W3SZ