Notes on Python:

                From:   "Ray"  
                Date:  Thu Jul 31, 2003  12:55 am
                Subject:  1004x JG software stuff

                Hi all (Jon),

                I've spent a few hours recently investigating my capture issues,
                and I'm happy to say I've found out a few things. In May, a
                DirectShow video capture library for Python was posted, and I
                recently beat it into submission. It has very simple calls, so
                you can capture frames to memory with as little as 5 lines of
                code. Since the capturing is all to memory buffers, I have no
                problem with 640x480x24bit at 30fps on my Celeron 300 and
                Hauppauge card. It should also be callable from C++.
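
                For flavor, here's a minimal sketch of grabbing a frame
                straight into memory. It uses OpenCV's DirectShow backend as a
                stand-in, not necessarily the library mentioned above, and the
                device index is a guess:

                    # Grab one 640x480 frame into a numpy array via DirectShow.
                    import cv2

                    cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)   # first capture device
                    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
                    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
                    ok, frame = cap.read()                     # frame stays in memory, no disk I/O
                    if ok:
                        print(frame.shape)                     # (480, 640, 3)
                    cap.release()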

                I had noticed something odd before with my dark frames and
                iCatch, so I wrote another script to examine it all. It seems
                that not only is the exact frame in which the bright image
                appears variable, but the desired set of scan lines within that
                frame is not always the same either! It is usually the even
                lines in the 3rd frame caught after tripping the pin 5 relay,
                but almost as often it is the odd lines, and sometimes it lands
                in the 2nd frame. Capture after setting pin 2 is usually stable
                at the 2nd frame after, even lines: ~95%. Any nasty image
                corruption is always in the first frame/field of the pair:
                ~10%. I'm thinking of ways to minimize it. It might require
                capturing more 'dummy' frames before tripping the relay to
                'synch up'. But is it a capture-synch problem, or something
                more electrical on the camera side?
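
                A rough sketch of that check, assuming the captured frames are
                already in memory as numpy arrays (the function and its details
                are illustrative, not the actual script):

                    import numpy as np

                    def field_brightness(frames):
                        """Report the mean brightness of the even-line and
                        odd-line fields of each frame, to see where the
                        exposed image actually landed."""
                        for i, frame in enumerate(frames):
                            gray = frame.mean(axis=2) if frame.ndim == 3 else frame
                            even = gray[0::2].mean()   # even scan lines
                            odd = gray[1::2].mean()    # odd scan lines
                            print("frame %d: even=%.1f odd=%.1f" % (i, even, odd))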

                Capture reliability drops for exposures under 1 second, and is
                very bad below 0.05 second.

                Python allows millisecond timing accuracy, so I'll next put
                timers around the steps involved and note the precise timeline
                for each one. I'll also add a timestamp to saved files,
                eventually.
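
                Something along these lines, with trip_relay() and grab_frame()
                as hypothetical stand-ins for the real steps, and the filename
                pattern only illustrative:

                    import time

                    def trip_relay():    # hypothetical stand-in for toggling the pin 5 relay
                        pass

                    def grab_frame():    # hypothetical stand-in for pulling a frame from the buffer
                        return None

                    t0 = time.perf_counter()
                    trip_relay()
                    t1 = time.perf_counter()
                    frame = grab_frame()
                    t2 = time.perf_counter()
                    print("relay: %.3f s   grab: %.3f s" % (t1 - t0, t2 - t1))

                    # Stamp saved files with the capture time to match the log.
                    filename = time.strftime("frame_%Y%m%d_%H%M%S.png")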

                I am also considering comparing the two fields of lines, since
                they should not differ by more than a small factor after
                adjusting for exposure. If the first set looks bad, we can
                always bin the second set together so it's not a total loss. I
                loathe waiting 30-60 seconds at a time only to see a corrupt
                capture. I had made a binning 'action' in Photoshop to salvage
                those long ones!
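
                A sketch of the compare-and-bin idea, assuming grayscale numpy
                frames; the 1.5x ratio threshold is just a placeholder, and the
                exposure adjustment is left out:

                    import numpy as np

                    def salvage_frame(gray, ratio_limit=1.5):
                        """If the two fields differ by more than ratio_limit,
                        return the odd-line field binned 2x2 so the capture
                        isn't a total loss."""
                        even, odd = gray[0::2], gray[1::2]
                        if even.mean() > odd.mean() * ratio_limit or odd.mean() > even.mean() * ratio_limit:
                            h, w = odd.shape[0] // 2 * 2, odd.shape[1] // 2 * 2
                            o = odd[:h, :w]
                            return o.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
                        return gray   # both fields look sane, keep the full frame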

                I presumed that the image's even lines are sent with the first
                'bite' retrieved, so that is where I put them in the final
                image; testing with angled-shadow images seems to verify this
                is correct, since otherwise the edges come out jagged.
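
                A sketch of that weave, assuming the two fields arrive as
                separate half-height numpy arrays:

                    import numpy as np

                    def weave_fields(first_field, second_field):
                        """Interleave two fields into one full frame, putting
                        the first field retrieved on the even scan lines."""
                        h, w = first_field.shape
                        frame = np.empty((2 * h, w), dtype=first_field.dtype)
                        frame[0::2] = first_field    # even lines: 0, 2, 4, ...
                        frame[1::2] = second_field   # odd lines: 1, 3, 5, ...
                        return frame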

                If anyone has suggestions or ideas, would like to try the Python
                code themselves, or wants to discuss particulars (Jon?), please
                let me know.

                My initial goal was just a script to try out my stacking idea, but you
                know how it goes.

                Ray

                Python links:
                http://python.org
                http://lheawww.gsfc.nasa.gov/users/bridgman/AstroPy/
                http://www.ess.stsci.edu/susd/ssb/SSB.html
                http://wxpython.org