2023-04-17: APP 2.0.0-beta17 has been released!
It adds RAW support for the camera color matrix with Bayer Drizzle integration, and fixes a couple of image viewer issues.
We are very close now to releasing APP 2.0.0 stable with a complete printable manual...
Astro Pixel Processor Windows 64-bit
Astro Pixel Processor macOS Intel 64-bit
Astro Pixel Processor macOS Apple M Silicon 64-bit
Astro Pixel Processor Linux DEB 64-bit
Astro Pixel Processor Linux RPM 64-bit
Blink mode to remove bad frames
I know I can double-click on any image to load it into the previewer window, but adding a blink mode, where you can blink through all of your light frames and delete the poor ones from the stack, would be very useful.
Yes, agreed, it's on the wish-list if I'm not mistaken. Personally, I don't use blink all that much anymore, as stacking with the "quality" option on and sliding the number of frames to stack to about 90% gives me great results anyway. And subs with, for instance, satellite stripes in them can still have great stars and will still benefit the stack.
I did not know there was an option for automatic quality restriction. I see that at the bottom of the load tab there is a pulldown for "sort" with a "quality" option; is this what you use?
Also, is the Integration tab where you adjust to 90%? Do you also use the "weights" pulldown and set it to quality, SNR, or noise?
And as long as I'm asking you a million questions: for outlier rejection, which one of the MAD filters do you use? I never integrate thousands of frames.
Thank you for pointing me down this path.
Yes, in the integrate tab (6) you have an option called "weights", and there you have the "quality" option. Above that there is a slider called "lights to stack xx %"; that's where I set it to 90-95% (personally).
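To make the idea of quality weighting concrete, here is a toy sketch of how several per-frame statistics could be folded into one score. The metrics, weights, and formula here are invented for illustration; this is not APP's actual quality formula:

```python
def quality_score(snr, fwhm, eccentricity,
                  w_snr=0.5, w_fwhm=0.3, w_ecc=0.2):
    # Toy composite score: reward high SNR, penalize bloated stars
    # (high FWHM) and elongated stars (high eccentricity).
    # The weights are invented for this example, not APP's values.
    return w_snr * snr + w_fwhm * (1.0 / fwhm) + w_ecc * (1.0 - eccentricity)
```

A frame with the same SNR but fatter, more elongated stars ends up with a lower score, which is why wind-shaken subs tend to rank below clean ones.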
For the outlier rejection filters, I usually go for the LN Sigma or LN Winsor options. Winsor is almost always preferred.
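For anyone curious what Winsorized rejection actually does, here is a rough per-pixel sketch of the general technique in Python. This illustrates the idea only, not APP's implementation, and the parameter names (`kappa`, `winsor_pct`) are made up for the example:

```python
import numpy as np

def winsorized_sigma_clip(stack, kappa=3.0, winsor_pct=10.0, iters=5):
    # `stack` holds the same pixel across all subs. The extreme tails
    # are first pulled in to the percentile bounds (Winsorizing), so a
    # satellite trail in a few subs can't inflate the sigma estimate.
    data = np.asarray(stack, dtype=float)
    keep = np.ones(data.size, dtype=bool)
    for _ in range(iters):
        lo, hi = np.percentile(data[keep], [winsor_pct, 100.0 - winsor_pct])
        wins = np.clip(data[keep], lo, hi)
        mu, sigma = wins.mean(), wins.std()
        if sigma == 0:
            break
        # Reject samples farther than kappa robust-sigmas from the mean.
        new_keep = keep & (np.abs(data - mu) <= kappa * sigma)
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return float(data[keep].mean())
```

A bright airplane pixel in one sub gets rejected, and the remaining values are averaged as usual.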
So, I just ran through my data again with the suggested settings. Out of 117 frames, I have one with an airplane track and one where the object of interest is significantly off to the left. When I run the data, the airplane-track frame still appears to be used, and the off-center frame causes the object of interest to shift significantly in the final integration. If I manually deselect both trouble frames and run the data, the object of interest is well centered and there are no visible airplane track lines in the final integrated frame. Any thoughts on which settings would automatically remove both trouble frames from the final integration without my deselecting them manually?
Mm, maybe airplane tracks are a bit too much even for sigma clipping (which I guess you had on?). The shifting I've never seen, but that's likely because I've never had a frame shift that much. So I guess it comes down to me personally having no issues with airplanes and shifts, which means I never have to blink much. Slightly elongated stars and such don't contribute much and will get lower quality values, satellite trails will get rejected, and the rest can still be used. So in your specific case, a blinking option might actually be of good value. 🙂
Was the shifted sub, by any chance, selected as the reference frame? If so, that might cause a shift I think.
I'll check whether it was selected as the reference frame. I think some clouds rolled by and PHD2 got confused; luckily, after the bad frame was collected it was time to recenter and plate-solve again, and the rest of the frames were centered. Thank you for your help. I'm going to try this on some other data, some without these crazy outliers. I really do like knowing how to automatically select the best 90% now! Thank you again.
I use the quality method of stacking frames, but it's pretty much an in-the-dark decision about how much data to throw away. I wish there were a way to analyze the data and discover outliers before integration, so you could make an informed decision about the percentage to keep.
It's not an "in-the-dark" decision; it's all based on a lot of statistical data. Signal-to-noise ratio and star shapes are among those, and you can decide to throw away, say, the worst 5% by quality-stacking 95%. Personally, I never blink my data anymore as the result is always very good; if you do have a problem with the end result due to some really bad frames, it might be worth doing. I think it is on Mabula's to-do list.
Help me understand how the % number is not an in-the-dark decision?
You mention 5%; how do you come to that number? What if all my subs are good? How do I know that it's best to keep them all?
What if high clouds rolled in and my SNR dropped dramatically? That 5% needs to turn into 50%.
How do I know what the optimal % to throw away is?
You don't; it simply analyzes all frames and shows you the statistics in the overview list below, and there will always be a range from best to "worst". Losing that 5% won't influence the end result all that much, and I'm sure it will usually have thrown away the least useful data, based mostly on star shapes. Which is the reasoning behind it.

If it needs to be 50%, I usually already know it was a very bad night and would increase the number. I prevent that data from even going in, though, because I tell my capture program to start over when guiding is off or the star is lost due to clouds. Other problems in subs, like satellites, can be clipped away, and with proper flats there isn't much left to be very bad. If clouds are present, they should influence the statistics a lot, and those frames get thrown away in that 5%.

That has worked for me in all cases so far, so I've had no need to tweak it more. If you hit a situation where it does cause an issue, you'd have to tweak it a bit or change the entire workflow to reduce the problems. It's been very reliable for me. But a blinking tool can still be peace of mind, and sometimes still necessary, so it is on the to-do list.
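The "lights to stack xx %" behaviour itself is simple to picture. A minimal sketch, assuming frames are represented as dicts with a `quality` key (an assumption for this example, not APP's data model):

```python
def select_best(frames, keep_pct=90.0):
    # Rank subs by quality score (higher is better) and keep only
    # the top keep_pct percent for integration.
    ranked = sorted(frames, key=lambda f: f["quality"], reverse=True)
    n_keep = max(1, round(len(ranked) * keep_pct / 100.0))
    return ranked[:n_keep]
```

The worst frames simply fall off the end of the ranking, which is why a lone airplane-track sub usually disappears once the percentage drops below 100.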
Right now I use PI's SubframeSelector tool to identify bad frames and give me statistical plots of things like star count, FWHM, SNR, eccentricity, etc. It makes it very easy to see where the outliers are.
I then cull the outliers and load everything back into APP.
Glad to hear this is on the to-do list. APP already has all the information to do the same thing; there's just no way to visualize it before integration.
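The outlier hunt described above can be approximated with a simple robust test on any per-frame metric. A hedged sketch (the threshold and the MAD-based approach are illustrative, not what PI or APP actually computes):

```python
import statistics

def flag_outliers(values, k=3.0):
    # Flag frames whose metric (FWHM, eccentricity, star count, ...)
    # sits more than k robust sigmas from the median. The median
    # absolute deviation (MAD) is used instead of the standard
    # deviation so the outliers themselves don't inflate the spread.
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    robust_sigma = 1.4826 * mad  # scales MAD to sigma for normal data
    if robust_sigma == 0:
        return [False] * len(values)
    return [abs(v - med) / robust_sigma > k for v in values]
```

Run over, say, the FWHM column, it picks out the cloudy or wind-shaken subs without any manual plotting.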
So I guess you can then make an educated estimate from a few of those PI analyses, right? What percentage do you end up culling on average by doing that? I would then just test that in APP, and maybe later go back to PI to analyze the data again and see what it does to the end result.
Man, you must have very different skies than I do. I have absolutely zero “average” nights. One night will be cloudless with excellent seeing, the next will be cloudless and seeing gets worse as the night goes on, the next night high altitude clouds roll in for 3 hours and disappear by the morning.
I have no way of knowing what kind of night it was until I statistically analyze my results. Taking a flat average just does not work.
Ah, well, I think that has more to do with when I decide to image. I took a lot of data in the Netherlands, where clouds are the norm. I simply didn't go out if there was even the slightest chance of them, and like I mentioned, my capture program wouldn't have let that data through anyway; my blinking was basically done at that stage. Doing that limited the number of usable nights by a huge amount, so after a few years I decided to go remote to better skies. 🙂

Looking at my end results, personally of course, I decided not to mess with the statistics much anymore in the Netherlands, as conditions were simply all over the place anyway and the end results were always as good as I could get them; throwing away a lot of data didn't change that by a big enough amount for me. The biggest difference in quality is a darker sky anyway. When I went to New Zealand, I was blown away, and the downside was that I quit taking images from my own backyard in the Netherlands. But anyway, we should all make the most of it, and a blinking tool will certainly be nice for some.
Wow I’m jealous!
I image exclusively from a permanent pier in my back yard, right outside of Washington, DC. I don't have to do anything except tell NINA to start, so I image on every clear night. Lots of bad data, but lots of good data too.
I just stumbled onto this issue when trying to figure out how to blink data in APP. Yes, I already know about the % threshold, and with 1.083b2 the preview renderer is a bit quicker (that was my main issue, it really took some time to "blink" between images).
My issue with the threshold is best shown with these two examples:
Looks okay, nice round(ish) stars; I would consider integrating it. But this next image, which has a higher quality score:
Is a mess to my eyes: it was a windy evening, and you can see the resulting star trails.
And many of the images from last night were the same: good data had lower scores than data with clear problems... Maybe I am doing something wrong when registering the data?
The quality score isn't only looking at the stars, so that may be why it was still a bit higher. If you have a night where the data overall was worse, I would opt for integrating that night separately. But for manual selection, a better tool may indeed be interesting as well.
Maybe you all know this, but there is already some degree of frame comparison. After the normalization step, but before integration, you can right-click on the list of subs at the bottom of the APP screen (the file names with star density, quality, the table of values). After right-clicking, go to the bottom option to analyse the data plot, and choose things like star shape, SNR, quality, etc.; you will get plots of all the data in a graph. Then you can see which subs to untick in the list, and they will not form part of the integration. You can still set average+quality (although the automatic mode in APP is usually excellent) and set it to 100% of frames, but the unticked bad ones are not counted.
Try it and see what you think. I use it all the time just before integration.
In my experience the quality scores can also change quite markedly once Normalisation has been run. Unless I see frames with really obvious problems, I do not generally deselect any until all of the calibration fields have been filled out. The second image that you judge poor may well drop down the 'league table' once, e.g., SNR is taken into account.
Oooh, great tip. I often go back to the "load" tab and sort from best to worst based on quality, and then see if I can find a "break" where the quality numbers suddenly get worse. Absent that, I do a binary search in the bottom half and examine those frames, looking for ones that have trailed stars or whatever. But graphing those values would make that process a ton faster.
Yep it does, I sometimes forget to mention that. 🙂 But a blinking tool would still be handy I think.
If necessary, APP could trade off for performance by:
1. Only presenting the raw, uncalibrated view of the data (none of the preprocessing choices available in the main viewer)
2. Downsampling the frames for display
3. Requiring the user to select an area of interest

None of these would speed up file load times, but they could make squirting the bits to screen more performant. (2), in particular, would let you preload a set of much smaller frames (smaller still if you used (3)) into memory, and conceivably let the user start looking through them before they're all loaded. So there would be a wait while some or all of the frames were preloaded, but navigating from frame to frame would then be pretty much instantaneous. That would give a much better user experience than a delay between each frame switch.
Certainly for how I'd use a blink feature, primarily evaluating star shapes to cull frames, a zoomed-in AOI would be perfectly cromulent.
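The crop-and-bin preload idea sketched above might look something like this. Everything here is hypothetical: the `loader` callable stands in for whatever reads a sub off disk (in practice something like astropy.io.fits would do that), and the parameter names are invented:

```python
import numpy as np

def preload_previews(paths, aoi, binning=4, loader=None):
    # Crop each sub to the area of interest (x0, y0, x1, y1) and
    # mean-bin it, so that blinking later is just an array swap
    # instead of a full file load, debayer, and stretch.
    x0, y0, x1, y1 = aoi
    previews = []
    for path in paths:
        img = np.asarray(loader(path), dtype=float)
        crop = img[y0:y1, x0:x1]
        # Trim to a multiple of the bin size, then average each
        # binning x binning cell down to one preview pixel.
        h = (crop.shape[0] // binning) * binning
        w = (crop.shape[1] // binning) * binning
        cells = crop[:h, :w].reshape(h // binning, binning,
                                     w // binning, binning)
        previews.append(cells.mean(axis=(1, 3)))
    return previews
```

A 64x64 AOI binned 4x becomes a 16x16 thumbnail per sub, so even hundreds of frames fit comfortably in memory for instant blinking.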