Extract Ha & OIII from OSC and drizzle scale 0.5 - no difference in integration
I am trying to extract Ha & OIII from my data to load into RGBHOO Combine together with RGB (shot with an ASI183MC Pro and an L-eXtreme dual-band filter).
This seems to work well when looking at the stretched data. If I load my OSC lights into APP and select the appropriate "Ha-OIII extract Ha/OIII" algorithm in 0) RAW/FITS, the extracted lights are clearly different depending on the selected algorithm.
As my setup strongly oversamples (~0.3"/pixel), I decided to try drizzling my integration with:
1. kernel "topHatKernel"
2. droplet size "2.5"
3. mode "Bayer/X-Trans drizzle"
4. scale "0.5"
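As a quick sanity check on the oversampling claim, the image scale can be computed from pixel size and focal length. The Python sketch below assumes a 2350 mm focal length (typical for a C9.25 EdgeHD at its native focal ratio); that value is an assumption, not something stated in the thread, so adjust it for any reducer in the optical train:

```python
# Sanity check on sampling: arcseconds per pixel from pixel size and
# focal length. 2350 mm is an ASSUMED focal length (native C9.25 EdgeHD);
# a reducer would change the result.

def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

native = image_scale(2.4, 2350.0)       # ASI183 pixels at the assumed focal length
binned = image_scale(2.4 * 2, 2350.0)   # effective scale after 2x2 binning

print(f'native: {native:.2f}"/px, after 2x2 binning: {binned:.2f}"/px')
```

Either way the setup sits well below typical seeing, so downscaling by 2 should not cost real detail.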
When running the integration for each "Ha-OIII extract" algorithm, there seems to be no difference between the separate Ha and OIII integration files...?
I am now running the same procedure without drizzle to compare my Ha vs. OIII integrations, but maybe someone can already spot what I am doing wrong in my drizzle process?
I have to say I don't have much experience with drizzling; I usually only use it when my data is undersampled. That's when you want drizzling, as far as I understand, to recover the detail you're missing. For that kind of data I go for droplet sizes of 0.5-0.6 and a 2x scale, using lots of frames and big dithering steps. With oversampled data it might be better to just upscale without changing the droplet size.
My intention with the downscaled drizzle was to do software binning in APP to improve my SNR, which is currently pretty bad: my current camera (ASI183MC, 2.4 µm pixel size) does not play nicely with my C9 EHD, resulting in strong oversampling.
If I understood the following post from Mabula correctly, https://www.astropixelprocessor.com/community/postid/10205/, using drizzle with scale < 1 should improve the SNR. I was therefore hoping to improve the Ha and OIII channels I extract from my RGB data (shot with the dual-band filter). Since I have much more pixel resolution than the seeing allows, there is no loss of detail when downscaling. Drizzling with a scale of 0.5 should therefore behave somewhat like BIN2 in the CMOS camera...?
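The binning analogy can be illustrated with a small NumPy sketch (this is not APP's drizzle code, just the underlying statistics): averaging non-overlapping 2x2 blocks of uncorrelated noise reduces the noise by a factor of √4 = 2 while preserving the signal, which is the SNR gain one hopes to mimic with drizzle scale 0.5:

```python
# Rough illustration of why 2x2 software binning improves SNR:
# averaging 4 pixels reduces uncorrelated noise by sqrt(4) = 2
# while the mean signal stays the same. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
noise_sigma = 10.0
frame = signal + rng.normal(0.0, noise_sigma, size=(1024, 1024))

# 2x2 software binning: average each non-overlapping 2x2 block.
binned = frame.reshape(512, 2, 512, 2).mean(axis=(1, 3))

snr_native = signal / frame.std()
snr_binned = signal / binned.std()
print(f"SNR native: {snr_native:.1f}, SNR binned: {snr_binned:.1f}")
# The binned SNR comes out roughly 2x the native SNR.
```

Real sensor noise isn't perfectly uncorrelated (and hardware BIN2 on CMOS behaves differently from CCD binning), so treat the factor of 2 as an upper bound.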
For the moment this is an exercise to see what I can achieve with my current hardware before switching to hardware with larger pixels, such as the ASI2400MC.
Let me know if I got something completely wrong in the idea of using software binning for the given scenario...!?
Ah sorry, you're downscaling; I misread that part and have never used drizzle that way. But I reread that post from Mabula and I'll be making a note of it, as I mention there that I learned something; my issue with memory is that I need to write these things down. 😉
I would probably start with the simplest approach first: downscale while leaving the droplet size at 1. That post specifically mentions lower noise as the result, and possibly better signal, but at reduced resolution. If you try this simpler approach, do you see a difference then? Applying a bit of sharpening using the tools on the right-hand side may also help.
I think I found what I did wrong... in the tutorial, Mabula mentions at the very end that in 6) Integration the drizzle has to be set to "mode = interpolation"; instead I had used "mode = Bayer/X-Trans drizzle".
After running this I can confirm that my SNR in OIII is indeed much better with drizzle scale 0.5 than at native resolution (no drizzle)!
Comparing the quite faint OIII signal with just a few lights, there is already a visible improvement with drizzle 0.5 vs. native resolution...
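For anyone wanting to put a number on such a comparison rather than judging by eye, one crude SNR estimate is the mean of a background-subtracted signal patch divided by the standard deviation of a background patch, measured on the same region of both integrations. The helper below is illustrative only; the patch coordinates are placeholders, not anything from APP:

```python
# Crude SNR estimate for comparing two integrations of the same field:
# mean of a (background-subtracted) signal patch over the std of a
# background patch. Boxes are (row_start, row_stop, col_start, col_stop)
# and must be chosen by hand on the image; values here are placeholders.
import numpy as np

def patch_snr(image: np.ndarray, signal_box, background_box) -> float:
    """Background-subtracted mean of signal_box divided by std of background_box."""
    r0, r1, c0, c1 = signal_box
    b0, b1, b2, b3 = background_box
    signal = image[r0:r1, c0:c1].mean() - image[b0:b1, b2:b3].mean()
    noise = image[b0:b1, b2:b3].std()
    return signal / noise
```

Measured this way on the same nebula patch, the drizzle-0.5 and native integrations can be compared directly (on linear data, before any stretch).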
Ah excellent, good find and I'll add that to my notes. 🙂