2023-01-19: APP 2.0.0-beta13 has been released!
!!! Big performance increase due to optimizations in integration, and an upgraded development platform to GraalVM 22.3 based on OpenJDK 19 !!!
We are very close now to releasing APP 2.0.0 stable with a complete printable manual...
[Sticky] Combining R, G, B with Ha & OIII data using an Optolong L-eNhance filter
@rkmvca I did it as other/processed. No go. I will try to load as lights. Thanks, Rich.
Hey Rich, it worked. Now I can start working on getting the head dents out of the wall. Thanks again for the quick response.
Happy to help!
Will this process work for the SHO palette using the L-eNhance?
@kevin-lewis This is not correct, although the moderators state the same thing in several places on the forum. After "Extract Ha" and registering everything, you had better clear APP and reload the frames before you extract OIII. Imagine you are just beginning and you load the L-eNhance frames. APP asks which filter these frames are for, and I select H-alpha. I go on loading my flats (not yet masters, but plain flats!), darks and bias frames. APP then asks which filter the flats are for, and again I say "for H-alpha". During the process, APP creates the integration AND the master flat for H-alpha (and of course the master dark and master bias). If you now go back to the RAW/FITS tab and select "Extract OIII" without reloading the frames, APP will again create a master flat, but with a name referring to "H-alpha".
So this is why I start again from scratch when I want to "Extract OIII" (after extracting H-alpha).
@jan-monsuur It is not necessary to reload your frames to be able to extract Ha and OIII from L-eNhance and L-eXtreme images.
In tab 0, select "Ha-OIII extract Ha" or "Ha-OIII extract OIII". It doesn't matter which one you start with, as long as you later return and select the other. Then in tab 1, load the lights and calibration frames and simply leave the default "apply FILTER header tag or assign RGB/MONO" option enabled. APP will then load the images as RGB, which they actually are! Then you can go straight to tab 6 and hit the integrate button.
When done, return to tab 0 and select the other algorithm. Then return to tab 6 and hit the integrate button again. This will repeat a part of the initial integration steps but it skips star detection and registration since this was already done on the RGB data.
This way, the Ha and OIII integrated results will be nicely registered to each other.
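Conceptually, the extraction step above boils down to the fact that a dual-band filter like the L-eNhance passes Ha into an OSC sensor's red pixels and OIII into its green and blue pixels. The minimal sketch below illustrates that idea on a debayered RGB frame; it is not APP's actual algorithm (which does considerably more), and the function name and channel weighting here are illustrative assumptions only:

```python
import numpy as np

def extract_ha_oiii(rgb):
    """Illustrative split of a debayered dual-band (e.g. L-eNhance)
    RGB frame into mono Ha and OIII images.

    A dual-band filter passes Ha (~656 nm) into the red pixels and
    OIII (~501 nm) into the green and blue pixels, so to a first
    approximation:
        Ha   ~ R channel
        OIII ~ mean of G and B channels
    (APP's real extraction is more sophisticated; this only shows
    why one RGB load can yield both mono results.)
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ha = r
    oiii = (g + b) / 2.0
    return ha, oiii

# Tiny synthetic 2x2 "frame", values in [0, 1]
frame = np.array([[[0.8, 0.1, 0.1], [0.2, 0.5, 0.4]],
                  [[0.9, 0.2, 0.1], [0.1, 0.6, 0.5]]])
ha, oiii = extract_ha_oiii(frame)
```

Because both mono images are derived from the same registered RGB stack, they are pixel-for-pixel aligned by construction, which is exactly why the Ha and OIII integrations end up registered to each other.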
@nsblifer I am asking Mabula about this. I'll let you know the answer as soon as possible.
I have a QHY268C and a QHY183M camera. They have different FOVs, with the 183 being a bit narrower.
I am planning on a long imaging session of M81 and M82 together.
I have these options:
1. 6 hours with the QHY268C APS-C camera at 700mm FL
2. 4 hours with the QHY268C at 700mm, then 2 hours of Ha with the QHY183M
What do you recommend? Should I go with the QHY268C alone, using 4 hours for RGB and splitting off 2 hours for Ha, or go with my option 2? What would give me better results? I want to bring out the reds and also capture some IFN.
You can always combine data from different scopes/sensors, so I would use both and opt for option 2, where the Ha will have a very clean signal. The only thing I'm wondering about is whether that will help you with the IFN. The Ha signal will be primarily in the galaxies and isn't very hard to capture. So maybe 5 hours of color data and 1 hour of Ha?
Thanks! Yes, I need the red from the galaxies, so I will just do OSC + 1 hour of Ha to get the reds.
@wvreeven thank you! You’d think there would be a step by step YT video from start to finish on APP narrowband processing by now but there isn’t.