Improved internal memory controls (much more stable and faster on big datasets), fixed the CPU image viewer, fixed Narrowband extraction demosaic algorithms.
New, improved Normalization engine, fixed random crashes in integration, fixed RGB Combine & Calibrate Star Colors, fixed Narrowband extraction algorithms, new development platform with performance gains, bug fixes in the tools, etc.
Apr 14 2026: Google Pay, Apple Pay & WeChat Pay added as payment options
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon be available on the website and in PDF format. Both versions will be identical and, once released, will follow the APP release cycle and thus stay up to date with the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
Newbie question here: considering aspects like SNR, the virtues of stacking, the desire for clear and well-defined images, is it better to have more frames of differing 'quality' (e.g. "score" in APP) or fewer frames of top score (100%)?
Tks
Let me be more precise. I know that the SNR improves with the square root of the number of images stacked, implying that the more images I take, the better it will be. But achieving perfect images in a Bortle 9 sky, even with dual narrowband filters, remains a challenge. During a night session, I may have lots of frames >70%, quite a number >80%, but not that many >95%. Hence, I'm effectively wondering if including images of lower quality can still contribute positively to the final stacked image - or if I should only keep the 90th percentile and above and wait until I have enough of them to 'develop' my pictures...
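A quick back-of-envelope sketch of that square-root argument (the noise numbers below are made up, and any real stacker's weighting is more sophisticated than this):

```python
import math

def stack_noise(sigma_per_frame: float, n_frames: int) -> float:
    """Noise of an average of n equally noisy frames (uncorrelated noise)."""
    return sigma_per_frame / math.sqrt(n_frames)

# 40 "top score" frames vs 120 mixed frames that are each 20% noisier:
print(stack_noise(1.0, 40))   # ~0.158
print(stack_noise(1.2, 120))  # ~0.110 -> the bigger, slightly worse set still ends up less noisy
```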
I am wondering the very same (fresh beginner to processing and stacking) and found very little information on how to interpret the analytical results per light frame graphic. This is both overwhelming and exciting at the same time! 🤩
In the graphic below are my 3 sessions of the Horse Nebula (3 different nights):
- 60 lights x 300s
- 150 lights x 120s
- 180 lights x 120s
*sorted by quality per session
In this scenario, and to echo Gilles' question, is it better to:
1) Select 100% of the frames
2) Select 80% of the best frames (as often recommended if many lights are available)
3) Scrap session 3 entirely and just go with sessions 1 & 2
Despite the low quality score of session 3, I believe the algorithm would still manage to get some good signal from it? Although that session would weigh less than sessions 1 & 2 with the default APP settings.
For sky background and dispersion, is higher better? If so, how can the stars be that bad in session 3 while sky background and dispersion look better than in sessions 1 & 2?
Thank you very much for your question and please accept my apologies for the late reply.
If the bulk of the images is in the 70-80% range, then adding more of that quality will certainly help in terms of improving the signal-to-noise ratio. You can actually check the FITS header of your integration files to see the noise reduction achieved by stacking. With more frames, you should see the noise reduction factors increase; ideally they are the square root of the number of images, as you already know. So simply add more data and look for those factors and you will know 😉
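As an illustration, here is a minimal sketch (assuming Python with astropy) of poking through an integration's FITS header for noise-related entries; the exact keyword names APP writes may differ, so this just filters on anything that looks relevant, and "integration.fits" is a placeholder filename:

```python
from astropy.io import fits

# Print any header card of the integration that looks noise-related.
with fits.open("integration.fits") as hdul:
    header = hdul[0].header
    for card in header.cards:
        text = f"{card.keyword} {card.comment}".upper()
        if "NOISE" in text or "NR" in text:
            print(f"{card.keyword:>8} = {card.value}   / {card.comment}")
```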
1) Select 100% of the frames
2) Select 80% of the best frames (as often recommended if many lights are available)
3) Scrap session 3 entirely and just go with sessions 1 & 2
It all depends on what you are trying to achieve. If you want a result with the smallest stars, sort the frames on star FWHM and then select a percentage that removes the worst ones; maybe 40% needs to be rejected in that case. But if you want a result with the least noise and overall nice quality, then I would rather choose 75-80% here; session 3 really weighs a lot in bringing everything down.
Every dataset is different and the metrics that APP measures are reported in a relative sense, with 1.0 being best and lower values being less good. The advice to stack the best 80% works quite well on many datasets, but if you have overall constant quality, then please don't throw anything away and stack 100%.
In general, I would advise simply trying and testing rather than looking for a rule of thumb here. "The proof is in the pudding" is a much better methodology for finding an answer for your particular dataset. Make different stacks with different rules for leaving data out and take the result that pleases you most in the end. It will teach you the specifics of your data and will help you get nicer results in later projects 😉
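For what it's worth, the "make different stacks and compare" idea can be as simple as ranking frames by a quality metric and keeping a varying fraction. The filenames and scores below are invented placeholders, not APP output:

```python
# Rank frames by a (made-up) quality score and keep the best fraction.
frames = [
    ("L001.fits", 0.95), ("L002.fits", 0.88), ("L003.fits", 0.72),
    ("L004.fits", 0.61), ("L005.fits", 0.93), ("L006.fits", 0.55),
]

def select_best(frames, keep_fraction):
    """Sort by score (highest first) and keep the top fraction of frames."""
    ranked = sorted(frames, key=lambda f: f[1], reverse=True)
    n_keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:n_keep]

for fraction in (1.0, 0.8, 0.6):
    kept = select_best(frames, fraction)
    print(f"keep {fraction:.0%}: {[name for name, _ in kept]}")
```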
Great Zots! Didn't think of checking the FITS header. Thank you so much for setting me on a good path, Mabula.
Checking the noise reduction achieved: is that the 'realized/ideal noise reduction ratio' (ratNR)? Or should I focus on the reference frame's 'effective reference noise reduction' (refeNR)? [btw, I think I need a crash course on FITS headers as filled in by APP...]
Excellent, indeed, that post has a good explanation. To keep things simple, the most important factor in the reported noise reductions is the realized/ideal noise reduction ratio (it should ideally approach 1). The more images you stack, the closer it should come to 1 ;-). You can also see whether changing integration settings like average/median or outlier rejection improves or reduces the noise reduction achieved. This works so well that even the dither steps you take (if you take any) between the captured frames start to play a role. The bigger the dither steps, the faster this realized/ideal noise reduction ratio will approach 1 with the same number of frames 😉 Dithering with big enough steps is thus very important.
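If you want to sanity-check that ratio by hand, the arithmetic is straightforward; the numbers below are invented example values:

```python
import math

def noise_reduction_ratio(single_frame_noise, stack_noise, n_frames):
    """Realized / ideal noise reduction; approaches 1.0 for well-dithered data."""
    realized = single_frame_noise / stack_noise   # how much the stack actually improved
    ideal = math.sqrt(n_frames)                   # best case for uncorrelated noise
    return realized / ideal

# 100 frames, single-frame noise 12.0, stack noise 1.5 -> 8x realized vs 10x ideal
print(noise_reduction_ratio(12.0, 1.5, 100))  # 0.8
```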
Thank you so much for reminding me of dithering!!! 😀 I just installed an ASI2600 DUO camera and have to recalculate the dithering distance based on my new imaging scales. Aside from how big the dither steps are, I guess there must also be a point of diminishing returns with regard to the frequency or interval between such dither steps...
Yes, the dither step that is large enough to quickly reduce noise depends on the sensor characteristics. But it is quite clear from extensive testing on data from many APP users that most use dither steps that are too small. Steps of 1-2 pixels are not ideal; you should rather take steps of 10-20 pixels. For most sensors, you then quickly achieve ideal noise reduction with stacking 😉
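As a rough sketch of recalculating the dither distance for a new setup (all focal lengths and pixel sizes below are made-up example values, not a recommendation for any particular camera): translate the desired 10-20 pixel shift on the imaging sensor into arcseconds, then into guide-camera pixels if that is what the guiding software expects.

```python
def image_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcsec/pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

main_scale = image_scale(pixel_size_um=3.76, focal_length_mm=530)   # imaging camera (example)
guide_scale = image_scale(pixel_size_um=2.9, focal_length_mm=120)   # guide scope (example)

desired_shift_main_px = 15                      # aim for roughly 10-20 px on the main sensor
shift_arcsec = desired_shift_main_px * main_scale
shift_guide_px = shift_arcsec / guide_scale

print(f"main scale  : {main_scale:.2f} arcsec/px")
print(f"guide scale : {guide_scale:.2f} arcsec/px")
print(f"dither step : {shift_arcsec:.1f} arcsec = {shift_guide_px:.1f} guide-camera px")
```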