Mar 28 2026 APP 2.0.0-beta40 will be released in 7 days.
It took a long time to finish the work on this, but it brings a major performance boost of 30-50% over 2.0.0-beta39, from calibration to integration. We extensively optimized many critical parts of APP, and everything has been tested to guarantee the optimizations are correct. Drizzle and image resampling, for instance, are much faster; those modules have been completely rewritten. Memory usage is much lower as well. LNC 2.0 will also be released, which works much better and faster than LNC in its current state. And more, all will be added to the release notes in the coming weeks...
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and also in PDF format. Both versions will be identical and once released, will start to follow the APP release cycle and thus will stay up-to-date to the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
Didn't really know how to best phrase a question, so here's some background info:
Yesterday there were clouds coming and going. So much so that PHD2 every so often lost the guide star, but I kept imaging anyway, as the plan was to simply throw out subs with too much cloud coverage in them. All I really wanted was some data to play with. It cleared up and clouded over again throughout the entire evening, but in the end I had about 100 light frames of 180s each (roughly 5 hours). I had to discard one(!) because of something bumping the tripod; the rest was, as far as sharp, round stars were concerned, usable.
So, I started the process of sorting subs like I always do. Those with clouds were moved into a different folder to go through later, and those with clear skies were kept in the same folder. But... just for giggles really, I decided to try to stack everything - just to see how it turned out. The result I got after stacking was not what I expected - at all. My subs looked like these (cloudy one on the left, clear one on the right), and the set was a mix. If I were to guess, I'd say about half were clear, and half were very much like the one on the left.
Yet, the end result and first look without doing anything with it in APP looked like this:
I expected a mess, if I'm being honest. What I instead got was a stacked dataset that can look very good once properly post-processed.
That got me wondering: is some sort of analysis or other form of wizardry being performed in the stacking process that discards "unusable" data/pixels? I had set it to use 100% of the frames, and checked Local Normalization Rejection using adaptive rejection and 1 iteration of 1st degree LNC.
Just a question out of curiosity really. This software keeps impressing me.
You're right, we're actually wizards. Darn, we did keep that a secret for years...
😉 No, to be serious, APP analyzes each frame for various things, especially regarding star size and shape. A quality score is then calculated per frame, and frames with the lowest quality scores will not be included as much as those with the highest quality scores. If you have enough data, that works very well. You can test what the result is when you select to stack only 90% or 50% of the frames, for instance.
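To make the idea concrete, here is a minimal sketch of quality-based frame selection. This is NOT APP's actual (proprietary) algorithm; it just illustrates the principle described above, assuming each frame has per-frame star metrics such as median FWHM (smaller is sharper) and roundness (closer to 1 is rounder):

```python
import numpy as np

def quality_score(fwhm, roundness):
    """Toy score combining star sharpness and shape (higher = better)."""
    return roundness / fwhm  # sharp, round stars score highest

def select_frames(frames, fwhms, roundnesses, keep_fraction=1.0):
    """Keep the best keep_fraction of frames, ranked by quality score."""
    scores = [quality_score(f, r) for f, r in zip(fwhms, roundnesses)]
    order = np.argsort(scores)[::-1]                      # best first
    n_keep = max(1, int(round(keep_fraction * len(frames))))
    return [frames[i] for i in order[:n_keep]]

# Toy data: four "frames" (constant images); the cloudy subs have
# bloated, distorted stars and therefore poor metrics.
frames    = [np.full((4, 4), v, float) for v in (10.0, 11.0, 50.0, 55.0)]
fwhms     = [2.1, 2.3, 6.0, 7.5]
roundness = [0.95, 0.93, 0.70, 0.65]

best  = select_frames(frames, fwhms, roundness, keep_fraction=0.5)
stack = np.mean(best, axis=0)       # simple average of the kept frames
print(len(best), stack[0, 0])       # the two sharp frames survive
```

With keep_fraction=1.0 everything is stacked, but outlier-rejection steps (like the adaptive rejection you enabled) can still suppress cloudy pixels per-pixel; with 0.5, the worst half of the frames is dropped before integration, which is roughly what the 90%/50% setting lets you experiment with.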
That makes sense. (the wizard-part of course).
Thanks for replying and confirming my suspicions. I was genuinely curious, especially since I had intentionally asked it to stack everything.

