May 4 2026: APP 2.0.0-beta44 has been released!
New, improved internal memory controls should now work on all computers.
May 1 2026: APP 2.0.0-beta43 has been released!
Improved internal memory controls (much more stable and faster on big datasets), fixed the CPU image viewer, fixed Narrowband extraction demosaic algorithms.
Apr 29 2026: APP 2.0.0-beta42 has been released!
New, improved Normalization engine, fixed random crashes in integration, fixed RGB Combine & Calibrate Star Colors, fixed Narrowband extraction algorithms, new development platform with performance gains, various bug fixes in the tools, etc.
Apr 14 2026: Google Pay, Apple Pay & WeChat Pay added as payment options
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and in PDF format. Both versions will be identical and, once released, will follow the APP release cycle, so they will stay up to date with the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
I have run a stack of 400 frames through APP. I manually flipped through every frame and marked the worst ones before stacking, because I wanted to see if APP came to the same conclusion as I did with respect to frame quality. I have no idea how the various columns are calculated during processing, but I find the results surprising, to say the least.
I have sorted the files on all the relevant result columns and made a copy of what APP claims to be the best and worst frame in each sort; the sort column and the best/worst label are included in the file name. While some of these seem reasonable, some make no sense to me at all. I can't help asking: are these calculations robust?
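The test described above can be sketched in a few lines. This is a hypothetical illustration, not APP's actual code: the column names (`quality`, `star_shape`) and the assumption that a higher value is better are mine, and real metrics may rank in the opposite direction.

```python
# Hypothetical frame-metrics table; values are made up for illustration.
frames = [
    {"file": "f1.fits", "quality": 312.0, "star_shape": 1.4},
    {"file": "f2.fits", "quality": 180.5, "star_shape": 2.1},
    {"file": "f3.fits", "quality": 250.0, "star_shape": 1.1},
]

# Sort on each result column and record the extremes, as in the manual test.
# Assumes "higher is better" for every column, which may not hold in general.
for column in ("quality", "star_shape"):
    ranked = sorted(frames, key=lambda f: f[column])
    worst, best = ranked[0], ranked[-1]
    print(f"{column}: best={best['file']} worst={worst['file']}")
```

Comparing the files this picks out against a visual inspection is exactly the kind of sanity check described in the post.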
They usually are, yes, but I can imagine that in this case the calculations become a bit hard to make. The overall quality isn't good given the amount of background versus signal, and when the signal gets close to the background it can get tricky. You can always decide to throw out data manually; the quality score is updated during processing based on the information it receives from each step. Star shape and other metrics are included in it, so some frames with a lot of pollution but still very nice stars could get a higher score.
I realize that calculating these things is probably not easy, and this test has shown me that I need to change my routine for selecting which frames to stack. Previously I mostly checked the quality column, and if the 2-3 worst frames were acceptable I assumed that the rest were too. This is obviously not the case. (It probably is if you have "perfect" data.)
I was not initially aware that these frames contained a lot of clouds, but when I started developing the stacked image I saw that something was not right. Reloading everything into APP and flicking through the files soon revealed why, and that's why I did this test.
I suggested in a post a long time ago that APP should have a selection-criteria facility whereby one could set a threshold value for each of the calculated columns and have APP exclude (uncheck) all frames that do not meet those criteria. Now, more than ever, I think this would be very useful.
Yes, a more fine-tuned selection criterion would be nice to have, as well as faster flipping through images. Mabula is aware of this and wants that to work in later versions. Based on quality, you can always select to integrate the best 90%, for instance, which filters automatically. Still, it is always a good idea to manually take out images with clouds and such: APP assumes that you load in data of decent quality, and it can then correct for background light pollution and the like, but not for clouds.
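The suggested selection facility could be sketched roughly as below. This is purely an illustration of the idea, not APP's implementation: the column names, thresholds, and test data are all invented, and the "integrate best 90%" step mirrors the existing quality-percentage option mentioned above.

```python
# Hypothetical per-frame metrics; "b.fits" stands in for a cloudy frame.
frames = [
    {"file": "a.fits", "quality": 310, "fwhm": 2.1},
    {"file": "b.fits", "quality": 120, "fwhm": 4.8},
    {"file": "c.fits", "quality": 275, "fwhm": 2.5},
    {"file": "d.fits", "quality": 260, "fwhm": 2.9},
]

# Per-column thresholds (illustrative values): exclude any frame that
# falls below a minimum or above a maximum.
min_values = {"quality": 200}   # keep frames with quality >= 200
max_values = {"fwhm": 3.0}      # keep frames with FWHM <= 3.0

passed = [
    f for f in frames
    if all(f[c] >= v for c, v in min_values.items())
    and all(f[c] <= v for c, v in max_values.items())
]

# "Integrate best 90%": keep the top fraction by quality score.
keep = max(1, round(len(passed) * 0.9))
selected = sorted(passed, key=lambda f: f["quality"], reverse=True)[:keep]
print([f["file"] for f in selected])
```

The threshold pass would catch frames that a single overall quality score can let through, which is precisely the gap the test above exposed.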