Mar 28 2026 APP 2.0.0-beta40 will be released in 7 days.
It took a long time to finish the work on this, but it brings a major performance boost of 30-50% over 2.0.0-beta39, from calibration to integration. We extensively optimized many critical parts of APP, and everything has been tested to guarantee the optimizations are correct. Drizzle and image resampling are much faster, for instance; those modules have been completely rewritten, and memory usage is much lower. LNC 2.0 will also be released, which works much better and faster than LNC in its current state. And more — everything will be added to the release notes in the coming weeks...
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and also in PDF format. Both versions will be identical and, once released, will follow the APP release cycle and thus stay up to date with the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
I loaded the subs and calibration files, but I can't for the life of me stack (integrate) the subs in APP. All I get is a list of the subs and calibration files with the #stars, density, background & dispersion, SNR & noise, FWHM, quality score, and Registration RMS - #stars columns all blank.
That is interesting. I do have to say that stacking that many at once isn't advised; I would divide the data into batches of maybe 200 at a time. With the proper calibration data they will be calibrated nicely, and the integrations can then be stacked together again, which is much faster and less resource-heavy.
What happens if you load in 50 subs (just to test), go to tab 6 and press integrate?
As a Stellina user, it's common to have a lot of files. I was thinking about taking that approach (working with a smaller set at a time). My question is: at what stage should I tell it to create files? At the 5) Normalize step? Then, when I have all the normalized files from each session, run 6) Integrate? Just want to make sure I'm understanding the approach correctly.
You just need proper master calibration files that can be used for the images. If you integrate, say, 100 at a time (or more), you'll get all the benefits of satellite rejection, dithering, etc. You create the various smaller integrations, and APP will have saved each integrated file already. You collect those and load them in again as lights. No calibration is needed anymore, of course, so you then simply integrate the integrations with each other.
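The reason this two-stage approach works: for plain equal-weight mean stacking with equal batch sizes, averaging the batch integrations gives the same result as averaging all subs at once. (APP's real integration adds outlier rejection and weighting, so results won't be bit-identical, but the underlying math holds.) A minimal sketch with synthetic arrays standing in for calibrated subs:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 fake calibrated subs, each a small 4x4 "image"
subs = rng.normal(100.0, 5.0, size=(200, 4, 4))

# One-pass integration: mean-stack all subs at once
full_stack = subs.mean(axis=0)

# Two-stage integration: mean-stack four equal batches of 50,
# then mean-stack the batch integrations
batches = subs.reshape(4, 50, 4, 4)
batch_integrations = batches.mean(axis=1)
two_stage = batch_integrations.mean(axis=0)

# For equal batch sizes the results agree (up to float rounding)
print(np.allclose(full_stack, two_stage))  # → True
```

If the batches have unequal sizes, you would need to weight each batch integration by its sub count to recover the same result.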
Ah, OK. So run 6) Integrate and save the integrated files (deleting the other bits), then do that for each session. When ready, just load all the integrated files as lights, with nothing else, and run.
Just making sure I have the sequence down.
I will reduce the data sets to no larger than 200 subs and give it a try. Yes, it is a resource-heavy endeavor: it has been >10 hours and I'm still running Step 5) Normalize. I saw a comment that the APP 1.082 beta may have a regression, and some folks are having the same problem with integration, or not being able to integrate their subs.
It was recommended to load APP 1.083 Beta2 to fix the regression. I am currently running APP 1.083 with 1220 subs to see if they will stack and integrate. This is one helluva SW stress test. 🙄
Vincent, thank you for the advice.
Well, if you go to 6) and integrate immediately, there will be no other sub-integrations. The end result APP saves is the file you want: the actual integration. Beta2 does have an issue sometimes; Mabula is reverting that engine back to a previous, properly working one.
Just gave it a try and it was so much faster — night and day. Although I found I get better results by cropping the integrations in each session, which then reduces the cropping required in the final integration. The exposures don't always blend super well at the corners, though that could be down to how the MBB is doing its job and the differences between each set of integration files. Still, much faster.
How does this strategy affect drizzle and upsampling? Do you drizzle process each smaller set, integrate to a larger image, and then re-stack the new upscaled images? Won't that produce a blurrier image than drizzling all of the subs together? What's your suggested workflow?
Dithering etc. will work fine with these "smaller" datasets. Regarding drizzling, that's a good question, but it should still work fine on these "smaller" (so 100-200 sub) datasets. The requirements for a good drizzle result are undersampled data, good dithering, and loads of data if possible — and 100-200 subs is quite a lot of data, I'd say. So that should produce a nicely corrected, drizzled end result per sub-integration that can then still be combined with the other sub-sets.
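The "loads of data" point comes down to statistics: for uncorrelated noise, mean-stacking N subs reduces the background noise by roughly 1/√N, so a 100-200 sub batch already captures most of the achievable SNR gain before batches are combined. A small simulation with synthetic noise-only frames (not APP output) illustrates the scaling:

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 5.0  # per-sub background noise level (hypothetical units)

def stacked_noise(n_subs: int) -> float:
    """Measured noise std of a mean-stack of n_subs noise-only frames."""
    frames = rng.normal(0.0, SIGMA, size=(n_subs, 100, 100))
    return float(frames.mean(axis=0).std())

# Noise in the stack falls roughly as SIGMA / sqrt(N)
for n in (1, 25, 100, 400):
    print(f"N={n:4d}  measured={stacked_noise(n):.3f}  "
          f"predicted={SIGMA / np.sqrt(n):.3f}")
```

Note the diminishing returns: going from 100 to 400 subs only halves the noise again, which is why per-batch drizzling of 100-200 subs, followed by combining the batch results, loses little compared to one giant stack.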