May 4 2026: APP 2.0.0-beta44 has been released!
The new, improved internal memory controls should now work on all computers.
May 1 2026: APP 2.0.0-beta43 has been released!
Improved internal memory controls (much more stable and faster on big datasets), fixed CPU image viewer, fixed Narrowband extraction demosaic algorithms.
Apr 29 2026: APP 2.0.0-beta42 has been released!
New improved Normalization engine, fixed random crashes in integration, fixed RGB Combine & Calibrate Star Colors, fixed Narrowband extraction algorithms, new development platform with performance gains, bug fixes in the tools, etc.
Apr 14 2026: Google Pay, Apple Pay & WeChat Pay added as payment options
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and also in PDF format. Both versions will be identical and, once released, will follow the APP release cycle and thus stay up-to-date with the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, nor will Renter's license holders.
Whenever I stack subs from multiple nights, if I forget even a single frame with parts of the foreground (e.g. roofs of distant houses, trees, entering into the frame), that generates a very strong dark (or bright) blurry spot that ruins the entire integration. Playing with rejection parameters does not seem to have an impact (at least to the extent of the simple tests that I've run).
I then simply remove that sub and just integrate the rest without issues.
My problem is that right now a construction site has popped up, and small parts of the image are relatively frequently covered by a crane passing along an edge of the frame. It is not covering the objects I'm interested in, but it is present in enough subs that I lose a significant amount of integration time (1-2 hours on a typical project) if I just remove them.
Is there a way to mask out the offending parts so that they are simply ignored by APP when stacking? I imagine that just setting all the affected pixels to 0 might do the trick? (It is, after all, what happens when integrating mosaic panels that are registered together: the "rest of the image" is simply black.) Is there something more that should be done (alpha channel? some masking bits that might not be obvious?)? And what would be a good way of creating such a mask on ~50 subs? It's not an issue if I need to do it 50 times; it's more a question of which software would be most appropriate.
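For the "which software" part, a scripted approach may be less tedious than 50 manual edits. Below is a minimal numpy sketch of zeroing a fixed rectangle in each sub; the function name and the region coordinates are made-up examples, and the file I/O (e.g. loading FITS data into arrays) is omitted for brevity.

```python
import numpy as np

def mask_region(img, y0, y1, x0, x1):
    """Return a copy of `img` with the rectangle [y0:y1, x0:x1] set to 0."""
    out = img.copy()
    out[y0:y1, x0:x1] = 0.0
    return out

# Demo on a synthetic 100x100 sub with a uniform 0.5 background;
# pretend the crane sits in the top-right corner.
sub = np.full((100, 100), 0.5)
masked = mask_region(sub, 0, 20, 80, 100)
print(masked[10, 90], masked[50, 50])  # masked pixel vs untouched pixel
```

The same function would be applied in a loop over all ~50 subs, with per-sub coordinates if the crane moves between frames.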
Thank you in advance!
After a bit of testing:
I set all the pixels "to be masked" to 0 in Photoshop, saved, and reloaded in APP. I have 113 files in total, with 41 masked out.
- With automatic integration, the combine method defaults to average, which leads to very strong artefacts because more than 1/3 of the images have blanked-out pixels.
- Setting integration to median improves things dramatically; nevertheless, the large proportion of images at zero in the masked-out region means that the median values being selected sit on the lower end of the distribution of "real" pixels.
- Trying Maximum yields a perfect image in the masked-out regions (it ignores the blacked-out images), but the rest of the image is a mess.
- Playing around with pixel rejection, I lowered the low kappa threshold dramatically so that only a few pixels around the "real" pixel values are kept, but there was no real difference compared to the standard median.
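The bias described above is easy to reproduce on a toy pixel stack: with 41 of 113 values forced to zero, the average is dragged far below the true level, the median lands in the lower tail of the real distribution (the zeros occupy the bottom 41 slots of the sorted stack), and only the maximum ignores the zeros entirely. The 0.5 background level and 0.02 noise are made-up numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.5, 0.02, size=72)          # 72 "real" pixel values
stack = np.concatenate([real, np.zeros(41)])   # plus 41 masked (zeroed) subs

print(stack.mean())       # pulled far below 0.5 by the zeros
print(np.median(stack))   # biased low: picks from the lower tail of the real values
print(stack.max())        # ignores the zeros entirely
```

This matches the observed behaviour: median is much better than average but still slightly dark in the masked region, while maximum is clean there (and unusable elsewhere).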
And Local Normalisation saves the day! (or rather the lack of!)
I suspected that local normalisation would mean that the region with my masking would be analysed on its own, meaning that the gap between masked pixels and real data would be smaller than if all pixels were analysed together. And indeed that works out pretty well. The darker areas disappear completely and I've been able to add ~1.3h of data to my "clean" 2.4h. Given the challenges of getting data (and good weather!) this is a massive improvement!
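The intuition above can be sketched as tile-based normalisation: offset each tile of a sub so its median matches the reference's tile median. A fully zeroed tile then gets lifted to the reference level and no longer drags the stack down. This is only a toy additive model under assumed mechanics, not APP's actual local normalisation algorithm; the tile size is arbitrary.

```python
import numpy as np

def local_normalise(sub, ref, tile=50):
    """Offset each tile of `sub` so its median matches the reference tile's median."""
    out = sub.astype(float).copy()
    for y in range(0, sub.shape[0], tile):
        for x in range(0, sub.shape[1], tile):
            s = out[y:y + tile, x:x + tile]
            r = ref[y:y + tile, x:x + tile]
            s += np.median(r) - np.median(s)   # additive background match, in place
    return out

ref = np.full((100, 100), 0.5)   # reference frame background level
sub = np.full((100, 100), 0.5)
sub[0:50, 50:100] = 0.0          # masked-out (zeroed) tile
norm = local_normalise(sub, ref)
print(norm[10, 90])              # lifted back to the reference level
```

In this toy case the zeroed tile ends up at the reference background, which is consistent with the dark areas disappearing once local normalisation is enabled.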
Now I regret deleting all my previous "crane and other stuff" images!
And a bit more testing shows that swapping Median for Average (but still not using Local Normalisation Rejection) basically gives exactly the same results!
Yes, the algorithms are not able to recognize objects in the frame; this is probably not possible, as that would be extremely variable. Thanks for the elaborate analysis! This is really interesting: local normalization indeed tries to equalize regions in background illumination, which usually works very well in true astro fields of view. 😉

