Apr 9 2026 APP 2.0.0-beta40 will be released in 24 hours!
It has a major performance boost of 30-50% over 2.0.0-beta39, from calibration to integration, and mosaics are even faster! We extensively optimized many critical parts of APP, and everything has been tested to guarantee the optimizations are correct. Drizzle and image resampling, for instance, are much faster; those modules have been completely rewritten. Memory usage is much lower. LNC 2.0 will be released, which works much better and faster than LNC in its current state. Improved outlier rejection with LNC 2.0 rejection. macOS CMD+A now works in the file chooser! And more; everything will be added to the release notes in the coming hours...
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and also in PDF format. Both versions will be identical and, once released, will follow the APP release cycle and thus stay up to date with the latest APP version.
Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
Hi Mabula,
I processed some mono data with multi-channel processing - L, R, G and B frames. The calibrated light frames that resulted were solid black. The light frames look OK when viewed in their linear state, but all black when I choose l-calibrated from the drop-down.
I can use the master flat, master dark, master bias and BPM that were the output of the multi-channel processing to calibrate the lights in PI's ImageCalibration process and the calibrated lights look good. I can also do the same with APP if I turn off multi-channel and only do a single channel's light frames, again using the master calibration frames that were produced in the multi-channel process.
So I think the master calibration frames are fine, but there is something I am doing wrong when I try to calibrate my light frames with multi-channel selected.
I've looked at the settings and nothing is jumping out at me. Thoughts?
Thanks,
Rowland
Can you please share the contents of the frame list panel? How are the master frames matched to the light frames, does that look okay, and is the channel/filter mapping correct?
Kind regards,
Mabula
Did you create all the masters in APP 1.060 or 1.061? Or are some of them old masters?
If you load an image in the l-calibrated image viewer mode, what does the console say? It should indicate which masters are used to calibrate the data. Does that look correct?
Mabula
Mabula
I'm using APP 1.060. The masters were all created fresh with APP 1.060, no old masters.
The console output (attached) looks OK to me as far as choosing the correct calibration frames. This is console output generated by double-clicking a Red light frame, first with the (l) linear view, and then changing to the l-calibrated view.
Hi @rowland-f-archer-jr,
That all looks okay indeed. One thing, though, points to a possible problem.
The preview filter indicates it needs to convert float data, i.e. 32-bit data, to normalised floats in the range [0-1]. That seems to indicate the light frames that need to be calibrated are in 32-bit float format? Is that correct?
What bit depth are the master frames?
The masters need to be of the same bit depth as the light frames, so perhaps that's the problem?
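As a quick sanity check, the bit depths of the lights and masters can be compared straight from their FITS headers. Here is a minimal stdlib-only sketch; the BITPIX codes come from the FITS standard, and in practice you would read them with a library such as astropy.io.fits rather than hard-coding them as done here:

```python
# Minimal sketch: compare the FITS BITPIX of a light frame and a master.
# BITPIX codes per the FITS standard: 16 = 16-bit integer, -32 = 32-bit float.
# The values below are hard-coded to mirror the mismatch suspected here.

BITPIX_NAMES = {8: "8-bit integer", 16: "16-bit integer",
                32: "32-bit integer", -32: "32-bit float",
                -64: "64-bit float"}

def describe(bitpix):
    return BITPIX_NAMES.get(bitpix, f"unknown ({bitpix})")

def compatible(light_bitpix, master_bitpix):
    # Lights and masters should share the same bit depth for calibration
    return light_bitpix == master_bitpix

light, master = -32, 16  # 32-bit float lights vs 16-bit integer masters
print(f"{describe(light)} vs {describe(master)}: "
      f"{'OK' if compatible(light, master) else 'mismatch'}")
```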
Kind regards,
Mabula
Or share the FITS metadata of a light frame and the masters; that could tell us whether they are compatible or not.
The details selectbox at the top of the image viewer shows the FITS header 😉 of the loaded image.
Mabula
OK, now we're getting somewhere! The light frames are 32-bit and the calibration masters are 16-bit. The light frames were created by PI's SubframeSelector script, because I used that script to measure the frames and write a "SCORE" keyword to the FITS header. I guess PI saved them as 32-bit, even though the original lights were 16-bit and the SubframeSelector action chosen was "Copy."
So... I will try again using the original light frames, and it will probably work OK.
I think PI's ImageCalibration must automatically convert the 16- and/or 32-bit files to a matching bit depth before calibrating? Maybe a future enhancement for APP?
Thanks for the Saturday help, back to work on this side!
Rowland
That's the problem then indeed ;-). I am sure that if you save the frames as 16-bit integers all will be fine.
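For completeness, rescaling normalised [0-1] floats back to 16-bit integers is a one-line mapping. This is a stdlib-only sketch for illustration; real frames would go through numpy/astropy, and it assumes the floats genuinely originated from 16-bit data scaled to [0, 1]:

```python
def floats_to_uint16(pixels):
    """Rescale normalised [0, 1] float pixels back to 16-bit unsigned
    integers (0..65535). Values outside [0, 1] are clipped first."""
    return [round(min(1.0, max(0.0, v)) * 65535) for v in pixels]

print(floats_to_uint16([0.0, 0.25, 1.0]))  # [0, 16384, 65535]
```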
Always make sure that no external application changes the data range of the original light frames before performing calibration! That can lead to all sorts of problems.
In the case of PI, I know the data will be scaled to normalised floats in [0-1], and I could possibly adjust APP internally so that it can still deal with this transparently.
But, like I said, always perform calibration on the data that came straight from your camera, otherwise problems can occur very quickly...
Officially I can't comment on a feature of another application, but is APP missing something in qualifying the quality of your subs? If so, what would APP additionally need in this regard? Perhaps you use PI's feature for a rather quick selection of which frames to use and which to discard?
As a side trip... let me explain why I think quality selection before any other processing probably isn't very helpful.
If you do quality selection on uncalibrated, unregistered and unnormalised subs, then parameters like FWHM, noise and SNR can be quite meaningless... especially if you are comparing data shot on different nights and with different image scales. APP deals with all these problems properly and at the right time in processing. For instance, quantifying noise before normalization does not help you, since that parameter is affected by data normalization.
(Therefore comparing images for noise using only a noise tool/script without first normalizing them is a rather futile exercise... especially if the data comes from different sources/applications/sessions/photographers. If normalization adjusts the dispersion of your data by a factor of 2, then the measured noise changes by that same factor 😉)
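To make the scaling argument concrete, here is a tiny sketch (synthetic Gaussian noise, not real frame data) showing that multiplying the data by a factor multiplies the measured noise by exactly the same factor:

```python
import random
import statistics

random.seed(42)
# Synthetic "background" pixels: mean 1000, noise sigma ~50
frame = [random.gauss(1000, 50) for _ in range(10_000)]
# Suppose normalization rescales the dispersion of this frame by 2x
normalized = [2.0 * v for v in frame]

noise_before = statistics.stdev(frame)
noise_after = statistics.stdev(normalized)
print(round(noise_after / noise_before, 6))  # 2.0
```

So a per-frame noise measurement taken before normalization would rank these two identical frames very differently, which is why the measurement only becomes meaningful after normalization.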
Kind regards,
Mabula
Thanks for that explanation Mabula. APP is running now and it found stars in all the light frames, so we definitely got past the calibration hurdle.
The reason I used PI's tools was that this was data I acquired last year and already processed in PI, so I decided to just use that data with APP and compare results.
I do like PI's Blink tool as a quick way to decide on which frames to toss because of clouds or focus problems. APP seems to take a few seconds to load each frame before I can view it, whereas Blink loads all the frames at once - so I can do something else while they're loading. Or am I missing something in APP that would let me rapidly "blink" through the rather large set of frames that come from my ASI1600MM sessions?
SubframeSelector takes some time to use and I probably would not have gone through that if I was processing the data only in APP. I just happened to have already done that in PI, and without thinking twice, I used the light frames from the "Approved" folders output by SubframeSelector.
I really like the fact that APP does the quality score as part of pre-processing, so you don't have to do a separate "SubframeSelector" manual step.
Is there a recommendation as to the "best" weighting factor to use in APP integration? SNR? Quality? or just try a few and see?
Thanks,
Rowland
You're most welcome @rowland-f-archer-jr 😉
Thank you for the feedback.
I can definitely make a similar tool for blinking, I'll write it down in my TODO list.
For weights, in most cases, quality weights work best for both sharpness and noise in the end result.
Quality is a combination of:
- FWHM (star size and roundness, i.e. the star shape weights; the FWHM values are also corrected for images with different image scales. "Absolute" is the absolute FWHM in pixels in that frame; "relative" is the FWHM taking the image scale of the reference frame into account, and those relative values are used in the quality calculations and star shape weights)
- noise (the noise weights)
- star density (the star density weights, corrected for images with different dimensions and image scales)
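The exact formula APP uses is not public, but the idea of folding those three ingredients into one weight can be sketched roughly like this. All names, the normalisation against the reference frame's metrics, and the multiplicative combination are assumptions for illustration only:

```python
# Hypothetical sketch of a combined quality weight. Not APP's actual
# formula: it only illustrates that sharper stars (lower FWHM), lower
# noise, and more detected stars should all push the weight up, with
# each metric normalised against the reference frame.

def quality_weight(fwhm, noise, star_density,
                   ref_fwhm, ref_noise, ref_density):
    sharpness = ref_fwhm / fwhm            # < 1 if softer than reference
    cleanliness = ref_noise / noise        # < 1 if noisier than reference
    density = star_density / ref_density   # < 1 if fewer stars detected
    return sharpness * cleanliness * density

# A frame matching the reference on all three metrics gets weight 1.0;
# a frame with twice the FWHM gets half the weight.
print(quality_weight(2.5, 10.0, 0.8, 2.5, 10.0, 0.8))  # 1.0
print(quality_weight(5.0, 10.0, 0.8, 2.5, 10.0, 0.8))  # 0.5
```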
SNR is very unreliable for weights because the actual signal that we are interested in is very hard to measure reliably. Images with larger stars (so a bit out of focus) usually also give higher SNR, which is not good. Bad skies with cloudiness will give false positives on the SNR metric as well. And if you compare images that have slightly different gradients, due to light pollution for instance, the SNR metric is also unreliable. I never use SNR for weights for these reasons.
Kind regards,
Mabula