It did take a long time to finish this work, but it brings a major performance boost of 30-50% over 2.0.0-beta39, from calibration to integration. We extensively optimized many critical parts of APP, and everything has been tested to guarantee the optimizations are correct. Drizzle and image resampling, for instance, are much faster; those modules have been completely rewritten. Memory usage is much lower. LNC 2.0 will be released, which works much better and faster than LNC in its current state. And there is more; everything will be added to the release notes in the coming weeks...
Update on the 2.0.0 release & the full manual
We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and also in PDF format. Both versions will be identical and, once released, will follow the APP release cycle and thus stay up to date with the latest APP version.
Once 2.0.0 is released, the price of APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.
Hello Mabula, a question about registration of pictures taken with different telescopes and focal lengths. Is it better to take a picture with a shorter focal length as a reference or with a longer focal length? I'm thrilled with the new version, especially the speed of integration. If linear fit clipping gets even faster in the next version, that would be perfect.
Hello Mabula, a question about registration of pictures taken with different telescopes and focal lengths. Is it better to take a picture with a shorter focal length as a reference or with a longer focal length?
Hi minusman,
It depends on what you want to accomplish.
If you want to use the whole field of view of the shorter focal length data, then use a frame of the shorter focal length data as the reference. In this case, to benefit from the longer focal length data, which probably has a smaller (finer) image scale, you can increase the scale parameter in 6) to compensate. This method ensures good registration over the entire field of view.
If you only want to use the field of view of the longer focal length data, then use a frame of the longer focal length data as the reference. No resolution is wasted then. But in this case, you run the risk of bad registration outside the field of view of the reference frame.
I'm thrilled with the new version, especially the speed of integration. If linear fit clipping gets even faster in the next version, that would be perfect.
That's great, thanks!
Regarding linear fit clipping, my advice would be to use sigma/winsorized clipping in combination with Local Normalization Correction, or LNC. Linear fit clipping is nothing more than a filter that compensates for bad local normalization in your stack. I invented LNC to solve that particular problem directly, so LNC actually makes linear fit clipping a redundant filter 😉 and has the huge benefit of much better integration quality. Since the data is normalized locally, the sigma/winsorized outlier rejection filters will even work better than linear fit clipping in that case.
Linear fit clipping assumes that the data has different gradients, but if those are already corrected by LNC, then that assumption actually degrades the good statistical calculation needed for robust outlier rejection.
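To make that concrete, here is a minimal sketch in Python/NumPy of kappa-based sigma clipping and a simplified winsorized variant on a single pixel stack. This is only to illustrate the general idea, not APP's actual implementation:

```python
import numpy as np

def sigma_clip(stack, kappa=2.5, iterations=3):
    """Plain kappa-sigma clipping on one pixel stack: reject values
    farther than kappa standard deviations from the mean."""
    data = np.asarray(stack, dtype=float)
    mask = np.ones(data.shape, dtype=bool)
    for _ in range(iterations):
        mean, std = data[mask].mean(), data[mask].std()
        mask = np.abs(data - mean) <= kappa * std
    return data[mask].mean()

def winsorized_sigma_clip(stack, kappa=2.5, iterations=3):
    """Winsorized variant (simplified): instead of discarding outliers
    while estimating mean/std, pull them in to the clipping boundaries,
    which gives a more robust estimate, especially on small stacks."""
    data = np.asarray(stack, dtype=float)
    w = data.copy()
    for _ in range(iterations):
        mean, std = w.mean(), w.std()
        w = np.clip(w, mean - kappa * std, mean + kappa * std)  # winsorize, don't discard
    mean, std = w.mean(), w.std()
    keep = np.abs(data - mean) <= kappa * std
    return data[keep].mean()

# toy stack: one pixel across 10 frames, with one satellite-trail outlier
pixel_stack = [100, 102, 98, 101, 99, 103, 97, 100, 250, 101]
print(sigma_clip(pixel_stack), winsorized_sigma_clip(pixel_stack))
```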
I have already worked on and tested making linear fit clipping faster, so that will come sooner or later. But currently, improving the LNC process (so LNC version 2.0 ;-)) will probably come sooner, since it's really much more valuable for all kinds of integrations, regular and mosaics alike, in combination with sigma/winsorized clipping.
So if I understand it correctly, it's better to use a wide-field image as a reference and to increase the scale when integrating. A reference image with a long focal length would cause a loss of resolution. I had always used linear fit clipping as suggested in the tooltip, without LNC. With LNC and sigma clipping, only after several attempts did I get a similar or sometimes better result than the first attempt with LFC.
Best regards.
Hi Minusman,
In most cases, yes, it will be better, since you will have more assurance that the whole field of view looks good. And the scale factor can be used to preserve resolution from the longer focal length data.
A reference image with a long focal length would cause a loss of resolution
No, that's not correct. The long focal length data natively has higher resolution, so using it as a reference will preserve resolution. If you don't use it (so the reference comes from the shorter focal length data), the scale factor is there to preserve resolution if you wish. You will need to know the image scales of the different focal length setups to make the right compensation.
An example: if the image scale of the shorter focal length data is 2.4 arcsec/pixel and the longer focal length data has 1.8 arcsec/pixel, then the ratio between these two tells you what scale factor to use.
suggested scale factor: 2.4 / 1.8 = 4/3 ≈ 1.33
Then the resolution of the longer focal length data will be preserved. Otherwise resolution would be lost.
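As a quick sanity check, you can compute the ratio yourself from your setups. The helper below just applies the standard image-scale formula (206.265 × pixel size in µm / focal length in mm); the pixel size and focal lengths are hypothetical values chosen to reproduce the example above, not anyone's actual gear:

```python
# Standard formula: image scale ["/pixel] = 206.265 * pixel_size [um] / focal_length [mm]
def image_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

# hypothetical setups matching the example image scales:
short_fl = image_scale(4.65, 400)   # ~2.40 "/pixel (shorter focal length, the reference)
long_fl  = image_scale(4.65, 533)   # ~1.80 "/pixel (longer focal length)

# suggested scale factor for 6) Integrate, preserving the finer sampling:
print(short_fl / long_fl)           # ~1.33 (= 4/3)
```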
I had always used linear fit clipping as suggested in the tooltip, without LNC. With LNC and sigma clipping, only after several attempts did I get a similar or sometimes better result than the first attempt with LFC.
Okay, this is actually a bit complicated. What is a better result? Let me explain:
Is the result better because you see fewer outliers/artefacts? Since the filters work differently, you need to adjust the kappa values. A kappa of 2.5 for LFC will always give a different result than a kappa of 2.5 for normal sigma clipping. These kappa values really can't be compared between filters, because the standard deviations on which they work are totally different metrics.
To make a qualitative statement about which filter works best, you need to find, for both filters, the kappa at which there are no more outliers. Then you will have different integrations without outliers, made with separate outlier rejection filters at different kappa values. Do create an output rejection map as well while testing this; it can tell you a lot about how much or how little is rejected.
The two integrations then need to be analysed, mainly for noise, to make a qualitative statement about which integration is better.
An example: let's say that with kappa 2.5, all outliers are removed with LFC, so it looks fine. And with kappa 2.5 for sigma clipping (and LNC enabled), some outliers still exist.
Then lower the kappa for sigma clipping to 2.2, for instance. Let's assume that at that setting, all outliers are gone.
Now compare the two stacks for noise. The result might surprise you.
Outlier rejection applied too aggressively can really harm the signal-to-noise ratio. The amount of noise in the integration can be anywhere from 0 to even as much as 50% worse, which is terrible.
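If you want to put a number on it, a simple robust noise estimate on a background region of each integration already tells you a lot. Below is one common approach (MAD-based, sketched in Python/NumPy); this is not how APP measures noise internally, and the random arrays just stand in for your two loaded stacks:

```python
import numpy as np

def robust_noise(img):
    """Estimate background noise as a robust standard deviation
    (1.4826 * median absolute deviation), so bright stars and
    leftover outliers don't dominate the estimate."""
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

# stack_lfc and stack_sigma would be the two integrations, loaded
# e.g. from FITS; here synthetic data stands in for them.
rng = np.random.default_rng(0)
stack_lfc   = 100 + 5.0 * rng.standard_normal((512, 512))
stack_sigma = 100 + 4.2 * rng.standard_normal((512, 512))

print(robust_noise(stack_lfc), robust_noise(stack_sigma))
# At equal signal, the stack with the lower noise estimate wins.
```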
Many thanks, Mabula, for this detailed explanation. OK, I did not compare the noise values; I just compared the brightness gradients purely visually on the monitor, which is calibrated with a Spyder Elite 5. With LNC and winsorized sigma clipping, the outliers were filtered out better than with LFC, for example. In the future I will compare the noise values more. Thank you for now.
Hello Mabula, first of all, big praise. I had to give up on my latest stack of M33 in PixInsight; it kept failing at registration. With APP it was easy, and thanks to composition mode "full" in conjunction with MBB, I got the gradients under control.
And it's true: winsorized sigma clipping with LNC gives a higher SNR and less noise than linear fit clipping, though the values are pretty much the same.
Excellent Minusman, apologies for my late response...
Looks good indeed. Yes, I see the statistics. The differences are very minor indeed, hard to see visually. It does depend on the data, of course. And I see that you have used advanced normalization for the three stacks shown, which improves things as well.
Regarding linear fit clipping, it does work and can work well. But it's good to realize that LFC is a solution for a problem, namely bad local normalization in the data. For that same reason, using LFC for the integration of darks and bias frames really doesn't make any sense.
The drizzle stack with the Gaussian kernel is the worst for noise, because the Gaussian kernel improves sharpness, and noise will suffer a bit as a result 😉 That's always the case with the drizzle settings: it's noise versus sharpness.
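As a rough illustration of why the kernels trade noise against sharpness, here is a toy 1-D sketch of droplet weighting (my own simplification, not APP's drizzle code, and the Gaussian width is an arbitrary choice): the tophat spreads flux evenly over the droplet, while the Gaussian concentrates it near the centre, so fewer samples average out per output pixel and noise rises:

```python
import numpy as np

def tophat_weights(offsets, droplet_radius):
    """Uniform weight inside the droplet, zero outside."""
    return (np.abs(offsets) <= droplet_radius).astype(float)

def gaussian_weights(offsets, droplet_radius):
    """Gaussian falloff; sigma = radius/2 is an arbitrary choice here.
    Flux concentrates near the droplet centre: sharper, but noisier."""
    return np.exp(-0.5 * (offsets / (droplet_radius / 2)) ** 2)

# output-pixel offsets from the droplet centre, in droplet radii
offsets = np.linspace(-1.5, 1.5, 7)
print(tophat_weights(offsets, 1.0))
print(gaussian_weights(offsets, 1.0))
```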
Here is the finished image of M33. Everything was done in Astro Pixel Processor, from calibration to post-production. The artifact problem with Bayer drizzle is also solved: with the correct scale factor, the artifacts disappeared.
Can you share the difference that you see with different scaling? Are the artefacts worse or better with a higher scale factor?
Is it possible to see a fully zoomed-in screenshot of the artefacts that you see? (I know that drizzle will give artefacts with settings that need more data, but it's good to have a visual picture of the artefacts.)
The correct scale would be 1.6512, which I rounded up to 1.7. You can also increase the scale further without artifacts appearing (see pictures). If you leave the scale at 1, really crazy things come out; depending on the reference image used during registration, even a black hole. The picture consists of a stack made with three different lenses and 4 different focal lengths. In PixInsight I would have had to throw away 50% of the pictures because of bad guiding, aircraft trails, etc. But with APP, no problem.
Thank you for sharing these results. I will test drizzle with scale 1 and lower and will report back 😉
It doesn't seem entirely right that the images get black holes (or become entirely black) if you only lower the scale parameter (leaving the droplets the same), so I need to test this.
I have been investigating the problem you mentioned about black holes with Bayer drizzle and a scale of only 1.0 for the integration.
I can't reproduce the problem with APP 1.057-beta (almost ready for release) using Bayer drizzle at scale 1.0, droplets of 2 pixels, and winsorized sigma clipping turned on, like you did.
I suspect the problem was due to a bug in winsorized sigma clipping that was fixed in 1.056. Can you confirm that this doesn't happen with APP 1.056?
I have studied my drizzle/Bayer drizzle module and I think everything is working fine, but if you can still reproduce this error, it's worthwhile, if possible, to have a look at your particular dataset to solve this.
Hang on, I did find a bug: when I lowered the scale to around 0.4-0.6x, things become weird and incorrect. Lowering even further to 0.2x, it works again. I now see this issue is related to the drizzle weights that are calculated for each pixel.
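For context, such a per-pixel weight is essentially the fraction of a droplet's area that lands inside an output pixel. Here is a toy 1-D sketch of that idea (my assumption of how a drizzle weight is typically computed, not APP's actual module), which also shows why unusual scale factors exercise the edge cases:

```python
def overlap_weight(droplet_lo, droplet_hi, pix_lo, pix_hi):
    """Fraction of the droplet [droplet_lo, droplet_hi) that lands
    inside the output pixel [pix_lo, pix_hi)."""
    overlap = max(0.0, min(droplet_hi, pix_hi) - max(droplet_lo, pix_lo))
    return overlap / (droplet_hi - droplet_lo)

scale = 0.5          # output grid coarser than the input grid
droplet_size = 2.0   # droplet size in input pixels
x = 3.3              # droplet centre, in input-pixel coordinates
lo, hi = x - droplet_size / 2, x + droplet_size / 2

# output pixel k covers input coordinates [k/scale, (k+1)/scale):
for k in (1, 2):
    print(k, overlap_weight(lo, hi, k / scale, (k + 1) / scale))
# prints 0.85 and 0.15: the weights of the two touched output pixels
```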
I will investigate further and report back when it's fixed 😉
I had used the Gauss kernel. I had read in a post that it can be used with over 100 pictures to achieve higher sharpness. I'll try again with the tophat kernel. Should I test with other settings?
Okay, thanks. Yes, the Gauss kernel will be the sharpest and noisiest with the same droplet size. If the problem doesn't occur with, for instance, the tophat kernel, I have a better indication of where the bug might be 😉
What happens if you turn off outlier rejection? Is the integration normal then?
From the file names, I gather that you use 10 frames for diffraction protection for outlier rejection?
Please turn it off and let me know what happens. I think that might be the problem here. For your dataset, I think you need to increase it to a much higher value.
Another question: the area that is wrong has a particular shape. Are you combining images with different fields of view, or are they all from the same camera and telescope?
And are you integrating on an external drive? If so, what kind of drive?