Hello Mabula, a question about registration of pictures taken with different telescopes and focal lengths. Is it better to take a picture with a shorter focal length as a reference or with a longer focal length? I'm thrilled with the new version, especially the speed of integration. If linear fit clipping on the next version gets even faster then that would be perfect.
Hello Mabula, a question about registration of pictures taken with different telescopes and focal lengths. Is it better to take a picture with a shorter focal length as a reference or with a longer focal length?
Hi minusman,
It depends on what you want to accomplish.
- If you want to use the whole field of view of the shorter focal length data, then use a frame of the shorter focal length data as the reference. In this case, to benefit from the longer focal length data, which probably has a smaller image scale, you can increase the scale parameter in 6) to compensate for this. This method ensures good registration over the entire field of view.
- If you only want to use the field of view of the longer focal length data, then use a frame of the longer focal length data as the reference. No resolution is wasted then. But in this case, you run the risk of bad registration outside the field of view of the reference frame.
I'm thrilled with the new version, especially the speed of integration. If linear fit clipping on the next version gets even faster then that would be perfect.
That's great, thanks!
Regarding linear fit clipping, my advice would be to use sigma/winsorize clipping in combination with Local Normalization Correction (LNC). Linear fit clipping is nothing more than a filter that compensates for bad local normalization in your stack. I invented LNC to solve that particular problem directly, so LNC actually makes linear fit clipping a redundant filter 😉 and has the huge benefit of much better integration quality. Since the data is normalized locally, the sigma/winsorize outlier rejection filters will even work better than linear fit clipping in that case.
Linear fit clipping assumes that the data has different gradients, but if that's already corrected by LNC, then that assumption actually degrades a good statistical calculation needed for robust outlier rejection.
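To make the per-pixel rejection concrete, here is a toy sketch of a winsorized sigma clip on one pixel's stack of values. The function name, defaults, and iteration scheme are my own illustration, not APP's actual implementation:

```python
import numpy as np

def winsorized_sigma_clip(values, kappa=2.5, iterations=3):
    """Toy per-pixel winsorized sigma clip (illustrative only, not APP code).

    values: the same pixel position sampled across all frames,
    assumed already normalized (e.g. by LNC)."""
    v = np.asarray(values, dtype=float)
    mask = np.ones(v.shape, dtype=bool)          # True = keep
    for _ in range(iterations):
        kept = v[mask]
        med, std = np.median(kept), np.std(kept)
        # Winsorize: pull extreme samples to the clip limits before
        # re-estimating the statistics, so the outliers themselves
        # cannot inflate the standard deviation used to reject them.
        w = np.clip(v, med - kappa * std, med + kappa * std)
        med, std = np.median(w), np.std(w)
        new_mask = np.abs(v - med) <= kappa * std
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    # Stacked pixel value plus the per-frame rejection map entry.
    return v[mask].mean(), mask
```

For example, a satellite-trail sample of 50 among sky samples near 10 gets rejected at kappa 2.0, and the returned mask is exactly the kind of information a rejection map visualizes per pixel.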
I have already worked on and tested making linear fit clipping faster, so that will come sooner or later. But currently, improving the LNC process (so LNC version 2.0 ;-)) will probably come sooner, since it's really much more valuable for all kinds of integrations, regular and mosaics, in combination with sigma/winsorize clipping.
Kind regards,
Mabula
So if I understand it correctly, it's better to use a wide field image as a reference and to increase the scale when integrating. A reference image with a long focal length would cause a loss of resolution. I had always used linear fit clipping, as suggested in the tooltip, without LNC. With LNC and sigma clipping it took several attempts before I got a similar, or sometimes better, result than my first attempt with LFC.
Best regards.
Hi Minusman,
In most cases, yes, it will be better, since you will have more assurance that the whole field of view looks good. And the scale factor can be used to preserve resolution from the longer focal length data.
A reference image with a long focal length would cause a loss of resolution
No, that's not correct. The long focal length data natively has higher resolution. So using it as a reference will preserve resolution. If you don't (so you use a reference from the shorter focal length), the scale factor is there to preserve resolution if you wish. You will need to know the image scales of the different focal length setups to make the right compensation.
An example: If the image scale of the shorter focal length is 2.4 arcsecond/pixel and the longer focal length has 1.8 arcsecond/pixel, then the ratio between these two tell you what scale factor you want to use.
suggested scale factor: 2.4 / 1.8 = 4/3 or 1.33
Then the resolution of the longer focal length data will be preserved. Otherwise resolution would be lost.
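The arithmetic behind this can be written out. The pixel size and focal lengths below are made-up numbers, chosen only to reproduce the two image scales in the example:

```python
def image_scale(pixel_size_um, focal_length_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Hypothetical setups that give roughly the image scales above.
short_fl = image_scale(4.65, 400)       # ~2.4 arcsec/pixel
long_fl = image_scale(4.65, 533)        # ~1.8 arcsec/pixel

# The ratio of the two image scales is the suggested scale factor.
scale_factor = 2.4 / 1.8
print(round(scale_factor, 2))           # 1.33
```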
I had always used linear fit clipping, as suggested in the tooltip, without LNC. With LNC and sigma clipping it took several attempts before I got a similar, or sometimes better, result than my first attempt with LFC.
Okay, this is actually a bit complicated. What is a better result? Let me explain:
Is the result better because you see fewer outliers/artefacts? Since the filters work differently, you need to adjust the kappa values. A kappa of 2.5 for LFC will always give a different result than a kappa of 2.5 for normal sigma clipping. These kappa values really can't be compared between filters, because the standard deviations on which they work are totally different metrics.
To make a qualitative statement about which filter works best, you need to find the kappa for both filters at which point there are no more outliers. Then you will have different integrations without outliers, made with separate outlier rejection filters at different kappa values. Do make an output rejection map as well while testing this; that can tell you a lot about how much or how little is rejected.
The 2 integrations then need to be analysed mainly for noise to be able to make a qualitative statement about which integration is better.
An example: let's say that with kappa 2.5, all outliers are removed with LFC, so it looks fine, and with kappa 2.5 with sigma clip (and LNC enabled), some outliers still exist.
Then lower the kappa for sigma clip to 2.2 for instance. Then, let's assume, at that setting all outliers are gone.
Now compare the 2 stacks for noise. The result might surprise you.
Outlier rejection applied too aggressively can really harm the Signal to Noise Ratio. The amount of noise in the integration file can be anywhere from 0 to as much as 50% worse, which is terrible.
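This SNR point can be demonstrated with a toy simulation (pure numpy, nothing to do with APP's internals): stack simulated flat-sky frames once with a plain mean and once with a deliberately too-aggressive 1-kappa clip, then compare the noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 50 frames of a flat sky patch: true signal 100, noise sigma 10.
n_frames, n_pixels = 50, 10_000
frames = rng.normal(loc=100.0, scale=10.0, size=(n_frames, n_pixels))

# Plain mean stack: noise shrinks by ~sqrt(n_frames).
plain = frames.mean(axis=0)

# Deliberately too-aggressive clip: reject anything beyond 1 x sigma
# of the per-pixel median, then average what survives.
med = np.median(frames, axis=0)
std = frames.std(axis=0)
keep = np.abs(frames - med) <= 1.0 * std
clipped = np.where(keep, frames, 0.0).sum(axis=0) / keep.sum(axis=0)

# With no real outliers present, the clipped stack ends up noisier.
print(plain.std(), clipped.std())
```

Since the data here contains no real outliers, every rejected sample was pure signal, and throwing it away only raises the noise of the stack.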
Let me know if this makes sense 😉
Mabula
Many thanks, Mabula, for this detailed explanation. OK, I did not compare the noise values; I just compared the brightness gradients purely visually on the monitor, which is calibrated with a Spyder Elite 5. With LNC and winsor sigma clipping, the outliers were, for example, filtered out better than with LFC. In the future I will compare the noise values more. Thank you for now.
Hello Mabula, first of all, big praise. I had to give up on my latest stack of M33 with PixInsight; it always failed at registration. With APP it was easy, and thanks to Composition Mode Full in conjunction with MBB, I got the gradients under control.
And it's true: winsor sigma clipping with LNC gives a higher SNR and less noise than linear fit clipping, though the values are pretty much the same.
Excellent Minusman, apologies for my late response...
Looks good indeed. Yes, I see the statistics. The differences are very minor indeed, hard to see visually. It does depend on the data of course. And I see that you have used advanced normalization for the three stacks shown, which improves things as well.
Regarding Linear Fit Clipping, it does work and can work well. But it's good to realize that LFC is a solution for a problem, which is bad local normalization in the data. For that same reason, using LFC for integration of darks and bias frames really doesn't make any sense.
The drizzle stack with the gaussian kernel is the worst for noise, because the gaussian kernel will improve sharpness, and noise suffers a bit then 😉 That's always the trade-off with the drizzle settings: noise versus sharpness.
Mabula
Here is the finished image of M 33. Everything was done in Astro Pixel Processor, from calibration to post-production. The artifact problem with Bayer Drizzle is also solved: with the correct scaling factor, the artifacts disappeared.
Great image minusman 😉
Nice detail in the core of M33 as well !
Can you share the difference that you see with different scaling? Are the artefacts worse or reduced with a higher scale factor?
Is it possible to see a fully zoomed-in screenshot of the artefacts that you see? (I know that drizzle will give artefacts with settings that need more data, but it's good to have a visual picture of them.)
Cheers,
Mabula
The correct scale would be 1.6512, which I rounded up to 1.7. You can also increase the scale without artifacts arising (see pictures). If you leave the scale at 1, really crazy things come out; depending on the reference image used for registration, even a black hole. The picture consists of a stack from three different lenses with 4 different focal lengths. In PixInsight I would have had to throw away 50% of the pictures because of bad guiding, aircraft trails, etc.
But with APP no problem.
Here are the details of the pictures.
Hi Minusman,
Thank you for sharing these results. I will test drizzle with scale 1 and lower and will report back 😉
It seems not entirely right that the images get black holes (or become entirely black) if you only lower the scale parameter (leaving the droplets the same). So I need to test this.
Kind regards,
Mabula
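For readers following along, the "black hole" effect can be pictured with a toy 1-D drizzle (my own illustration, nothing like APP's real module): each input pixel deposits a shrunken droplet onto the output grid, and output pixels that never receive any droplet end up with zero weight, i.e. black.

```python
import numpy as np

def drizzle_1d(frames, offsets, scale=1.0, pixfrac=0.5, out_len=15):
    """Toy 1-D drizzle: weight-normalized sum of per-pixel droplets.

    frames: list of 1-D input arrays; offsets: per-frame dither in
    input pixels; scale: output pixels per input pixel."""
    signal = np.zeros(out_len)
    weight = np.zeros(out_len)
    for frame, off in zip(frames, offsets):
        for i, value in enumerate(frame):
            center = (i + 0.5 + off) * scale
            lo, hi = center - pixfrac / 2, center + pixfrac / 2
            for j in range(out_len):
                # Overlap of the droplet [lo, hi] with output pixel [j, j+1).
                ov = max(0.0, min(hi, j + 1) - max(lo, j))
                signal[j] += value * ov
                weight[j] += ov
    # Output pixels that never received a droplet keep weight 0 and
    # stay black: these are the "black holes".
    out = np.divide(signal, weight, out=np.zeros(out_len), where=weight > 0)
    return out, weight
```

A single frame upsampled 3x with 0.5-pixel droplets leaves most output pixels with zero weight; adding dithered frames fills the holes. That is why drizzle needs many well-dithered frames, and why a bug in the per-pixel weights at certain scales can blank out whole regions.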
Hi @minusman,
I have been investigating the problem you mentioned about black holes with bayer drizzle and a scale of only 1.0 for the integration.
I can't reproduce the problem with Bayer drizzle at scale 1.0, droplets of 2 pixels, and winsor sigma clipping turned on, like you used, with the APP 1.057-beta (almost ready for release).
I suspect the problem was due to a bug in winsor sigma clipping that was fixed in 1.056. Can you confirm that this doesn't happen with APP 1.056?
I have studied my drizzle/bayer drizzle module and I think everything is working fine, but if you can still reproduce this error, it's possibly worthwhile to have a look at your particular dataset to solve this.
Kind regards,
Mabula
Hi @minusman,
Hang on, I did find a bug: when I lowered the scale to around 0.4-0.6x, things become weird and incorrect. Lowering even further to 0.2x and it works again. I now see this issue is related to the drizzle weights that are calculated for each pixel.
I will investigate further and report back if fixed 😉
Mabula
The bug that I found in drizzling with a scale lower than 1.0 will be fixed in APP 1.057 😉
Cheers,
Mabula
Hello Mabula, thank you for the good news. I had already started doubting my exposures. I'm looking forward to the new version. Best regards.
Hello Mabula, I've done the integration again with the new version 1.058. The result is crazy.
Try integrating again with no flats and see what happens.
Hi @Minusman,
Can you please tell me what settings you are using relating to drizzle ?
How many images were you integrating?
Drizzle or Bayer Drizzle?
Please show the weight outmap as well.
I haven't seen this yet, so I need to know more information 😉
@gregwrca, flats or no flats shouldn't make a difference for such a drizzle integration bug I think.
Cheers,
Mabula
I have set Bayer Drizzle with scale 1 and a droplet size of 2.0. I will post pictures of the weights; I had difficulties with uploading.
Thank you @minusman,
Do you see a difference when you use a different drizzle kernel? Which one did you use now?
Mabula
I had used the Gauss kernel. I had read in a post that it can be used with over 100 pictures to achieve higher sharpness. I'll try it again with the Tophat kernel. Should I test with other settings?
Okay, thanks. Yes, the gauss kernel will be sharpest and noisiest with the same droplet size. If the problem doesn't occur with, for instance, the tophat kernel, I have a better indication of where the bug might be 😉
I will test myself this afternoon as well.
Mabula
Hello Mabula, these are the pictures for the integration with Gauss Kernel.
That's the integration with Tophat Kernel.
Then I disabled drizzle, and the result is the same there. Maybe it's because of the outlier rejection.
Thank you @minusman,
What happens if you turn off outlier rejection? Is then integration normal?
From the file names, I can gather that you use 10 frames for diffraction protection in outlier rejection?
Please turn it off and let me know what happens. I think that might be the problem here. For your dataset, I think you need to increase it to a much higher value.
Mabula
OK. I will test it tonight. Should I keep the filter? Or try it with others?
Hang on, I think the diffraction protection setting is not the issue. It might be the specific rejection filter.
The LNMWC, Local Normalization MAD Winsor sigma clip.
Maybe that breaks down. So try to use LN MAD Sigma clip and/or LN Winsor clip.
I will run a test with 100 frames to see if I can reproduce this.
Mabula
I just did a test with 80 Canon 450D frames with the same settings like you:
LNMWC, kappa 4, 1x, 10 frames diffraction protection
LNC 1x first degree
And I am getting a perfect integration.
Will lower kappa and increase iterations now
After that I do bayer drizzle and drizzle
Another question: the area that is wrong has a particular shape. Are you combining images with different fields of view, or are they all from the same camera and telescope?
And are you integrating on an external drive? If so, what kind of drive?
Mabula