Your M16 is better than my old processing. It looks beautiful. There should be at least two nights of data for NGC253, separated by a few years and both taken with the D800, including some H-alpha. If you can identify both sets of NGC253 images, you should be able to achieve higher S/N. The second year of NGC253 images was taken under poor seeing, however.
I am surprised to hear about the flat issues for both images. I had no problem flattening them. The first set of NGC253 images was taken with the D800 without a firmware hack to restore the true dark level and bias level, so there will be a residual large-scale pattern in the image background. However, this can't be seen without an extremely strong stretch (much, much stronger than what this NGC253 image requires), so it shouldn't be the reason for your problems in flat-fielding. Did you use the bias images in the calibration? If you use the correct set of biases, flats, and darks, the stacked image should be very flat.
For the 5D2, I used to skip biases. This works because DeepSkyStacker automatically subtracts the 1024 mean bias level in 5D2 files. (I believe DSS only does this for very early cameras.) So I was lucky. After I realized this, I started to take biases all the time, so the images can be calibrated even if DSS no longer does this magic, or if the images are to be calibrated with other programs. For the M16 images, you won't find bias files from the date they were taken. Please go to the bias library under the 5D2 folder. Those are biases I took quite a few years later, but they should still work.
What happened with the masterflat of @wei-hao_wang? I bet its dimensions are probably incompatible, right?
This has to do with how DCRAW and APP handle CR2 images (and other DSLR raws).
APP uses the whole sensor for calibration, which, I think, is the most logical thing to do.
Applications that use DCRAW, like DSS, clearly don't do this. Besides that, the provided image dimensions are DCRAW-specific as well, and don't even match the camera model's specification.
Wei-Hao's masterflat can be corrected, though (I have done it), but it's a bit complicated. The batch modify tool is needed to add some extra pixels to the borders of the original masterflat; then it should work.
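If anyone wants to reproduce that border fix outside APP, the idea is simply to pad the masterflat array with extra edge pixels until it matches the dimensions the other program expects. A minimal numpy sketch of the idea (the function and pad widths are made up for illustration; the real offsets depend on the camera model and on how each program crops the raw):

```python
import numpy as np

def pad_masterflat(flat, top=0, bottom=0, left=0, right=0, fill=None):
    """Pad a masterflat with extra border pixels so its dimensions match
    the full sensor area another program expects. New border pixels are
    filled with the flat's median so they divide out harmlessly."""
    if fill is None:
        fill = float(np.median(flat))
    return np.pad(flat, ((top, bottom), (left, right)),
                  mode="constant", constant_values=fill)

# Hypothetical example: add an 18-pixel top border and a 12-pixel left border.
# flat_full = pad_masterflat(master_flat, top=18, left=12)
```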
So the masterflat dimensions were compatible here? I will probably have to test this myself. If the masterflat isn't working well, it must have to do with the bias pedestal in both the light and flat calibration.
Regarding the flat-field calibration issues: I think I need to have a go at your data as well.
I am currently working on a big upgrade of APP's calibration engine. The current calibration rules in APP are too strict, and quite a few users have trouble getting good calibration, so I am making it more flexible and smarter. Currently, bias, dark, and flat calibration requires matching ISO/gain and exposure (for darks and dark flats). I will drop those requirements and will also introduce dark scaling to APP.
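For readers unfamiliar with the term, dark scaling means scaling the thermal part of a bias-subtracted master dark to the exposure of the lights, so one dark library can serve several exposure times. A minimal sketch of the textbook idea (not necessarily how APP will implement it):

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Scale a master dark taken at exposure t_dark to exposure t_light,
    assuming the thermal signal grows roughly linearly with time."""
    thermal = master_dark - master_bias          # isolate the thermal component
    return master_bias + thermal * (t_light / t_dark)

# Hypothetical use: reuse a 300 s master dark for 180 s lights.
# dark_for_lights = scale_dark(master_dark, master_bias, 300.0, 180.0)
```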
DSS uses DCRAW, and DCRAW subtracted that 1024 value. This is the internal black point of the camera; it's similar to an offset in CCDs, and it's in 14-bit units as well.
The internal black point is not the same for all Canon cameras; some have a value of 2048, for instance.
APP's processing of Canon CR2s never subtracts this internal black point, so both lights and flats need to have this offset subtracted, using either biases or darks. But you could also, for instance, subtract a value of 1024 from the frames directly (the batch modify tool can add/subtract an offset).
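For reference, doing that offset removal outside APP is just a constant subtraction clipped at zero; a tiny numpy sketch (1024 is the 14-bit value quoted above for the 5D2, other Canon models use a different value):

```python
import numpy as np

def subtract_black_point(frame, black_point=1024):
    """Remove the camera's internal black point (14-bit units),
    clipping at zero so no pixel goes negative."""
    return np.clip(frame.astype(np.float64) - black_point, 0.0, None)
```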
@1cm69, if you can find the bias, like Wei-Hao indicates, subtract those from the lights 😉
More than likely this is an error on my part, either not processing the calibration files in the correct order or not pairing them with the correct other calibration files.
Here's a quick rundown of what I do and the order I do it in (a rough code sketch of the same flow follows after the list).
1. Load Bias to create MB
2. Clear Load
3. Load Flats & MB (matching ISO) to create MF
4. Clear Load
5. Load Darks to create MD
6. Clear Load
7. Load MD & MF to create BPM
8. Clear Load
9. Load Lights, MF, MD & BPM
10. Calibrate & continue further steps.
Am I incorrect in my procedure?
I know that the Darks in step 5 have to match the Lights in step 9 in terms of ISO, exposure length & temperature.
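For what it's worth, that order matches the usual logic; here is a toy numpy sketch of the same flow with dummy data (the bad pixel map step is left out, and none of this is APP's actual engine):

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny dummy frames stand in for real raw data, just to show the data flow.
bias_frames  = [rng.normal(1024, 5, (8, 8)) for _ in range(20)]     # steps 1-2
flat_frames  = [rng.normal(30000, 200, (8, 8)) for _ in range(20)]  # steps 3-4
dark_frames  = [rng.normal(1030, 5, (8, 8)) for _ in range(20)]     # steps 5-6
light_frames = [rng.normal(5000, 50, (8, 8)) for _ in range(10)]    # step 9

def stack(frames):
    """Average a list of frames into a master (median also works)."""
    return np.mean(np.stack(frames), axis=0)

master_bias = stack(bias_frames)
master_flat = stack([f - master_bias for f in flat_frames])
master_flat /= np.median(master_flat)          # centre the masterflat around 1.0
master_dark = stack(dark_frames)               # must match the lights' ISO/exposure/temp
calibrated = [(L - master_dark) / master_flat for L in light_frames]  # step 10
```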
It is very weird. The attached screenshot is what I got with PixInsight using exactly the same set of files, with the standard calibration and stacking workflow and no customized procedure.
There must be something seriously wrong.
BTW, I was never able to complete this mosaic. Only a very few panels were collected, so you are going to be disappointed if you wish to compose a mosaic from this set of images. I didn't even try to process any of the images until now. Seeing this now, I think I probably should try to complete it.
So this is PixInsight output using the exact files I listed that I had used in APP?
It would be interesting to see your output after running the exact same files through APP: no post-processing, just finishing at integration.
I did try this data a second time, but this time I just loaded all the files at once into APP & ran straight through to integration; it produced exactly the same result as my initial run.
Either I am missing something or there is a major FLATs issue with APP.
The odd thing is that I am sure I have used data with Flats in the past when testing APP before buying & all was OK; now nothing seems to work.
Right, I have just had a third attempt at this data, & this time I left all the Master files set to AVERAGE without switching any of those with fewer than 20 files to MEDIAN in the CALIBRATE tab.
I thought that maybe something there was causing an issue, but no, I still got exactly the same result as shown in my first post above.
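As a side note, the AVERAGE vs MEDIAN choice only changes how each stack of calibration frames is collapsed, so it would not be expected to cause a flat-fielding failure of this size; a quick numpy illustration with dummy data:

```python
import numpy as np

frames = np.random.default_rng(1).normal(1000, 10, (15, 8, 8))  # 15 dummy frames
master_avg = frames.mean(axis=0)        # best S/N when there are no outliers
master_med = np.median(frames, axis=0)  # more outlier-resistant, slightly noisier
```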
Right. The image I showed used exactly the same files described by your file numbers. I didn't even look at my own log to check whether they are correct; I simply used the file numbers you described. I used PI's BPP for calibration and then a straight integration, all with basic settings and nothing else. The displayed image was stretched using the automatic screen transfer function to enhance the brightness and contrast. No additional post-processing was applied.
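The "automatic screen transfer function" mentioned here is just a display stretch of the linear data; a rough approximation of that kind of auto-stretch in numpy is below (the shadow-clipping factor and midtone target are illustrative, not PixInsight's exact defaults):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function used by histogram-type stretches."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

def auto_stretch(img, shadow_sigmas=2.8, target=0.25):
    """Clip shadows a few (scaled) MADs below the median, then choose a
    midtones balance that maps the image median to `target` for display."""
    med = np.median(img)
    mad = 1.4826 * np.median(np.abs(img - med))      # MAD scaled to sigma
    shadows = max(0.0, med - shadow_sigmas * mad)
    x = np.clip((img - shadows) / (img.max() - shadows), 0.0, 1.0)
    m = mtf(target, np.median(x))                    # balance mapping median -> target
    return mtf(m, x)
```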
The situation here is quite weird.
I am not a PI user, so I have no understanding of the processes involved when using it, but could you do me a favour and process the same files in APP so we can compare results?
You only need to go as far as CALIBRATE.
Click on any one of the LIGHTs in APP's file list to load the image into the Image Viewer, making sure the 'linear (l)' option is chosen, then take a screenshot.
Then just change the Image Viewer dropdown to 'l-calibrated' and, when the calibrated image loads in the viewer, take another screenshot.
Ah, unfortunately I ended my role on the APP beta test team because I don't have much time to test it, so I no longer have a working APP installed on my computer. Someone else here might help.
I do know what's wrong here with the flat-field calibration.
If you load flats, they need to be normalised before they are integrated into the masterflat (a sketch of what that normalization amounts to follows after this post).
The normalization engine changed a bit in 1.059, and the normalization of the flats unfortunately now has a bug.
I have seen this myself on several datasets now, and it has been reported by several users. It causes the masterflat to undercorrect, producing a light frame that looks "over-corrected". Since flat-fielding divides by the masterflat, a visual overcorrection is actually an undercorrection in the vignetting profile contained in the masterflat.
I am now working with the highest priority to release APP 1.060. It will have a completely upgraded calibration engine, and this problem should be completely gone. The new engine will have less strict rules and will include dark scaling as well.
Kind regards,
Mabula
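For anyone wondering what "normalised before they are integrated into the masterflat" means in practice: the usual approach is to scale each flat to its own median before averaging, so flats taken at slightly different light levels contribute equally. A minimal numpy sketch of that idea (not APP's actual code):

```python
import numpy as np

def make_master_flat(flats):
    """Scale each flat to its own median before averaging, then centre the
    resulting masterflat around 1.0 so dividing lights by it preserves flux."""
    normed = [f / np.median(f) for f in flats]
    master = np.mean(np.stack(normed), axis=0)
    return master / np.median(master)
```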
Phew!! Thanks Mabula for getting back to me - thought I had totally messed something up.
Yes, I am to blame here; I should have double-checked this before releasing 1.059. As soon as work on the new calibration engine is completed, I'll release it 😉
I will also put this problem in a separate topic so everybody is aware that there is a bad bug here in 1.059.
Mabula, wouldn't it be a good idea to have a BUG section on the forum where you can list known bugs?