Critical Warning - Flat Field Calibration Can not be Performed Correctly

24 Posts
3 Users
7 Likes
1,820 Views
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  

This is not a new problem and I have raised concerns in the past that there seems to be something odd in the way that multi-session data is being used within APP. It is possible that I am doing something wrong but having spent several days checking my data and undertaking testing I would be grateful if someone would take a look at this issue for me.

Summary of the Issue and Testing undertaken.

  • I frequently get the error “Critical Warning: flat field correction can not be performed correctly” when undertaking multi-session pre-processing in APP. I don't think I have ever had it with single session/single filter processing.
  • I take Flats and Dark Flats for each session ensuring the exposures are identical.
  • I sometimes use Darks for several sessions however they always exactly match the exposure of the lights.
  • Run 1 – Using all data (Lights, Flats, Dark Flats, Darks) – failed with the error.
  • Run 2 – Re-loaded as for Run 1 and double checked everything in the Loading table. Failed as with Run 1.
  • I then checked a sample of FITS headers from each file set – I couldn’t see any issues.
  • Run 3 – I reran using the same full data as in Run 1 and Run 2 but this time I ran each session separately – it all worked Fine!
  • Run 4 – I ran all the data, for all sessions through Pixinsight WBPP and it ran fine. The mapping table showed correct mapping of all Lights to Darks, Lights to Flats and Flats to Dark Flats.
  • Run 5 – Back in APP I simplified the data load to make checking easier and only loaded two of each file type for each session (2 lights, 2 flats etc). This failed again with the error – this was expected but it allowed me to again check the data I was loading in more detail.
  • Run 6 – I reran using the same data as Run 5 but running each session separately and as expected these all worked fine.
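The header spot-check mentioned in the steps above can be scripted. A minimal stdlib-only sketch is below (toy header data; for real files astropy's `fits.getheader` is the proper tool) that parses the 80-byte cards of a FITS primary header to pull out keywords like FILTER and EXPTIME:

```python
# Sketch: spot-checking FITS header keywords (FILTER, EXPTIME, ...) across
# frames, as in the manual check above. This stdlib-only version parses the
# 80-byte cards of a primary header; the header bytes here are invented.

def parse_header(raw: bytes) -> dict:
    cards = {}
    for i in range(0, len(raw), 80):
        card = raw[i:i + 80].decode("ascii", "replace")
        key = card[:8].strip()
        if key == "END":
            break
        if card[8:10] == "= ":                       # value indicator per FITS
            value = card[10:].split("/")[0].strip()  # drop inline comment
            cards[key] = value.strip("'").strip()
    return cards

def card(text: str) -> bytes:
    return text.ljust(80).encode("ascii")            # pad to an 80-byte card

# Toy header (a real file pads the whole header to a 2880-byte block)
raw = (card("SIMPLE  =                    T") +
       card("EXPTIME =                600.0") +
       card("FILTER  = 'Ha'") +
       card("END"))
hdr = parse_header(raw)
print(hdr["FILTER"], hdr["EXPTIME"])   # Ha 600.0
```

Comparing the FILTER/EXPTIME/GAIN values printed for a light against those of its calibration frames is the same check done by hand above.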

 

Maybe I did something wrong, but if so I don’t see why running each session separately should work when running all sessions together fails. To clarify: when using the simplified test (details below), I only loaded the data once and then selected which files to run for each test. So I know the data loaded for the individual session tests was exactly the same data as for the all-sessions test.

 

It looks to me as though there is a bug relating to how the processing uses the loaded data in multi-session processing.

 

This is my simplified loading table, which is probably a good starting point. I can also make this data available (3.7GB); the full data is 35GB.

image

   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Are you able to verify that it indeed doesn't cause that error with the same files, but using a single session?


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  

Hi @vincent-mod

I thought I had covered this above but maybe I am misunderstanding your question. To clarify Run 5 and 6 above:

I loaded 2 of each data type (lights, flats, dark flats, darks) for each of three sessions.

I ran all this data for a single integration and it failed. 

I then deselected files as necessary to run each session by itself - every session processed successfully when run by itself.

So Yes, the same data was used for each test, I only loaded the data once. 

I am happy to load the data for you to try for yourself.  

 

 

This post was modified 2 years ago by midnight_lightning

   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @midnight_lightning

Hi @vincent-mod

I thought I had covered this above but maybe I am misunderstanding your question. To clarify Run 5 and 6 above:

I loaded 2 of each data type (lights, flats, dark flats, darks) for each of three sessions.

I ran all this data for a single integration and it failed. 

I then deselected files as necessary to run each session by itself - every session processed successfully when run by itself.

So Yes, the same data was used for each test, I only loaded the data once. 

I am happy to load the data for you to try for yourself.  

 

 

Any update?

 

 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Apologies, I missed your post.

I think it's better to share the data for me to try indeed. If you can upload like 10 lights for each session and the calibration data, that would be great.

Go to  https://upload.astropixelprocessor.com  and for the username and password, use the word: upload

Create a directory named “midnight-lightning-sessionsIssue” and upload in there. Thank you!


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @vincent-mod

Apologies, I missed your post.

I think it's better to share the data for me to try indeed. If you can upload like 10 lights for each session and the calibration data, that would be great.

Go to  https://upload.astropixelprocessor.com  and for the username and password, use the word: upload

Create a directory named “midnight-lightning-sessionsIssue” and upload in there. Thank you!

Hi Vincent, 

I have uploaded the test data that I used above - two of each file type from each of three sessions. I'm happy to upload further data, but rather than introduce new variables I thought it was safer this way. 

I have also rerun my tests today and made detailed notes about how I loaded the data and ran the tests - see below. Hopefully this will allow you to replicate my experience. I realise the method of data loading could be optimised to be quicker, but this is how I generally load data, and again I don't want to introduce new variables in case the loading method is the issue.  

It shouldn't matter, but please be aware that I used different gain/offsets between sessions; the calibration frames match the lights. Obviously I don't know the code, but this, and other issues in the past, suggest that the explicit assignments made on loading are not always used in processing. 

(This is probably unrelated to the current issue but one thing I have seen happen in the past, and unfortunately I don't have an example, is that I load some calibration frames and they appear in the load table. I then load further calibration frames (perhaps for a different session) and these appear in the load table but the ones I loaded previously disappear. I have seen this on several occasions and not been able to explain it - I think in such cases the job ran to completion but I always wondered whether the correct files had been used. Pretty sure I logged that one in the past. )

Let me know if I can do anything else to help.

Run 1 – all sessions

Lights

  • Load S1 lights choosing Assign Filter – Session 1
  • Load S2 lights choosing Assign Filter – Session 2
  • Load S3 lights choosing Assign Filter – Session 3

Flats

  • Load S1 Flats choosing Assign Filter – Session 1
  • Load S2 Flats choosing Assign Filter – Session 2
  • Load S3 Flats choosing Assign Filter – Session 3

Dark Flats

  • Load S1 Dark Flats choosing “Ha” explicitly – Session 1 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)
  • Load S2 Dark Flats choosing “Ha” explicitly – Session 2 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)
  • Load S3 Dark Flats choosing “Ha” explicitly – Session 3 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)

Darks

  • Load S1 Darks choosing “Ha” explicitly – Session 1 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)
  • Load S2 Darks choosing “Ha” explicitly – Session 2 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)
  • Load S3 Darks choosing “Ha” explicitly – Session 3 (NB No filter tag in FITS Header for darks and need to be assigned explicitly)

 

Run 1 – All three sessions

Ran Calibrate with all defaults

  • Received the Calibration Error Message
  • Cancelled Run
  • The following files were created before the job errored - it looks to have failed on the second master flat
image

 

Run 2 – session 1 only

  • No reloading of data – simply deselected all session 2 and session 3 files and deleted the masters created by Run1 giving the following
  • image
  • Ran Calibration again with same defaults
    • Ran fine producing the correct masters

 

Run 3 – session 2 only

  • No reloading of data – simply selected only session 2 files from the loaded data and deleted the masters created by Run 2
  • Ran Calibration again with same defaults
    • Ran fine producing the correct masters
image

Run 4 – session 3 only

  • No reloading of data – simply selected only session 3 files from the loaded data and deleted the masters created by Run 3
  • Ran Calibration again with same defaults
    • Ran fine producing the correct masters
image

 

 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Thanks for the elaborate description and process! I'll start working on it tomorrow; please allow 1-2 days.


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

So I'm starting work on your data now. I think you can simplify the processing a lot by not changing offsets; this is usually not required. You need a proper offset for your sensor, but you can then use it for all your data. Gain can be changed, of course.

I started simple by loading in data for session 1 and 2 with their darks only. This seems to work for session 1, but session 2 already gives a warning in the console when switching the view to "l-calibrate" to check how that masterdark is performing on the data.

12:42:21 - GENERAL IMAGE LOADER: frame D:\Moderator\midnight-lightning-sessionsIssue\midnight-lightning-sessionsIssue\MD-IG_26.0-E600.0s-QHYCCD-Cameras-Capture-9576x6388-all_channels-session_2.fits was loaded successfully
12:42:22 - 2) CALIBRATE: Adaptive Data Pedestal : raised to:  9.156E-02
12:42:22 - 2) CALIBRATE: Adaptive Data Pedestal : because 337214 pixels are clipping to 0 without it!!!

This means that subtracting the dark in session 2 would make a large number of pixels completely black. APP is preventing this by raising the pedestal by quite a lot. This is already quite a problem and is likely caused by too low an offset and too low a gain setting. You are losing quite a bit of data this way.
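What this looks like numerically can be shown with a toy sketch (pure Python, invented ADU values; `calibrate` below is only an illustration, not APP's implementation):

```python
# Toy illustration of dark subtraction clipping pixels to 0, and of a
# pedestal (a constant added before clamping) preventing it. All values
# here are invented.

def calibrate(light, dark, pedestal=0.0):
    return [max(l - d + pedestal, 0.0) for l, d in zip(light, dark)]

light = [120, 130, 105, 140, 110]   # faint narrowband signal, toy ADU
dark  = [115, 118, 112, 120, 125]   # noise makes some darks exceed the light

plain = calibrate(light, dark)
print(sum(1 for v in plain if v == 0.0))     # 2 pixels clipped to black

raised = calibrate(light, dark, pedestal=25.0)
print(sum(1 for v in raised if v == 0.0))    # 0 after raising the pedestal
```

The pedestal shifts every pixel up by the same amount, so the faint pixels that would have gone negative survive the clamp at zero.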

Next, looking at the flats for session 2 (just to keep it simple again): the view of that seems quite odd. There is no good vignetting visible; normally you'd have a nice gradual gradient, with maybe some dust spots (showing up as little rings), but not a variation in illumination across the field of view. How are you taking the flats?

image

In cases like this it's best to tackle the problems one by one: first get the offset and gain to a proper level, then tackle the flats. I noticed that NOT using your flats actually made the data and signal look better, so I would advise not using them in these sessions.

Also, you can simply use darks and dark-flats and load them without assigning a session. APP will simply use the correct ones for the lights they are compatible with.

edit: What version of APP are you using? In my calibration of the flats I didn't get a warning, so if you're using an older version than the most recent one, you may also be hitting a bug in APP there.


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  

@vincent-mod 

Hi Vincent,

Thank you for looking at the data, much appreciated and some interesting findings. I am relatively new to CMOS cameras and some of the topics you raise are pushing my knowledge so I will try and explain my logic and process.

 

APP Version

image

I installed 1.083.3 today for the further testing referred to below.

 

Gain.

 

The QHY600 camera used has different Modes and Gain settings.  I generally use Mode 0/Gain 26 for broadband and Mode 1/Gain 56 for Narrowband.

 

I accidentally used the wrong gain for Session 2: I had also been capturing broadband data and forgot to change the gain when switching back to narrowband. Whilst not ideal, I wouldn’t expect it to cause any processing issues, and visually there is very little difference between the stretched G26 and G56 subs.

 

Offset

I ended up with different Offsets as I was trying to optimise the setting between sessions. I used SharpCap and other software to arrive at a setting, and rightly or wrongly my objective was to produce an image that had no zero pixel values (histogram not touching the left-hand side). I wasn’t able to find any information regarding how far to move the histogram to the right, but the general advice seemed to be to keep it as far left as possible without having zero values.
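That "histogram clear of zero" heuristic amounts to checking the fraction of zero-ADU pixels in a bias or dark frame. A toy sketch with invented values:

```python
# Toy check of the "histogram not touching the left-hand side" rule:
# with a well-chosen offset, essentially no bias pixels read 0 ADU.
# All frame values are invented for illustration.

def zero_fraction(frame):
    return sum(1 for v in frame if v == 0) / len(frame)

bias_low_offset  = [0, 0, 3, 1, 0, 2, 4, 0]                  # toy ADU values
bias_good_offset = [118, 122, 119, 125, 121, 117, 120, 123]  # toy ADU values

print(zero_fraction(bias_low_offset))    # 0.5 -> offset too low
print(zero_fraction(bias_good_offset))   # 0.0 -> histogram clear of zero
```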

 

Darks

Your observation on Dark Subtraction creating zero value (black) pixels got me thinking.

I checked the pixel values in the session 2 lights and darks using the PixInsight Statistics tool and got the following minimum value readings (16bit settings). All were quite low but above zero:

Light_frame1  = 127

Light_Frame2 = 128

Dark_Frame1 = 113

Dark_Frame2 = 105

 

Update. The subtraction I did originally was flawed so I have deleted it from here.

The low values in lights and darks are very similar so I can see how dark subtraction could potentially result in zero values - I will look into this further. However, if I increase the Offset won’t this increase the lights and darks equally and mean I am back where I started?

So, my next thought was that perhaps it’s the lower gain I used for Session 2 that is the issue; however, the session 1 subs have relatively similar low values.

Light Frame = 330

Dark Frame = 300

I notice APP also had to raise the pedestal when processing Session 1 data which had used the intended higher gain.

Conclusion. I don’t know how to stop the Dark Subtraction clipping – any suggestions? Looking at the APP log the raised pedestal seems to handle the issue very well showing only 105 pixels clipped which is negligible.

Flats

I take flats using a Gerd Neumann Aurora light panel. I use the SGP Calibration Wizard to establish exposures using the default ADU target of 30,000. The panel is slightly too bright for broadband, and I use a few sheets of paper to dim it down to get exposures around 3s. The panel is too dark for 3nm NB, and exposures are over a minute for Ha. I have wondered whether the long exposures could be an issue, but they seem to work OK in practice.

I am very careful to try and minimise dust donuts as far as possible. The observatory is kept very clean and I use a blower etc when assembling components. If I open up the OTA for any reason I immediately use cling film to keep dust out, same with any components on a desk. I do get dust bunnies periodically but generally not too much of an issue.

Vignetting – Looking at the image you posted I would have said it shows significant vignetting to the corners – although this image is significantly stretched. On measuring pixel values the corners are around 0.43 and the centre around 0.47 (Scale 0,1) which on the face of it does seem quite a small difference.

I reran session 2 with and without flats and could see no difference visually which I found odd. But then looking at the lights there also appears to be very little gradient which again surprises me. Using the PI Readout the background is fairly constant across the image at around 0.0040 (Scale 0,1). I can see more vignetting on my broadband subs so perhaps the 3nm filters make this less obvious – I don’t know, I am out of my depth at this point.

My NB flats have always looked lumpy, even when I was using CCD, and I worried about it but was reassured on various forums that it was normal – do you think this is an issue?

Further Testing

This is interesting. I ran a test based on your comment that all Darks/DF’s could be loaded together and would sort themselves out – I selected All Channels/All Sessions when I loaded the dark data for Session 1 and 2 in one go. When I tried to calibrate it failed with the Flat Field Calibration Error as I expected it would.

However, I then deleted the Masters, Deselected Session 2 Lights and Flats, and reran calibration on Session 1 Lights and Flats only. 

It unexpectedly failed, as below. This is the first time session 1 data by itself has failed. I have no idea why, but my instinct is that the processing used the wrong Dark Flat: all Darks were still loaded, including those intended for session 2. I can't see any other reason for individual sessions to run successfully but multiple sessions to fail using the same data. As you have indicated, there may be some issues with the data, but I am struggling to see how this could cause the Flat Field Calibration issue when individual sessions work.

image

Finally

I note that you haven’t been able to replicate the issue, while I haven’t been able to get a clean run in any test, probably over twenty, that I have run, including with version 1.083.3. So, we must be doing something differently 😉.

Can you confirm that you have loaded the data exactly using the method I outlined above, with the explicit mappings, and run Calibration with default values? (BTW – you can ignore session 3 data, I have found today that it is not required to trigger the error.)

I hope some of this helps, your input is very much appreciated, let me know if there is anything I can help with.  

Update 2 - for interest only

I re-ran session 1 with the full data through to integration. When I used the I-Calibrated view on a single light (as you highlighted) it showed the Pedestal applied and also that the image was still being clipped by 200+pixels. However the actual Integration looks fine. I have coloured all zero value pixels Red and it appears they are only an issue where there are overlapping edges due to dither/guiding which get cropped out anyway 🙂

 

image

  

This post was modified 2 years ago 4 times by midnight_lightning

   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

I asked Mabula to chime in as it's indeed getting deeper now. 🙂 I think you can simplify this a lot, but let's see what he can offer for advice.


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @vincent-mod

I asked Mabula to chime in as it's indeed getting deeper now. 🙂 I think you can simplify this a lot, but let's see what he can offer for advice.

Thanks Vincent. I know I can simplify how I load data etc but the issue that I am really interested in is quite simple.

Why does Calibration fail when running all sessions together, but succeed when running each session independently?

How do I know that the explicit calibration file mappings that I specify when loading data are correctly applied? 

This post was modified 2 years ago by midnight_lightning

   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @midnight_lightning

 

Darks

Your observation on Dark Subtraction creating zero value (black) pixels got me thinking.

I checked the pixel values in the session 2 lights and darks using the PixInsight Statistics tool and got the following minimum value readings (16bit settings). All were quite low but above zero:

Light_frame1  = 127

Light_Frame2 = 128

Dark_Frame1 = 113

Dark_Frame2 = 105

 

Update. The subtraction I did originally was flawed so I have deleted it from here.

The low values in lights and darks are very similar so I can see how dark subtraction could potentially result in zero values - I will look into this further. However, if I increase the Offset won’t this increase the lights and darks equally and mean I am back where I started?

So, my next thought was perhaps it’s the lower gain I used for Session2 that is the issue, however looking at the session 1 subs they have relatively similar low values.

Light Frame = 330

Dark Frame = 300

I notice APP also had to raise the pedestal when processing Session 1 data which had used the intended higher gain.

Conclusion. I don’t know how to stop the Dark Subtraction clipping – any suggestions? Looking at the APP log the raised pedestal seems to handle the issue very well showing only 105 pixels clipped which is negligible.  

Hi @midnight_lightning,

Those values that you report as minimum values say little when it comes to clipping data to 0. If the standard deviation (sigma) of the dark current in the lights is 20, then pixels deviating by 6*sigma will already clip, and there are many with even higher deviations. We raise the pedestal by 24*sigma of the dark current of the masterdark or masterbias, and even then pixels can clip.
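Mabula's point, that frame minima don't predict clipping, can be illustrated with a back-of-envelope model (Gaussian noise assumed; the signal and sigma figures below are purely illustrative, not measurements from this data):

```python
import math

def expected_clipped(n_pixels: int, signal: float, sigma: float) -> float:
    """Expected number of pixels clipped to 0 after dark subtraction,
    modelling light - dark per pixel as Normal(signal, sigma * sqrt(2))."""
    sigma_diff = sigma * math.sqrt(2)          # noise of a frame difference
    p_below_zero = 0.5 * math.erfc(signal / (sigma_diff * math.sqrt(2)))
    return n_pixels * p_below_zero

# Frame minima of ~105-130 ADU look safe, yet with sigma = 20 ADU and only
# 80 ADU of mean signal over the dark, a 61 MP frame still clips heavily,
# on the order of the six-figure counts seen in the APP console:
print(round(expected_clipped(61_000_000, signal=80, sigma=20)))

# With more signal over the dark (e.g. longer exposures), clipping vanishes:
print(round(expected_clipped(61_000_000, signal=300, sigma=20)))
```

The minimum of a frame is a single sample from the extreme tail, so two frames with similar minima can still produce very different clipping counts after subtraction.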

 

 

This post was modified 2 years ago by Mabula-Admin

   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @midnight_lightning

 

Flats

I take flats using a Gerd Neumann Aurora light panel. I use the SGP Calibration Wizard to establish exposures using the default ADU target of 30,000. The panel is slightly too bright for broadband, and I use a few sheets of paper to dim it down to get exposures around 3s. The panel is too dark for 3nm NB, and exposures are over a minute for Ha. I have wondered whether the long exposures could be an issue, but they seem to work OK in practice.

I am very careful to try and minimise dust donuts as far as possible. The observatory is kept very clean and I use a blower etc when assembling components. If I open up the OTA for any reason I immediately use cling film to keep dust out, same with any components on a desk. I do get dust bunnies periodically but generally not too much of an issue.

Vignetting – Looking at the image you posted I would have said it shows significant vignetting to the corners – although this image is significantly stretched. On measuring pixel values the corners are around 0.43 and the centre around 0.47 (Scale 0,1) which on the face of it does seem quite a small difference.

I reran session 2 with and without flats and could see no difference visually which I found odd. But then looking at the lights there also appears to be very little gradient which again surprises me. Using the PI Readout the background is fairly constant across the image at around 0.0040 (Scale 0,1). I can see more vignetting on my broadband subs so perhaps the 3nm filters make this less obvious – I don’t know, I am out of my depth at this point.

My NB flats have always looked lumpy, even when I was using CCD, and I worried about it but was reassured on various forums that it was normal – do you think this is an issue?

@midnight_lightning, the long exposure for narrowband filter flats is normal and perfectly okay 😉 

The amount of vignetting that you see in your setup comes from your optics and your sensor dimensions. You would expect the same amount of vignetting for broadband and narrowband filters, although the mechanics of your filter wheel, with possible slight tilts, can make things look slightly different. The key is that it should look rather similar, apart from the dust bunnies of course.

My 3nm narrowband flats also look less nice than the broadband ones, so yes, I guess that is normal. In the end, if your lights don't show vignetting nor dust bunnies after calibration, they must be okay 😉

 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @midnight_lightning

 

Further Testing

This is interesting. I ran a test based on your comment that all Darks/DF’s could be loaded together and would sort themselves out – I selected All Channels/All Sessions when I loaded the dark data for Session 1 and 2 in one go. When I tried to calibrate it failed with the Flat Field Calibration Error as I expected it would.

However, I then deleted the Masters, Deselected Session 2 Lights and Flats, and reran calibration on Session 1 Lights and Flats only. 

It unexpectedly failed, as below. This is the first time session 1 data by itself has failed. I have no idea why, but my instinct is that the processing used the wrong Dark Flat: all Darks were still loaded, including those intended for session 2. I can't see any other reason for individual sessions to run successfully but multiple sessions to fail using the same data. As you have indicated, there may be some issues with the data, but I am struggling to see how this could cause the Flat Field Calibration issue when individual sessions work.

image

  

@midnight_lightning, so from that screenshot I am not surprised that you get the warning message. I see no dark flats nor bias loaded? So that is expected behaviour.

If you only calibrate with lights and flats, that warning should always be thrown. So that is not unexpected.

I will look at your data now and try to see what might be wrong; I will report back when finished 😉

 


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

So from that screenshot, I am not surprised that you get the warning message. I see no darkflats nor bias loaded ? So that is expected behaviour.

The screenshot isn't very clear, but the dark flats are there under Darks. This was a one-off test following Vincent's suggestion that I could load all darks/DF's together. 😉 

The screenshot in my original post which shows them explicitly loaded may be a better example.

 


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

Those values that you report as minimum values say little when it comes to clipping data to 0. If the standard deviation (sigma) of the dark current in the lights is 20, then pixels deviating by 6*sigma will already clip, and there are many with even higher deviations. We raise the pedestal by 24*sigma of the dark current of the masterdark or masterbias, and even then pixels can clip.

 

Agreed, I have learned quite a bit today digging into Dark Subtraction. 😎 

The good news, an update at the end of my earlier post, is that the only clipped pixels from the  session 1 integration are in the edges where dithering etc caused overlaps - so not an issue as they get cropped out. It seems the 3nm filters are the main issue concerning dark-subtraction and clipping but the pedestal seems to have handled it well.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi @midnight_lightning,

Okay, I have started with your data, and I already see one possible cause of problems like yours.

I have loaded all 3 sessions, assigned the lights and flats to their sessions, and used the filter header tags.

I loaded the darks by assigning them to sessions, since you have different gains and offsets across the sessions. I did assign them to all filters, so I did not use the filter header tag.

I loaded the flat darks for each session and used the filter header tag, and now we see a problem: this will not work!

FlatDarks with no filter tag in header

You see that the filter assigned to the flat darks is now MONO, so they cannot be matched with the flats, triggering your warning. SGP Pro, which I use myself, does not add the filter tag when you shoot darks/flat darks, which is logical.

The solution is to unload these flat darks and then reload them, setting the filter/channel explicitly to Ha, the filter used for the flats:

FlatDarks with no filter tag in header, reloaded with Ha set explicitly

Now the next step is to click Create Masters & Assign to Lights in 2) Calibrate, and all works without issues. The warning message is not shown, and I can confirm from the console output that all flats were properly calibrated with their respective MasterDarkFlats.
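The mismatch described above can be sketched as a simple matching problem (hypothetical data structures for illustration, not APP's actual code):

```python
# Hypothetical sketch of the matching step: flats look for flat darks with
# the same filter and session. Darks and flat darks usually carry no FILTER
# keyword, so a loader relying on the header tag falls back to a default
# such as "MONO" and nothing matches. Frame names below are invented.

def match_flat_darks(flats, flat_darks):
    return {f["name"]: [d["name"] for d in flat_darks
                        if d["filter"] == f["filter"]
                        and d["session"] == f["session"]]
            for f in flats}

flats = [{"name": "flat_s1", "filter": "Ha", "session": 1}]

# Loaded via the (absent) header tag -> default "MONO": no match, warning
by_header = [{"name": "df_s1", "filter": "MONO", "session": 1}]
print(match_flat_darks(flats, by_header))   # {'flat_s1': []}

# Reloaded with the filter set explicitly to Ha: match found
explicit = [{"name": "df_s1", "filter": "Ha", "session": 1}]
print(match_flat_darks(flats, explicit))    # {'flat_s1': ['df_s1']}
```

An empty match list is exactly the situation in which a "flat field calibration can not be performed" warning would be raised.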

All works as expected

So I can only assume that you are loading the frames differently for some reason?

The fully calibrated data looks great!

With regard to the clipping, I see this in the console:

21:53:49 - 2) CALIBRATE: Adaptive Data Pedestal : raised to: 1,112E-01
21:53:49 - 2) CALIBRATE: Adaptive Data Pedestal : because 352093 pixels are clipping to 0 without it!!!
21:53:50 - 2) CALIBRATE: Adaptive Data Pedestal : set at: 1,112E-01
21:53:50 - 2) CALIBRATE: Adaptive Data Pedestal : still 161 pixels are clipping to 0 with it!!!

So this does indicate that your exposure time is still short and could be much longer, with much better results... if pixels still clip even with the pedestal, it means that quite a lot of pixels after dark subtraction received so few photons that the light pixel value cannot rise above the dark current of your masterdark.

Now with 3nm narrowband filters this is not odd of course! I would not worry about it. But ! If your setup allows for longer exposure times I would definitely do it 😉

Hope this helps and clarifies things?

Mabula

 

This post was modified 2 years ago by Mabula-Admin

   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

So I can only assume that you are loading the frames differently for some reason?

Something is clearly different between what we are both doing, given it's the same data, and I am struggling to find the difference. 

At least you have eliminated my data as being the issue which is a relief for me 😎 

In my 4th post in this thread I detailed my loading procedure, and as far as I can see it is identical to the one you just used, yet it fails when I run it. Would you mind checking this to see if you can spot anything?

I was aware of the lack of a filter tag in the FITS header for darks/DF's and always assign them explicitly to the filter, Ha in this case. I assigned Darks to Ha rather than all filters, as you mention above.

Default calibration settings were used to keep it simple - did you use any non-defaults?

Versions 1.083 AND 1.083.3 were used for the test, and both produce the error.

I will take another look tomorrow but having spent the whole day working on this I am struggling. 

UPDATE. I have been through your loading table again and it looks identical to mine. The only thing I can think of is the actual method of loading. 

Did you load each session separately, i.e. For session 1 load the two lights, then load the two flats etc. Then repeat for session 2, and then for session 3?

I mentioned this above, but I have seen issues in the past where, using this loading approach, I have added frames and seen them correctly in the loading table, but then, on loading more subs, found the original ones disappear from the loading table. I don't have a current example but will log this next time it happens.

Many thanks for your, and Vincent's, help with this, it is appreciated. I would very much like to get to the bottom of it as it is a recurring issue. 

 

This post was modified 2 years ago by midnight_lightning

   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

So I can only assume that you are loading the frames differently for some reason?

I have run the data through Calibration again this morning, using my original loading method, your loading method, and several variations and inexplicably they all ran through without error. 🙂  

I simply don't understand why it is suddenly working now when it was erroring before. It looks like user error, but I have spent so many days trying to get this to work, and been so careful with loading, that I am sure what I did today is exactly what I have done before. Something is certainly different today.

I am short of time at the moment but will do some further investigation to try and find the issue. 

This is one of the loading tables that worked - I will go back and compare to one that didn't when I get chance. 

[Image: loading table from a successful run]

Thanks for your help in getting here, the information regarding Dark subtraction and NB exposures has been very useful.


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Magic solutions are always the best. 😉


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

So I can only assume that you are loading the frames differently for some reason?

I have compared the loading tables for the successful and errored runs and they are identical in every respect. It is possible they were loaded differently (e.g. loading lights for all sessions before moving on to flats, vs loading all subs for session 1 before doing session 2), but the resulting load details are identical.

We still haven't ascertained why, in previous runs, calibrating two or three of the sessions at once failed whilst calibrating them individually always succeeded. These tests used the same loaded data, changing only which data was selected for each run; the data was not reloaded between tests.

There is also the issue whereby files loaded in one step subsequently disappear after loading the next step - all assigned explicitly.

I worked in Computer Systems Development for forty years, from CICS Assembler programmer to Department Head, and I know that if a user came to me with this kind of issue it would turn out to be user error 99% of the time. Whilst I can never be 100% certain that it isn't user error now, I have been rigorous in my testing and am 99.9% certain there is an intermittent problem somewhere, possibly outside APP if I am the only person affected.

Unfortunately I am not yet able to replicate the issue but will report back when the problem next occurs.

Thanks again for your help - I have in any case learned some new things that will improve my processes going forward 😀 

 


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  
Posted by: @mabula-admin

So I can only assume that you are loading the frames differently for some reason?

@mabula-admin

I just spent 30 minutes documenting the latest failed integration in fine detail; unfortunately, when I attached the file below, your system deleted everything I had typed. I don't have time to go through all that again.

Basically, I tried to pre-process my latest data, NGC 1333 with 4 sessions of RGB, and it failed in exactly the same way as in the original post above.

Yet each session run individually worked fine and produced good Integrations.

Given past experience I used a very precise process to demonstrate the anticipated issue.

I initially loaded the S1 data only and ran it through with defaults (except Normalisation, where I didn't use Neutralise Background). It worked fine.

I then "deselected" the S1 data files, "removed" the output files from the file list, and moved the actual output files to a new folder.

I then repeated the above for S2, S3 and S4. 

All individual sessions ran successfully. 

I then selected all data files (all sessions) and re-ran. NB: I did not reload any files; the selected files were the ones used for the original individual-session runs.

It failed in the same way as indicated at the start of this thread. 

There is something wrong somewhere. 

I have attached a spreadsheet showing the File List. (Note: the job continued to run for a minute after cancelling, so it's possible that output file(s) were created after the error.) I can't find the log; I assume it is saved somewhere?

I estimate I have spent several weeks over the past year on similar issues and frankly don't have time to keep doing this. I would really like to be able to use APP, but for now I will use PixInsight, which always seems to run through without issue.

Happy to provide the data for investigation: 799 files @ 120 MB each.

 

 

  


   
(@midnight_lightning)
Neutron Star
Joined: 6 years ago
Posts: 104
Topic starter  

Any thoughts?


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

I asked Mabula to have a look. 😉


   