
Error messages

25 Posts
5 Users
14 Likes
2,153 Views
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

I have been very frustrated with APP over the last few weeks. Why? Because of its insufficient error messages. Here are a few examples:

Critical warning

This message I have never seen before, even though I have used APP for many years. Is it new in APP beta 3? It appeared in the middle of the star analysis.
First of all, it does not tell me which file or files are causing the error, so it is not helpful at all. Secondly, I'm not convinced that it is even correct. I checked my calibration files as best I could and reran everything. No error message this time.

Here is another example:

Critical warning 2

How am I to correct this when APP is not telling me where the problem is? It is not user friendly by any measure! And the worst part: It was not even correct!
When I replaced the individual calibration (flat/dark flat) files with the newly created master files there was no error message.

And here we go again:

Registration failure

65 out of 65 were successfully registered? So why the error? When I answered No, I found that it was actually 64 that had registered; one had failed.
In this case the file is highlighted on the screen, so it is no problem to act on it.

I have also had an error message saying "This file seem to be missing". When OK is pressed, APP stops the process without any notice of which file is missing.

So to sum this up:
Either a file name needs to be included in the error message itself, or an error log needs to be created for each incident/error, so that we have at least a remote possibility of figuring out what the problem is. As it is now, it is simply not good enough. I expect more from APP!


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

That is quite a few error messages at once indeed. It seems you may have a data or workflow issue. Mabula is changing when the first warning you show (it's a warning, not an error) will pop up, but it does indicate that you can improve your data.

Please download the stable 1.083 version as well; you can find it via the link at the top of the forum.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

To be clear, these messages did not all show up during the same stacking run; I have collected them over a few weeks.
Whether they are warnings or errors is irrelevant. The problem is that APP does not tell me which file or files are causing them, leaving me completely in the dark. The most frustrating part is that APP must know; it just prefers not to tell.
If I have several hundred images to stack, which I quite often have, I can't sit down and open every file to try to figure out whether it is underexposed. And quite honestly, I don't believe that any of them are.
Maybe I can improve my data, but in this case the potential for improvement is much greater in APP.

I will of course download the latest version.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Hi Heno @heno,
 
I am very sorry to read that our warning system is upsetting you. We implement these warnings to help you get better results and understand better what you are doing.
 
I will address all the warnings that you received below, but please understand that the warnings you get from data calibration are very structural warnings and don't indicate a problem with a single light frame. The problem in that case is structural in the whole data set, so pointing the finger at a single light frame is not logical.
 
Also, the warnings are implemented to show only once per APP session. So if you run the process again after the warning has appeared, the warning will not pop up again, but the problem is still there, of course.
 
Now to the warnings that you received:
 
 
Posted by: @heno

I have been very frustrated with APP over the last few weeks. Why? Because of its insufficient error messages. Here are a few examples:

Critical warning

This message I have never seen before, even though I have used APP for many years. Is it new in APP beta 3? It appeared in the middle of the star analysis.
First of all, it does not tell me which file or files are causing the error, so it is not helpful at all. Secondly, I'm not convinced that it is even correct. I checked my calibration files as best I could and reran everything. No error message this time.

This warning is indeed new in APP since beta3. The 1.083 stable release will show it too, but the threshold at which it does has in fact been lowered compared to beta3.

It will now only show in really problematic cases where far too many pixels clip to the 0 value after an artificial pedestal was already added to the data. In that case it definitely is a structural issue of the data set, and we can't point the blame at a single light frame.

As the warning says, it is caused by one of three causes. If you still get this warning in 1.083 (stable) and are sure that your data does not have any issues, by all means upload your data and I will explain in detail what the issue is with your data.

The under-exposure cause is normally the least severe of the three, but it is becoming quite clear that many astrophotographers these days, especially with the popular duo narrowband filters, are underexposing significantly. They don't seem to be aware of this, and so they are producing results that could be a lot better if only they exposed a bit longer, where practically possible...

Feel free to upload the data when you get the warning again with our new 1.083 release:

https://upload.astropixelprocessor.com/

username and password: upload

Please, make a folder with your name and a description of your issue and let me know once uploaded.

Here is another example:

Critical warning 2

How am I to correct this when APP is not telling me where the problem is? It is not user friendly by any measure! And the worst part: It was not even correct!
When I replaced the individual calibration (flat/dark flat) files with the newly created master files there was no error message.

Again, a structural problem not caused by a single frame. In this case, as the warning says, your flats are missing bias and/or darkflats, so the flats will never work the way they are supposed to. You always need to subtract either bias and/or darkflats from your flats (and you also need to subtract bias and/or darks from your lights), that is, if you want your flats to work properly. I have written down that we can improve this warning to indicate which flats, from which filter and session, are missing them.

And here we go again:

Registration failure

65 out of 65 were successfully registered? So why the error? When I answered No, I found that it was actually 64 that had registered; one had failed.
In this case the file is highlighted on the screen, so it is no problem to act on it.

I have also had an error message saying "This file seem to be missing". When OK is pressed APP stops the process without any notice of which file is missing.

So to sum this up:
Either a file name needs to be included in the error message itself, or an error log needs to be created for each incident/error, so that we have at least a remote possibility of figuring out what the problem is. As it is now, it is simply not good enough. I expect more from APP!

We will certainly work to improve things; I will look at that registration message to check whether the count is wrong.

For now, please know that all details about these problems can always be seen in the separate console window; it is there for this purpose. So if you want to know on which frame the error occurs, check the console window after clicking the error message away, and you should be able to see on which file it was triggered.

Mabula

This post was modified 2 years ago by Mabula-Admin

   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

Hi Mabula @Mabula-admin

Thank you so much for coming back to me with such a thorough explanation. I really appreciate it. It is helpful to understand why these messages appear even though I still wonder where the problem is, especially on the first warning.
My NB lights have a median DN of at least 1300 (16-bit), and the lowest pixel value read in each frame is usually between 600 and 1000; these are 6-minute frames. For LRGB all values are much higher. This is as read by NINA. I find it hard to believe they are underexposed.
Flats/dark flats are taken using the NINA flats wizard and have the exact same value and offset. I always use offset 10 on my camera, but I will certainly check whether that could have accidentally changed during any of the calibration frame captures.
If I cannot figure this out I will upload a handful of lights + calibration master files.

Suggestion: When an image is loaded in the viewer, would it be possible to highlight clipped pixels, both high and low values?
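That kind of highlighting is easy to sketch outside of APP, too. Here is a minimal NumPy sketch of flagging clipped pixels, assuming 16-bit data; the limits and the sample frame are illustrative, not APP's implementation:

```python
import numpy as np

def clip_masks(frame, low=0, high=65535):
    """Return boolean masks of pixels clipped to the low or high limit."""
    return frame <= low, frame >= high

# Illustrative 16-bit frame: one pixel clipped to black, one saturated
frame = np.array([[0,   1300,  1200],
                  [900, 65535, 1100]], dtype=np.uint16)

low_mask, high_mask = clip_masks(frame)
print(low_mask.sum(), high_mask.sum())  # 1 1
```

A viewer could then overlay `low_mask` pixels in one colour and `high_mask` pixels in another.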

Regarding the second message: APP gave this warning when I tried to calibrate using the individual flats/dark flats I had just captured. Nothing was missing; every file was there. APP created master files for each set of flats/dark flats. And when I replaced the individual files with the APP master files just created and ran it again, the calibration was OK. I don't understand this.

The last message is not really a problem as such, but the count is wrong. I know how to solve this because the problem file is on screen.
I proposed a year ago that the process should not stop just because one or more files will not register; I see no reason why it should. I would prefer the process to continue. I could always go back and rerun the registration with a different reference file at a later stage. (This would of course require a list of the unregistered files to be created.) But usually there is something wrong with those files anyway.

Some sort of general log file created during the process would have been useful. A simple text file listing everything that happened from start to end including all files used, settings, warnings and errors.

Helge.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

I have scrutinized all my calibration files and have come to the conclusion that there may be a problem with my darks. I don't know for sure why or how, but I made new 30s darks just to compare and built a new MD. The master file from the new 30s files looked quite different from the old one. So I have decided to build a completely new darks library. It will take all day, so tomorrow I will rerun the process that gave me the initial warning.
May I present a suggestion, @Mabula-admin:
The individual files contain information about the camera offset, temperature and binning, but this info is not included in the master file header. May I suggest that it is added?


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

So I think I cracked it. I have made a complete new darks library to replace the old one. Yesterday I reran two of the integrations that gave me the data issue warning. No warning this time, so I assume the problem is gone. Why my old darks were clipping pixels to black, I have no idea; I have tried to recreate the problem using various camera settings, but I cannot.

If I, with my rather limited knowledge of this topic, am able to identify this fault, then so should APP. In fact, this problem could already have been identified when the master darks were created, if such a check had been implemented. I am sure that it would not even pose a challenge to @mabula-admin to create and implement relevant checks on the calibration files.

Helge


   
(@sparna)
Molecular Cloud
Joined: 3 years ago
Posts: 4
 
Posted by: @heno

The last message is not really a problem as such, but the count is wrong. I know how to solve this because the problem file is on screen.
I proposed a year ago that the process should not stop just because one or more files will not register; I see no reason why it should. I would prefer the process to continue. I could always go back and rerun the registration with a different reference file at a later stage. (This would of course require a list of the unregistered files to be created.) But usually there is something wrong with those files anyway.

Some sort of general log file created during the process would have been useful. A simple text file listing everything that happened from start to end including all files used, settings, warnings and errors.

I've had this same error multiple times with version 1.083 - it says that all the frames registered fine (e.g. 121/121), but in the same sentence claims that some didn't. It's always the case that one or two didn't, and they are easy to find and remove. But the fact that this stops the whole process is quite annoying, since processing times in APP make it a "set it and go do something else for the rest of the day" activity, and coming back hours later to find it stopped because it couldn't register one frame is quite frustrating.

This post was modified 2 years ago by sparna

   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @heno

Why my old darks were clipping pixels to black, I have no idea. I have tried to recreate the problem using various camera settings, but I cannot.

If you download and install APP 1.083 using the latest installers, then you can go to the log window of APP which will show how many pixels have clipped. If you have used an older installer then the popup will show if 100 or more pixels were clipped. With the newest installer this limit has been increased a lot, which may explain why you don't get the popup anymore. Still, the log window shows exactly how many pixels have clipped.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@wvreeven
Hi. I'm not sure I fully understand your explanation. I am using 1.083. What do you mean by "using the latest installers"?
Regarding the clipped pixels, are you talking about when I create the master dark, or can I load an existing MD and find out how many pixels have clipped? I'm a little confused by the terminology. 😯 A screenshot would have been nice. A picture tells more than a thousand words, you know.
I'm thinking that in an MD there should not be any more clipped pixels than would show in the BPM. But I could be totally wrong here?

I found another fault amongst my calibration files today. I had accidentally used the flats to create a MDF and the dark flats to create a MF. This serious mistake was not detected by APP, and that is a bit disappointing. What I noticed first was that the OIII integration had amp glow, but the Ha and SII did not, and they used the same MD. So checking the rest of the calibration files revealed the mistake.
This is what I was talking about in my previous post. People make mistakes, even me 😉, and a sanity check on the loaded files would have been useful. There is no point in running hours of integration only to find later that one of the calibration files was incorrectly created or loaded.

Helge 


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @heno

What do you mean by "using the latest installers"?

Mabula has recreated the APP 1.083 installers to include better error handling. Just redownload the installer and reinstall APP 1.083 and you're set.

I'll search for a picture regarding the log windows now.


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 

@heno See Mabula's answer in this post:

https://www.astropixelprocessor.com/community/appreleases/dark-frame-errors/

The Console window is hidden behind the APP window, so you may need to minimize that (or, if you are on Mac, use the ~ key to rotate between the application windows) to see it.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@wvreeven

Thank you for the info. I read the explanation that Mabula gave in your link and I think I understood it, but I would like you to confirm.
The MD is subtracted from the light, but if the light frames are not bright enough, i.e. underexposed with respect to the MD, pixels will clip to 0. Correct?
I use the median value in my lights during capture to determine if I am exposing long enough. I have calculated a value at which the sky fog/light pollution will swamp the read noise by a factor of 20. I usually go quite a bit higher. Is there a relationship between this median value (or any other value) in lights and darks from which you would know up front that there will be a clipping problem? Not sure if I make sense.
The median value in my darks is around 640; it varies a little with exposure length. The median value in my lights is rarely under 1300, but it happens that the lowest pixel value recorded is below 640. I assume then that this pixel, and any other pixel with a value lower than 640, will clip during calibration. Is this how it works?
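As a toy model of that question (not APP's actual calibration, which subtracts per-pixel dark values rather than a single median), subtracting the master-dark median from light pixel values and clamping at zero shows which ones would clip. The sample values below just reuse the 640/1300 figures from this thread:

```python
# Toy calibration: subtract the master dark median from light pixel values.
# Real calibration subtracts per-pixel dark values, so this understates clipping.
dark_median = 640
light_pixels = [1300, 2000, 600, 640, 700]  # illustrative ADU values

calibrated = [max(p - dark_median, 0) for p in light_pixels]
clipped = sum(1 for p in light_pixels if p <= dark_median)
print(calibrated)          # [660, 1360, 0, 0, 60]
print(clipped, "clipped")  # 2 clipped
```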
Mabula suggested a list of things to check, like offset and more. I will repeat myself and mention that offset, temperature and binning values are not recorded in the master files. So how can we check this? I never keep individual calibration files once I have created the master files; they take up too much space.
The software that created the file would also be useful information to record in the master files, e.g. the FITS keyword:
SWCREATE= 'N.I.N.A. 2.0.0.2002 ' / Software that created this file

Helge


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 

@heno You almost got it right. Apart from the median value in the MD, there also is a statistical spread which is characterized by the standard deviation of the values in the MD. This needs to be taken into account as well. If the median value in the MD is, e.g., 640 and the standard deviation e.g. 10 then pixels as high as 650 and even higher in the lights MAY be clipped. And pixels as low as 630 in the lights have a high possibility of being clipped.
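That effect can be simulated: draw per-pixel dark values around a median of 640 with a standard deviation of 10, and count how often a given light value falls at or below them. This is a hedged sketch — Gaussian dark noise is only an approximation, since real dark current (amp glow, hot pixels) is not perfectly Gaussian:

```python
import numpy as np

rng = np.random.default_rng(42)
# Per-pixel dark values around a median of 640 ADU with a spread of 10 ADU
dark = rng.normal(loc=640.0, scale=10.0, size=100_000)

for light_value in (630, 650, 680):
    # Fraction of pixels that would clip to 0 after dark subtraction
    clipped_fraction = np.mean(light_value - dark <= 0.0)
    print(f"light {light_value}: {clipped_fraction:.1%} of pixels clip to 0")
```

Roughly 84% of pixels clip at a light value of 630, about 16% at 650, and essentially none at 680 — matching the intuition that values near or below the dark median are in trouble.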


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@wvreeven
Thank you. I assumed that I had to take the standard deviation into consideration if the rest of my theory made sense, just did not mention it.  I'll put this to rest now.
I have mentioned a few things that I think could be improved, like relevant info in the master files, sanity check of the calibration files when loaded/when calibration is run and more.
A proper log file summing up what happened in the whole session, which files were used, errors, warnings, settings, etc. etc. would really be something.

Helge.


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @heno

A proper log file summing up what happened in the whole session, which files were used, errors, warnings, settings, etc. etc. would really be something.

All that info is present in the console log window.


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @heno

I have mentioned a few things that I think could be improved, like relevant info in the master files, sanity check of the calibration files when loaded/when calibration is run and more.

I'll leave it to @mabula-admin to comment on this.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  
Posted by: @wvreeven
Posted by: @heno

A proper log file summing up what happened in the whole session, which files were used, errors, warnings, settings, etc. etc. would really be something.

All that info is present in the console log window.

Yes, but I cannot pull that info up two weeks later. That is only for the moment, or can it be saved?


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @heno

Yes, but I cannot pull that info up two weeks later. That is only for the moment, or can it be saved?

That's a fair point. It can only be saved by manually selecting all the text, copying it, and pasting it into a text file.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Helge @heno,

Thank you very much for all the suggestions; they are many 🙂 ! I will address them in a follow-up response. Let me first respond in general to your initial problems.

If I understand correctly now, APP's initial warning about your data clipping pixels to 0 was justified, right? So in that sense it was very good that we warned you: it forced you to take a better look at the data.

Unfortunately, this happens a lot these days: many users build bias/dark/flatdark libraries and are not aware that once their capture software changes something in camera control, their library is no longer valid, and they are best off reproducing the whole library.

Personally, I have been sticking with SG Pro for many years now, and I never have these issues, even when updating to new SG Pro versions; their camera control is clearly stable. But users of other packages like NINA and SharpCap seem to run into this frequently. Many topics have been posted on our forum in the last couple of years because the capture software changed something in camera control and users thought that APP was suddenly the problem, because their results didn't look as expected. I guess it is because those packages are still relatively new and under rather active development, which is great of course.

I think you can understand what our problem is here, right? If I don't implement these warnings, users can keep thinking that APP is missing something where it isn't (at least, I think so): they have data issues and are not aware of it. Some issues can be found easily; others can be difficult to detect.

Okay, I will now write another response for all your suggestions 😉

This post was modified 2 years ago by Mabula-Admin

   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @heno

My NB lights have a median DN of at least 1300 (16-bit), and the lowest pixel value read in each frame is usually between 600 and 1000; these are 6-minute frames. For LRGB all values are much higher. This is as read by NINA. I find it hard to believe they are underexposed.
Flats/dark flats are taken using the NINA flats wizard and have the exact same value and offset. I always use offset 10 on my camera, but I will certainly check whether that could have accidentally changed during any of the calibration frame captures.

@heno, well... a median ADU value of 1300 in the 16-bit range of 0-65535 is very low, but not uncommon when you shoot narrowband, of course. It means the sky background in your images is only at 2% of the data range... I expose to have the sky background at least at 10% of the data range, and I go much deeper on the faint details as a consequence. I understand that there are practical limits to exposing long. I make narrowband exposures of at least 15 minutes or longer because it gives much better results and my mount can handle that.

Now, an offset of 10 is also rather low in my experience. I use an offset of 20, I believe, with my 12-bit ASI1600MM, which means that the offset in 16 bits is already at 20*16 = 320.
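Both numbers in this post are simple arithmetic, sketched below for reference (assuming, as is common for these cameras, that a 12-bit offset is scaled to 16 bits by a factor of 2^(16-12) = 16):

```python
# Sky background of 1300 ADU as a fraction of the 16-bit range
sky_fraction = 1300 / 65535
print(f"{sky_fraction:.1%}")  # 2.0%

# A 12-bit offset of 20 scaled to 16 bits: multiply by 2**(16 - 12) = 16
offset_16bit = 20 * 2 ** (16 - 12)
print(offset_16bit)           # 320
```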

I use the median value in my lights during capturing to determine if I am exposing long enough. I have calculated a value which when reached, the sky fog/light pollution will swamp the read noise by a factor of 20.

Okay, but that is only part of what you should look at when determining exposure time, because read noise is not the complete story here once you take data calibration into account.

If data calibration is involved (and it always is), then besides read noise you need to look at the median value of your masterdark/bias and its standard deviation (sigma), and relate those to the median value of your light frames. The median value of your light frames needs to be well above the median of your MasterBias/MasterDark + 24 * the master's standard deviation... Even with 24 * the standard deviation, pixels can clip, because the dark current signal can be partly non-Gaussian due to causes like amp-glow. In my experience, 24 * the standard deviation is reasonable on many datasets, especially with CMOS sensors, where read noise is different per pixel. In CCDs, read noise is the same for the whole sensor (or quadrants of it); not so with CMOS!

If you make masters from more frames, the standard deviation in the master gets lower, and thus the exposure time can be reduced as a consequence. The sensor offset can also be lower when you shoot more calibration subs.

So if you don't take the masterdark/masterbias median and standard deviation into account, you run the risk that some pixels will clip and have no meaningful information as a consequence. The adaptive pedestal that APP uses helps you prevent this, but the warning we implemented fires when pixels still clip even though the artificial pedestal, which is already 24 * the standard deviation, has been applied... So if your pixels still clip in significant numbers, you really want to know this...
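The guideline above reduces to a one-line check: the light-frame median should exceed median(MD) + 24 × sigma(MD). A small sketch, plugging in the figures quoted elsewhere in this thread (dark median ~640 ADU, SD ~8.15 ADU) purely as an illustration:

```python
def min_safe_light_median(master_median, master_sigma, k=24):
    """Minimum light-frame median under the 'median + k * sigma' guideline."""
    return master_median + k * master_sigma

# Figures quoted in this thread: MD median ~640 ADU, SD ~8.15 ADU (from NINA)
threshold = min_safe_light_median(640, 8.15)
print(round(threshold, 1))  # 835.6 -> a light median of 1300 clears this easily
```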


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @heno

So I think I cracked it. I have made a complete new darks library to replace the old one. Yesterday I reran two of the integrations that gave me the data issue warning. No warning this time, so I assume the problem is gone. Why my old darks were clipping pixels to black, I have no idea; I have tried to recreate the problem using various camera settings, but I cannot.

If I, with my rather limited knowledge of this topic, am able to identify this fault, then so should APP. In fact, this problem could already have been identified when the master darks were created, if such a check had been implemented. I am sure that it would not even pose a challenge to @mabula-admin to create and implement relevant checks on the calibration files.

Helge

@heno, I think we are doing just that, aren't we? APP noticed something was fishy when the lights were calibrated, right? By looking only at a master, we cannot say much. Even the header's gain and offset values are sometimes not what we think they are, because of the camera control issues of capture software packages.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @heno

@wvreeven

I'm thinking that in an MD there should not be any more clipped pixels than would show in the BPM. But I could be totally wrong here?

Helge 

Clipping pixels to zero has to do with the pixel distribution of the bias + dark current signals, not so much with the bad/hot pixels that the BPM indicates 😉


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi @heno,

Thank you for the suggestions. I have added the following to my ToDo List:

  • add offset, temperature and binning info to the masters, if present
  • show offset and temperature, if present, in the frame list as well, per image
  • make it possible to generate console output logs per module, 2) to 6)
  • keep track of the SWCREATE tag per master and compare it internally to the lights, to see if that can explain possible issues
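The SWCREATE comparison on that list could look roughly like the sketch below. The headers are represented as plain Python dicts rather than real FITS headers, and the keyword values are made up for illustration:

```python
def check_swcreate(master_headers, light_headers):
    """Flag a mismatch when masters and lights were created by different software."""
    masters = {h.get("SWCREATE", "unknown").strip() for h in master_headers}
    lights = {h.get("SWCREATE", "unknown").strip() for h in light_headers}
    return masters != lights, masters | lights

# Illustrative headers: the masters came from an older NINA than the lights
masters = [{"SWCREATE": "N.I.N.A. 1.11"}]
lights = [{"SWCREATE": "N.I.N.A. 2.0.0.2002"}]

mismatch, versions = check_swcreate(masters, lights)
print(mismatch, sorted(versions))
```

A mismatch here would not prove anything by itself, but it is exactly the kind of hint that could explain why an old calibration library no longer matches the lights.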

Now, checking whether flats are really flats, or darks really darks, is fraught with danger... Some users have flats with no vignetting at all, and it is very hard to discriminate then. So I don't think that is very viable or robust. Besides, APP can perfectly show you problems with calibration in the first place, using the l-calibrated image viewer mode, which is there for that purpose.

Continuing processing when not all data passes registration or star analysis is also difficult. An old version did this, and then users started asking that APP should stop because there is an issue that needs to be solved first, which I agree with as a workflow, and thus I implemented that... But the next APP version will be completely about save/load settings and projects, and we might as well add an option in the general settings to choose this behaviour, with a selectbox 'continue processing with problems' that you can enable/disable. That would make everyone happy then, I think 🙂 ?

Mabula


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  
Posted by: @mabula-admin

Now, an offset of 10 is also rather low in my experience. I use an offset of 20, I believe, with my 12-bit ASI1600MM, which means that the offset in 16 bits is already at 20*16 = 320.

I have had the same camera; I used an offset of 13.
Dr. Robin Glover (or maybe it was Craig Stark) said in a lecture that the easy way to determine your proper offset is to take 0.5 sec dark exposures and increase the offset until the median is in the 500-1000 range. And don't sweat it. 🙂 Not sure if you agree with this, but that is what he said, as I remember it. And that is what I do, and I read 628 (just checked); the SD value was 8.15 (NINA figures). A higher offset will decrease the dynamic range, but you knew that already.
My camera is the ASI294MM.

Posted by: @mabula-admin

If data calibration is involved (and it always is) besides read noise, you need to look at the median value of your masterdark/bias and the standard deviation (sigma) and relate that to the median value of your light frames. The median value of your light frames need to be well above the median of your MasterBias/MasterDark + 24 * the Master's standard deviation... Even with 24 * the standard deviaton, pixels can clip, because the dark current signal can be partly non-gaussian due to causes like amp-glow. In my experience, 24 * standard deviation is reasonable on many datasets especially with CMOS sensors where read noise is different per pixel. In CCD's, read noise is the same for the whole sensor (or quadrants of it), not so with CMOS !

I do not understand this 24 * SD. Why 24? Below are statistics of a 60s MD, as read by the NASA/ESA FITS Liberator. The figures honestly seem a little strange to me.

Statistics
Posted by: @mabula-admin

Now, checking whether flats are really flats, or darks really darks, is fraught with danger... Some users have flats with no vignetting at all, and it is very hard to discriminate then. So I don't think that is very viable or robust. Besides, APP can perfectly show you problems with calibration in the first place, using the l-calibrated image viewer mode, which is there for that purpose.

OK, I accept that that could pose a problem. It was just a suggestion anyway.

Posted by: @mabula-admin

Continuing processing when not all data passes registration or star analysis is also difficult. An old version did this, and then users started asking that APP should stop because there is an issue that needs to be solved first, which I agree with as a workflow, and thus I implemented that... But the next APP version will be completely about save/load settings and projects, and we might as well add an option in the general settings to choose this behaviour, with a selectbox 'continue processing with problems' that you can enable/disable. That would make everyone happy then, I think?

I appreciate that not everybody would agree with me in this, but if you could make this a setting, as you say, everybody should be pleased. 👍 😊 

Posted by: @mabula-admin

@heno, I think we are doing just that, aren't we? APP noticed something is fishy when the lights were calibrated, right?  Only by looking at a master, we can not say much. Even the header's gain and offset value are sometimes not what we think they are because of the camera control issues of capture software packages.

I never said or meant that the warning was incorrect, but running the same process a second time did not produce the warning, and that certainly made me wonder. But you have explained why. What frustrated me was that the root cause of the problem was not identified, which I think it could and should have been. The timing of the warning, in the middle of a process, suggested to me that the problem was related to one specific (light) file being handled at that point in time. It wasn't. In this case, clipped pixels in the master dark were the problem. I still think APP could have warned about this problem when the MD was loaded, if such a check had been implemented. (I know, I demand a lot! 😀 )

@Mabula-admin I really appreciate you taking so much time to respond. It means a lot.  I just hope that others reading this will also find it useful.

Helge


   