
Most time efficient way to do image processing, AND, get the best possible final integrations?

17 Posts
4 Users
3 Likes
2,261 Views
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Hi, I apologize in advance for the long introduction and the many questions. I've read through quite a bit of the APP forum website and watched many tutorials, but I haven't been able to gather enough information to answer my questions.

I've been using Astro Pixel Processor since August 2020 and, in general, have been getting very good results. When I started using APP I was not processing very many frames at one time, so my total time spent on pre-processing was not an issue. Since October 2020, I've been imaging with a fast focal ratio setup (Hyperstar on a C11 OTA, f/1.93) and a ZWO ASI183MC Pro camera (OSC), with some narrowband filtering on the OSC. I have small pixels and large file sizes; the pixel scale with this setup is 0.92 arcsec/pixel. So I now take many short exposures (typically 30 seconds to 1 minute) and, if conditions are good, can often obtain very good, detailed images of deep sky objects like galaxies. I try to add as many good-quality frames as possible to the final integrations, with the goal of getting the best possible detail and highest signal-to-noise ratio. And with my not-so-great mount, wind, and clouds, I get a lot of bad frames in each session. So I find that I need to shoot over many nights and obtain hundreds if not thousands of frames so that enough good frames remain to get a good (fine detail, high SNR) final integration. The downside of dealing with this many frames is that I'm now finding the pre-processing extremely time consuming and tedious.

My original and typical workflow (Workflow A) example: Night 1, shoot 200 frames on a target. Calibrate those lights with flats, dark flats, dark master, use the sort function to determine which bad frames to inspect, open and inspect the worst of the frames and remove the bad ones, then integrate the remaining "good" files, resulting in Integration file #1. For Night 2, I repeat the same target and process and get integration file #2. I would then usually integrate #1 and #2, look at the result, and in the event it was good enough, I'd stop shooting that target. If not good, I'd shoot again, night 3, get integration #3, integrate #1,2,3, and look at the result. So, three relatively short sessions of dealing with 200 frames, followed by a short session to integrate 3 integration frames. Not a horrible amount of time spent. However, is the quality of the final integration the best it can be? If I have thin clouds or frames with streaking stars during one of those nights, and those frames are included in the integration, how does this affect the final integration? If I spend more time to remove such frames, how much does that help the quality of the final result? Also, I've shot some dim objects, and I know I need to obtain more signal. But if I have to obtain thousands of these short exposure images, I don't know if I have the time, processing power, or patience to deal with all of those. 

During pre-processing in APP, I always question my strategy of "removing the worst frames". Especially because it is so time consuming, user-error prone, and very subjective. It seems intuitive that including a bad frame in the stack will not help the quality of the final integrated frame. Another complication is that sometimes, APP will determine a high "star shape" number for a frame that contains streaking stars (because mount/OTA moved). Because of this case, I seem to spend a lot of time looking through most of the frames, which is tedious and slow. Also, I always wonder if I am throwing out good data when I remove frames that maybe aren't so bad? I know that to make things quicker, I can just simply select a lower "percentage" of "lights to stack", but just picking a lower percentage without rationale or analysis is just arbitrary (in my opinion). Especially when I know that an image with streaking stars will likely sneak into the stack!

More recently, I've been reading elsewhere (not in the APP forum) that the integration result should be better if all frames are integrated at one time, rather than integrating each night separately (and then integrating the nightly integrations). Based on this premise, an example of my new workflow (Workflow B) is to do the steps "calibration through normalize" on one night, 200 frames. I then inspect and throw out bad frames, then save the remaining calibrated frames. I repeat this for each night until I have a large number of frames; so after 3 nights I'll have, say, 400 calibrated frames remaining. Then I integrate these in one session. Note that I don't use multi-session processing in APP, since it seems to do essentially the same thing as Workflow A. The problem with Workflow B is that on my computer (laptop, Intel Core i9 processor, 16 GB RAM, SSD storage) the integration process is excruciatingly slow for anything more than 400 frames, and I've tried integrations of 1000 frames; it takes all night. And it's not clear to me (just based on my visual inspection) that the process gives a better result than Workflow A. It's difficult to do a direct comparison of the results, because I usually throw out bad frames whether doing Workflow A or B, and I doubt the same frames are integrated in both cases. Plus, I don't have the tools to do anything but a visual inspection, which is subjective.

Sorry for the long story, I hope you are still with me... 😀  I know that I can buy a better mount, build an observatory, buy a super-computer, etc., but for now I am just trying to make this process a little less tedious and time consuming, still get good results, and spend more time enjoying the astronomy, which is what this is supposed to be about!  Here are my questions:

1. If the same subframes are used for the integration in each workflow case A and B, are the final integration results identical?  

2. For workflow A, when I'm integrating the integrations, are the best settings as follows? Integrate: automatic, weights: quality?

3. Regarding workflow A, if night 1 has good frames and a good integration result (e.g., clear transparent night, good seeing, thus good detail, high SNR), and night 2 has a large number of obviously bad frames (e.g., bad transparency and seeing, elongated stars, thin clouds) and thus a worse integration result, is it better to throw out a higher percentage of the bad frames (for Night 2), even though there are fewer images to integrate, and thus the SNR will be much lower? Will this integration count as much when combined with night 1?

4. For workflow A, I believe you recommend to do light pollution removal for each night's integration? I worry that by doing that I am reducing nebulosity and thus weakening the contribution of that integration result. 

5. Do I need to worry so much about removing bad frames before doing an integration? Or does APP do a sufficient job of weighting the bad frames so that they do not have a detrimental effect (but potentially a good contribution?) on the final integration?

6. If I do need to worry about bad frames, how much do egg-shaped stars affect the final integration? (usually caused by RA drift or guiding errors in RA) I usually don't see egg-shaped stars in my final integration; but I'm not sure how APP deals with these when creating the final image. It seems that a subframe of a galaxy with egg-shaped stars means the galaxy image is smeared along the direction of the egg? So, in general it is not a good contribution to a good integrated image of a galaxy?

7. How do frames with thin clouds affect the final integration? The illumination of such frames can be very uneven and different for each frame. Sometimes they are hard to spot by visual inspection. And, there often can be good signal in a cloudy frame, so I hate to throw such frames out. Should I just leave them in? Or best to remove them?

8. I understand that the "quality" number calculated by APP is a relative measure for each session, and those values can be very different in different sessions. For a given session, and a range of quality values, can one draw any conclusion about how many frames should be kept based on the range of values?

9. Are the other calculated parameters such as star shape, registration RMS, etc. consistent between different sessions? In other words, are they an absolute measure, and not a comparative measure? So, if I choose to reject anything below star shape 0.4 on one night, it's safe to say that I should reject similar stars (for star shape 0.4) the next night?

10. As mentioned, I frequently observe that APP will give a high quality value and/or high star shape value to a frame that has streaking or squiggly stars (e.g., due to mount movement). Sometimes, the only way to quickly find them is if I sort on registration RMS, and hopefully such frames will drop to the bottom of the list. Since I do my inspection mostly at the bottom of the list, sometimes I get lucky and I find those and remove them. Other times they don't go to the bottom, and by chance I've found them in the middle of the stack. Is there anything else that can be done to quickly find these?

Any other comments or suggestions you have to increase time efficiency will be greatly appreciated.

Thanks in advance for your replies.
John


(@ventania)
Molecular Cloud
Joined: 5 years ago
Posts: 3
 

Hi John,

Your post is a very interesting read for me, with all valid and to-the-point questions. My personal experience in processing AP data with APP is unfortunately insufficient to give you any answers, but I am hoping others on this forum can. I will follow this thread with interest. My main camera is a monochrome that I use with either broadband or narrowband filters, also combining data from multiple sessions on a single object. Usually longer exposures, and thus not as many subframes as you have. Processing is then done using your Workflow A, which I let APP do in one go with the multi-session method. I actually only apply the light pollution correction on the combined RGB result and not on the integration result of each session and channel. So far I seem to be getting away with that; perhaps light pollution is sufficiently constant over multiple nights, although the impact of high clouds will of course be very different from night to night. Then again, that will also be different from frame to frame during a single night.

Regards,
Hans


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Thanks Hans for your comments. I've done some processing of Narrowband filter (with OSC) data, but I'm hoping first to figure out a standard workflow as discussed above, then I'll move on to the narrowband challenges!  In any event, I'm hoping the APP team can provide some feedback!


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Hi John, that is an extensive post indeed. Please let me come back to it tomorrow, have to go through some data from others first. 😉 Thanks for the questions.


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Hi, just checking back in...

John


   
(@b4silio)
Main Sequence Star
Joined: 6 years ago
Posts: 27
 

@jlis09astro

Hi John, just my 2 cents:

I've done quite a bit of testing on whether combining multiple sessions vs having everything in a huge session works out better. I routinely combine 10-20 sessions together, as I see only a slice of sky from my balcony and can only shoot about 2h of an object every night before it goes behind other buildings.

  1. Yes with the amount you have. You might worry A vs B if you had 10-20x 900s subs, but with 50+ per session it's a no brainer.
  2. Automatic is a bit iffy if you have thin (or not so thin) clouds. This is because clouds mess up the outlier calculation (they're not as bright as satellites/asteroids, so they're kind of in the middle). In that case Median is actually the best, as it finds the values that are most likely to really be star stuff.
  3. I usually look at whether the "bad" night is really bad, in which case I simply drop it. One thing you can look at is the weights that are assigned to it when you start integration (look in the console you'll have a list of all the images and how much weight is assigned to it)
  4. That's what I do myself. Don't worry too much about killing nebulosity, as the gradient removal (with 5-8-ish points) will be very smooth and not too stark. You can have a look at the "Model" button in the LPR tool to see if it's killing details or if it's more of a unified blobby gradient (usually the case).
  5. That's kind of the job of the outlier rejection model. You can lower the sigma values (disable "automatic" mode for this) so that you only keep good stuff. But it's also not bad to remove the most terrible traily stars.
  6. see 5.
  7. see 2.
  8. Think of the quality "baseline" as separate things for individual sessions vs the final integration, there you usually end up having something that is more evened out. See 3 for weights.
  9. From what I gather it should be so, SNR is the only one that is kinda dependent on the overall quality of the night session (viewing, etc), but star size/shape is usually in absolute terms
  10. If you don't see squiggles in your final integration it means that those were rejected by the outlier detection step, so I wouldn't worry. It is very clear when they are being counted in, as a single bright pixel screws up your average very much.
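To illustrate points 2, 5 and 10: kappa-sigma outlier rejection can be sketched in a few lines of numpy. This is a simplified model of the idea, not APP's actual implementation — the kappa value, iteration count, and function name are all invented for the sketch:

```python
import numpy as np

def sigma_clip_stack(frames, kappa=2.5, iters=2):
    """Per-pixel average of a stack with iterative kappa-sigma rejection:
    values more than kappa standard deviations from the per-pixel mean
    are masked out, then the mean is recomputed from what remains."""
    data = np.asarray(frames, dtype=float)
    mask = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        masked = np.where(mask, data, np.nan)
        mu = np.nanmean(masked, axis=0)   # per-pixel mean of surviving values
        sd = np.nanstd(masked, axis=0)    # per-pixel spread of surviving values
        mask = np.abs(data - mu) <= kappa * sd
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# One "pixel", ten frames: nine clean sky values and one satellite hit.
stack = [[1.0]] * 9 + [[50.0]]
print(sigma_clip_stack(stack))  # → [1.] — the 50.0 outlier is clipped out
```

Because the trail lands in only one frame per pixel, it gets rejected while that frame's clean pixels still contribute — which is why leaving trailed frames in a large stack is usually harmless.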

 

(Attached integration of 14 different sessions from the past several weeks)

Heart Nebula v3

   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 
Woop, sorry John, getting to it now. Thanks also to b4silio for the answer.

My original and typical workflow (Workflow A) example: Night 1, shoot 200 frames on a target. Calibrate those lights with flats, dark flats, dark master, use the sort function to determine which bad frames to inspect, open and inspect the worst of the frames and remove the bad ones, then integrate the remaining "good" files, resulting in Integration file #1. For Night 2, I repeat the same target and process and get integration file #2. I would then usually integrate #1 and #2, look at the result, and in the event it was good enough, I'd stop shooting that target. If not good, I'd shoot again, night 3, get integration #3, integrate #1,2,3, and look at the result. So, three relatively short sessions of dealing with 200 frames, followed by a short session to integrate 3 integration frames. Not a horrible amount of time spent. However, is the quality of the final integration the best it can be? If I have thin clouds or frames with streaking stars during one of those nights, and those frames are included in the integration, how does this affect the final integration? If I spend more time to remove such frames, how much does that help the quality of the final result? Also, I've shot some dim objects, and I know I need to obtain more signal. But if I have to obtain thousands of these short exposure images, I don't know if I have the time, processing power, or patience to deal with all of those. 

When you're dealing with more than 40-50 frames, say, having a few bad frames shouldn't be a big factor anymore. APP analyzes each frame for quality (which is a combination of star shape, noise, etc.). That should remove most frames with issues; you can set the integration to integrate 90-95% of the data, which will let APP automatically reject the worst-quality ones. Sometimes frames with satellite trails are still included, but that's fine, as the surrounding signal can still be very good. The rejection in APP will remove the trails (this is why >40-50 frames is important, besides noise) and the signal still in that frame is used. So I would just try it like that and see; it usually works robustly.
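The "integrate 90-95% of the data" setting boils down to ranking frames by APP's quality score and dropping the tail. A toy sketch of that selection (the frame names and scores below are invented, and the real score is APP-internal):

```python
def select_best(frames, keep_fraction=0.95):
    """Sort (name, quality) pairs by quality, descending, and keep the
    top fraction — mimicking a 'percentage of lights to stack' setting."""
    ranked = sorted(frames, key=lambda f: f[1], reverse=True)
    n_keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:n_keep]

# Hypothetical quality scores for five lights; one frame is clearly bad.
frames = [("L1", 812), ("L2", 640), ("L3", 905), ("L4", 230), ("L5", 777)]
print(select_best(frames, keep_fraction=0.8))  # keeps 4 frames, drops L4
```

The point of the automatic cutoff is exactly what Vincent describes: the worst frames fall below the threshold without anyone inspecting them one by one.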

More recently, I've been reading elsewhere (not in the APP forum) that the integration result should be better if all frames are integrated at one time, rather than integrating each night separately (and then integrating the nightly integrations). Based on this premise, an example of my new workflow (Workflow B) is to do the steps "calibration through normalize" on one night, 200 frames. I then inspect and throw out bad frames, then save the remaining calibrated frames. I repeat this for each night until I have a large number of frames; so after 3 nights I'll have, say, 400 calibrated frames remaining. Then I integrate these in one session. Note that I don't use multi-session processing in APP, since it seems to do essentially the same thing as Workflow A. The problem with Workflow B is that on my computer (laptop, Intel Core i9 processor, 16 GB RAM, SSD storage) the integration process is excruciatingly slow for anything more than 400 frames, and I've tried integrations of 1000 frames; it takes all night. And it's not clear to me (just based on my visual inspection) that the process gives a better result than Workflow A. It's difficult to do a direct comparison of the results, because I usually throw out bad frames whether doing Workflow A or B, and I doubt the same frames are integrated in both cases. Plus, I don't have the tools to do anything but a visual inspection, which is subjective.

Yes, theoretically processing all at once may result in a better integration. However with enough data per session I've never seen this to be an issue, so I would not worry about that too much. You can experiment by doing that once, but like you say, it's not very noticeable if at all and in my opinion not worth the extra effort.
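For plain averaging (ignoring rejection and per-frame weighting, which is where the real differences between the workflows creep in), "integrating the integrations" matches a single big integration exactly when each nightly result is weighted by its frame count. A small numpy sketch of just that arithmetic, with made-up frame counts:

```python
import numpy as np

rng = np.random.default_rng(42)
night1 = rng.normal(100.0, 5.0, size=(200, 4))  # 200 frames, 4 "pixels"
night2 = rng.normal(100.0, 5.0, size=(100, 4))  # a shorter second night

# Workflow B: everything in one stack.
all_at_once = np.vstack([night1, night2]).mean(axis=0)

# Workflow A done naively: a plain mean of the two nightly means
# over-weights the shorter night.
naive = (night1.mean(axis=0) + night2.mean(axis=0)) / 2

# Workflow A with each nightly integration weighted by its frame count
# recovers the all-at-once result exactly.
weighted = (200 * night1.mean(axis=0) + 100 * night2.mean(axis=0)) / 300

print(np.allclose(all_at_once, weighted))  # True
```

So with sessions of similar size and quality the two workflows converge, which is consistent with Vincent's experience that the difference is rarely noticeable.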

2. For workflow A, when I'm integrating the integrations, are the best settings as follows? Integrate: automatic, weights: quality?

That is usually fine yes, APP chooses the best rejection algorithm based on amount of data for instance. If you do have an issue with the result, it may be interesting to manually set a different integration for better rejection like b4silio mentions.

3. Regarding workflow A, if night 1 has good frames and a good integration result (e.g., clear transparent night, good seeing, thus good detail, high SNR), and night 2 has a large number of obviously bad frames (e.g., bad transparency and seeing, elongated stars, thin clouds) and thus a worse integration result, is it better to throw out a higher percentage of the bad frames (for Night 2), even though there are less images to integrate, and thus the SNR will be much lower? Will this integration count as much when combined with night 1?

Yes, based on that second session I would integrate a lower %. However, a bad session will result in worse SNR and overall quality, so adding it to session 1 will not be optimal: still better than just session 1, but not as good as adding a good second session. That is just the nature of the data. In those cases I would (personally) spend a lot of time trying to get the setup to behave better, as that will result in less time wasted later on.
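The SNR trade-off behind this question is basic stacking statistics: averaging N equal-quality subs leaves the signal unchanged and divides uncorrelated noise by sqrt(N), so keeping half the frames costs roughly 41% in SNR. The signal and noise numbers below are arbitrary illustrations:

```python
import math

def stack_snr(signal, noise_per_frame, n_frames):
    """SNR of an average of n equal-quality subs: the signal is unchanged,
    while uncorrelated noise shrinks as 1/sqrt(n)."""
    return signal / (noise_per_frame / math.sqrt(n_frames))

full = stack_snr(10.0, 50.0, 200)  # keep all 200 frames
half = stack_snr(10.0, 50.0, 100)  # throw half of them away
print(full / half)  # → 1.414…, i.e. a factor sqrt(2) lost in SNR
```

This is why culling frames is always a judgment call: each rejected frame buys cleaner data at a sqrt(N) cost in noise, so only genuinely bad frames are worth dropping.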

 

4. For workflow A, I believe you recommend to do light pollution removal for each night's integration? I worry that by doing that I am reducing nebulosity and thus weakening the contribution of that integration result. 

You should be careful using LP removal, yes, because it will impact nebulosity if you place the boxes on top of it. For this I always stretch my integration to the max, increase saturation to the max as well, and then place the boxes in areas where nebulosity is either not present or very minimal. Doing that will not destroy the signal and is a good way forward.
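The box-placement idea can be illustrated with a toy gradient removal: sample boxes on background-only regions, fit a smooth model through their medians, subtract it everywhere. The planar fit below is a deliberate simplification (APP's light pollution model is more flexible), and all names here are made up for the sketch:

```python
import numpy as np

def remove_gradient(image, sample_boxes):
    """Fit a plane z = a*x + b*y + c through the medians of the sample
    boxes (assumed placed on background, not nebulosity) and subtract it,
    keeping the overall sky level."""
    xs, ys, zs = [], [], []
    for (x0, y0, x1, y1) in sample_boxes:
        xs.append((x0 + x1) / 2)                      # box center x
        ys.append((y0 + y1) / 2)                      # box center y
        zs.append(np.median(image[y0:y1, x0:x1]))     # robust box level
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(zs), rcond=None)
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    model = coef[0] * xx + coef[1] * yy + coef[2]
    return image - model + np.median(model)

# Synthetic 100x100 frame: flat sky of 100 plus a left-to-right gradient.
yy, xx = np.mgrid[0:100, 0:100]
img = 100.0 + 0.5 * xx
boxes = [(0, 0, 10, 10), (90, 0, 100, 10), (0, 90, 10, 100), (90, 90, 100, 100)]
flat = remove_gradient(img, boxes)
print(flat.std() < 1e-6)  # True — the gradient is modelled out
```

If a box accidentally sits on nebulosity, its median is inflated and the fitted model eats real signal there — exactly the failure mode Vincent's stretch-and-saturate inspection is meant to avoid.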

5. Do I need to worry so much about removing bad frames before doing an integration? Or does APP do a sufficient job of weighting the bad frames so that do not have a detrimental effect (but potentially a good contribution?) to the final integration?

It usually does yes, many people throw away too much data because of trails and such while these can be used perfectly well to increase the signal in the final result.

7. How do frames with thin clouds affect the final integration? The illumination of such frames can be very uneven and different for each frame. Sometimes they are hard to spot by visual inspection. And, there often can be good signal in a cloudy frame, so I hate to throw such frames out. Should I just leave them in? Or best to remove them?

Clouds can be an issue, but I think APP will see that the SNR is quite bad and reject them. I usually do this filtering during data acquisition, where I tell my guiding software to simply reject anything where guiding is not good enough. Clouds will influence that as well. Sure, it does mean that on a bad night a lot is rejected, but then that's simply what it is... a bad night. I'd rather filter on that night than later on.

8. I understand that the "quality" number calculated by APP is a relative measure for each session, and those values can be very different in different sessions. For a given session, and a range of quality values, can one draw any conclusion about how many frames should be kept based on the range of values?

The quality value is constantly recalculated during processing, and the final result will have a good score for that particular session. It is usually not a good measure for comparing different sessions.

9. Are the other calculated parameters such as star shape, registration RMS, etc. consistent between different sessions? In other words, are they an absolute measure, and not a comparative measure? So, if I choose to reject anything below star shape 0.4 on one night, it's safe to say that I should reject similar stars (for star shape 0.4) the next night?

Registration RMS and star shape are basically absolute values; if those values are out of whack, that is indicative of that frame being worse, no matter the session. SNR can be very different per session. Also, here it's just what the data is. If, at the end of the year, you have a lot of sessions on the same object, you can always go back and do another integration of the sessions with the best values, for instance.

 

I hope this helps you,

Vincent


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Thank you b4silio and Vincent, I appreciate your tips and answers to my questions. 

Recently, I've been doing some processing of images I captured earlier this year of Markarian's Chain. I went back to the method (my Workflow A above) of integrating the frames for a target each night, and then integrating those integrations, incorporating some of your pointers into my workflow. I've had decent results so far. Attached is a 4-panel mosaic: I used APP for the calibration-through-integration steps, then did light pollution removal (in another program), then created the mosaic in APP, followed by post-processing in another program.

I'm happy with the results, but I understand that I can do better if I generate more good frames and that will help me to increase signal to noise. I will address those problems next! I'm sure I'll have more questions as I go forward; you'll likely be hearing from me again! 

Again, thanks for your help! 

John

 


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Hi, I tried inserting my Markarian's Chain image, but I keep getting an "unknown error". It's only a jpg file that is 14.5 MB. 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 
Posted by: @jlis09astro

Hi, I tried inserting my Markarian's Chain image, but I keep getting an "unknown error". It's only a jpg file that is 14.5 MB. 

Great to hear the workflow is working for you! You can add a picture by selecting "Attach Files" in the reply box. That doesn't work?

 


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

I just tried again, got the same error.

When I tried the other day (with a JPG), the file size was larger than 30MB, I now realize mine was too large. Today, I reduced it to around 27MB, and it's still not working. I tried both "attach files", and "my media". When I tried attach files, I click on the link, select my file, and hit ok, then nothing happens. When I tried my media, it appears that it is uploading the file (shows progress), but then the process ends by showing an ! and Error Unknown Error. 

Are there other restrictions on the type of upload allowed?


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Mmm, not really, can you try by compressing it a bit more, say to 20 MB?


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Reduced the jpg file size to 16.9MB. Still get the same error. Dimensions are 9469 x 6968 pixels, could that be a problem?


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Strange, it might be, or maybe there is something in the jpeg that the site doesn't like. But yes, resize it to a much lower resolution just as a test. I'll verify with Mabula.

edit: So no, we don't have a resolution limit, just the file size. I wonder if a different package to generate the jpeg might work, is it straight from APP?


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Vincent, I cropped the mosaic significantly, and compressed the JPG further, got it down to 8.325MB. Here it is:

Markarian's Chain Mosaic 4 panels final small cropped

Wish I could upload the entire mosaic, I thought it came out pretty good!


   
 John
(@jlis09astro)
White Dwarf
Joined: 4 years ago
Posts: 9
Topic starter  

Also, no, I wasn't generating the image in APP.


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Beautiful part, at least! So, Mabula asks if you want to upload the entire jpeg to our server; that error should indicate something other than the resolution. Here's our link:

Go to https://upload.astropixelprocessor.com and use upload1 as username and upload1 as password. Just upload it in the main directory there.

Posting a link to astrobin for instance also works btw.

edit: Mabula does see that your images are uploaded correctly, so it might indeed be a resolution issue with our thumbnail preview specifically. Thanks for stumbling on this! 🙂 We're investigating. He did increase the resolution for that now, not sure if that works. You can repost one of your uploaded full images from "My media", I think.

 


   