Best Practice for Hundreds of Files (2020)

4 Posts
3 Users
4 Likes
970 Views
(@ckagy)
White Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

I've read several older threads on approaches for dealing with large numbers (i.e. hundreds) of files to stack, but wanted to confirm the current (as of late Nov 2020) best practice. Is this still the generally recommended workflow?

1. Divide subs into subgroups of ~100

2. Load, Calibrate, Analyze Stars and Register as normal

3. Before Normalize, turn off calibrate background

4. Integrate the subgroup using your preferred number of lights to stack, weights, Outlier Rejection, Local Normalization Rejection, and Local Normalization Correction settings

5. Continue with subgroups until done

6. On 1) Load, clean out all files

7. Load the results of the Step 4 integrations as Lights. Do not load Flats, Darks, BPM, etc.

8. On 6) Integrate, select Median (if fewer than 20 frames), integrate all frames, use Local Normalization Correction if desired, Enable MBB (5% - 10% nominally)

9. Integrate the final result (rough sketch of the whole approach below)
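
Just to make sure I'm describing the idea right, here it is as rough Python pseudocode (only a sketch of the logic, not an APP script; "integrate" stands in for whatever settings you'd use on 6) Integrate):

def chunked(frames, size=100):
    # Split the calibrated, registered lights into subgroups of ~size (step 1).
    for i in range(0, len(frames), size):
        yield frames[i:i + size]

def hierarchical_stack(lights, integrate, subgroup_size=100):
    # Pass 1: integrate each subgroup separately (steps 2-5).
    partial_stacks = [integrate(group) for group in chunked(lights, subgroup_size)]
    # Pass 2: treat the subgroup results as new lights and integrate them,
    # e.g. with median if there are fewer than ~20 of them (steps 6-9).
    return integrate(partial_stacks)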

I'd welcome any input on current best practice.

Thank you all!

-Chris


   
(@annehouw)
Neutron Star
Joined: 7 years ago
Posts: 55
 

From personal experience, I would say that the best quality is still achieved by using all of your subs in a single (multi-session) integration, using the proper flats and flat darks per session (or master darks for that session), darks and a bad pixel map.

I recently had a 600-sub project. The acquisition was spread over many nights with varying conditions (altitude of the object, transparency, percentage of moonlight). From a practical perspective, I would integrate new sessions and then integrate the results again to see the improvement. At the end I did a comparison between this result and a result obtained by doing one big multi-session integration. On inspection, the large multi-session result was finer grained than the sub-divided integration.

From a bird's-eye statistical point of view, this is also logical. With weighting per small bucket, a certain sub in one bucket could be rated the lowest quality and hardly used. But this same "relatively low quality" sub could be better than the best sub in the next bucket. If you use all subs in one big multi-session integration, the algorithm can determine the absolute quality over the whole range and assign appropriate weights.

Logic aside, as I described in the beginning, I did test this. The difference in my case was not super big, but it was there.
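
To put some toy numbers on the bucket argument (completely made-up quality scores and simple sum-to-one weights, not APP's actual weighting algorithm):

# Per-bucket weighting ranks each sub only against its bucket-mates,
# so the best sub of a poor night gets the top weight in its own stack
# even though it is worse than the worst sub of a good night.
def normalized(scores):
    total = sum(scores)
    return [round(s / total, 3) for s in scores]

bucket_a = [9.0, 8.0, 7.0]   # subs from a good night
bucket_b = [5.0, 4.0, 3.0]   # subs from a poor night

print(normalized(bucket_a))             # [0.375, 0.333, 0.292]
print(normalized(bucket_b))             # [0.417, 0.333, 0.25]

# One big multi-session integration ranks every sub against the whole set.
print(normalized(bucket_a + bucket_b))  # [0.25, 0.222, 0.194, 0.139, 0.111, 0.083]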

 

Using a CMOS camera with a lot of pixels and short sub exposures, it does become a big task for the computer to process all that. In my case, the total integration time was 6 hours and it needed a lot of free (SSD) disk space. It is time to look around for a new computer... mine is 9 years old now.

 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Yes, there may be a slight difference (mainly due to differences in the data of the separate sessions), but depending on your system and processing capabilities, the difference is not big enough, in my opinion, to warrant always waiting a long time for the complex statistics that that many frames entail. So if your system limits what you can achieve, sub-dividing the data into 100-200 subs per session is still a good idea.


   
Chris Kagy reacted
(@ckagy)
White Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

Thank you both!

-Chris


   