Best Practice for Hundreds of Files (2020)
I've read several older threads on approaches for dealing with large numbers (i.e., hundreds) of files to stack, but wanted to confirm the current (as of late November 2020) best practice. Is this still the generally recommended workflow?
1. Divide subs into subgroups of ~100
2. Load, Calibrate, Analyze Stars and Register as normal
3. Before Normalize, turn off calibrate background
4. Integrate the subset using your preferred number of lights to stack, weights, Outlier Rejection, Local Normalization Rejection, Local Normalization Correction settings
5. Continue with subgroups until done
6. On 1) Load, clear out all files
7. Load the results of Step 4 integrations as Lights. Do not load Flats, Darks, BPM, etc
8. On 6) Integrate, select Median (if fewer than 20 frames), integrate all frames, use Local Normalization Correction if desired, Enable MBB (5% - 10% nominally)
9. Integrate the final result
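The two-stage workflow above can be sketched as a few lines of Python. This is an illustrative simplification only: `integrate`, `two_stage`, the scalar quality `weights`, and the plain weighted mean are all assumptions for the sketch, not APP's actual integration algorithms (which add outlier rejection, local normalization, and much more):

```python
# Minimal sketch of "integrate subgroups of ~100, then integrate the
# results". All names and the weighting scheme are hypothetical.
import numpy as np

def integrate(subs, weights):
    """Weighted mean of a stack of same-shape 2D frames."""
    w = np.asarray(weights, dtype=float)
    stack = np.stack(subs)                  # shape: (n_frames, H, W)
    return np.tensordot(w / w.sum(), stack, axes=1)

def two_stage(subs, weights, bucket=100):
    """Steps 1-9 above: integrate subgroups of ~`bucket` frames,
    then integrate the subgroup results with equal weight
    (a simplification: cross-bucket quality info is discarded)."""
    partials = [
        integrate(subs[i:i + bucket], weights[i:i + bucket])
        for i in range(0, len(subs), bucket)
    ]
    return integrate(partials, [1.0] * len(partials))
```

With uniform weights the two routes agree exactly; with varying per-sub weights they generally do not, which is the trade-off discussed in the replies below.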
I'd welcome any input on current best practice.
Thank you all!
From personal experience, I would say that the best quality is still achieved by using all of your subs in a single (multi-session) integration, using the proper flats and flat darks per session (or master darks for that session), darks, and a bad pixel map.
I recently had a 600-sub project. The acquisition ran over many nights with varying conditions (altitude of the object, transparency, amount of moonlight). From a practical perspective, I would integrate each new session and then integrate the results again to see the improvements. At the end I compared this result with one obtained by doing one big multi-session integration. On inspection, the large multi-session result was finer grained than the subdivided integration.
From a bird's-eye statistical point of view, this is also logical. With weighting per small bucket, a given sub could be the lowest quality in its bucket and therefore contribute very little. But this same "relatively low quality" sub could be better than the best sub in the next bucket. If you use all subs in one big multi-session integration, the algorithm can determine the absolute quality over the whole range and assign appropriate weights.
Logic aside, as I described at the beginning, I did test this. The difference in my case was not huge, but it was there.
Using a CMOS camera with a lot of pixels and short exposures, it becomes a big task for the computer to process all of that. In my case, the total integration time was 6 hours, and it needed a lot of free (SSD) disk space. It is time to look around for a new computer... mine is 9 years old now.
Yes, there may be a slight difference (mainly due to differences in the data between separate sessions), but depending on your system and processing capabilities, the difference is not big enough, in my opinion, to always warrant waiting a long time for the complex statistics that that many frames entail. So if your system limits what you can achieve, subdividing the data into 100-200 subs per session is still a good idea.
Thank you both!