Why are all images normalized?
I'm currently trying to stack my first light frames (Skywatcher Evostar 72ED) of M13. I have 100 lights plus 30 dark and bias frames. The image quality is not very good because I'm still waiting for my field flattener to arrive. To get something out of my images, I stack only 70 of the 100 light frames, yet during the normalization process all 100 frames are processed. Why is this done if only 70 of them are used in the end? The quality score of a frame is calculated during calibration (step 2), so it is already available when normalization (step 5) starts, and in the end (step 6) only 70 of my frames are used for integration.
Skipping the 30 unused frames could have saved about 30% of the normalization time, roughly 15 minutes in my case.
Maybe somebody can shed some light on this.
With best regards,
The quality score does change as subsequent steps are processed; when APP learns more about the data from those steps, the scores can change.
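The point above can be illustrated with a small sketch (using made-up quality scores, not APP's internals): selecting the best frames from the pre-normalization scores can yield a different set than selecting them after normalization has refined those scores, which is why all frames must pass through normalization first.

```python
# Hypothetical illustration (not APP's actual code): the "best k" frames
# chosen from pre-normalization quality scores can differ from the set
# chosen after normalization updates those scores.

def top_k(scores, k):
    """Return the frame names with the k highest quality scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

# Made-up quality scores for 5 frames before and after normalization.
before = {"f1": 0.90, "f2": 0.85, "f3": 0.70, "f4": 0.65, "f5": 0.60}
after  = {"f1": 0.88, "f2": 0.80, "f3": 0.62, "f4": 0.71, "f5": 0.59}

keep_before = top_k(before, 3)   # {'f1', 'f2', 'f3'}
keep_after  = top_k(after, 3)    # {'f1', 'f2', 'f4'}

# If frames were dropped before normalization, f4 would never get the
# chance to replace f3 in the final selection.
print(keep_before != keep_after)
```

So dropping the 30 lowest-scoring frames before step 5 would lock in a ranking that later steps might have revised.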