Star analysis - how to interpret results?
How can I interpret these results?
@stephanhamel22 The number of stars and the minimum and maximum FWHM are shown. Based on those, APP calculates a quality score using internal algorithms, and that score is displayed as well. The remaining columns are other ways of displaying the same information.
- Thank you for this. What is the perfect score? Is this a decent score? Are there no published references to compare with?
@stephanhamel22 It is very difficult, if not impossible, to provide a perfect score. It depends on the optical quality of the telescope, the size of the camera pixels, the seeing, guiding, and many other factors.
There are several online tools that calculate the theoretical plate scale (arcseconds per pixel) for your telescope and camera. You can compare that value with the measured FWHM to see how well you are doing.
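The plate scale calculation those online tools perform is simple enough to do yourself. Below is a small sketch (the function names and the example telescope/camera values are my own, not from APP): the standard formula is 206.265 × pixel size (µm) ÷ focal length (mm), which gives arcseconds per pixel, and multiplying by a FWHM measured in pixels converts it to arcseconds on the sky.

```python
# Sketch: theoretical plate scale and FWHM conversion.
# The constant 206.265 converts the small-angle ratio (um / mm) to arcseconds.

def plate_scale_arcsec_per_px(pixel_size_um, focal_length_mm):
    """Plate scale in arcseconds per pixel for a given camera and telescope."""
    return 206.265 * pixel_size_um / focal_length_mm

def fwhm_arcsec(fwhm_px, pixel_size_um, focal_length_mm):
    """Convert a FWHM measured in pixels to arcseconds on the sky."""
    return fwhm_px * plate_scale_arcsec_per_px(pixel_size_um, focal_length_mm)

# Hypothetical example: 3.76 um pixels on a 530 mm focal-length refractor.
scale = plate_scale_arcsec_per_px(3.76, 530)
print(f"Plate scale: {scale:.2f} arcsec/px")          # ~1.46 arcsec/px
print(f"FWHM of 2.5 px: {fwhm_arcsec(2.5, 3.76, 530):.2f} arcsec")
```

If the FWHM in arcseconds is close to your typical seeing, your optics and guiding are doing about as well as the sky allows.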
The quality score depends not only on FWHM but also on SNR, so it is even harder to judge whether a score of about 500 is good or not. But APP allows you to use only part of the lights for the stack, and which ones are used depends on the frame scores.
I have to admit this is still the most frustrating/mysterious part of APP for me, next to the Correct Vignetting feature (only matched by the DELIGHT I experienced when I realised I didn't need to slavishly click through all the steps 1-6, saving along the way, and could just click Integrate and get amazing results - that literally changed my love for the hobby that day!)
I don't have a suggestion other than that somewhere something needs to pop up explaining what the score means and, ideally, letting the user "blink" the frames (aligned and in order) to decide which ones to keep or reject (in addition to the very cool, but not always smartest, "auto" setting).
Just to be clear - the above sounds like a complaint, but it's not. Having asked this question myself, I am confident I am not the only one confused about how to interpret and act on the results in those columns.
To add to what Wouter said: the scores are basically relative to each other, so you can't really use them to compare different datasets (unless they are processed together). But you can, for instance, take the best-scoring frames from each dataset and combine them, so at least you know you have the best data per set.
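The per-dataset selection described above can be sketched as follows. This is purely illustrative, not APP's actual code: the frame names, scores, and the `keep_fraction` parameter are all made up, and the only point is that ranking happens within each session before the selections are combined.

```python
# Illustrative sketch: keep the top-scoring fraction of frames within each
# dataset separately, because scores are only comparable among frames that
# were analysed together.

def best_frames(scores, keep_fraction=0.8):
    """scores: dict mapping frame name -> quality score (higher is better)."""
    n_keep = max(1, round(len(scores) * keep_fraction))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_keep]

# Hypothetical scores from two imaging sessions.
session1 = {"L001.fit": 512, "L002.fit": 498, "L003.fit": 430, "L004.fit": 505}
session2 = {"L101.fit": 610, "L102.fit": 580, "L103.fit": 455}

# Rank each session on its own, then combine the survivors for stacking.
selected = best_frames(session1) + best_frames(session2)
print(selected)
```

Note that the absolute numbers differ between the two sessions (a 498 in one may be better data than a 580 in the other), which is exactly why the ranking is done per dataset rather than across the combined pool.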