Star color calibration
I've noticed a few times that I get colored artifacts in the photo after running "star color calibration".
What could be causing that?
before color calibration:
after color calibration:
If I adjust the color curves in Photoshop like this, I don't get any artifacts.
Are you talking about the color noise that becomes more visible? How does the result from Photoshop look; can you upload an example? Usually this color calibration works very well and consistently.
I'm talking about the color artifacts that result from it.
Here is the Photoshop version:
Ah yes, so this is different because the colors are not how they "should" be. The color calibration in APP shows a more orange result, which is likely closer to reality (given you have enough stars to pick from, say 200). The color is then corrected across the image and you will see more color noise in the background. It's not that you suddenly have more of it; it just becomes more visible now that the pixels carry their actual color.
P.S. The Photoshop version is also less stretched, so if you turn the stretch down a bit in APP, the background will look a bit darker and better as well.
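A hypothetical sketch of this effect (not APP's actual algorithm; the gain values are assumptions chosen for illustration): applying unequal per-channel gains to a noisy but neutral background doesn't add noise, yet it pulls the channels apart, so the same noise shows up as colored speckles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative near-neutral background patch with the same small
# per-channel noise in R, G and B (assumed levels, not real data).
background = 0.10 + 0.01 * rng.standard_normal((100, 100, 3))

# Color calibration effectively applies per-channel gains so the stars
# reach their reference color balance; illustrative "warmer" gains:
gains = np.array([1.30, 1.00, 0.75])  # R, G, B (assumed values)
calibrated = background * gains

# Measure color noise as the spread of the R-G channel difference
# (a constant offset between channels does not affect the std):
def color_noise(img):
    return float(np.std(img[..., 0] - img[..., 1]))

before = color_noise(background)
after = color_noise(calibrated)
print(f"before={before:.4f} after={after:.4f}")
assert after > before  # unequal gains amplify the channel differences
```

The noise that was already in the data is simply made more visible once the channels are rebalanced.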
Ok, thanks ... I will test it on my next shot. That's quite possible.
I would respectfully disagree that the noise is already there and the colour calibration just makes it more visible. Looking at it, I'd say a cutoff threshold is being hit on (parts of) the brighter streaks of the stars during star color correction, which creates those artifacts (it "corrects" only the patches of the streaks it detects as sufficiently bright?).
I've taken the liberty of pushing the original image in Photoshop toward the star-corrected one, stretching and saturating it. And while the core of the star is butchered (I'm working from the screengrab posted above), we can see that although the background noise comes out even more than in the APP SCC, the light streaks are not broken up by the process.
Am I completely off the mark?
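The cutoff hypothesis above can be sketched in a few lines (a toy model, not APP's actual method; the threshold, star shape and gains are all assumed for illustration): if a correction is only applied to pixels above a brightness threshold, the bright core of a streak gets "fixed" while its fainter parts stay uncorrected, creating patchy color seams.

```python
import numpy as np

# Synthetic star: bright core with a fainter elongated streak
# (purely illustrative geometry and colors).
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
r2 = ((yy - 32) / 3.0) ** 2 + ((xx - 32) / 10.0) ** 2
star = np.exp(-r2)
image = 0.05 + star[..., None] * np.array([1.0, 0.9, 0.7])

# Threshold-based "correction": only sufficiently bright pixels
# receive the color gains (assumed values).
mask = image.mean(axis=2) > 0.5
gains = np.array([0.85, 1.00, 1.25])
corrected = image.copy()
corrected[mask] *= gains

# Pixels just inside vs. just outside the mask now disagree in color,
# which is the kind of colored artifact along the streaks in question.
print(mask.sum(), "pixels corrected;", (~mask).sum(), "left untouched")
```

In such a scheme the visible seam sits exactly at the threshold contour, which matches the "patches of the streaks" description above.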
Thanks for the analysis, always welcome. I think the issue here is that APP and Photoshop are doing a few very different things. As you can see, the APP background is more even because it has been calibrated better, and the very bright star has much more halo around it in the Photoshop version. If you could get the star to look the same as APP did, you would likely see a similar effect. It is not really possible for APP to inject noise into the frames at this stage.
You're completely right; I wasn't suggesting APP was introducing noise, but rather that it does a much more complex analysis of background vs. foreground when correcting colors, which is why it can protect the background much better than Photoshop. Indeed, in the regions of the star's halo and streaks, APP picks up much more of the star's actual light and flares, whereas Photoshop just boosts the (bright enough) core and leaves the rest uncorrected. I understand that "what is background vs. what is foreground" is a very difficult question to answer algorithmically (and in most cases APP does a fantastic job).
The question one could ask, incidentally, is whether it's good to have so much flaring and streaking, but that is a question of the hardware available, technique, the time spent tuning it and, very importantly, taste, more than of the software used afterwards to process it!
Yes, it always comes down to this: software can only do so much. The most important thing for reducing artefacts is to get the data as good as possible from the start. That requires a good look at all the components used and finding ways to correct for artefacts at the capture stage. Then APP and its algorithms can do their best work and deliver the best possible results. Correcting for certain artefacts afterwards is always a balance: correct too much and you get ugly results, too little and the artefacts already in the data remain.
Thanks a lot for this discussion, really nice.
If desired I can provide the FITS file. Perhaps this will provide further insights.
Thank you, but I think it's clear from these pictures.