2023-03-15: APP 2.0.0-beta14 has been released!
IMPROVED FRAME LIST: sort by clicking a column header, and you can now reorder the columns; the layout is preserved between restarts.
We are very close now to releasing APP 2.0.0 stable with a complete printable manual...
Hey folks,
Sorry if this has been covered elsewhere, but is there a way to have APP automatically normalise per-channel without having to manually run all the steps for each, saving the intermediate files as you go?
I have a large data set where the RGB data is deliberately underexposed compared to the narrowband, but it all needs to be registered together.
Loading the frames/calibration and pressing integrate works fine, except the resulting narrowband images are often normalised against the lower exposure RGB data, resulting in a histogram peak of <0.2 for clipped stars. I don’t want to normalise RGB against the narrowband, or the narrowband channels against each other.
Processing each channel individually is very time consuming so being able to automatically run the preceding steps from Integrate helps a lot.
Hope that makes sense!
Many thanks in advance,
Tom
Mm, I get what you're saying, I think. To make it clear for me: you are doing a split-channels workflow on the RGB, then making integrations of R, G and B, right? And then you register and normalize with the NB. Is that where things don't work out?
Hey Vincent, thanks for the reply. Sorry if I was a little vague.
I have an R, G, B, Sii, Ha, Oiii data set from a mono camera of a target (same optics). The exposure of the R, G and B channels is deliberately a good number of stops down on the narrowband data.
If I use (1) and load the lights/masters/etc, adjust settings, then (6) Integrate per-channel, the (5) Normalise step generally picks a reference frame from the RGB data (as it has a much better quality score). As this data has a much darker histogram, the narrowband data is drastically darkened to match, such that the peak in the Ha channel (usually the one with the brightest average due to nebulosity) is <0.2.
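To illustrate what I mean, here's a toy sketch (made-up numbers, and nothing like APP's actual normalisation algorithm) of how matching everything against a single dim reference pulls a bright Ha frame right down:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up frames: a short RGB exposure (dark histogram) used as the reference,
# and a long Ha exposure with bright nebulosity.
r_frame = rng.normal(0.02, 0.005, 10_000).clip(0, 1)   # dim R sub (reference)
ha_frame = rng.normal(0.30, 0.080, 10_000).clip(0, 1)  # bright Ha sub

def normalise_to_reference(frame, reference):
    """Scale and offset `frame` so its median and spread match the reference."""
    mad = lambda x: np.median(np.abs(x - np.median(x)))
    scale = mad(reference) / mad(frame)
    offset = np.median(reference) - scale * np.median(frame)
    return scale * frame + offset

ha_normalised = normalise_to_reference(ha_frame, r_frame)
print(np.median(ha_frame), np.median(ha_normalised))  # ~0.30 before, ~0.02 after
```

APP's normalisation is obviously far more sophisticated than that, but it shows the gist of why the Ha peak ends up so low when the reference comes from the RGB set.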
To work around this, as I understand it, I have to manually register all frames, save them, then isolate each narrowband channel and normalise and integrate them. This is somewhat time consuming. The last time I tried, I also ended up with a bunch of nullptr errors which stopped integration (I haven't had any time to try and reproduce that yet).
I now generally only have a single exposure length, so I'm going to experiment with just disabling normalisation entirely, but I have some older data sets with mixed exposures for one channel where it would still be useful to use normalisation.
I may be misunderstanding something, but it seemed that if APP could pick a normalisation reference per-channel, this issue would go away.
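Just to be concrete about what I'm imagining (hypothetical frame metadata, not a suggestion about APP's internals), something like picking the best-scoring frame within each filter group:

```python
# Group frames by filter and pick the best-scoring frame in each group as
# that channel's normalisation reference (all names and scores are made up).
frames = [
    {"name": "r_001.fits",    "filter": "R",    "quality": 950},
    {"name": "ha_001.fits",   "filter": "Ha",   "quality": 620},
    {"name": "ha_002.fits",   "filter": "Ha",   "quality": 640},
    {"name": "oiii_001.fits", "filter": "OIII", "quality": 580},
]

references = {}
for frame in frames:
    best = references.get(frame["filter"])
    if best is None or frame["quality"] > best["quality"]:
        references[frame["filter"]] = frame

for filt, ref in sorted(references.items()):
    print(f"{filt}: normalise only against {ref['name']}")
```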
Thanks again for the help.
Tom
Hey Vincent, thanks for the reply. Sorry if I was a little vague.
No worries, I'm still on my second coffee and my brain needs about 3 to fire up to its full potential. Thanks for the details!
I may be misunderstanding something, but it seemed that if APP could pick a normalisation reference per-channel, this issue would go away.
You can select the reference frame manually in the list, but not per channel, I believe. Would selecting the Luminance channel as the reference not make it better then? I would need to test this myself as well, as I've never done a workflow like this.
No worries, I'm still on my second coffee and my brain needs about 3 to fire up to its full potential. Thanks for the details!
You just reminded me - I need to get one too!
You can select the reference frame manually in the list, but not per channel, I believe. Would selecting the Luminance channel as the reference not make it better then? I would need to test this myself as well, as I've never done a workflow like this.
I sadly don't have any L data. I guess the main thing was that I just don't need to normalise the narrowband data against any of the other channels.
It is probably an unusual workflow (I'm quite an unusual person, I suppose 🙃), with a creative rather than scientific goal. I'm not under very dark skies, so the basic premise is:
- Narrowband data exposed for nebulosity, high gain (so low dynamic range, clipped stars)
- Star reduction/removal
- Creative colour-mapping to an RGB nebulosity image
- RGB data exposed for the stars (lower gain, higher dynamic range)
- Basic stretching of the RGB data for a natural-colour starfield
- Blend the natural colour starfield back in over the nebulosity data to taste
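The final blend happens outside APP; just as a rough sketch of the kind of thing I mean (assuming both images are float RGB arrays in [0, 1]; the function and file names are made up):

```python
import numpy as np

def screen_blend(nebulosity_rgb: np.ndarray, stars_rgb: np.ndarray,
                 star_opacity: float = 1.0) -> np.ndarray:
    """Screen blend: result = 1 - (1 - a) * (1 - b), with adjustable star opacity."""
    stars = stars_rgb * star_opacity
    return 1.0 - (1.0 - nebulosity_rgb) * (1.0 - stars)

# Usage (hypothetical loader and file names):
# nebulosity = load_float_rgb("sho_colourmapped_starless.tif")
# starfield  = load_float_rgb("rgb_stars_stretched.tif")
# final = screen_blend(nebulosity, starfield, star_opacity=0.8)
```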
With my setup I seem to need at least double the exposure time for Sii and Oiii compared to Ha to get any detail, so there is always a lot of relative stretching of each of the narrowband channels whatever happens.
Some examples from older experiments where I had no RGB data, so the stars are re-introduced from a crushed version of the Sii channel. I wanted to try and bring out star temperature, so started experimenting with an RGB star layer, which is what highlighted this normalisation issue.
Thanks again
Tom
Sorry, I have to go, I'll answer you in more detail later. But instead of L (I didn't read that properly) you can pick any frame as the reference, Ha for instance; in tab 4 (register) it's the top button. I'm wondering if that would partly help you already. I do get your goals; some of these blends may need other tools for now, I think. Mixing the data in RGBCombine at the end is always a bit of a compromise in the data contribution of each filter. A good HDR tool would work better, I think, which is not yet possible.
Beautiful data btw!
Sorry, I have to go, I'll answer you in more detail later. But instead of L (I didn't read that properly) you can pick any frame as the reference, Ha for instance; in tab 4 (register) it's the top button. I'm wondering if that would partly help you already.
I gave this a go, but then it seems to clip the RGB channels at 1. I'd hoped that as it was a float file, it could keep the full range.
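Just to put made-up numbers on the worry (not a claim about what APP does internally): scaling the dim RGB data up to a bright reference pushes star cores past 1.0, and once that gets clamped the star colour ratios are gone.

```python
import numpy as np

rgb_star_core = np.array([0.60, 0.55, 0.50], dtype=np.float32)  # made-up pixel values
scale_to_ha_reference = 2.5                                     # made-up scale factor

rescaled = rgb_star_core * scale_to_ha_reference  # [1.5, 1.375, 1.25]: above 1.0
clamped = np.clip(rescaled, 0.0, 1.0)             # what a clipped save produces

print(rescaled)  # the headroom survives only if the file stays unclamped float
print(clamped)   # once values hit 1.0 the star colour information is lost
```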
I do get your goals; some of these blends may need other tools for now, I think. Mixing the data in RGBCombine at the end is always a bit of a compromise in the data contribution of each filter. A good HDR tool would work better, I think, which is not yet possible.
Sorry, I should have mentioned: I do the blending in another tool, I just need the individual channels out of APP. I just tried without normalisation on one data set and it works much better.
Beautiful data btw!
Thanks! I'm still getting my head around this all, and it's quite cloudy here in the UK so narrowband takes quite some time...
Appreciate all your help, hope you have a good weekend.
Best,
Tom
FWIW, it took me over 4 hours last night to integrate a 4-panel mosaic, splitting up the steps to avoid a single normalisation reference, so it'd be great if this option existed. The data was from several nights so I needed normalisation for the individual channels...
OK, thanks for letting us know. I'll ask Mabula to have a look as well, to see if this is possible in a way I didn't see, or if it would be nice to have in a future version.
Hi,
not wanting to hijack this, but it sounds quite similar to the problem I'm hitting: my data set is RGB + Luminance taken simultaneously on my twin rig.
Trying to single-step the normalisation gives null-pointer errors, as does the all-in approach that has worked for the last few simple (non-mosaic, but multi-channel) sets I've tried this on. It even worked as a single step on this data set the first try, but the colours were wrong: automatic recognition had chosen GBRG, whereas the correct colours need RGGB...
Please find attached a screenshot of the error, and the log from the start of the normalisation run. I have only the Luminance channel (UHC filter / mono camera) selected, though the RGB is also loaded...
Sorry, but I've lied about the Norm-fail.zipped(PNG) file - it's actually the log text, zipped, but the forum won't let me upload that!
@celkins So what are you trying to do exactly? What kind of data and workflow do you use?
My data set consists of 111 RGB & 111 Luminance frames, calibrated with dark, flat & bias frames, from my pair of ASI183C/M cameras - 38MB per subframe.
They form a mosaic of M31 & co., overlapping by 40-50% against the central shots.
I tried an initial run where, as I've done with other data sets, I just set things up and let rip: after some 3 days of processing I had a mosaic output, but the colours were wrong. Viewing the RGB subframes showed that the wrong de-Bayer matrix had been used...
I had to choose RGGB manually for the correct colours, despite SharpCap being a supported app. The frames have been calibrated using darks, flats and bias; star analysis and registration proceed against the whole set without issue. I then saved the registered subs as insurance... It would be nice if it were possible to snapshot the complete working state of APP, so that you could resume processing rather than having to go back to the beginning.
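For anyone wondering why the pattern choice matters so much, here's a toy sketch (nothing to do with APP's actual debayer code) of how the same raw mosaic gets split into different colour planes under RGGB versus GBRG:

```python
import numpy as np

raw = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in for a raw CFA frame

def split_cfa(raw, pattern):
    """Return the 2x-decimated colour planes for a given 2x2 Bayer pattern string."""
    planes = {}
    offsets = {(0, 0): pattern[0], (0, 1): pattern[1],
               (1, 0): pattern[2], (1, 1): pattern[3]}
    for (dy, dx), colour in offsets.items():
        planes.setdefault(colour, []).append(raw[dy::2, dx::2])
    return {c: np.mean(p, axis=0) for c, p in planes.items()}

rggb = split_cfa(raw, "RGGB")
gbrg = split_cfa(raw, "GBRG")
print(rggb["R"])  # under RGGB the red plane comes from the even rows and columns...
print(gbrg["R"])  # ...under GBRG it comes from the odd rows, so the channels get mixed up
```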
I tried to normalise the entire data set as one, but get the errors shown. I've tried deselecting all but the Luminance frames: I get these errors. I tried the same with just the RGB: same errors.
I’ve modified my hardware, introducing a 500GB SSD to use as workspace for processing, since I could see the previous integration run was heavily disk-bound.
I’m now stuck as to how to proceed, and why it worked first time, but won’t now...
Suggestions gratefully received...
Thanks, Carl
Thanks for the detailed overview! Being able to skip certain steps is going to be possible in the next release. It's difficult to see why it fails here; for a mosaic it's always faster and better to create the panels first and then perform a mosaic with those. Could you try that? If that still doesn't work, I'll ask for the data so I can check it myself.
Hey folks, just wondered if there was any update on the per-channel normalisation reference topic?
I've been trying the same process in PixInsight, and though the results for ‘one click’ aren't as good as APP's, it's so much quicker and far more straightforward not having to split all the steps up. Being able to save settings/image lists when returning to ongoing projects also makes it so much less error-prone.
Thanks again, and all the best,
Tom
We know, yes, it's on our to-do list, but other features take priority at the moment; a nice update is in the works. 🙂 Updates are pretty regular and these things will trickle in.
Thanks Vincent,
Good to know I wasn’t missing any way to do it in the current version.
Thanks for the help,
Best wishes,
Tom