Combining mono and One-Shot-Color data.
Now, let me start by saying: I know the 'split channels' workaround. It... works, but it requires quite a bit of fiddling (more than necessary, IMHO), and with high-pixel-count cameras (BIG subs) on fast optics like a RASA (LOTS of subs), the disk-space requirements quickly balloon to the point of being unworkable (especially on an SSD, where space is still a bit of a premium, even with a 1 TB 970 Pro).
This has irked me for some time now, since nearly everything I do these days involves combining mono data with one-shot-color RGB, whether that is HaRGB of a galaxy or HOO/SHO narrowband nebulae with RGB stars.
All data loads correctly (I fix the FITS headers the capture program spits out with 'astfits', a great tool if you run Linux) without needing to force anything on tab 0: OSC is picked up as such with the correct Bayer pattern (thanks to the 'BAYERPAT' keyword), and mono is also recognized for what it is. Each has its own 'Filter' as well, so all I'm asking for as an end result is a nice stack per filter, but registered together. Alas, this bombs out with an ArrayIndexOutOfBoundsException in 5) Normalize, as it is looking for the second channel (index 1) in the reference frame.
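For what it's worth, the failure mode is easy to reproduce in a minimal sketch (hypothetical data layout, not APP's actual code): if frames are stored as lists of channels, a mono reference only has index 0, so any normalization step that assumes the reference has as many channels as the frame being normalized reads past the end of the list — the Python equivalent of the ArrayIndexOutOfBoundsException above.

```python
# Minimal sketch of the failure mode (hypothetical layout, not APP's code):
# a frame is a list of channels, each channel a flat list of pixel values.
mono_reference = [[0.1, 0.2, 0.3]]                 # 1 channel: only index 0 exists
osc_frame = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # 3 channels: R, G, B

try:
    # Code that assumes the reference has the same channel count as the
    # OSC frame reads past the end of the mono reference's channel list:
    ref_green = mono_reference[1]
except IndexError as exc:
    # Python's counterpart of Java's ArrayIndexOutOfBoundsException
    print("normalize failed:", exc)
```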
Surely this is something that can be fixed without too much hassle?
Not totally sure here, but you are still normalizing 3-channel RGB data against 1-channel data then, as the CFA pattern is still there? I might be misreading that. That's at least not possible right now, which is why the split-channels workflow exists. But we do want to make this clearer and easier in a future version indeed.
I don't see why there would be a difference (from an end-user perspective) between normalizing each channel of an RGB image on-the-fly (as happens when only RGB data is in the workflow) and normalizing that same RGB image split into 3 mono images. Especially since, at that point, the debayering should already have occurred.
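The argument can be made concrete with a toy sketch. Assuming normalization is done per channel (here a simple median-offset match is used as a hypothetical stand-in for APP's actual algorithm), it gives bit-identical results whether the RGB channels are processed together or split into three mono frames first:

```python
from statistics import median

def normalize(frame, reference):
    """Per-channel background matching (a hypothetical stand-in for APP's
    normalization): shift each channel so its median matches the median
    of the corresponding reference channel."""
    out = []
    for ch, ref in zip(frame, reference):
        offset = median(ref) - median(ch)
        out.append([px + offset for px in ch])
    return out

rgb = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
ref = [[2.0, 3.0], [3.0, 4.0], [4.0, 5.0]]

# Normalizing the RGB frame in one go...
together = normalize(rgb, ref)
# ...matches splitting it into three mono frames and normalizing each:
split = [normalize([c], [r])[0] for c, r in zip(rgb, ref)]
assert together == split
```

Since each channel is handled independently either way, the split step only changes the bookkeeping, not the result — which is the end-user perspective above.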
That might be something that gets implemented indeed; I don't know the background algorithms well enough yet, though, or how difficult it is (or isn't) to implement it that way.