Combining mono and One-Shot-Color data.
Now, let me start by saying - I know the 'split channels' workaround. It... works, but it requires quite a bit of fiddling around (more than necessary, IMHO), and with high-pixel-count cameras (BIG subs) on fast optics like a RASA (LOTS of subs), the disk space requirements quickly balloon to the point of being unworkable (especially on an SSD, where space is still at a bit of a premium, even with a 1TB 970 Pro).
This has irked me for some time now, since nearly everything I do these days involves combining mono data with one-shot-color RGB, whether that is HaRGB of a galaxy or HOO/SHO narrowband nebulae with RGB stars.
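To make the disk-space complaint concrete, here is a minimal sketch of what the 'split channels' workaround does to the data, using plain numpy arrays in place of FITS files. The function name and the (3, H, W) layout are illustrative assumptions, not APP internals.

```python
import numpy as np

def split_channels(rgb: np.ndarray) -> dict[str, np.ndarray]:
    """Split a debayered (3, H, W) OSC sub into three mono frames.

    Each mono frame gets its own 'filter' so it can be stacked separately,
    but you now store three full-resolution frames per original sub.
    """
    assert rgb.ndim == 3 and rgb.shape[0] == 3, "expected a (3, H, W) cube"
    return {name: rgb[i] for i, name in enumerate(("R", "G", "B"))}

sub = np.random.rand(3, 4, 6).astype(np.float32)
mono = split_channels(sub)
# Disk cost: three (H, W) frames per sub instead of one cube on disk.
```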
All data gets loaded correctly (I fix the FITS headers the capture program spits out with 'astfits', a great tool if you run Linux) without needing to force anything on tab 0. OSC is picked up as such with the correct Bayer pattern (thanks to the 'BAYERPAT' keyword), and mono is also recognized for what it is. Each has its own 'Filter' as well, so all I'm asking for as an end result is a nice stack per filter, but registered together. Alas, this bombs out with an ArrayIndexOutOfBoundsException in 5) Normalize, as it is looking for the second channel (index 1) in the reference frame.
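A toy reproduction of that channel-count mismatch: if the reference frame is mono (one channel) and the normalizer assumes every frame has as many channels as the OSC data, indexing channel 1 of the reference blows up. The normalization model below is a made-up stand-in, and Python's IndexError stands in for Java's ArrayIndexOutOfBoundsException.

```python
import numpy as np

reference = np.random.rand(1, 4, 6)   # mono reference frame: one channel
osc_frame = np.random.rand(3, 4, 6)   # debayered OSC frame: three channels

def normalize_against(ref: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Naive per-channel normalization that assumes matching channel counts."""
    out = np.empty_like(frame)
    for c in range(frame.shape[0]):        # iterates channels 0, 1, 2
        out[c] = frame[c] - ref[c].mean()  # ref[1] does not exist -> IndexError
    return out

try:
    normalize_against(reference, osc_frame)
except IndexError as exc:
    print("normalization failed:", exc)
```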
Surely this is something that can be fixed without too much hassle?
I'm not totally sure here, but you are still normalizing 3-channel RGB data against 1-channel data then, as the CFA pattern is still there? I might be misreading that. That's not possible right now, at least, which is why the split-channels workflow exists. But we do indeed want to make this clearer and easier in a future version.
I don't see why there would be a difference (from an end-user's perspective) between normalizing each channel of an RGB image on-the-fly (as happens when only RGB data is in the workflow) and normalizing that RGB image split into 3. Especially since, at that point, the debayering should already have occurred.
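The point can be sketched numerically: with a simple per-channel offset/scale normalization (an assumed model, not APP's actual algorithm), looping over the channels of an intact RGB cube performs exactly the same arithmetic as normalizing the three split mono frames separately.

```python
import numpy as np

def normalize_channel(frame: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match a frame's background level and spread to a reference channel."""
    scale = ref.std() / frame.std()
    return (frame - frame.mean()) * scale + ref.mean()

rng = np.random.default_rng(42)
ref = rng.random((3, 4, 6))            # reference frame, 3 channels
sub = rng.random((3, 4, 6)) * 1.3 + 0.1

# On-the-fly: loop over the channels of the intact RGB cube.
cube_result = np.stack([normalize_channel(sub[c], ref[c]) for c in range(3)])

# Split workflow: treat each channel as its own mono image.
split_result = np.stack(
    [normalize_channel(mono, mono_ref) for mono, mono_ref in zip(sub, ref)]
)

assert np.allclose(cube_result, split_result)  # same numbers either way
```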
That might be something that gets implemented indeed; I don't know the background algorithms well enough yet, though, to say how difficult it would be (or not) to implement it that way.