2023-03-15: APP 2.0.0-beta14 has been released!
IMPROVED FRAME LIST: sort by clicking a column header, and columns can now be reordered; the layout is preserved between restarts.
We are now very close to releasing APP 2.0.0 stable with a complete printable manual...
Astro Pixel Processor Windows 64-bit
Astro Pixel Processor macOS Intel 64-bit
Astro Pixel Processor macOS Apple M Silicon 64-bit
Astro Pixel Processor Linux DEB 64-bit
Astro Pixel Processor Linux RPM 64-bit
I have HaLRGB data at 1475mm FL with 2.3 µm pixel size, and OSC RGB data at 250mm with 4.6 µm pixel size. Most of the Blue channel is missing because the object was too low by the time it was shot. I want to integrate the R, G, B channels separately, substituting the B channel with Blue data from the OSC camera and complementing the R & G data with the OSC data shot at the wider field. The Luminance and Ha data were shot at the higher resolution and longer focal length, and I want them to be the reference; I'm fine with scaling up the Blue data from the OSC because the detail is in the Luminance layer. The R & G data may or may not benefit from integrating with the lower-resolution wide-field data, but it's worth a try.
What's the process for that type of integration? Is checking "Integrate per channel" enough? Will it know to extract the R/G/B data from the RGB channel to combine with the R/G/B channels from the other camera, or will it create 4 integrations (R/G/B/RGB) because the OSC subs were loaded as the "RGB" channel? I want 5 channels as output: Ha, L, R, G, B, all registered to a reference frame in Luminance.
I have an idea: check "split channels" and "save calibrated frames", then add the results back into the session as R, G, B light frames and remove the RGB channel, then integrate "per channel". Right?
Yes, you can split the channels from the OSC RGB into mono frames. Then load them back into APP together with the other mono frames from the other setup. This will register and scale everything to each other, after which you can add the integrated data into the RGBCombine tool.
Registering it was tough: I had to use "triangles" with start=1 and stop=15, and uncheck "same camera and optics". Then it said the integration is beyond the currently supported size and would require 2.5 TB of disk space.
I'm trying to do some math. Just to match the resolution of camera1, the OSC subs need to be scaled 2x. Then to match focal length they need to be scaled up an additional 1075/250 = 4.3 times, to 35570x24200!!!
So the solution would probably be to do the reverse: choose a reference frame from camera2 (wide field) and downscale. Or maybe something in between: use the normalization process to downscale camera1 by 0.5x, at the cost of detail in the Luminance layer...
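A quick sanity check of the scaling arithmetic above can be done in a few lines of Python. This is just a sketch of the standard pixel-scale formula (206.265 × pixel size in µm / focal length in mm), using the 1075 mm and 250 mm figures from the arithmetic above; the 35570x24200 output size is taken from the post as-is.

```python
# Compare the pixel scales of the two setups to get the resampling factor.
# Pixel scale in arcsec/px = 206.265 * pixel_size_um / focal_length_mm

def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    return 206.265 * pixel_size_um / focal_length_mm

scale_cam1 = pixel_scale(2.3, 1075)  # mono setup (intended reference)
scale_cam2 = pixel_scale(4.6, 250)   # OSC wide-field setup

# How much the OSC subs must be upscaled to match camera1's sampling:
factor = scale_cam2 / scale_cam1
print(f"upscale factor: {factor:.1f}x")  # 2.0 (pixel size) * 4.3 (FL) = 8.6x

# Size of one 32-bit float channel at the quoted 35570 x 24200 resolution:
bytes_per_channel = 35570 * 24200 * 4
print(f"{bytes_per_channel / 1e9:.2f} GB per channel per frame")  # 3.44 GB
```

At roughly 3.4 GB per channel per upscaled frame, it's easy to see how a multi-frame, multi-channel integration at that size reaches terabytes of intermediate storage, which is why registering down to the wide-field scale is the more practical direction.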
Yes, that's likely what's required then. You can manually set the reference frame in the Register tab.