Question about registering Ha and Color image?
I want to register a mono Ha image with a color image. The mono image is slightly larger than the color image.
During registration, I want to make sure that the mono image is reduced rather than the color image being enlarged.
How do I set the mono image to be downscaled when registering it with the color image, instead of the registration process upscaling the color image?
If you want to register a mono Ha panel to your RGB panel, I would do it like this.
First, if you have a stacked panel from an OSC or DSLR, it is important to separate the RGB data.
Load the final stacked (and processed) file, go to tab 2, and at the bottom press Split Channels; after this, save the split channels.
Clear your memory and load the three channels (R, G, B), and also load your Ha, OIII, SII, luminance and so on.
Register them together and save the registered files!
Clear the memory again and open the Combine tool. Press Load Files and select the 4 (or more) channels you want to combine.
You will now be asked to assign every channel to a colour, red to red and so on. For Ha data you can select Custom, and then assign the Ha data.
Once all the data is loaded, you can play with the colours; for example, assign the Ha data 100% to red and 3 to 7% to green and blue (gives a nice natural look), but this is all a matter of taste.
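The colour assignment in that last step can be sketched as a simple weighted channel blend. This is only an illustration of the idea, not APP's actual Combine implementation; the function name and the additive scheme are my assumptions.

```python
import numpy as np

def combine_ha_rgb(r, g, b, ha, ha_r=1.0, ha_g=0.05, ha_b=0.05):
    """Blend a registered Ha channel into R, G and B.

    All inputs are 2-D float arrays in [0, 1] with identical shapes,
    i.e. already registered and normalised. The default weights mirror
    the '100% to red, 3 to 7% to green and blue' suggestion above.
    This is an illustrative additive scheme, not APP's own algorithm.
    """
    out = np.stack([
        np.clip(r + ha_r * ha, 0.0, 1.0),
        np.clip(g + ha_g * ha, 0.0, 1.0),
        np.clip(b + ha_b * ha, 0.0, 1.0),
    ], axis=-1)
    return out
```

The small green/blue weights correspond to the 3-7% suggestion: the Ha signal mostly tints red, with just a touch carried into the other channels for a more natural look.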
Hope this helps.
Thank you very much, but one clarification: at which step in the process do you specify that the Ha image (which is slightly larger) should be scaled down, and how do you make sure that the R, G, B channel images that came from an OSC CCD (which are slightly smaller) do not get scaled up?
Just normalizing the registered channel files will bring them to the same scale.
I know registration will make everything the same size, but scaling down a larger image keeps the pixels looking good, while scaling up a smaller image creates pixelation. So, how do we make sure that the program is being smart and scales down the larger image?
I'm not sure, but logically you could select one of your RGB files to be the "reference" file.
All the other files will then be registered and normalized to this file.
I would say test it: just register and run the star analysis, then see which file APP chose as the "best file" (reference); if needed, change it to one of the RGB files. Then normalize.
Apologies, I'm confused by the above instructions. It seems like we are guessing; I'm not really sure how to check the reference image, and I don't understand what the star analysis has to do with it.
It would be much easier to know for a fact whether APP defaults to scaling down the larger image to match the smaller one, or whether the reference file really plays a role at all.
What I really want to accomplish is for APP to scale down the larger image to the smaller image, instead of the other way around, which would cause pixelation.
Is there a way Mabula can tell us for a fact?
The easiest of your questions to answer is how to set the reference image. When you have loaded your images and gone through the Analyse Stars stage, the next step is Registration. By this time APP will have identified an image that it considers to be the best quality, and this will be highlighted in the image list. At this point you have a choice: you can continue with APP's selected image or press the Set Reference button and choose a different one.
Whether APP 'upscales' or 'downscales' will depend on the pixel dimensions of the image that you choose as your reference. At the end of Integration (assuming that on the Integrate screen you choose composition mode 'reference') the resulting stack will have pixel dimension that correspond to the pixel dimensions of the reference image you chose at Registration.
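As a generic illustration of what "matching the reference" means geometrically (this is not APP's internal algorithm, which also handles rotation, distortion and normalisation; the function name is hypothetical), resampling a larger mono frame down to the reference's pixel dimensions might look like:

```python
import numpy as np

def resample_to_reference(img, ref_shape):
    """Nearest-neighbour resample of a 2-D image to the reference's
    pixel dimensions (illustrative sketch only).

    Downscaling a larger frame to a smaller reference discards a
    little detail gracefully; upscaling a smaller frame duplicates
    pixels, which is the 'pixelation' discussed above.
    """
    rows = np.arange(ref_shape[0]) * img.shape[0] // ref_shape[0]
    cols = np.arange(ref_shape[1]) * img.shape[1] // ref_shape[1]
    return img[np.ix_(rows, cols)]
```

So if, for example, the RGB reference stack is 3000x2000 px and the Ha stack is 3040x2020 px, the integrated Ha output in 'reference' composition mode ends up at 3000x2000 px.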
It is very straightforward to experiment in APP (assuming you do not use too many frames), so I would suggest that you process your image subs using first one of your Ha frames as the reference and then one of your RGB frames. If you do this, I think you will find the results very hard to distinguish in quality terms, though obviously there will be differences, possibly small, arising from minor variations in field of view, orientation, etc. of the selected reference images.
I would point out that APP is recognised as being exceptionally good at integrating images acquired using different optical trains, imaging systems, etc. that will naturally have rather different image dimensions, fields of view, rotation angles, illumination levels, etc. between frames. It would not have acquired this reputation if it were not very good at dealing with these matters. I do not have any inside knowledge of how APP handles differing pixel sizes, alignments, etc. (I suspect it uses some form of drizzle algorithm), but I am sure it is a good deal more sophisticated than simply upscaling or downscaling individual pixels between images.
I hope this helps a little
I’m starting to understand your explanation a bit more. I am not really processing lots of subs; I have an integrated Ha image and an integrated color image, so it’s just two images. The color image is slightly smaller. From what you are saying, I just need to set the color image as the reference, so the Ha image will be made a bit smaller. Keeping it simple (I know it does more than that), in my simple mind I just want the slightly larger image (Ha) downscaled to the slightly smaller image (Color).
So, if I set the color image as reference, it should do that, right? I just hate guessing; I want to know for a fact how it will work, so I can make the process repeatable.
As previously stated, APP is widely used to stack and process images captured with different scopes and cameras, so it necessarily has to be rigorous in its approach and reliable in its implementation. Even if images are captured with the same scope and camera, individual image frames will only rarely match exactly in orientation, even though the pixel scale is exactly the same.
In answer to your question, very simply, if you select the RGB frame as your reference, APP will at Integration create an Ha frame that matches the scale and pixel size of your RGB image. The actual physical size of the resulting image will depend on the composition mode you select.
However, remember also the earlier post from Jan Willem to your question. You cannot directly use your RGB image as your reference frame. The RGB image must first be split into its RGB components. I think you will find that APP will select the Green component as the reference image after Analyse stars because most cameras are most sensitive to Green light so this will be considered the best quality image.
So you will eventually have four images loaded into APP for processing: one corresponding to each of the RGB components of your colour image, plus your Ha. These four image files will need to be processed through APP's Analyse Stars, Registration and Normalisation steps, even if this seems trivial for your actual requirements, simply because this is APP's workflow.
Following Integration you will have four output files, one each for your R, G, B and Ha image 'stacks'. The RGB frames will match the corresponding inputs, but the Ha will be rescaled to match the pixel size of the Green (?) reference image.
You will then have to load the output RGB and Ha images into APP's RGB Combine tool to create your new Ha-enhanced RGB image.
Hope this helps (and that I've not overlooked anything)
When you said "In answer to your question, put very simply, if you select the RGB frame as your reference, APP will at Integration create an Ha frame that matches the scale and pixel size of your RGB image", that was perfect, and it is the answer I was looking for.
Also, I understand what you said about 'Green', and I assume I can manually tell APP to use a different reference image (if it happens to pick what I would think is the wrong one), but I can research that.
I appreciate you taking the time to answer this post.
Loads of great info Mike, thanks a lot!