Workflow to capture Ha photons from all three channels of an OSC camera

4 Posts
2 Users
1 Likes
3,133 Views
(@cocoplum)
Main Sequence Star
Joined: 7 years ago
Posts: 17
Topic starter  

I just received a ZWO ASI6200MC Pro OSC camera with a full-frame Sony BSI CMOS IMX455 sensor. One option is to use it with my Baader 7 nm Ha filter. While only Ha light will get through the filter, that light will be recorded mostly in the R channel, but also (to a lesser degree) in the two G and one B pixels. No sense wasting precious photons. I posted about this on CN in a thread started by someone with a different OSC camera, and a helpful reply suggested a way to "use all the photons" and assign them all to the red channel. The context for that reply was PixInsight and PS. I have copied and pasted the reply below. What would the equivalent (but hopefully simpler) method be in APP? I have only a little experience with APP, so hopefully any suggestions can be stated in simple language. Thanks in advance!

Because of the overlapping response curves of a Bayer Matrix, of all the Ha light that passed through the filter, 67% was collected by the "red" pixel wells in the Bayer color filter array. In addition, 28% was collected in the two "green" pixel wells and 5% was collected in the "blue" pixel wells. Many advocate throwing away everything other than the 67% found in the red channel after readout. I think you should use all the light that was detected by the sensor. We already know that it was all really Ha light. There were no green or blue photons captured -- just Ha photons.
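As a rough sketch of the arithmetic (the exact fractions depend on the sensor and filter, so treat these as the approximate numbers quoted above):

    # Rough accounting of where the Ha photons land in one RGGB Bayer cell,
    # using the approximate fractions quoted above (sensor/filter dependent).
    red_frac, green_frac, blue_frac = 0.67, 0.28, 0.05

    kept_if_red_only = red_frac                              # 0.67
    kept_total = red_frac + green_frac + blue_frac           # 1.00
    discarded_if_red_only = kept_total - kept_if_red_only    # 0.33 -> the "33%" mentioned below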

My personal opinion is that you should use all of that light for your Ha image. The trick is to extract it so that you can use it. I see that you mention use of DSS and PS as your image processing software. I use PixInsight but think I can describe how to use all of the data in more generic terms if you do not have PI or prefer to use PS.

I would propose that you try this:

  • Calibrate all of your Ha images. (DSS does this for you.)
  • Debayer (convert to color) all of your Ha images. (DSS does this for you.)
  • Align all of your Ha images. (DSS does this for you.)
  • Stack all of your Ha images into an integrated RGB color image. (DSS does this for you.)
     
  • Separate your stacked image into separate R, G, and B color channels using PS.
    (These will now be monochrome images containing the data from the different color channels)
  • Now use mathematical methods to add these three color channel images together. Give the green channel twice the weight of the Red and Blue channels. (This is because the green channel contains the data from both green pixels of the raw CFA image.) In practice, you can multiply the green channel image by 2 before doing the addition.
  • The Addition function in your image processing program can do this when expressed as R + (2*G) + B (a minimal sketch of this step follows the list).
    You now have a monochrome image that contains the brightness of all the light that fell on the sensor.
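If you prefer to script that addition, here is a minimal sketch in Python (the file names, the FITS plane ordering and the use of astropy are assumptions; adapt them to whatever your stacking software actually writes out):

    import numpy as np
    from astropy.io import fits

    # Load the calibrated, debayered, stacked RGB image
    # (hypothetical filename; assumes the planes are ordered [R, G, B]).
    rgb = fits.getdata("ha_rgb_stack.fits").astype(np.float64)
    r, g, b = rgb[0], rgb[1], rgb[2]

    # The debayered G plane typically averages the two green CFA samples
    # in each 2x2 cell, so weight it by 2 to recover both of them.
    ha_mono = r + 2.0 * g + b

    # Write the result out as a monochrome Ha frame.
    fits.writeto("ha_mono.fits", ha_mono.astype(np.float32), overwrite=True)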

This monochrome image can now be used as you might use an Ha image from a monochrome camera. You can Add / Blend it with a normal, unfiltered RGB image from your Altair 183C camera.

In PixInsight, I would use the following processes:

  • Start with your RGB Color stack from the Ha-filtered imaging session.
  • Use the RGBWorkingSpace process to set each color channel to 1.00 weighting.
  • Use the ChannelExtraction process to extract the R, G, and B color channels.
  • Use the PixelMath process to add those channels back into a mono Ha Image.
    Image_R + 2 * Image_G + Image_B
    Create a new image from the result in the PixelMath process.
  • Now, this forms the stacked linear Ha monochrome image for use in your normal Ha + (L) + RGB workflow.

A more formal way to perform the same flow is to split out the CFA channels of your raw calibrated images, align and integrate them by CFA channel, and then add up the four resulting mono images. Having done this both ways (CFA split or debayered R, G, B separation), I find the difference is very minimal and the debayered-channel method is less messy.
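For reference, the CFA-splitting step of that alternative flow looks roughly like this in Python (a sketch only: the RGGB pattern and the filename are assumptions, the sub-planes come out at half resolution, and they still need per-channel alignment and integration before the final addition):

    import numpy as np
    from astropy.io import fits

    # Calibrated but NOT debayered Bayer mosaic (hypothetical filename).
    cfa = fits.getdata("ha_calibrated_raw.fits").astype(np.float64)

    # Split the mosaic into its four CFA sub-planes, assuming an RGGB layout;
    # swap the row/column offsets for other patterns (GRBG, BGGR, ...).
    r  = cfa[0::2, 0::2]
    g1 = cfa[0::2, 1::2]
    g2 = cfa[1::2, 0::2]
    b  = cfa[1::2, 1::2]

    # After per-channel alignment and integration (not shown here),
    # the mono Ha frame is simply the sum of the four planes.
    ha_mono = r + g1 + g2 + b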

In any case, I feel there is no need to throw away 33% of your Ha data just because it ended up in the "wrong" pixel well. I will admit that I am very much in the minority for having this opinion.

All the best,

Kevin


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Thanks for this elaborate post! First thing I'm noticing about the workflow is that DSS is used to calibrate the data. This is definitely not the best way, as DSS has far simpler algorithms which result in less well-calibrated data to begin with, compared to PI and especially APP. The more complex the data gets, the worse this becomes. The problem in general with extracting pixels from separate channels of the Bayer matrix (while using a Ha filter) is that it's very difficult for a program to see what is signal and what is noise. In that workflow I don't think noise is taken into account at all, and I'm pretty sure you will end up with images that have a lot more noise injected into them.

APP has a special algorithm just for this: in tab 0 you can select to extract the Ha data directly from the raw data. It is superior to the workflows I know of. The proof is in the pudding, of course, so you can simply compare those workflows with what APP produces. Just make sure to post-process both results nicely, so you can get a good signal and check the noise in each.

[Screenshot: 2020-01-26 at 10.31.40]

   
(@cocoplum)
Main Sequence Star
Joined: 7 years ago
Posts: 17
Topic starter  

@vincent-mod

Wow... I hadn't noticed that feature, thanks!  Who needs a workflow when a dedicated algorithm will accomplish the same thing!  Impressive, can't wait to try it.  I'm waiting on adapters, of course.


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

I bet it will perform better, not the same. 😉 Good luck trying it; if you run into any issues, we're here to help.


   