QUAD band filter processing and more.

44 Posts
10 Users
14 Likes
4,019 Views
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @headworx

I think it is continuous in X3F file (Foveon proprietary) and discrete in DNG.

Wow that’s fantastic. Thanks for correcting me on this point!


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 

So FWIW, I started the discussion with the Foveon community: https://www.dpreview.com/forums/thread/4441374.

I would say it is promising, considering this color frequency response characteristic:

[Image: Foveon frequency response, no IR]

--Simon


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

I see this has turned into a discussion about Foveon sensors. I have not read all of that.
So let me repeat myself. I have no wish to split Ha from SII and Hb from OIII. It probably cannot be done anyway. I just want to be sure that if I use the Extract Ha algorithm I also get the SII data, and likewise with OIII and Hb.
If it does, fine. I'm happy. If not, can the developer do anything about it?
The rest of us can provide as many educated, logical and clever guesses and arguments as we like, but I personally will not be satisfied until Mabula tells me one way or the other.
Helge.


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 

I think, judging by this discussion, the SII is included when you extract Ha, and Hb is included when you extract OIII, as long as your filter passes them through.

Speaking of that: do you have examples (photos) where including SII and Hb improves them? Also, as I want to push on with my Foveon exercise, what object would you recommend photographing to gather good 4-channel narrowband data?

--Simon


   
(@chagen)
Hydrogen Atom
Joined: 2 years ago
Posts: 2
 

@heno

I am just starting out, still on the trial...

In the center of the Bortle 9+ skies of Las Vegas, so I have to use a narrowband filter. I was searching for which box to check on the loading list and found this question, the answer to which means that I have even more to learn.

I am using the Radian quad band filter and am able to take 900-second exposures! (Without the filter, everything is white from skyshine.)

The published specs are:

Transmission lines:

H-Beta: 79% Peak Transmission, 5nm FWHM

O-III: 97% Peak Transmission, 4nm FWHM

H-Alpha: 87% Peak Transmission, 4nm FWHM

S-II: 90% Peak Transmission, 4nm FWHM

So: is it possible to do 4 separate processings for these, or, because the lines are so close together, do we just do 2?


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

It's not, no, not when you have a color camera (is that the case?). H-alpha and SII are both basically red, which means both signals will be picked up by the red pixels on your sensor; they will be "mixed" there. The same principle applies to the other signals. If you have a mono sensor, you can take them separately and combine them in any way you want. Still, the result will be an RGB image, so combining them all will be a bit tricky.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@chagen

I can tell you what I have done with a color camera and a quad band filter. The intention, however, was to combine frames from different scopes and cameras.
After loading all required files I opened the RAW/FITS pane and selected the SII, OIII, Ha and Hb algorithms in turn.
In the Calibration pane I checked "split channels" and pressed "Save calibrated files". APP will then split the colors and save the frames. I can then use these to combine with monochrome frames from a different camera. How useful this would be if you don't intend to combine the result with other data, I'm not sure.
As Vincent said, you only have RGB data and the various bands will be mixed into these colors. I have made several images with my RASA 8 + ASI294MC + quad band filter without splitting out the various bands. The color calibration is a bit challenging as you are lacking major parts of the spectrum, but it is fully possible. Good luck.
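For anyone curious, outside of APP that "split channels" step amounts to something like the minimal astropy sketch below (the file names are hypothetical and it assumes the debayered frame is stored as a 3-plane FITS; APP's own output and naming differ):

```python
from astropy.io import fits

# Split a debayered colour frame into one mono FITS per channel.
# "calibrated_rgb_frame.fits" is a hypothetical input file name.
with fits.open("calibrated_rgb_frame.fits") as hdul:
    data = hdul[0].data        # assumed shape: (3, height, width)
    header = hdul[0].header

for index, name in enumerate(["R", "G", "B"]):
    fits.PrimaryHDU(data=data[index], header=header).writeto(
        f"calibrated_{name}.fits", overwrite=True
    )
```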

Helge


   
(@chagen)
Hydrogen Atom
Joined: 2 years ago
Posts: 2
 

Thanks for the reply.

I am too new to astrophotography to have useful opinions; that said, the narrowband filter does an amazing job of defeating skyshine for me. Because of this I can learn how to use astrophoto tools; no more 2 hrs to a dark site and 2 hrs back.

Other posts have mentioned that RGGB sensors have sensitivities in R, G and B, and the sensitivity graphs I have looked at show that each of these has a wide color distribution. So it would seem that, with the quad filter, the R, G and B signals would be mixtures: the H-alpha/SII pair would give lots of R signal and some G and B, while the H-beta/OIII pair would give roughly 2 parts blue and 1 part green.
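Those rough proportions can be written as a small mixing matrix; the numbers below are invented for illustration, not measured sensor responses:

```python
import numpy as np

# Illustrative mixing matrix (made-up values): rows = camera channels (R, G, B),
# columns = emission lines (Ha, SII, OIII, Hb).
mix = np.array([
    [0.90, 0.85, 0.00, 0.00],   # R: mostly Ha + SII
    [0.05, 0.05, 0.35, 0.30],   # G: a little Ha/SII leakage, ~1 part of OIII/Hb
    [0.03, 0.03, 0.60, 0.65],   # B: ~2 parts of OIII/Hb
])

line_flux = np.array([100.0, 40.0, 80.0, 30.0])  # hypothetical Ha, SII, OIII, Hb fluxes
rgb = mix @ line_flux
print(rgb)  # three measured numbers from four unknown lines: the system is underdetermined
```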

             


                                        


   
(@andybooth)
Red Giant
Joined: 2 years ago
Posts: 51
 

OK, I have in the past gone through this thinking, as I have both dual and quad band filters, but explaining it in words is not easy. Let me try, and first I will explain dual band, which I hope will help.

First, the base chip is mono and simply reacts to a photon of any frequency hitting it. When it receives one, that creates a 'luminosity' signal for that pixel.

Above the mono chip sits a Bayer matrix, a grid of coloured filters aligned with the mono pixels. They are red, green and blue windows, which pass only those colour frequencies and reject the rest. However, the filters have a wide pass band, as already said in a previous post. If a red photon hits a red window, it passes through and creates a luminance signal on that red-assigned pixel. If a red photon hits a blue or green window, it does not pass, and that blue- or green-assigned pixel gets no luminance.

Any software only reads the total luminance count on each mono pixel and, by knowing the matrix, assigns a colour (R, G or B) to that count. When you specify the matrix, RGGB for example, it 'knows' that the luminance count on a pixel under a red window must come from red photons, but it has no idea of the frequency, except that it is within the wide red bandpass or it would not have registered a hit.
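As a concrete picture of that layout, a minimal numpy sketch (a made-up 4x4 mosaic, nothing camera-specific) of which raw pixels an RGGB pattern assigns to each colour:

```python
import numpy as np

# A Bayer mosaic is a single mono array; the RGGB pattern decides which pixel
# sits under which coloured window. Tiny 4x4 example with stand-in counts.
raw = np.arange(16, dtype=float).reshape(4, 4)

red_mask = np.zeros_like(raw, dtype=bool)
green_mask = np.zeros_like(raw, dtype=bool)
blue_mask = np.zeros_like(raw, dtype=bool)

red_mask[0::2, 0::2] = True     # R: even row, even column
green_mask[0::2, 1::2] = True   # G: even row, odd column
green_mask[1::2, 0::2] = True   # G: odd row, even column
blue_mask[1::2, 1::2] = True    # B: odd row, odd column

print(raw[red_mask])    # 1/4 of the pixels carry "red" counts
print(raw[green_mask])  # 2/4 carry "green"
print(raw[blue_mask])   # 1/4 carry "blue"
```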

So, on to the dual/quad bit.

The dual filter passes Ha and OIII in a 7 nm bandpass each. This provides a clean, narrow red signal for Ha, which sits in the middle of the Bayer matrix red window bandpass, so it is always passed by a red pixel and rejected by the blue and green pixels. So an Ha object will only provide luminance hits on 1/4 of the pixels of an RGGB chip.

The OIII signal unfortunately straddles both the blue and green Bayer windows, so it will be rejected by the red pixels but provide luminance hits on both blue and green pixels. So OIII is providing signal on 3/4 of an RGGB chip.

Together, on an object emitting both Ha and OIII (not all objects do!), all pixels of the chip receive signal, and with a dual band filter you can further say that the red pixel signal is Ha and the combined blue AND green pixel signal is OIII.

With the Airy disc setting, the software decodes this as just a colour picture made from all three channels as-is, and interpolates any missing pixels, which gives us yellow etc. For our dual filter we lose the identity of Ha and OIII due to this interpolation. This is true for any debayer software. The debayer process intelligently makes up 3 of the 4 pixels for the red channel, 2 of the 4 pixels for the green channel and 3 of the 4 pixels for the blue channel. This is why you do get a sort of full-colour image from dual and quad filters when using the Airy method, but the colours do not represent the actual wavelengths that created them.
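To illustrate that 'making up' of missing pixels, a crude toy sketch (real debayer algorithms interpolate far more intelligently than this nearest-neighbour copy):

```python
import numpy as np

# Only 1 in 4 pixels of an RGGB mosaic carries a real red measurement; the debayer
# step estimates ("makes up") the other 3 from neighbours. Toy data, crude fill.
raw = np.random.default_rng(1).poisson(100, size=(6, 6)).astype(float)

red = np.full_like(raw, np.nan)
red[0::2, 0::2] = raw[0::2, 0::2]       # the 9 real red samples in a 6x6 frame

filled = red.copy()
filled[0::2, 1::2] = red[0::2, 0::2]    # copy each real sample to its right-hand neighbour
filled[1::2, :] = filled[0::2, :]       # copy each even row down to the odd row below it

print(int(np.isnan(red).sum()), int(np.isnan(filled).sum()))  # 27 gaps before, 0 after
```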

However, if you use the Ha extract, the software only takes the red pixels and uses that red data alone to produce a full-size mono image, so you KNOW it comes from Ha signal only.
The OIII extract only uses the blue and green pixels, and uses only that data to create a full-size picture, so you know it comes from OIII signal only.

Then in the RGB Combine module you use the HOO formula, which asks for your Ha mono extract and your OIII mono extract. It assigns Ha to red at 100% and assigns OIII to blue and green at 50% each, to mimic the straddling of blue and green. So this way you KNOW red is only Ha, and blue and green are only OIII (unlike the Airy disc, or other software's debayer interpolation).
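A rough numpy sketch of that extract-and-combine idea (not APP's actual code: it bins 2x2 superpixels instead of interpolating to full size, and the 50/50 green/blue weighting simply follows the description above):

```python
import numpy as np

def extract_channels(raw):
    # Pull the red pixels (Ha) and the green/blue pixels (OIII) out of an RGGB mosaic.
    ha = raw[0::2, 0::2]                                # red window -> Ha
    green = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # two green windows per superpixel
    blue = raw[1::2, 1::2]
    oiii = (green + blue) / 2.0                         # OIII straddles green and blue
    return ha, oiii

def hoo_combine(ha, oiii):
    # HOO combination as described above: Ha -> red 100%, OIII -> green and blue at 50% each.
    return np.stack([ha, 0.5 * oiii, 0.5 * oiii])

raw = np.random.default_rng(0).poisson(100, size=(8, 8)).astype(float)  # fake 8x8 mosaic
ha, oiii = extract_channels(raw)
rgb = hoo_combine(ha, oiii)
print(rgb.shape)  # (3, 4, 4): an RGB stack at superpixel resolution
```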

Now on to quad. Can you guess?
Yes, the other two passed wavelengths are so close to the original Ha and OIII that they are still passed by the wide window bandpasses: only the red window for SII and only the blue for Hb. The base mono chip cannot see the difference between Ha alone and Ha plus SII, only a higher luminance total. For OIII and Hb, the luminance count will be higher than just OIII on the blue pixels and the same as OIII on the green pixels, BUT it does not know how much of the blue is OIII or Hb, and so cannot pass this information to the software.

So in summary, the software cannot discriminate between Ha and SII in the Ha/SII pair, or between OIII and Hb in the OIII/Hb pair, because the signal passed by the Bayer windows cannot discriminate them. The software only sees the resulting luminance totals per pixel, interpreted through the Bayer matrix you tell it you have used. For a true representation, use the Ha and OIII extracts and the RGB combination to form your picture, and understand that red is Ha and SII, green is OIII, and blue is OIII and Hb.
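To make the 'cannot discriminate' point concrete, a toy calculation (made-up fluxes, ignoring quantum efficiency and noise) where two different Ha/SII mixes leave the red pixels with identical totals:

```python
# The red window passes both lines and reports a single number per pixel,
# so different Ha/SII mixes can be indistinguishable afterwards.
# The same logic applies to OIII and Hb sharing the blue window.
def red_total(ha, sii):
    return ha + sii

print(red_total(ha=120, sii=30))  # 150
print(red_total(ha=100, sii=50))  # 150 again: no software can tell these two cases apart
```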

 

So WHY use a quad over a dual? Two reasons.

One, the detail in a picture comes from the luminance, not the colour. So a quad will have a different luminance picture than the dual: the red count is higher where there is SII (not passed by the dual) and the blue count is higher in Hb areas (not passed by the dual), so there will be different detail in the luminance of a quad vs dual system, irrespective of the colours assigned.
Secondly, having a wider overall bandpass than the dual, a quad passes more total signal, therefore less exposure is required for the same image intensity.

And why use a dual or quad filter with OSC at all?

Well, they cut out all other wavelengths, so no light pollution, reduced moonlight, reduced gradients etc., but mainly they will create a far more detailed combined luminance image than you can get from broadband, on those wavelengths passed. As long, of course, as the object emits them!

 

To truly separate the wavelengths and give them different colours, you must use a true mono camera without any Bayer matrix windows, and dedicated Ha, OIII, SII and Hb filters.

 

Hope this helps and does not make the understanding worse!


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

That is a really good explanation indeed, thanks!


   
(@wrosing)
Hydrogen Atom
Joined: 5 years ago
Posts: 2
 

Note that the Bayer "filters" (red, green and blue) are leaky. If you look at the Sony spec sheets or, for example, the QHY specifications for their color cameras, you can see that each filter admits some light from the other filters' bands. With very careful reductions it is possible to subtract the cross-talk, but only to first order.
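A first-order version of that subtraction treats the measured R, G, B as a known 3x3 mixture of the "true" colours and inverts it; the matrix below is invented for illustration, not taken from any spec sheet:

```python
import numpy as np

# Invented leakage matrix: rows = measured R, G, B; columns = "true" R, G, B.
# Real values would come from the sensor's published spectral response curves.
crosstalk = np.array([
    [1.00, 0.10, 0.02],
    [0.15, 1.00, 0.12],
    [0.03, 0.20, 1.00],
])

measured = np.array([150.0, 80.0, 60.0])           # one pixel's measured R, G, B
corrected = np.linalg.solve(crosstalk, measured)   # first-order cross-talk removal
print(corrected)
```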

 

Wayne


   
(@dean-taylor)
Hydrogen Atom
Joined: 2 years ago
Posts: 2
 

Hi, can I please revive this conversation? I have a quad band filter; did anyone decide on a simple way to process this in APP?

 

Thanks

 

 


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 

@dean-taylor Hi Dean,

With a quadruple band filter, where both Ha and SII end up in R, it is impossible to separate all four channels. There is no way to split them apart, not for APP and not for any other software. Sorry.

Wouter


   
(@dean-taylor)
Hydrogen Atom
Joined: 2 years ago
Posts: 2
 

Hi Wouter, thank you for your response, and to all the others who replied previously. A very simple question was asked at the start of this thread and has not yet really been answered, and given the popularity of quad band filters with OSC there will be many people looking into processing techniques. I will be honest... most of the science here baffles me... but I get the basic concept: I understand you can't separate the channels of the quad band filter.

So for anyone else looking at ways to process a quad band image in APP, the answer has to be: you process as you would normally process an OSC set of images. The results are very good, with little to no light pollution, a massive reduction in noise, and longer exposures.

So the simple answer, in my opinion, is yes: APP responds to a quad band filter very well with the default OSC settings and algorithm.

Thanks. 


   