
QUAD band filter processing and more.


(@szymon)
Brown Dwarf Customer
Joined: 3 years ago
Posts: 12
 
Posted by: @heno

@szymon

I totally agree on this. But as long as we do not know how Mabula has designed the extraction algorithms for Ha and OIII, or whether they are based on pixel values of a passband around a central frequency (which may be sort of the same thing, I don't know), we will never be able to settle this discussion. That may be a good thing though; such discussions are great, and educational.

Mabula can do many things, but he cannot break the laws of physics :-). His algorithms won't be able to differentiate between different frequencies of light once they are collected by a sensor, translated into voltages, then sampled into digital format and recorded as files. I am sure he does clever things like writing his algorithms so that they are aware of what is in the data and can therefore emphasise areas in processing, but none of this is relevant to the particular question of combined filters vs single band filters 🙂


(@jw_duijndamhetnet-nl)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 96
 
Posted by: @heno

@jw_duijndamhetnet-nl

Maybe you are correct, maybe not. I'm thinking: if what you say is correct, and the Ha/SII is only selected based on the red pixels, what would be the point of having separate Ha and SII narrowband filters? The SII is "darker red" (has a lower frequency) than Ha. So if a nebula emits light at the SII frequency you won't capture it with a Ha filter, even though both filters capture only red light.

@heno,

I do think you're thinking about this the wrong way. A quad band filter is "just" a sort of light pollution filter, but one that allows only the narrow band emissions to pass. So we can image only in the narrow bands, where almost no light pollution is possible.

 

The point of a quad band is that it also makes it possible to capture SII; simply said, you're capturing a bit more red light!

 

Now to your comment about why we would have separate Ha and SII filters... we need those separate filters for monochrome sensors. When you capture mono pictures you can easily separate the emission bands.

In most nebulae the OIII, Ha and SII are the most prominent to "see". If you capture them separately with mono you can create stunning "false color images".

 

Now if you were to capture ONLY SII with your color camera, the Bayer mask on top of the sensor would not let the SII "pass fast enough", so you would have to make very long exposures and still not get enough detail.

This is why an SII filter is "only" useful for mono sensors.

Now, like I said, the quad band also lets SII pass, so very simply said you only let through a very small amount of extra red light, and in one shot you capture both the light and the dark red.

 

Cheers, Jan-Willem


(@jw_duijndamhetnet-nl)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 96
 
Posted by: @heno

@szymon

An Altair quad band filter is exactly what I have bought. That is why I'm so anxious to find out how these extraction algorithms work.
@Mabula: Where are you brother???? 🙂 

I can't speak for Mabula, but there is NO OTHER WAY than to use pixel-based separation!!!!

 

Like I said before (and szymon too!), the Ha and SII are both red. So only the red pixels will let through the Ha and SII data!! Both at the same time.

 

I also use a Ha-only filter. If I were to open a single unprocessed shot in APP with the Adaptive Airy Disc enabled, I would see a checkerboard pattern. This way you can see that only the red pixels capture data.

 

If you use the Ha algorithm, APP uses the data in the red pixels to calculate what the data would or should be at the surrounding green and blue pixel positions, based on a calculation between "the first red pixel and the following red pixel".

 

The Extract Ha and Extract OIII algorithms work almost like the normal Ha and OIII algorithms, but with slight differences in the calculations to optimise the data output created with a tri- or quad-band filter.
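
To make this concrete, here is a little sketch in Python/NumPy of the idea (my own illustration only, not Mabula's actual code):

import numpy as np

def extract_red(mosaic):
    # Toy "extract Ha" for an RGGB mosaic: the red samples sit at even
    # rows and even columns; the green and blue samples are discarded.
    h, w = mosaic.shape
    red = mosaic[0::2, 0::2]                     # quarter-resolution red plane
    # Fill the missing positions by duplicating the nearest red sample.
    # Real software interpolates between neighbouring red pixels instead.
    full = np.repeat(np.repeat(red, 2, axis=0), 2, axis=1)
    return full[:h, :w]

# A Ha-only exposure shows the checkerboard pattern in the raw mosaic:
mosaic = np.zeros((4, 4))
mosaic[0::2, 0::2] = 1.0                         # only red pixels see Ha
print(extract_red(mosaic))                       # full-resolution mono image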

 


(@vincent-mod)
Quasar Admin
Joined: 5 years ago
Posts: 4727
 

Phew, this is getting big. 🙂 I had to think about it a bit as well before I started, but if you break it down to a simple principle it may become more clear. A sensor only records light, period; it converts that into an electrical signal. So for a monochrome camera, having a sensor without any filters means it just converts all incoming light into a signal. It doesn't record a spectrum and has no idea about what it sees, only that it sees a signal.

Now when you want a specific band of the spectrum, the sensor won't be able to help you, but you can... you introduce a filter. The sensor still has no idea about what it sees, just that the signal became lower, but you know it was because of that filter and hence can process that signal as such in the software. Having a filter with 2 bands which are in the red part of the spectrum... the sensor again has no idea and simply records a bit more signal. Same thing for 4 bands.

So the reason why APP can't separate those bands is that there is no information in the signal available to do so. Only when you have an OSC camera with built-in R, G, B filters on top of the sensor (Bayer matrix) can APP separate those colors, as the camera then gives APP info about those signals, but it still doesn't say "in this red signal, this is the part belonging to Ha, this to SII". So it has to combine them.
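
To put a number on it (a toy example, assuming both bands land on the red pixel with equal sensitivity):

# Toy numbers: the red pixel can only record the SUM of the two bands.
ha, sii = 800.0, 200.0
print(ha + sii)    # 1000.0
ha, sii = 200.0, 800.0
print(ha + sii)    # 1000.0 -- an identical reading from a different mix
# One measurement, two unknowns: the split is lost at capture time.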


(@headworx)
White Dwarf Customer
Joined: 3 years ago
Posts: 16
 
Posted by: @vincent-mod

A sensor only records light, period; it converts that into an electrical signal. [...] So the reason why APP can't separate those bands is that there is no information in the signal available to do so. [...] So it has to combine them.

Vincent, your reasoning is very helpful, but it applies only to "standard" CMOS sensors. I mentioned Foveon in this thread a couple of times, for a reason. Foveon captures BOTH light intensity (the number of photons) AND frequency (the color) at each pixel. That seems like a perfect match for multi-band narrowband filters. But I'm not sure anyone is using Foveon for astrophotography. The Sigma Quattro-H would be a potentially great candidate here: https://www.sigma-global.com/en/cameras/sd-series/#sd-h .

--Simon


(@vincent-mod)
Quasar Admin
Joined: 5 years ago
Posts: 4727
 

Ah right, yes, that will of course work then, but I don't know anyone using it in the hobby.


(@jw_duijndamhetnet-nl)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 96
 

@headworx

Hi Szymon,

 

Indeed, a Foveon sensor looks very nice, but the big downside of this kind of sensor is that it only works well in bright conditions. In our hobby the red light needs to travel through two layers and will be very faint, so in dark conditions the red channel will be very weak.

 

I believe this kind of sensor isn't yet a good alternative to the Bayer mask or a mono camera. But the technology looks promising.


(@headworx)
White Dwarf Customer
Joined: 3 years ago
Posts: 16
 

I think I will give Foveon a try... The SD Quattro H goes easily to ISO 800 and this is the setting I have been using on my DSLR when capturing deep sky. It seems to be very easy to remove the filter in this camera ( https://www.dpreview.com/forums/post/61410892 ), and M42 adapters are available, so the exercise does not look too difficult... It will take some time, but I promise to report my results here. What may also be interesting is the open work on interpreting the native Foveon RAW format done by the Kalpanika team ( https://github.com/Kalpanika ).

--Simon


(@wvreeven)
Galaxy Admin
Joined: 4 years ago
Posts: 1649
 

I assume that you are talking about this sensor or something similar:

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This sensor indeed detects the wavelength of the incoming light and, based on that, produces a colour. But the colours are again limited to R, G and B. And this again means that Ha and SII can NOT be distinguished from each other.

The main difference from ordinary (Bayer) CMOS sensors is that each pixel is sensitive to all colours, leading to a higher spatial resolution and higher quantum efficiency (more light detected). But it is not a spectrograph in the sense that it outputs info about the wavelengths of the light received. Like I wrote above, it simply outputs R, G and B and that's it.

The wiki page also mentions cross-contamination of colours due to the way the sensor works, meaning that you may not get natural colours when you take pictures. With a quad filter I doubt there will be cross-contamination between G and R, since the wavelengths of OIII/Hb and Ha/SII are quite far apart, but there may be cross-contamination between B and G.

In any case, if you do purchase the camera then please post the results here. I am quite curious to see how these sensors perform!


(@headworx)
White Dwarf Customer
Joined: 3 years ago
Posts: 16
 
Posted by: @wvreeven

But the colours again are limited to R, G and B

No, they are not limited. Different wavelengths penetrate the sensor to different depths. The dependency of depth on frequency is continuous, not discrete. The processing software bins them into virtual discrete R, G, B bins. I'm not sure whether the RAW data has them binned or continuous though... I think it is continuous in the X3F file (Foveon proprietary) and discrete in DNG.

It took Kalpanika quite a lot of effort to get the colors right. This bodes well, meaning the analog frequency information is available. Also, considering the 4 narrow bands we would be interested in, the processing of the frequency data would be much simpler.

--Simon


(@wvreeven)
Galaxy Admin
Joined: 4 years ago
Posts: 1649
 
Posted by: @headworx

I think it is continuous in X3F file (Foveon proprietary) and discrete in DNG.

Wow that’s fantastic. Thanks for correcting me on this point!


(@headworx)
White Dwarf Customer
Joined: 3 years ago
Posts: 16
 

So FWIW, I started a discussion with the Foveon community: https://www.dpreview.com/forums/thread/4441374 .

I would say it is promising, considering this color frequency response characteristic:

[image: Foveon frequency response, no IR]

--Simon


 Heno
(@heno)
Red Giant Customer
Joined: 5 years ago
Posts: 113
Topic starter  

I see this has turned into a discussion about Foveon sensors. I have not read all of that.
So let me repeat myself: I have no wish to split Ha from SII or Hb from OIII. It probably cannot be done anyway. I just want to be assured that if I use the Extract Ha algorithm I also get the SII data, and likewise with OIII and Hb.
If it does, fine, I'm happy. If not, can the developer do anything about it?
The rest of us can provide as many educated, logical and clever guesses and arguments as we may, but I personally will not be satisfied until Mabula tells me one way or the other.
Helge.


(@headworx)
White Dwarf Customer
Joined: 3 years ago
Posts: 16
 

I think, judging by this discussion, the SII is included when you extract Ha, and Hb is included when you extract OIII, as long as your filter passes them through.

Speaking of that - do you have examples (photos) where including SII and Hb improves them? Also, as I want to push on with my Foveon exercise, what object would you recommend taking photos of, to gather good 4-channel narrowband data?

--Simon


(@chagen)
Hydrogen Atom Customer
Joined: 2 months ago
Posts: 2
 

@heno

I am just starting out, still on trial...

In the center of the Bortle 9+ skies of Las Vegas, I have to use a narrowband filter. I was searching for which box to check on the loading list and found this question... the answer to which means that I have even more to learn.

I am using the Radian quad band filter, and am able to take 900-second exposures! (Without the filter, everything is white from skyshine.)

The published specs are:

Transmission lines:

H-Beta: 79% Peak Transmission, 5nm FWHM

O-III: 97% Peak Transmission, 4nm FWHM

H-Alpha: 87% Peak Transmission, 4nm FWHM

S-II: 90% Peak Transmission, 4nm FWHM

So: is it possible to do 4 separate processings for these, or, because the lines are so close together, do we just do 2?


(@vincent-mod)
Quasar Admin
Joined: 5 years ago
Posts: 4727
 

No, it's not, not when you have a color camera (is that the case?). H-alpha and SII are both basically red, which means that signal will be picked up by the red pixels on your sensor. They will be "mixed" there. Same principle for the other signals. If you have a mono sensor, you can take them separately and combine them in any way you want. Still, the result will be an RGB image, so combining them all will be a bit tricky.


 Heno
(@heno)
Red Giant Customer
Joined: 5 years ago
Posts: 113
Topic starter  

@chagen

I can tell you what I have done with a color camera and a quad band filter. The intention, however, was to combine frames from different scopes and cameras.
After loading all required files I opened the RAW/FITS pane and selected the algorithms SII, OIII, Ha and Hb in turn.
In the Calibration pane I checked "split channels" and pressed "Save calibrated files". APP will then split the colors and save the frames. I can now use these to combine with monochrome frames from a different camera. How useful this would be if you don't intend to combine the result with other data, I'm not sure.
As Vincent said, you only have RGB data and the various bands will be mixed into these colors. I have made several images with my RASA 8 + ASI294MC + quad band filter without splitting out the various bands. The color calibration is a bit challenging as you are lacking major parts of the spectrum, but it is fully possible. Good luck.

Helge


(@chagen)
Hydrogen Atom Customer
Joined: 2 months ago
Posts: 2
 

Thanks for the reply.

I am too new to astrophotography to have useful opinions; that said, the narrowband filter does an amazing job of defeating skyshine for me. Because of this I can learn how to use astrophoto tools; no more 2 hours to a dark site and 2 hours back.

Other posts have mentioned that RGGB sensors have sensitivities in R, G and B, and the sensitivity graphs I have looked at show that these sensors have a wide color distribution. So it would seem that, when we use the quad filter, the R, G, B signals would be mixtures: the H-alpha/SII pair would give lots of R signal and some G and B, while the H-beta/OIII pair would give roughly 2 parts blue to 1 part green.
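
So the mixing is basically a small linear model. A sketch in Python with made-up sensitivity numbers (not any real sensor's measured curves):

import numpy as np

# Rows: R, G, B pixel responses; columns: Hb, OIII, Ha, SII.
# These weights are illustrative guesses, not a measured QE curve.
response = np.array([
    [0.00, 0.05, 0.90, 0.85],   # red dye: passes Ha and SII
    [0.35, 0.60, 0.05, 0.02],   # green dye: roughly 1 part of Hb/OIII
    [0.65, 0.40, 0.00, 0.00],   # blue dye: roughly 2 parts of Hb, some OIII
])
bands = np.array([1.0, 2.0, 3.0, 0.5])   # Hb, OIII, Ha, SII intensities
rgb = response @ bands                    # what the camera actually records
print(rgb)   # three numbers from four unknowns -- the mixing is one-way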

             



(@andybooth)
White Dwarf Customer
Joined: 4 months ago
Posts: 18
 

OK, I have in the past gone through this thinking, as I have both dual and quad band filters, but explaining it in words is not easy. Let me try; first I will explain dual band, which I hope will help.

First, the base chip is mono, and simply reacts to a photon of any frequency hitting it. When it receives one, that creates a 'luminance' signal for that pixel.

Above the mono chip sits a Bayer matrix, a grid of coloured filters aligned with the mono pixels. They are red, green and blue windows, which pass only those colour frequencies and reject the rest. However, the filters have a wide passband, as already said in a previous post. If a red photon hits a red window, it will pass through and create a luminance signal on that red-assigned pixel. If a red photon hits a blue or green pixel, it will not pass, and the blue- or green-assigned pixel gets no luminance.

The software only reads the total luminance hits on each mono pixel and, by knowing the matrix, assigns a colour (R, G or B) to that total luminance count. When you specify the matrix, RGGB for example, it 'knows' that the luminance count on a pixel under a red window must come from red photons, but it has no idea of the frequency except that it is within the wide red bandpass, or it would not have received a hit.

So, on to the dual/quad bit.

The dual filter passes Ha and OIII in a 7nm bandpass each. This provides a clean, narrow red signal for Ha, which is in the middle of the Bayer matrix red window bandpass, so it is always passed by a red pixel and rejected by the blue and green pixels. So a Ha object will only provide luminance hits on 1/4 of the pixels on an RGGB chip.

The OIII signal unfortunately straddles both the blue and green Bayer windows, so it will be rejected by the red pixels but will provide luminance hits on both blue and green pixels. So OIII provides signal on 3/4 of an RGGB chip.

Together, on an object emitting both Ha and OIII (not all objects do!), all pixels of the chip receive signal, and with a dual band you can further say the red pixel signal is Ha and the combined blue AND green pixel signal is OIII.

The software, with the Airy disc setting, decodes this as just a colour picture made from all three channels as-is, and interpolates any missing pixels, which gives us yellow etc. For our dual filter, we lose the identity of Ha and OIII due to this interpolation. This is true for any debayer software. The debayer process intelligently makes up 3 of the 4 pixels for the red channel, 2 of the 4 pixels for the green channel and 3 of the 4 pixels for the blue channel. This is why you do get a sort of full colour image from dual and quad filters when using the Airy method, but the colours do not represent the actual wavelengths that created them.

However, if you use the Ha extract, the software only extracts the red pixels and uses red data only to provide a full-size mono image, so you KNOW it is from Ha signal only.
The OIII extract only uses the blue and green pixels, and uses only that data to create a full-size picture, so you know it is from OIII signal only.

Then in the RGB Combine module you use the HOO formula, which asks for your Ha mono extract and OIII mono extract. It assigns Ha to red at 100% and assigns OIII to blue and green at 50% each, to mimic the straddling of blue and green. So this way you KNOW red is only Ha, and blue and green are only OIII (unlike the Airy disc, or other software's debayer interpolation).
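
In NumPy terms the weighting looks something like this (my own sketch, not APP's actual RGB Combine code):

import numpy as np

def hoo_combine(ha, oiii):
    # ha and oiii are full-size mono extracts of the same shape.
    # Red gets 100% of Ha; green and blue each get 50% of OIII,
    # mimicking how OIII straddles the green and blue Bayer windows.
    return np.dstack([ha, 0.5 * oiii, 0.5 * oiii])

ha = np.random.rand(8, 8)       # stand-ins for the real extracts
oiii = np.random.rand(8, 8)
rgb = hoo_combine(ha, oiii)     # an (8, 8, 3) false-colour HOO image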

Now on to quad. Can you guess?
Yes, the other two passed wavelengths are so close to the original Ha and OIII that they are still passed by the wide window bandpasses: only the red window for SII and only the blue for Hb. The base mono chip cannot see the difference between Ha alone and Ha plus SII, only a higher luminance total. For OIII and Hb, the luminance count will be higher than just OIII on the blue pixels, and the same as OIII on the green pixels, BUT the chip does not know how much of the blue is OIII or Hb, and so cannot pass this info to the software.

So in summary, the software cannot discriminate between Ha and SII in the Ha/SII pair, or between OIII and Hb in the OIII/Hb pair, as the signal being passed by the Bayer windows cannot discriminate them. The software only sees the resulting luminance totals per pixel, using the Bayer matrix you tell it you have used. For a true representation, use the Ha and OIII extracts and RGB combination to form your picture, understanding that red is Ha and SII, green is OIII, and blue is OIII and Hb.
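
As a tiny numeric illustration of that summary (made-up intensities, with OIII split evenly over green and blue for simplicity):

# Toy per-pixel totals for a quad filter on an RGGB sensor:
ha, sii, oiii, hb = 3.0, 0.5, 2.0, 1.0
red   = ha + sii           # 3.5: Ha and SII both pass the red window
green = 0.5 * oiii         # 1.0: the green window sees part of OIII
blue  = 0.5 * oiii + hb    # 2.0: the blue window sees OIII plus Hb
# Neither red (3.5) nor blue (2.0) reveals how its total splits
# between the two bands that produced it.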

 

So WHY use a quad over a dual? Two reasons.

One, the detail in a picture comes from the luminance, not the colour. So a quad will have a different luminance picture than the dual: the red count is higher where there is SII (not passed by the dual) and the blue count is higher in Hb areas (not passed by the dual), so there will be different detail in the luminance of a quad vs a dual system, irrespective of the colours assigned.
Secondly, having a wider overall bandpass than the dual, a quad will pass a higher overall signal, therefore less exposure is required for the same image intensity.

And why use a dual or quad filter with an OSC at all?

Well, they cut out all other wavelengths, so no light pollution, reduced moonlight, reduced gradients, etc., but mainly it will create a far more detailed combined luminance image than you can get from broadband, on those wavelengths passed. As long, of course, as the object emits them!

 

To truly separate the wavelengths and give them different colours, you must use a true mono camera without any Bayer matrix windows, and dedicated Ha, OIII, SII and Hb filters.

 

Hope this helps and does not make the understanding worse!


(@vincent-mod)
Quasar Admin
Joined: 5 years ago
Posts: 4727
 

That is a really good explanation indeed, thanks!

