
 

QUAD band filter processing and more.

44 Posts
10 Users
14 Likes
4,009 Views
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

1.
I'm using a quad band filter with my ASI294MC/RASA8 combination. It gives me Ha/SII and Hb/OIII in two passbands, each 35 nm wide. Which extract algorithm(s) should I use to ensure all available data is extracted?
2.
The camera will not manifest itself as a color camera unless I use the "force Bayer CFA" option. As I understand it, this is because of missing information in the FITS header. What should that information be, and who should provide it, SGP or ZWO? (Btw, the tool-tip for this is very cryptic for a non-expert!)
3.
Could use of the "force Bayer CFA" option have any adverse effect on the processing and final result?

Helge


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

1. There are multiple algorithms; in this case there is no 4-band filter option, so you would need to select those bands separately and then process.

2. The application that creates or manipulates the FITS files should put that info in the header, presumably ZWO in this case. But forcing it is the work-around for that.

3. None at all

 


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

Vincent, thanks for your reply. Regarding 1, I discussed this with you in another thread some months back, but then I did not have the filter; I was just curious. Now I have tried most of the algorithms except extracting each band separately as for narrowband filters, and the standard Airy disk algorithm seems to give the best result.

With tri band and quad band filters becoming increasingly popular, it might be worth looking into whether the "Extract Ha/OIII" algorithms could be widened to cover SII and Hb. It should not have any adverse effect for those who have two-band filters, because their passbands are covered anyway.
But I'm not sure how this extract of data is done. Maybe you or Mabula could shed some light on that?


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 
Posted by: @heno

it might be worth looking into the "Extract Ha/OIII" algorithms if they could be widened to cover SII and Hb

+1 for that!

--Simon


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 

Out of curiosity, what would the purpose of this widening be? Would you like to have 4 separate images (one for SII, one for Ha, one for OIII and one for Hb)? Because that, technically, is impossible. The camera has pixels that respond to R, G and B and nothing else. The R pixels catch both Ha and SII at the same time so they cannot be separated. And a similar situation exists for Hb and OIII. Depending on the wavelength sensitivity of the sensor of the camera, OIII and Hb may be caught by the G pixels, the B pixels or both and it would be impossible to separate both channels.

Wouter


   
(@szymon)
White Dwarf
Joined: 5 years ago
Posts: 12
 

Indeed.  From a scientific point of view, if you are trying to analyse individual emissions, a quadband or triband filter is useless -- you cannot differentiate between photons from individual channels.  However, from a purely artistic colour and detail point of view, your "red" channel (which will contain Ha and Sii) will have both merged together, and it will have extra detail which may work very well depending on the target.  Likewise you could merge the green and blue channels into a single image and have a combined Oiii/Hb channel, which again will have more detail than just the Oiii.  Processing these in say a HOO palette (which really would be a H+Sii/Oiii+Hb/Oiii+Hb) or similar would yield good results.  More detail is going to look good from a pure display point of view!  I've been considering getting one of these filters to use as a sort of "super luminance" for my narrowband images (and I use a mono camera)...


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 
Posted by: @wvreeven

that, technically, is impossible

Right... Does that mean that using a 4-band filter does not make sense with OSC, or should we just be using the standard Airy disk instead?

--Simon


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

You're absolutely correct, Wouter and Simon. So yes, it doesn't make sense to add more bands to the algorithm. And @headworx, I indeed think a 4-band filter won't have much use in the sense of wanting separate channels, as the bands will always be merged on either the R, G or B pixels of the sensor. You can still use the 4-band filter; it will benefit you to separate those 2 "groups" and process them independently. As said above, it still only passes through certain bandwidths, so for certain targets that will be really nice. A mono sensor with separate filters is the only way to really separate the signals.


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @headworx
Right... Does that mean that using 4-band filter does not make sense with OSC or should we just be using the standard airy disk instead?

Using a 4-band filter with OSC certainly makes sense since it lets pass through almost only useful light and blocks almost all light pollution.


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 
Posted by: @wvreeven
Posted by: @headworx
Right... Does that mean that using 4-band filter does not make sense with OSC or should we just be using the standard airy disk instead?

Using a 4-band filter with OSC certainly makes sense since it lets pass through almost only useful light and blocks almost all light pollution.

Agreed. But will Ha and Sii, and Oiii and Hb, be distinguished as different colors? Probably yes, as even if Ha and Sii are mostly red, they still activate a bit of the green (and blue?) pixels...

 

--Simon


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 
Posted by: @headworx

Agreed. But will Ha and Sii, and Oiii and Hb, be distinguished as different colors? Probably yes, as even if Ha and Sii are mostly red, they still activate a bit of the green (and blue?) pixels...

No they will NOT be distinguished as different colours. This is what I wrote before:

Posted by: @wvreeven

Would you like to have 4 separate images (one for SII, one for Ha, one for OIII and one for Hb)? Because that, technically, is impossible. The camera has pixels that respond to R, G and B and nothing else. The R pixels catch both Ha and SII at the same time so they cannot be separated. And a similar situation exists for Hb and OIII. Depending on the wavelength sensitivity of the sensor of the camera, OIII and Hb may be caught by the G pixels, the B pixels or both and it would be impossible to separate both channels.

On top of that, Ha and SII only activate the Red pixels, not the Green and Blue ones.


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 

I am confused... From what you are saying, an OSC camera would only be able to reproduce 3 colors (R, G, B). But somehow it does reproduce full color information. So when we cut out light pollution (by using the 4-band filter), how are we still getting the color information for the colors that pass through?

And a related question: has anyone tried using a Foveon sensor for astrophotography? It basically gives full "analog" color information for each pixel.

--Simon


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

Guys, it was never my intention to have the data split into four different bands, if that can be done at all.

I don't know how the "Extract Ha/OIII" algorithms work at all. But I'm quite sure that when I use these algorithms I don't get the SII and Hb data; at least, that's what it looks like. My data seem more complete when I use the Airy disk algorithm, which makes sense to me if SII and Hb are not included in the former. The downside of the Airy disk algorithm is that there is only one image, not two to play with and combine.
So my question still stands: can the "Extract Ha/OIII" algorithm be altered to also include Hb and SII?

 


   
(@jan-willem)
Neutron Star
Joined: 7 years ago
Posts: 105
 

@heno

Hi all,

 

I did read the whole post and also want to pitch in.

First of all, when you use an OSC camera you've got a sensor with a Bayer mask on top of it. Every pixel has a color filter, 1 red, 2 green and 1 blue per group of four; that's why you get an RGGB mask.

The HA and S2 lines are both in the red part of the spectrum; the O3 and Hb lines are both in the blue/green part.

 

When you choose the extract HA algorithm, APP looks only at the RED pixels and interpolates the data of the RED pixels to "calculate" what the data would be at the other pixel positions (the blue and green pixels don't have any HA or S2 data at all).

When you choose the extract O3 algorithm, APP will only look at the Blue and Green pixels and "calculate" what the data would be at the Red pixel positions (the Red pixels also don't get any data, because O3 and Hb are both in the blue/green part of the spectrum).
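In rough code terms, the red-pixel extraction described above might look like the sketch below. This is only an illustration of the principle: the function name `extract_ha` is hypothetical, and APP's real interpolation is certainly smarter than the nearest-neighbour fill used here.

```python
import numpy as np

def extract_ha(bayer):
    """Toy Ha extraction from an RGGB mosaic.

    In an RGGB pattern the red-filtered pixels sit at even rows and
    even columns; all Ha *and* S2 light lands on those same pixels.
    """
    red = bayer[0::2, 0::2].astype(float)  # keep only the real red samples
    # Nearest-neighbour upsample back to full resolution, standing in
    # for APP's real (smarter) interpolation of the missing positions.
    return np.repeat(np.repeat(red, 2, axis=0), 2, axis=1)

mosaic = np.arange(16, dtype=float).reshape(4, 4)
ha = extract_ha(mosaic)  # full-size mono image built from red pixels only
```

The same idea applies to the O3 extraction, just starting from the green and blue sample positions instead.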

 

Alternatively, you can use the adaptive Airy disk and get an instant color image. The downside of this algorithm (only when you use a quad or tri band filter) is that the O3 and Hb data are very weak, so in the end result you get little blue and green color.

 

When you use the extract HA or O3 algorithms you get 2 mono images.

The HA algorithm gives you a mono picture with all the HA and S2 data. They can't be split, because they are both in the red spectrum!!!

Likewise, the O3 algorithm gives you a mono image with only the O3 and Hb data.

 

If you combine the 2 mono images you're able to increase the intensity of the blue and green channels, so in the end result your picture will look more like a regular RGB or "one shot" picture.
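That combine step could be sketched like this (my own illustration, not APP's actual combine tool; the `boost` factor and simple max-normalisation are assumptions):

```python
import numpy as np

def combine_hoo(ha, oiii, boost=2.0):
    """Map the Ha+S2 mono image to red and the O3+Hb mono image to
    green and blue, boosting the usually weaker blue/green signal."""
    r = ha / ha.max()
    gb = np.clip(boost * oiii / oiii.max(), 0.0, 1.0)
    return np.dstack([r, gb, gb])  # an HOO-style (really H+S2 / O3+Hb) palette

rgb = combine_hoo(np.full((2, 2), 4.0), np.full((2, 2), 1.0))
```

In practice you would stretch each mono image before combining rather than boost linearly, but the channel mapping is the point here.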

 

Hope this helps.

 

Clear skies Jan-Willem


   
(@szymon)
White Dwarf
Joined: 5 years ago
Posts: 12
 
Posted by: @headworx

I am confused... From what you are saying an OSC camera would be able to only reproduce 3 colors (R,G,B). But somehow it does reproduce the full color information. So why cutting out light pollution (by using the 4-band filter) are we using the color information for colors that pass through?

And a related question, has anyone tried using a Foveon sensor for astrophotography? It basically gives a full "analog" color information for each pixel.

--Simon

All colours are made up of a mix of those three colours; red, green and blue.  They are the "primary" colours, and when you mix them by adding them to each other, you get other colours.  For example, if you mix red and green you get yellow, if you mix red and blue you get purple, etc.  This is commonly known as "additive mixing".  With the right mix of each of these colours, you can make up any colour in your "full colour" package.

One important point is that your multi-narrowband filter isn't just blocking out "light pollution" -- it's cutting out _all_ light other than that which it explicitly allows through!  When you use the "4 band" filter, you are only allowing a subset of light frequencies to hit your sensor.  Your sensor (on an OSC camera) also has a built-in filter in front of the pixels, which likewise only allows a subset of light frequencies through.  The built-in filter only allows through:

  • "red" light (roughly 600-700nm wavelength) to the "red" pixels
  • "green" light (roughly 500-600nm wavelength) to the "green" pixels
  • "blue" light (roughly 400-500nm wavelength) to the "blue" pixels.  

(Those are rough values, I don't remember the exact boundaries).  The four narrow bands that you are allowing through are roughly:

  • Hydrogen Alpha 650-660nm (centre 656nm, this is "red" light)
  • Sulphur ii 667-677nm (centre 672nm, this is "red" light)
  • Oxygen iii 491-506nm (centre 496nm and 501nm, this is "blue/green" light)
  • Hydrogen Beta 481-491nm (centre 486nm, this is "blue" light)

That is why those emissions get picked up by individual pixels -- they fall within the frequency for the colour filter.
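That mapping can be written down directly. The passband edges below are just the rough values from the post (real dye response curves overlap), and `channels_for` is an illustrative helper, not anything from APP:

```python
# Rough Bayer dye passbands in nm, per the post above (real curves overlap).
BANDS = {"R": (600, 700), "G": (500, 600), "B": (400, 500)}
# Emission line centres in nm.
LINES = {"Ha": 656, "SII": 672, "OIII": 501, "Hb": 486}

def channels_for(wavelength_nm):
    """Which Bayer channel(s) a line lands on, per the rough bands."""
    return [c for c, (lo, hi) in BANDS.items() if lo <= wavelength_nm < hi]

for name, wl in LINES.items():
    print(name, "->", channels_for(wl))
# Ha and SII both land on R, so an OSC sensor records them as one
# indistinguishable signal; OIII and Hb sit around the G/B boundary.
```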

I hope that helps.

-simon


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 

Yes, I think I know how this works in principle. But I was thinking that Ha red is a bit different than Sii red when captured by a Bayer sensor (the NB filter does not change anything here...). From what you say, both reds are indistinguishable. If so, the only function of a quad band filter is removal of light pollution. IOW, it is not possible to "see" the separate Sii and Ha colors as different colors using OSC (with or without a filter). Correct?

 

- Simon


   
Jan-Willem reacted
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@jw_duijndamhetnet-nl

Maybe you are correct, maybe not. I'm thinking: if what you say is correct, and Ha/SII is selected based only on the red pixels, what would be the point of having separate Ha and SII narrowband filters? SII is a "darker red" (has a lower frequency) than Ha, so if a nebula emits light at the SII frequency you won't capture it with a Ha filter, even though both filters pass only red light.


   
(@szymon)
White Dwarf
Joined: 5 years ago
Posts: 12
 
Posted by: @headworx

Yes, I think I know how this works in principle. But I was thinking that Ha red is a bit different than Sii red when captured by a Bayer sensor (the NB filter does not change anything here...). From what you say, both reds are indistinguishable. If so, the only function of a quad band filter is removal of light pollution. IOW, it is not possible to "see" the separate Sii and Ha colors as different colors using OSC (with or without a filter). Correct?

 

- Simon

Right — you cannot separate the Sii from the Ha using a filter which passes both.  This is why people use monochrome and narrowband filters.  On the combined filter you will get both of them.  That can be cool too, and can produce some great images (I have seen many examples on the Altair Facebook group for example).


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@szymon

I totally agree on this. But as long as we do not know how Mabula has designed the extraction algorithm for Ha and OIII, whether it is based on pixel values or on a passband around a central frequency (which may be sort of the same thing, I don't know), we will never be able to settle this discussion. That may be a good thing though; such discussions are great, and educational.


   
 Heno
(@heno)
Neutron Star
Joined: 7 years ago
Posts: 131
Topic starter  

@szymon

An Altair quad band filter is exactly what I have bought. That is why I'm so anxious to find out how these extraction algorithms work. 
@Mabula: Where are you brother???? 🙂 


   
(@szymon)
White Dwarf
Joined: 5 years ago
Posts: 12
 
Posted by: @heno

@szymon

I totally agree on this. But as long as we do not know how Mabula has designed the extraction algorithm for Ha and OIII, whether it is based on pixel values or on a passband around a central frequency (which may be sort of the same thing, I don't know), we will never be able to settle this discussion. That may be a good thing though; such discussions are great, and educational.

Mabula can do many things, but he cannot break the laws of physics :-).  His algorithms won't be able to differentiate between different frequencies of light once they are collected by a sensor, translated into voltages, sampled into digital format and recorded as files.  I am sure he does clever things like writing his algorithms so that they are aware of what is in the data and can emphasise areas in processing, but none of that is relevant to the particular question of combined filters vs single band filters 🙂


   
(@jan-willem)
Neutron Star
Joined: 7 years ago
Posts: 105
 
Posted by: @heno

@jw_duijndamhetnet-nl

Maybe you are correct, maybe not. I'm thinking: If what you say is correct, and the Ha/SII is only selected based on the red pixel, what would be the point of having separate Ha and SII narrow band filters. The SII is "darker red" (have a lower frequency) than Ha. So if a nebula emits light in SII frequency you won't capture it with a Ha filter. Even though they both capture only red light.

@heno,

I think you're looking at it the wrong way. A quad band filter is "just" a sort of light pollution filter. But this filter allows only the narrowband emissions to pass, so we image only in the narrow bands where almost no light pollution is possible.

 

The point of a quad band is that it also makes it possible to capture S2; simply said, you're capturing a bit more red light!

 

Now, to your comment about why we would have separate Ha and S2 filters: we need those for monochrome sensors. When you capture mono pictures you can easily separate the emission bands.

In most nebulae the O3, Ha and S2 are the most prominent to "see". If you capture them separately with mono you can create stunning "false color" images.

 

Now, if you were to capture ONLY S2 with your color camera, the Bayer mask on top of the sensor would not let the S2 "pass fast enough", so you would have to make very long exposures and still not get enough detail.

This is why an S2 filter is "only" useful for mono sensors.

Like I said, the quad band also passes S2, so very simply said, you let through a small amount of extra red light and in one shot capture both the light and the dark red.

 

Cheers Jan-Willem


   
(@jan-willem)
Neutron Star
Joined: 7 years ago
Posts: 105
 
Posted by: @heno

@szymon

An Altair quad band filter is exactly what I have bought. That is why I'm so anxious to find out how these extraction algorithms works. 
@Mabula: Where are you brother???? 🙂 

I can't speak for Mabula, but there is NO OTHER WAY than to use pixel-based separation!!!!

 

Like I said before (and szymon did too!), HA and S2 are both red, so only the red pixels will let through the HA and S2 data!! Both at the same time. 

 

I also use a Ha-only filter. If I open a single unprocessed shot in APP with the adaptive Airy disk enabled, I see a checkerboard pattern. This way you can see that only the red pixels capture data.

 

If you use the HA algorithm, APP uses the data in the red pixels to calculate what the data would or should be in the surrounding green and blue pixel positions, based on a calculation between "the first red pixel and the following red pixel".
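A 1-D toy version of that red-to-red interpolation, along a single mosaic row (illustrative only; APP's actual scheme is not documented in this thread):

```python
import numpy as np

# One row of an RGGB mosaic: red samples sit at the even columns,
# the odd columns carry no Ha/S2 signal at all.
row = np.array([10.0, 0.0, 14.0, 0.0, 18.0, 0.0])
red_cols = np.arange(0, row.size, 2)

# Fill the in-between positions from the neighbouring red samples.
full = np.interp(np.arange(row.size), red_cols, row[red_cols])
print(full)  # gaps become the average of their red neighbours
```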

 

The extract HA and O3 algorithms work almost like the normal Ha and O3 algorithms, but have slight differences in the calculations to optimise the data extracted with a tri or quad band filter.

 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Phew, this is getting big. 🙂 I had to think about it a bit as well before I started, but if you break it down to a simple principle it may become clearer. A sensor only records light, period; it converts that into an electrical signal. So for a monochrome camera, having a sensor without any filters means it just converts all incoming light into a signal. It doesn't record a spectrum and has no idea what it sees, only that it sees a signal. Now when you want a specific band of the spectrum, the sensor won't be able to help you, but you can... you introduce a filter. The sensor still has no idea what it sees, just that the signal became lower, but you know it was because of that filter and hence can process that signal as such in the software. Having a filter with 2 bands which are in the red part of the spectrum... the sensor again has no idea and simply records a bit more signal. Same thing for 4 bands. So the reason why APP can't separate those bands is that there is no information in the signal available to do so. Only when you have an OSC camera with built-in R, G, B filters on top of the sensor (Bayer matrix) can APP separate those colors, as the camera then gives APP info about those signals. But still it doesn't say "in this red signal, this is the part belonging to Ha, this to SII". So it has to combine them.


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 
Posted by: @vincent-mod

Fjew this is getting big. 🙂 I had to think about it a bit as well before I started, but if you break it down to a simple principle it may become more clear. A sensor only records light, period, it converts that into an electrical signal. So for a monochrome camera, having a sensor without any filters means it just converts all incoming light into a signal, it doesn't record a spectrum and has no idea about what it sees, only that is sees a signal. Now when you want to have a specific band of the spectrum, the sensor won't be able to help you, but you can... you introduce a filter. The sensor still has no idea about what it sees, just that the signal became lower, but you know it was because of that filter and hence can process that signal as such in the software. Having a filter with 2 bands which are in the red part of the spectrum.... the sensor has again no idea and simply records a bit more signal. Same thing for 4 bands. So the reason why APP can't separate those bands is because there is no information in the signal available to do so. Only when you have an OSC camera with built-in R,G,B filters on top of the sensor (bayer matrix) can APP separate those colors as the camera does then give APP info about those signals, but still it doesn't say "in this red signal, this is the part belonging to Ha, this to SII". So it has to combine them.

Vincent, your reasoning is very helpful, but it applies only to "standard" CMOS sensors. I mentioned Foveon in this thread a couple of times for a reason. Foveon captures BOTH light intensity (the number of photons) AND frequency (the color) at each pixel. That seems like a perfect match for multi-band narrowband filters. But I'm not sure anyone is using Foveon for astrophotography. The Sigma Quattro-H would be a potentially great candidate here: https://www.sigma-global.com/en/cameras/sd-series/#sd-h .

--Simon


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Ah right, yes, that will of course work, but I don't know anyone using it in the hobby.


   
(@jan-willem)
Neutron Star
Joined: 7 years ago
Posts: 105
 

@headworx

Hi Szymon,

 

Indeed, a Foveon sensor looks very nice, but the big downside of this kind of sensor is that it only works well in bright conditions. In our hobby the red light needs to travel through two layers and will be very faint, so in dark conditions the red channel will be very weak.

 

I believe this kind of sensor isn't yet a good alternative to a Bayer mask or mono camera, but the technology looks promising.


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 

I think I will give Foveon a try... The SD Quattro H goes easily to ISO 800 and this is the setting I have been using on my DSLR when capturing deep sky. It seems to be very easy to remove the filter in this camera ( https://www.dpreview.com/forums/post/61410892 ), and M42 adapters are available, so the exercise does not look too difficult... It will take some time, but I promise to report my results here. What may also be interesting is the open work on interpreting the native Foveon RAW format done by the Kalpanika team ( https://github.com/Kalpanika ).

--Simon


   
(@wvreeven)
Quasar
Joined: 6 years ago
Posts: 2133
 

I assume that you are talking about this sensor or something similar:

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This sensor indeed detects the wavelength of the light entering and based on that produces a colour. But the colours again are limited to R, G and B. And this again means that Ha and SII can NOT be distinguished from each other.

The main difference with CMOS sensors is that each pixel is sensitive to all colours, leading to a higher spatial resolution and higher quantum efficiency (more light detected). But it is not a spectrograph in the sense that it outputs info about the wavelengths of the light received. Like I wrote above, it simply outputs R, G and B and that's it.

The wiki page also mentions cross-contamination of colours due to the way the sensor works meaning that you may not get natural colours when you take pictures. With a quad filter I doubt there will be cross-contamination between G and R since the wavelengths of OIII/Hb and Ha/SII are quite far apart but there may be cross-contamination between B and G.

In any case, if you do purchase the camera then please post the results here. I am quite curious to see how these sensors perform!


   
(@headworx)
Main Sequence Star
Joined: 5 years ago
Posts: 20
 
Posted by: @wvreeven

But the colours again are limited to R, G and B

No, they are not limited. Different wavelengths penetrate the sensor to different depths, and the dependency of depth on frequency is continuous, not discrete. The processing software bins them into virtual discrete R, G, B bins. I'm not sure whether the RAW data has them binned or continuous though... I think it is continuous in the X3F file (Foveon proprietary) and discrete in DNG.

It took Kalpanika quite a lot of effort to get the colors right. This bodes well, meaning the analog frequency information is available. Also, considering the 4 narrow bands we would be interested in, the processing of the frequency data would be much simpler.

--Simon


   