Binning?

9 Posts
4 Users
0 Likes
5,014 Views
(@1llusiveastro)
Brown Dwarf
Joined: 6 years ago
Posts: 4
Topic starter  

Hello,

I'm enjoying APP so far. I have one question - is it able to bin the integrated image? I notice you can select a "scale", but I don't think that's what I'm looking for. I'd like to have the option to combine the nearest 4 pixels into one, to increase S/N and greatly increase sensitivity. The resulting image would be half the length and width. I have done this in a program called Sequator with really good results. It could be a killer feature.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi 1llusiveAstro,

That's great to hear 😉

Actually binning your image is not possible at the moment, but a tool for this could be added fairly easily.

On the other hand, what you want to accomplish is already possible in APP using the batch rotate/resize tool.

You simply need to select

  • nearest neighbour data interpolation
  • and a resize factor of 0.50.

In a down-scaling operation, the data interpolation will in this case use 4 pixels to reconstruct each new pixel ;-). Since nearest neighbour is used, the result is the average of those 4 pixels, just like binning.
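As a minimal numpy sketch of what such a 2x2 average amounts to (an illustration of the idea only, not APP's actual code):

```python
import numpy as np

def bin2x2_average(img):
    """Average each 2x2 block of a 2-D image into one output pixel."""
    h, w = img.shape
    h, w = h - h % 2, w - w % 2                # drop an odd last row/column
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Averaging 4 pixels of uncorrelated noise lowers the noise by ~sqrt(4) = 2,
# at the cost of half the resolution in each direction.
noise = np.random.normal(0.0, 1.0, size=(1024, 1024))
print(noise.std())                   # ~1.0
print(bin2x2_average(noise).std())   # ~0.5
```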

The upside of this method is that it is much more flexible than plain binning:

  • You can use any rescale factor and
  • you can use more advanced data interpolation kernels that will preserve the resolution of your data better.

 

Binning only removes resolution, thereby removing information and small-scale noise. That will indeed give the image lower noise, but also less sharpness, as you are well aware. You're simply trading resolution for lower noise.

From literature and experience I would recommend using the following instead of binning:

  • Cubic B-Spline data interpolation (will preserve more detail but will be very smooth as well)
  • and a resize factor of 0.50.

 

And in APP, you can actually create the integration directly with such a down-scale operation if you wish: just set the scale of the integration to 0.5 and use Cubic B-Spline for data interpolation.
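Outside APP, the same kind of 0.5x downscale can be sketched with scipy's spline resampler; note that scipy's order-3 spline is shown only as an illustration and is not necessarily the exact kernel APP calls Cubic B-Spline:

```python
import numpy as np
from scipy import ndimage

img = np.random.normal(100.0, 5.0, size=(512, 512))

half_cubic   = ndimage.zoom(img, 0.5, order=3)   # smooth spline resampling
half_nearest = ndimage.zoom(img, 0.5, order=0)   # picks a single source pixel

print(img.shape, half_cubic.shape)   # (512, 512) -> (256, 256)
```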

As an example, here is a crop of an integration of some fairly old data of my own. It's the Cocoon Nebula, shot with a Robtics 102ED doublet APO, a Nikon D5100 BCF mod and a Baader UHC-S filter: 2.5 hours of exposure time in subs of 5 minutes at ISO 400. Only calibrated with a Bad Pixel Map.

I integrated the data and post-processed it up to star color calibration in APP. This is the crop at the original size:

[image: Cocoon final mod St]

And this is the same image downscaled using Nearest Neighbour at 0.5x scale:

[image: Cocoon final mod 0degCW 0.5x NN St]

And here is a mosaic of 3 images: the original at left, a Cubic B-Spline version at top right and Nearest Neighbour at bottom right (can you tell the difference?):

[image: Coc compare]

To make it a bit clearer, here is the original zoomed in further:

[image: Cocoon final mod mod St]

And downscaled with Cubic B-Spline:

[image: Cocoon final mod mod 0degCW 0.5x CBS NS St]

The original and down-scaled versions next to each other; the loss of resolution is clear, as is the reduction in noise:

[image: zoomed In compare]

Let me know how the down-scaling works for you and whether you think a binning tool would still be useful 😉

Kind regards,

Mabula
(@ohmeye)
White Dwarf
Joined: 4 years ago
Posts: 13
 

I'd like to just confirm that the details in this thread still represent a good method for downsampling after the fact. I prefer to not bin in the camera driver, but when the pixel scale makes sense to do so I'd like to "bin" 2x2 after the fact, without batch processing all the subs individually before integrating.

Is it still a valid and preferred method to set the integration scale to the desired factor (0.5 for 2x2) and set the pixel interpolation filter to Cubic B-Spline? Or is there a better method these days?

Are there any other relevant settings when doing this? Does the Kernel shape play any role for interpolation or is that only used for drizzle?

My main reason for "binning" is for the SNR advantage when the native pixel scale is oversampled. However, I'd still like to be able to drizzle, especially when reducing the data resolution with "binning."  Interpolation and Drizzle are mutually exclusive in the integration options. Is there a method to drizzle downsampled data and does it make sense to do so in APP? If there is currently no method to downsample and drizzle when integrating, is this a use case for an integration tool change or a new "binning" tool?

Put another way, what is the best workflow to "2x2 bin" data to increase SNR, then drizzle integrate the now slightly undersampled data?

Thanks!


   
(@1llusiveastro)
Brown Dwarf
Joined: 6 years ago
Posts: 4
Topic starter  

In another program, when I chose to do this, the image got brighter. It was a way of combining the light of the surrounding pixels. The description of the feature states:

"Merge neighbor 4 pixels to increase sensitivity. However, the result will be downsized to 1/4"

I suppose the method described above does the same thing, but only averages the noise instead of using the pixels to increase sensitivity. Right?
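For what it's worth, summing the 4 pixels ("merging to increase sensitivity") and averaging them differ only by a constant factor of 4, so the resulting signal-to-noise ratio is the same either way; only the pixel values are scaled differently. A toy numpy check, under the simple assumption of a flat signal with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a flat signal of 100 with Gaussian noise of sigma = 10 per pixel.
img = 100.0 + rng.normal(0.0, 10.0, size=(1024, 1024))

blocks = img.reshape(512, 2, 512, 2)
summed = blocks.sum(axis=(1, 3))    # "merge 4 pixels": 4x brighter
meaned = blocks.mean(axis=(1, 3))   # average: same brightness as before

for name, b in (("sum", summed), ("mean", meaned)):
    print(name, b.mean() / b.std())  # both print ~20, up from ~10 before binning
```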


   
(@barnold84)
White Dwarf
Joined: 3 years ago
Posts: 15
 

@mabula-admin

I know it's an older post, but nevertheless: I'm also interested in software binning after calibration, because I have an Altair 183M with fan cooling and building a good dark database is already not easy even without binning.

I think from an SNR point of view, it might be even better to do the binning after integration if one uses clipping algorithms. Without clipping and just doing averaging, it shouldn't matter unless there's a non-linear process in between.

My actual point: the definition of "nearest neighbor" interpolation confuses me and I would like to confirm it. In APP this means that it averages the values of the nearest neighbors, and does not just take the value of the nearest neighbor, correct?

The definition that I've seen in many applications is the latter. Could you clarify this somewhere, e.g. through a tool-tip entry, or maybe (IMHO) use a more fitting name such as "nearest neighbor average"?

Björn


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @ohmeye

I'd like to just confirm that the details in this thread still represent a good method for downsampling after the fact. I prefer to not bin in the camera driver, but when the pixel scale makes sense to do so I'd like to "bin" 2x2 after the fact, without batch processing all the subs individually before integrating.

Is it still a valid and preferred method to set the integration scale to the desired factor (0.5 for 2x2) and set the pixel interpolation filter to Cubic B-Spline? Or is there a better method these days?

Are there any other relevant settings when doing this? Does the Kernel shape play any role for interpolation or is that only used for drizzle?

My main reason for "binning" is for the SNR advantage when the native pixel scale is oversampled. However, I'd still like to be able to drizzle, especially when reducing the data resolution with "binning."  Interpolation and Drizzle are mutually exclusive in the integration options. Is there a method to drizzle downsampled data and does it make sense to do so in APP? If there is currently no method to downsample and drizzle when integrating, is this a use case for an integration tool change or a new "binning" tool?

Put another way, what is the best workflow to "2x2 bin" data to increase SNR, then drizzle integrate the now slightly undersampled data?

Thanks!

Just a small remark here on the described workflow, @ohmeye: binning first to increase SNR (an artificial SNR increase by downscaling, not a real improvement) and then drizzling to restore resolution will not maintain the gained SNR, because the drizzle itself will inject noise... I am not sure why you would want to do this, because drizzle is not magic in the sense that it would restore your resolution while retaining the SNR of the binned data 😉

Cheers,

Mabula

 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @1llusiveastro

In another program, when I chose to do this, the image got brighter. It was a way of combining the light of the surrounding pixels. The description of the feature states:

"Merge neighbor 4 pixels to increase sensitivity. However, the result will be downsized to 1/4"

I suppose the method described above does the same thing, but only averages the noise instead of using the pixels to increase sensitivity. Right?

@1llusiveastro,

I can easily make a bin 2x2 algorithm to bin your data even before calibration, but I think it will be of lower quality, for several reasons, than processing at native resolution and then scaling the integration with a suitable resampling filter like Cubic B-Spline, which in itself works like a Gaussian filter.

Mabula


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @barnold84

@mabula-admin

I know it's an older post, but nevertheless: I'm also interested in software binning after calibration, because I have an Altair 183M with fan cooling and building a good dark database is already not easy even without binning.

I think from an SNR point of view, it might be even better to do the binning after integration if one uses clipping algorithms. Without clipping and just doing averaging, it shouldn't matter unless there's a non-linear process in between.

My actual point: the definition of "nearest neighbor" interpolation confuses me and I would like to confirm it. In APP this means that it averages the values of the nearest neighbors, and does not just take the value of the nearest neighbor, correct?

The definition that I've seen in many applications is the latter. Could you clarify this somewhere, e.g. through a tool-tip entry, or maybe (IMHO) use a more fitting name such as "nearest neighbor average"?

Björn

Hi Björn @barnold84,

Nearest neighbour refers to the resampling/interpolation filters that you can use to reconstruct the data while applying the registration/alignment parameters. Nearest Neighbour interpolation should normally never be chosen, because it will create artefacts and will not maintain resolution. It is actually only good at preserving noise 🙂 All other filters will change the noise levels when compared to the original image.

Nearest Neighbour interpolation/resampling in APP uses the value of the nearest neighbour; it does NOT average the nearest neighbours, as it shouldn't. Otherwise it would no longer be nearest neighbour interpolation in principle; it would become the well-known bilinear interpolation, where the 4 nearest neighbours are bilinearly averaged using a simple filter kernel.

Indeed, you could alter bilinear interpolation to simply average those 4 nearest pixels and make a filter that performs somewhere between Nearest Neighbour and Bilinear in terms of quality, but I see little use for that. Bilinear is already superior to Nearest Neighbour, and it will no doubt be better than such an "in between" filter as well.
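As a minimal illustration of that difference, using scipy's resampler as a stand-in for the two filters (order 0 = nearest neighbour, order 1 = bilinear; a sketch only, not APP's implementation):

```python
import numpy as np
from scipy import ndimage

# A tiny 2x2 image, sampled at the off-grid position (0.6, 0.6).
img = np.array([[ 0.0, 10.0],
                [20.0, 30.0]])
point = [[0.6], [0.6]]

nearest  = ndimage.map_coordinates(img, point, order=0)  # value of one pixel
bilinear = ndimage.map_coordinates(img, point, order=1)  # weighted average of 4

print(nearest, bilinear)   # [30.] vs [18.]
```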

Mabula
(@barnold84)
White Dwarf
Joined: 3 years ago
Posts: 15
 

Hi Mabula, @mabula-admin

I guess I should have been more precise in my previous description. What I did was use the batch resize tool and downsample by 0.5 using nearest neighbors. To confirm my assumption, I used an artificial image consisting of pure RGB color patches, and it seemed that the colors were averaged at the borders, e.g. at the B-G boundary the color became cyan, which is the average of B and G.

I guess a bilinear downsampling will likely behave similarly at a factor of 0.5. But my question was rather whether the nearest neighbor downsampling behaves like average binning?

Based on my test, that seems to be the case.
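For reference, this is what a plain 2x2 block average would do at such a green/blue boundary (a numpy sketch assuming the 0.5x resize averages 2x2 blocks, which is the behaviour the test appears to show; it is not APP's actual code):

```python
import numpy as np

# Two patches side by side, RGB in [0, 1]: green on the left, blue on the right,
# with the boundary falling inside a 2x2 block.
img = np.zeros((4, 8, 3))
img[:, :3, 1] = 1.0   # green columns 0-2
img[:, 3:, 2] = 1.0   # blue columns 3-7

binned = img.reshape(2, 2, 4, 2, 3).mean(axis=(1, 3))   # 2x2 average binning

print(binned[0, 1])   # block straddling the boundary -> [0.  0.5 0.5], i.e. cyan
```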

Björn 


   