Understanding the SNR & Noise Numbers in APP

(@rfe3)
Brown Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

Hi everyone, I have a question about the SNR & Noise numbers in APP. How do you read them and what do they mean? How can I tell which of a group of images has the best SNR by looking at these numbers? I have spent hours Googling to try and find a "number" that says OK, any image with an SNR above some value (for example 5,5661E+00 or 2,1343E-03) has a good SNR, but I can't find any examples that correspond to numbers like these in APP.

The reasoning behind this question comes from trying to up my game a bit from the Camera side of things. I understand the concepts of ISO (Gain) in a digital camera and how it affects SNR, Dynamic Range and Read Noise. I understand "Full Well and ISO-Invariance" and the effects of higher ISO values on image quality etc... what I don't understand is how to interpret the numbers. Every sensor has a point of ISO-Invariance where it makes no sense to go any higher because it doesn't do anything but add more upstream read noise. That point on my 1000Ds is ISO 200 and on my 450Ds is ISO 400 (article here: http://dslr-astrophotography.com/iso-values-canon-cameras). So, I know I don't want to shoot higher than ISO 200 (on the 1000Ds) in order to have the greatest Dynamic Range available to capture every last Photon. In order to do that I need to find an optimal length for my Subs to be able to fill the sensor to its Full Well capacity and make sure the Downstream signal overwhelms the Upstream noise, whether it be 20 min, 40 min or 90 min. Once I know that point I (in theory) should be able to make better decisions about Camera settings to maximize what little time I have to gather Photons.

Gee...this really looks like an incoherent ramble, but it makes sense in my head... 🤪 That might say something about my state of mind... So again, can these SNR & Noise numbers in APP be used as a reference to gauge the amount of Signal in relation to Noise and say "OK, at x min Subs my Downstream Signal overcomes the Upstream Noise, so anything above x min Subs is gathering more data than noise"?

In the attachment you can see I've taken 5 Photos, all at ISO 200 (the other settings the same as well); the only variable is the length of the Subs. Which one (using the SNR & Noise data) is the best, and what criteria do you use to determine that?

Thanks for any insight anyone can provide...

[Attachment: screenshot of the SNR & Noise values for the five subs]

 


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 

Making perfect sense. A nice explanation of ISO invariance and its importance. I'm not sure what the SNR numbers mean either. I know my images have better SNR than what is being displayed, so what is it that APP is displaying? Is it some kind of average?

As far as capture time goes, you have a background sky noise which represents the noise floor of your image. You want to place that above the left edge of the histogram, otherwise you will lose signal, but don't place it so high as to waste dynamic range on the noise. I've done an analysis that says to put the image noise floor - the peak of the histogram - at 2.2% of a linear histogram range, or the equivalent 20% of the DSLR histogram range (because it's non-linear). Why these numbers? Since you asked what a good SNR is: photography engineers define a good SNR as 40:1, and the 2.2% figure is derived from that.

But I know you want to know what the APP figures tell you. I would like to know too.


   
(@mestutters)
Neutron Star
Joined: 7 years ago
Posts: 167
 

Hi,

I'm a relative novice astroimager and have also experienced the anxiety of wondering what exposure times I should use. At one stage I imagined there ought to be a single optimum exposure time and that, if I could establish what this was, my imaging would dramatically improve. My purpose in responding to the OP's question is therefore as much about testing my own understanding of this subject as it is about answering the actual question. Nonetheless I hope this may be of some help.

As imagers we are ultimately interested in reproducing, with decent fidelity, objects contained in an area of sky, the details of which are conveyed to us by photons of light that must traverse the relative vacuum of space and a constantly changing atmosphere before being detected by less-than-perfect imaging systems.

Information about an object is contained in the object's photon count, or "signal". Noise is the uncertainty of that information. The ratio of signal to noise is the "Signal to Noise Ratio", S/N or SNR. S/N is a widely used measure of information quality and determines our ability to discern contrasts in an imaged object.

To obtain reasonable fidelity in our images we need to capture sufficient photons (signal) within our sky-area of interest to substantially overwhelm the noise superimposed on our signal of interest by the photonic nature of light, atmospheric distortions, optical distortions and electronic noise. The fidelity required in an image depends on the audience for which the image is intended. The more discerning the audience, the higher the fidelity (>SNR) that will be needed/expected.

I cannot say exactly what algorithm Mabula has used in APP but different methods are possible depending on the need and computational limitations.

Out of interest I decided to see what results I would obtain from APP if I performed three integrations using the same 2x5 min Luminance subs of the Lagoon Nebula, M8. The first integration used a typical crop of the whole nebula, the second a tight crop of the core area, the third a tight crop of a less bright, smoothish outer region of the nebula containing only a small number of stars.

As expected the computed SNR values varied considerably depending on the crop despite all three being obtained from the same two image files. See also the screen-shots.

Full Nebula: SNR 21, Noise 0.000195
Core Crop: SNR 37, Noise 0.000356
Outer Crop: SNR 7.7, Noise 0.000246

So what might we deduce or infer from this exercise? I was fully expecting the SNR for the crop of the bright core area to be higher than the other two samples, because the overall brightness / photon counts / ADU values of this tight crop would be the highest.

I was a little surprised by the Noise result. Maybe someone else can offer an explanation here but I think the answer lies with the histogram shapes. For the Full nebula crop the histogram is broad and smooth whereas for the other two crops they are more spiky and somewhat irregular.

It is clearly comparatively easy to obtain a high SNR / high fidelity in a bright area of an image as the photon count builds quickly but to obtain a similar SNR in a dim area requires a very much longer imaging time in order to capture a similar number of photons.

In practice, to double the SNR actually requires the sample size / photon count / imaging time to be quadrupled, for example by increasing the number of exposures and/or increasing the exposure length. However, if imaging time is constrained there are trade-offs to be made. Longer exposures risk saturation in bright areas, and fidelity can be lost through the increased risk from poor guiding and changing atmospheric conditions. Alternatively, with many short exposures, the download times, dithering overheads etc. can eventually become very significant. Having said this, I have seen some remarkably good results from people using 'lucky' imaging techniques.
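To put rough numbers on that square-root rule, here is a tiny Python sketch (the photon rate is a made-up figure, and a purely shot-noise-limited pixel is assumed):

import numpy as np

# Shot-noise-limited pixel: signal grows linearly with exposure time,
# shot noise grows as its square root, so SNR grows as sqrt(time).
flux = 100.0                          # hypothetical photons per minute
for minutes in (10, 20, 40):
    signal = flux * minutes           # photons collected
    noise = np.sqrt(signal)           # shot noise = sqrt(counts)
    print(f"{minutes:3d} min  SNR = {signal / noise:5.1f}")
# 10 min -> 31.6, 20 min -> 44.7, 40 min -> 63.2:
# quadrupling the imaging time doubles the SNR.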

I use Sequence Generator Pro for managing my imaging sessions. This package and other alternatives contain algorithms that will quickly assess the pixel values of each exposure as it is downloaded and then suggest a suitable exposure time for subsequent exposures. I don't suggest following the recommendations absolutely but at least they give a reasonable indication of suitable times without getting heavily involved with mathematics.

When I was first getting to grips with imaging I found this publication useful but there are numerous publications and internet advice:

Gendler, Robert (Ed.). Lessons from the Masters: Current Concepts in Astronomical Image Processing (The Patrick Moore Practical Astronomy Series). Springer, New York. Kindle Edition.

[Screenshots: APP SNR and noise statistics for the full-nebula, core and outer crops]

Regards

Mike

PS: As a final point, I think the SNR value in APP is useful for comparing the quality of one frame versus another of the same target, assuming the framing has not significantly changed.

If you have not already read them, Mabula has also commented recently on SNR and the impact of dispersion.

 

 

 


   
(@rfe3)
Brown Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

@astrogee: I agree with you about the Histogram, I always try to have mine between 20 to 30% regardless of the ISO I use. I have a feeling these numbers can be useful, but why 3 sets? Band 1, 2 and 3? What do the Bands represent? Should they be averaged or added together to get some kind of number... I don't know.

 

@mestutters: Interesting results with the variance of the SNR numbers. At what ISO were you shooting? I'm trying to find the best SNR-to-length-of-exposure ratio at the point of ISO Invariance of my Cameras. Optimally, if I could find an exposure length that would give me a decent SNR at or as close as possible to my invariance point (ISO 200 for a 1000D and ISO 400 for a 450D), then I could be sure of having the ability to capture the maximum amount of Photons. I could then make decisions based on real-time conditions on whether to sacrifice some Photon-gathering ability (i.e. Fidelity) by raising the Floor/Gain/ISO to a higher level and shortening the exposure time. I had a friend who shot with a very ISO-Invariant camera, a Nikon D7100 (I think), and you could literally take all your Photos at ISO 100. They will be almost black on the screen and the Histogram will be way over to the left, but in post-processing you can stretch the S*#t out of them without as much loss of quality (i.e. Noise) as the same Photo shot at a higher ISO, because the data is all there (greater Full Well capacity). It really goes against what we learned about the Photography Triangle, but those rules were based on Film cameras and don't really apply as hard and fast with modern DSLRs.

I like to look at hard numbers to base my decisions on...Hence I'm hoping Mabula or someone chimes in on what these numbers represent and if they're usable as a rough guide to quality. I'll have a look at those publications, they sound like they could be interesting.

 

Rick 


   
(@mestutters)
Neutron Star
Joined: 7 years ago
Posts: 167
 

Hi,

I think Bands 1, 2 and 3 are the measures for the R, G and B channels: I currently shoot with a monochrome camera and only see one set of numbers, but obviously I get different results from the same target for subs captured using the R, G and B filters.

As I recall, the ISO-Invariance point is the ISO setting for a digital camera that will capture the highest dynamic range in a scene. The actual dynamic range of any night-time scene can vary considerably depending on a) the visual magnitude of the brightest and darkest points of the sky area of interest and b) the impact of contrast-reducing factors such as light pollution, moonlight and atmospheric turbulence. The brightest points in any particular night-time scene will depend on the magnitude of the brightest stars or galaxy core that you are trying to image, and these points may be sufficiently bright that they will very rapidly reach the full-well capacity of your camera sensor, while in darker areas the sensor will record virtually nothing in the same time period. This is to say that the ideal exposure for any given target will vary somewhat, and the normal way to handle this is to examine the histogram curve.

The standard advice for imaging is as stated by astrogee: get the left-hand end of the histogram curve marginally away from the left-hand edge of the scale. If, having done this, there is no peaking / over-exposure at the right-hand side, then clearly there is capacity to handle a longer exposure, assuming there are no downsides (e.g. guiding accuracy) to doing so.

If you do see a peak/burnt-out pixels at the extreme right-hand end of the histogram then the solution is to capture subs at two or more different exposures.  As an example, the Trapezium stars in Orion nebula are sufficiently bright to generate a full well signal in under 30 seconds but in this time the sensor will have recorded virtually no signal in the faint outer extremes of the nebula.  APP works on a 32 bit image scale and is fully capable of scaling under- and over-exposed 12- or 16-bit image files to give a full HDR integrated image.

Hope this clarifies somewhat.

Mike


   
(@rfe3)
Brown Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

That's what I thought about the Bands but wasn't sure, I just couldn't find anything to confirm it. I haven't used APP yet for any mono images (I only started using it a few months ago) so I've only seen the 3 Bands. I shoot Mono as well, but Mono DSLRs. I have 2 Mono 1000Ds and 2 Mono 450Ds. We haven't had a single clear night here in SW Germany for over 3 weeks so I haven't had a chance to take any mono images. I spent the months of April and May using my "Normal" 7D and my Baader spectrum-modified 450D to really put my new SkyTech (Altair Astro) Quadband filter through its paces. I was hoping to start some Narrowband and RGB sessions this month. I just got my Dual Rig cobbled together for my new EQM 35 Pro and haven't been able to test it in anger yet. With my EQ-6 I shoot with a Triple Rig, but I'm getting older and that thing is a beast to carry around.

[Photo attachment]

 

I understand what you are saying about peaks and burnt-out photos, I've done it plenty of times over the last 7 or 8 years that I've been doing AP, but the thing about ISO Invariance is that with a camera with a low enough invariance point it's almost impossible to burn out an exposure... How long would it take you to move the Histogram all the way to the right (at night) at ISO 100? A very long time... Unfortunately some cameras have a very high invariance point... My 7D has an ISO 800 point and a 5D MkII has ISO 1600... You could blow out an exposure in a much shorter time frame with these. The thing about using ISO Invariance is to purposely underexpose the image in order to preserve the highlights. The data is all there and can be brought out in post-processing by raising the EV and stretching. I can often raise the EV four or five stops without any problem (from ISO 200); try the same thing starting at ISO 1600...

The reason you can recover stops UP is that cameras that are ISO Invariant aren't really using the sensor to become more sensitive to light. They're just taking a picture and then using the info from the sensor to expose the shot in the camera's processor, so it's really just doing the same thing you'd be doing in Photoshop or Lightroom when brightening. By shooting at the base ISO or invariance point, you're telling the camera not to do the brightening for you.

However, when a camera uses the SENSOR to increase the exposure (not the processor afterward), the brightness is baked into the shot the instant it’s recorded. That’s how most cameras work, and it means they are not ISO Invariant.
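A rough way to see why the high-ISO route costs highlight headroom is to look at where the ADC clips. The numbers below are purely illustrative (a hypothetical 25,000 e- full well and a 14-bit ADC), not any particular Canon body:

# Toy model: the ADC ceiling is fixed, so extra analog gain (higher ISO)
# shrinks the range of electrons that can be recorded before clipping.
full_well = 25_000                      # hypothetical full-well capacity (e-)
adc_max = 16_383                        # 14-bit ADC ceiling (DN)
base_gain = adc_max / full_well         # DN per electron at base ISO

for stops in (0, 4):                    # base ISO vs. 4 stops more gain
    gain = base_gain * 2**stops
    headroom = adc_max / gain           # electrons recordable before clipping
    print(f"+{stops} EV of analog gain: clips above {headroom:,.0f} e-")
# +0 EV keeps the full 25,000 e-; +4 EV clips anything above ~1,560 e-,
# which is why an ISO-invariant shooter underexposes and pushes in post.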

I must say it's really nice to be having intelligent and informed conversations about this topic...Most of the other "Photographers" I know shoot only in Daylight and don't realize their Cameras even have a Manual mode....

Rick


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Superb discussions here, I need to carefully read this later, but here's a recent discussion as well related also to noise levels in APP, with detailed answers by Mabula @mabula-admin: https://www.astropixelprocessor.com/community/appreleases/increased-fwhm-with-v1-080-beta-2/

 


   
(@mestutters)
Neutron Star
Joined: 7 years ago
Posts: 167
 

Hi Rick,

I'm glad we seem to have cleared up one issue - the reason for the 3 bands at least.

Since we started our dialogue I've been doing a little more reading about ISO Invariant cameras but you may well have read more widely / deeply than I have and have a far better understanding.

I'm currently using a StarlightXpress CCD camera which utilises a Sony CCD sensor. As I recall, the Sony chip has a full-well capacity of c. 15,000 photons. When I download an image file the camera's electronics upscale the photon counts registered by the chip from values in the range 0-15000 to values in the range 0-65535 (i.e. 16-bit).

Now if I view the downloaded FITS file in linear mode in APP it looks generally very dark overall, and I need to zoom in a fair amount in order to find a bright spot corresponding to a star. Even though the histogram of the unstretched image is typically piled up at the left-hand edge, this does not mean that no pixels on the sensor detected enough photons to exceed the full-well capacity of the sensor, i.e. the final image may still have some burnt-out highlights. Now I may have missed something, but I am pretty certain that the sensor of an ISO-invariant camera is not hugely dissimilar to my CCD camera, such that eventually during a long exposure sufficient photons from a bright object (star or galaxy core) will strike a sensor pixel and thereby exceed its full-well capacity. Unless the sensor has an infinite well capacity this must eventually happen, irrespective of whether the sensor or camera applies any gain. The normal way to spot whether this has happened is to check the image histogram to see if there is any spike on the right-hand side, or to use an image editing package to see if any pixels have been clipped.
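If you want to do that clipping check outside an image editor, a few lines of Python will do it (the file name and white level are placeholders; adjust them for your own camera):

import numpy as np
from astropy.io import fits

# Count pixels at or above the saturation level in a single sub.
data = fits.getdata("light_0001.fits").astype(np.float64)   # example file name
saturation = 65535            # white level after the camera's 16-bit rescaling
n_clipped = np.count_nonzero(data >= saturation)
print(f"clipped pixels: {n_clipped} of {data.size} "
      f"({100.0 * n_clipped / data.size:.4f} %)")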

Anyway, getting back to your original question, I still think that examining the SNR and noise stats will not tell you whether or not this has happened, but I'm happy to be convinced otherwise.

regards

Mike

PS : I hope you get some decent weather soon to make use of your new rig.  As with you I use a portable set-up that I can still just manage to carry from house to garden when the weather is good.  I've recently started a modest monthly subscription to iTelescope to get occasional use of more exotic equipment and locations that I am never personally likely to own or get to.

 


   
(@rfe3)
Brown Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

Hi Mike,

You're exactly right, they are similar in that respect: you can blow out an image but it takes a very long time. I've never imaged with a CCD camera, but I imagine if you set your Gain at its lowest level it would also take ages before you'd blow out an image. I've found that on my DSLR Monos I can easily make 20 to 30 min exposures and the Camera Histogram is between 20 to 30% from the left... A lot of the time my polar alignment and guiding won't allow for longer exposures, but I believe the Cameras will. I'm just starting to experiment seriously with these concepts. Hence I'd like to have a "Guideline" to compare them against... Hoping the APP numbers will help in doing that.

 

Rick


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 
Posted by: @vincent-mod

Superb discussions here, I need to carefully read this later, but here's a recent discussion as well related also to noise levels in APP, with detailed answers by Mabula @mabula-admin: https://www.astropixelprocessor.com/community/appreleases/increased-fwhm-with-v1-080-beta-2/

 

After reading the link it seems the SNR averages out the signal over the whole image. It's a nice metric, but I don't think it'll help in setting exposure as the original post asks. The Iris Nebula, for example, will have a far lower SNR than the Orion Nebula - if averaging over the image is what happens - but that doesn't mean the exposure is wrong. It simply means the imaging process could maybe use some improvement if you want to image the Iris Nebula well. It will require a lot more noise suppression.

To my mind, the key is the noise. But the noise represented in APP is not clear. A histogram for example will show noise level in whole numbers like ADU but in APP, it is a small fractional number. Is it normalized? Then to what? Or is it averaged over the whole image like SNR?

Also, something else that is important: the ISO/gain stage is an analog amplifier. The consequence is that using the ISO/gain properly is critical, because it is applied before A/D conversion, so the quantization noise is much lower than it would be at zero ISO/gain followed by stretching the image digitally. I haven't done an analysis on this but I think I will, because it should give a lower bound on the correct ISO/gain.

EDIT: It looks like the noise is also averaged over the whole image. Looking at the range of these values, they represent imaging noise and not noise in the image like background sky, so again I don't think these values can be used to determine exposure. But they do indicate the quality of the image train.


   
(@mestutters)
Neutron Star
Joined: 7 years ago
Posts: 167
 

Hi Rick,   cc:  Astrogee, Vincent,

If you are looking for a more 'scientific' method for deciding individual exposure lengths and overall exposure times you might want to take a look at, for example, SkyTools Imaging 4. I discovered this when I was first seeking a method to help decide what overall exposure times / exposure durations to use. I'm sure there are other tools with similar capability, but ST includes an exposure calculator that provides an indication of the total exposure time needed in order to achieve a particular target SNR depending on your selected target, imaging system, geo location, sky brightness, etc. and what your particular interest in the target is (e.g. core area or faint outer regions). It is then for you to decide if you want to achieve this overall total exposure time in a handful of very long exposures or numerous shorter ones.

[Screenshot: SkyTools exposure calculator]

Having used this tool for deciding a target overall exposure time, I now use only a handful of actual exposure intervals for imaging, e.g. 120, 300, 600 and 1200 secs. I now use only 1200 secs for narrowband, 300 or 600 for RGB, and usually 120 or 300 for L. For a really bright target, e.g. M42, I will add a few L at say 30 sec in the hope of capturing the core details. While my camera and mount are capable of the longer times, I found it so annoying to lose a long sub in the last few seconds to events outside my control, e.g. clouds, that I no longer bother. Using only a handful of exposure times is also useful as it requires only a correspondingly small library of dark calibration frames.

Feature Request: In connection with this approach to exposure calculation, I wonder if it might be possible at some stage for APP to provide an 'SNR sampling tool'. For this I envisage using the mouse pointer to select a sample area, say 4, 9 or 16 pixels, of an image file, and for APP then to display the SNR and noise results for this sample area. Currently APP gives x, y and signal levels when you mouse over an image. Maybe this could be toggled to give some alternative stats. Alternatively, I would be interested to hear from anyone who knows of an image processing tool that already does this.

Clear skies

Mike

Can anyone tell me if the SNR value for a test image of a planned target can simply be multiplied up by the planned number of subs to give an approximate indication of the likely overall SNR of the planned integration? As the signal is additive I think it is probably so also for SNR, but I would like a second opinion.

 


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 

Thanks @mestutters, That looks like a good tool. Like you I have also pretty much settled on 300s for RGB for similar reasons. After all is said and done, exposure is pretty limited in choices. Daytime photography can have a lot of varying lighting conditions but nighttime imaging has pretty much only one exposure! 😛

About SNR improvement with stacking: it's improved by the square root of the number of stacked frames, so you have to stack 4 images to double your SNR. But you have to stack 4x4 to double the SNR again, and 4x4x4 for another doubling, so it's an exponential exercise.
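A quick simulation backs this up (arbitrary signal and noise values, average-combine stacking assumed):

import numpy as np

# Averaging N subs keeps the signal and divides random noise by sqrt(N),
# so the SNR grows as sqrt(N): 4 subs double it, 16 subs double it again.
rng = np.random.default_rng(42)
signal, sigma = 100.0, 20.0

for n in (1, 4, 16):
    subs = signal + rng.normal(0.0, sigma, size=(n, 100_000))
    stack = subs.mean(axis=0)                 # average combine
    print(f"{n:2d} subs -> SNR ~ {signal / stack.std():5.1f}")   # ~5, ~10, ~20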

Clear Skies!

EDIT: I forgot to mention that the stacking will only remove imaging chain noise, not sky-glow or background sky light pollution. 

EDIT2: @ralph has corrected me, as sky-glow / shot noise is also reduced by the square-root factor when stacking - I was a bit loose with my terms 🙁


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi @rfe3, @astrogee, @mestutters, @vincent-mod,

I have been following your conversation and I wanted to let you know that I will make a separate sticky topic soon that explains APP's noise and SNR calculations, which should answer most of your questions I think.

For now, I will respond rather quickly:

Noise calculation in APP is done with MultiResolution Support (MRS) Gaussian noise estimate:

Automatic Noise Estimation from the Multiresolution Support

Jean‐Luc Starck and Fionn Murtagh

https://iopscience.iop.org/article/10.1086/316124

This is a rather famous article in astronomy concerning noise calculation. From the article you might understand that a good/reliable noise calculation, is in fact not very simple, really not as simple as the average/median/variance/standard deviation/MAD which are all very basic.

This is a noise estimate with a precision of 1% or better; other methods are out there, but none of them are as reliable as the MRS estimate, and most will have an accuracy worse than 3%. The noise value is derived from certain pixels in the image data and will give you a very good estimate of the noise in the areas of your image where there is mostly only noise, i.e. the areas where the data is sky-background dominated. For our purpose, that really makes sense I think.
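For anyone who wants to experiment: the full MRS algorithm is described in the paper above, but a much cruder background-noise estimate can be had with an iteratively clipped MAD. The sketch below is only a baseline to compare against APP's NOISE values, not the method APP uses:

import numpy as np

def mad_noise(img, iters=3, k=3.0):
    """Crude Gaussian-noise estimate via an iteratively clipped MAD.
    NOT the MRS estimator used by APP - just a simple point of comparison."""
    pix = img.ravel().astype(np.float64)
    sigma = 0.0
    for _ in range(iters):
        med = np.median(pix)
        sigma = 1.4826 * np.median(np.abs(pix - med))   # MAD -> Gaussian sigma
        pix = pix[np.abs(pix - med) < k * sigma]        # reject stars / outliers
    return sigma

# Usage (assuming `data` is a 0-1 normalised channel loaded from a FITS file):
# print(mad_noise(data))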

 

Now regarding Signal-to-Noise Ratio, S/N or SNR: SNR is a rather complicated subject. The way to calculate SNR is defined in numerous ways... Radio astronomy does it totally differently from X-ray astronomy, for example, and within those fields there are again different ways depending on what signal is of interest... so the question is always: what is the signal that we want to count?

So the SNR calculation is always

  1. based on assumptions,
  2. relative to a base signal/offset, and
  3. possibly corrected for signals that we don't want to measure.

 

For the same image, the SNR value of the sky background will be a totally different value compared to the SNR of the same image measured relative to the sky background, because in the latter case we do not want to measure the signal of the sky background itself but only signals above the sky background (stars, galaxies, nebulae) or below it (dark nebulae). That latter definition is used in APP, because we are interested in the signals without the sky background signal. Which again makes a lot of sense for our purpose I think.

A "good" SNR for a certain object like the Orion Nebula versus the Hercules globular cluster will be a very different value and will also depend greatly on what SNR definition is used. Stating that an image has SNR X, only makes sense if you explain what SNR definition is used. Stating that an image should have a certain SNR value to be considered a "good" image also makes little sense, because it depends completely on the object to be imaged and the size of the field of view normally in astronomy. The SNR used for image quality in normal photography will be a totally different animal compared to what you are interested in, in an astronomical image I think.
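As a toy illustration of what counting signal relative to the sky background can look like in code (this is not APP's exact formula, just the general idea):

import numpy as np

def background_relative_snr(img, noise):
    """Toy SNR: mean signal above a crude sky estimate, divided by the noise.
    Illustrative only - APP's actual statistic may differ."""
    sky = np.median(img)                              # rough sky-background level
    signal = np.mean(np.clip(img - sky, 0.0, None))   # only counts flux above the sky
    return signal / noise

# `img` is a 0-1 normalised channel, `noise` a background noise estimate
# (e.g. the NOISE-n value APP writes into the FITS header).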

The analytical results in the metadata of the stacks should actually already give you many answers regarding how your data performs, I would think. The analytical results will show you how well SNR improves or noise drops in your integrations. If noise is not dropping nicely, that could for instance be related to too-small dither steps, which has a clear influence on the noise in the results for most sensors. From these analytical results you will also see that:

  • if you integrate twice as many images (keeping everything else equal), the noise will drop roughly by a factor of the square root of 2;
  • if you integrate with twice the total exposure time (keeping everything else equal), the noise will drop roughly by a factor of the square root of 2.

Regarding the noise values, all integrations in APP are 32bits normalized floats in the range of 0-1. This is quite common in astronomy.

So a noise value of 5,1685E-04 = 5,1685 * 10^-4 = 0,00051685 in the float range of 0-1. That notation with the E-04 is called scientific notation.

So an SNR of 1,2367E+01 = 12,37. APP shows it simply as a ratio of Signal/Noise; the value is not converted to decibels.

Example of analytical data in the FITS header of an APP integration:

HDU1 - NOTE-1 = 'INTEGRATION METADATA'
HDU1 - EXPTIME = 7440.0 / exposure time (s)
HDU1 - NUMFRAME= 62 / number of frames used in this integration
HDU1 - BG-1 = ' 1,0029E-01' / background estimate of channel 1
HDU1 - BG-2 = ' 1,0023E-01' / background estimate of channel 2
HDU1 - BG-3 = ' 1,0040E-01' / background estimate of channel 3
HDU1 - SCALE-1 = ' 3,2873E-03' / dispersion of channel 1
HDU1 - SCALE-2 = ' 2,8077E-03' / dispersion of channel 2
HDU1 - SCALE-3 = ' 1,9219E-03' / dispersion of channel 3
HDU1 - NOISE-1 = ' 4,1973E-04' / noise level of channel 1
HDU1 - NOISE-2 = ' 5,1685E-04' / noise level of channel 2
HDU1 - NOISE-3 = ' 2,6293E-04' / noise level of channel 3
HDU1 - SNR-1 = ' 1,8166E+01' / Signal to Noise Ratio of channel 1
HDU1 - SNR-2 = ' 1,2367E+01' / Signal to Noise Ratio of channel 2
HDU1 - SNR-3 = ' 1,9213E+01' / Signal to Noise Ratio of channel 3
HDU1 - NOTE-2 = 'NR = Noise Reduction'
HDU1 - NOTE-3 = 'medNR = noise in median frame / noise in integration'
HDU1 - NOTE-4 = 'refNR = noise in reference frame / noise in integration'
HDU1 - NOTE-5 = 'ideal noise reduction = square root of number of frames'
HDU1 - NOTE-6 = 'the realized/ideal noise reduction ratio should approach 1 ideally'
HDU1 - NOTE-7 = 'the effective noise reduction has a correction for'
HDU1 - NOTE-8 = 'dispersion change between the frame and the integration'
HDU1 - NOTE-9 = 'because dispersion and noise are correlated'
HDU1 - medNR-1 = ' 4,2973E+00' / median noise reduction, channel 1
HDU1 - medNR-2 = ' 4,5804E+00' / median noise reduction, channel 2
HDU1 - medNR-3 = ' 5,7141E+00' / median noise reduction, channel 3
HDU1 - refNR-1 = ' 4,3097E+00' / reference noise reduction, channel 1
HDU1 - refNR-2 = ' 4,6624E+00' / reference noise reduction, channel 2
HDU1 - refNR-3 = ' 5,6644E+00' / reference noise reduction, channel 3
HDU1 - idNR-1 = ' 7,8740E+00' / ideal noise reduction, channel 1
HDU1 - idNR-2 = ' 7,8740E+00' / ideal noise reduction, channel 2
HDU1 - idNR-3 = ' 7,8740E+00' / ideal noise reduction, channel 3
HDU1 - ratNR-1 = ' 5,4576E-01' / realized/ideal noise reduction ratio, channel 1
HDU1 - ratNR-2 = ' 5,8171E-01' / realized/ideal noise reduction ratio, channel 2
HDU1 - ratNR-3 = ' 7,2569E-01' / realized/ideal noise reduction ratio, channel 3
HDU1 - medENR-1= ' 3,1875E+00' / effective median noise reduction, channel 1
HDU1 - medENR-2= ' 2,9635E+00' / effective median noise reduction, channel 2
HDU1 - medENR-3= ' 3,8803E+00' / effective median noise reduction, channel 3
HDU1 - refENR-1= ' 3,2324E+00' / effective reference noise reduction, channel 1
HDU1 - refENR-2= ' 3,0594E+00' / effective reference noise reduction, channel 2
HDU1 - refENR-3= ' 3,8560E+00' / effective reference noise reduction, channel 3
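If you want to pull these statistics out of an integration programmatically, something like the sketch below works (the file name is an example; note that the values are stored as strings with a comma as the decimal separator):

from astropy.io import fits

hdr = fits.getheader("integration.fits")        # example file name

def app_value(key):
    # APP writes the values as strings like ' 1,2367E+01', so swap , for .
    return float(str(hdr[key]).strip().replace(",", "."))

for ch in (1, 2, 3):
    print(f"channel {ch}: noise = {app_value(f'NOISE-{ch}'):.3e}, "
          f"SNR = {app_value(f'SNR-{ch}'):.2f}, "
          f"realised/ideal NR = {app_value(f'ratNR-{ch}'):.2f}")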

Hope that this already answers some of your questions?

Kind regards,

Mabula

 

 


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 

Hi @mabula-admin, so good of you to respond to this discussion.

If I can comment and ask a couple questions:

To explain my use of "good" SNR: from the perspective of an amateur astrophotographer, I think the purpose is to create good visual images. In this respect there is such a thing as a good SNR, which I found to be declared as 40:1 by the camera engineering community. So the desire is to get this 40:1 SNR in your target image, say M31, from the bright core relative to the background sky. But of course, being astrophotographers we want the faint parts to also be represented to a "good" quality - because most targets are quite faint! This requires another 40:1 of SNR. So we need at least log2(1600) ≈ 11 EV stops of SNR (or dynamic range in our imaging chain) - or an approximate noise level of 1/1600 = 6.25E-04, which is about what I see in APP for my last image. (So now I get the noise part 🙂)

About SNR: In APP I see for example, SNR-1 = 4.4975E+00. So are you saying this SNR is relative to background sky, not imaging noise?

Thanks!

PS: Is there a typo? Shouldn't twice the exposure time give twice the SNR?


   
(@ralph)
Neutron Star
Joined: 5 years ago
Posts: 78
 

Thanks for the explanation Mabula!

A few words (for astrogee and mestutters) about "generic" SNR and how it scales. Very simply put, noise is composed of read-out noise of the detector and shot noise of the recorded signal (in electrons). Read-out noise is what you get when you read out the detector, for today's good detectors it is on the order of a few electrons, but can be more depending on the gain setting and type of detector. Shot noise is very simply the square root of the signal in electrons (after subtraction of bias signal but before subtraction of dark signal, light pollution, etc.).

Noise adds in square, so adding noise_A and noise_B together is done by taking the square root of (noise_A^2 + noise_B^2). This applies irrespective of where the noise comes from, as long as the noise terms are not correlated. That's where the factor square root of 2 comes from.

Now, how does noise scale with exposure time?

Doubling the exposure time doubles the recorded signal (including dark signal, sky background, and of course the useful signal, but bias signal is not doubled). If you compare, say, a 300s exposure versus a 600s exposure, both are read-out only once each, so the read-out noise is identical for both exposures.

Now we can have two regimes for the SNR: read-out noise dominated and shot noise dominated. In the shot noise dominated regime, the read-out noise is negligible (e.g. 1e6 recorded electrons, thus 1000 electrons shot noise, versus e.g. 3 electrons read-out noise). The combined noise in this case is 1000.0045 electrons, so basically just the 1000 electrons of shot noise. Doubling the exposure time doubles the signal, thus 2e6 electrons, and a combined noise of 1414.217 electrons, a factor sqrt(2) bigger. The SNR in the 300s exposure is (let's assume all electrons are useful signal) 1e6/1000.0045 = 1000 (rounded off), and for 600s we get 2e6/1414.217 = 1414.2, or a factor of 1.4142 bigger, which happens to be the square root of 2.

Now for the read-out noise dominated regime things are totally different. Imagine we have an extremely dark location, narrow band filters, a great cooled camera with negligible dark current, and we're imaging the blackest Bok globule we can find. And we have on average 0.5 electrons signal in the darkest pixels of the Bok globule in our 300s exposure. That means that the combined noise is sqrt(0.5 + 3^2) = 3.082, very close to the 3 electrons read-out noise. The SNR is 0.5/3.082 = 0.162. Now we double our exposure time, we get 1 electron signal, the combined noise is sqrt(1 + 3^2) = 3.162, and the SNR is 0.316. And here we see that the SNR almost doubled when we doubled the exposure time, as opposed to only a factor sqrt(2) increase for the shot noise dominated regime.

So to summarise: noise adds in square, and SNR increases by the square root of the exposure time in shot noise limited cases (SNR >> 1) and increases linearly with exposure time in read-out noise limited cases (SNR << 1).

(and then there are a million other noise sources to consider, but none really as dominant as these two)
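Ralph's two regimes are easy to reproduce numerically; this sketch simply re-runs his figures (3 e- read noise, a bright pixel collecting 1e6 e- in 300 s and a faint one collecting 0.5 e- in 300 s):

import numpy as np

read_noise = 3.0                                  # electrons per read-out

def snr(electrons):
    # shot noise = sqrt(signal); add read noise in quadrature
    return electrons / np.sqrt(electrons + read_noise**2)

for t, label in ((300, "300 s"), (600, "600 s")):
    bright = 1e6 * t / 300                        # shot-noise-limited pixel
    faint = 0.5 * t / 300                         # read-noise-limited pixel
    print(f"{label}: bright SNR = {snr(bright):7.1f}, faint SNR = {snr(faint):.3f}")
# bright: 1000 -> ~1414 (factor sqrt(2));  faint: ~0.16 -> ~0.32 (factor ~2).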


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 

Thanks @ralph, great explanation. I do try to simplify things but perhaps I shouldn't ignore shot noise. I'm going to do 600s captures next time and see how the noise stacks up (heheh). I have a relatively dark site ~Bortle 3.5, and the background sky is very low in the histogram - takes about 600-900s or more to get it off the floor.


   
(@ralph)
Neutron Star
Joined: 5 years ago
Posts: 78
 

For any decent looking image you're going to be shot noise limited. Period. So don't ignore it if you want to understand noise 😉 .

Realising that shot noise is (with a few exceptions perhaps) your biggest noise source, it also makes sense to just collect as many photons as possible. Bigger aperture, longer exposure time. Gain settings and other tweaks are not going to affect your shot noise.

But do realise that shot noise is the noise caused by the entire (electron) signal, not just the useful part of it. So if you have much more light pollution than useful signal, the shot noise of the light pollution is going to swamp the shot noise of your useful signal. And in those cases the best noise reduction is just finding a way to lower the light pollution, e.g. by going narrow band or by driving to a darker place.
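A quick illustration with made-up rates shows how strongly the sky term can dominate a faint target (read noise barely matters here):

import numpy as np

read_noise = 3.0          # e- per sub
target = 0.05             # e-/s from the faint nebulosity
t = 600.0                 # sub length (s)

for site, sky in (("dark site", 0.2), ("light polluted", 5.0)):   # sky flux in e-/s
    signal = target * t
    noise = np.sqrt((target + sky) * t + read_noise**2)   # shot noise of everything + read noise
    print(f"{site:15s} SNR per sub: {signal / noise:.2f}")
# Same target, same exposure: only the sky background changed.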


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @astrogee

Hi @mabula-admin, so good of you to respond to this discussion.

If I can comment and ask a couple questions:

To explain my use of "good" SNR: from the perspective of an amateur astrophotographer, I think the purpose is to create good visual images. In this respect there is such a thing as a good SNR, which I found to be declared as 40:1 by the camera engineering community. So the desire is to get this 40:1 SNR in your target image, say M31, from the bright core relative to the background sky. But of course, being astrophotographers we want the faint parts to also be represented to a "good" quality - because most targets are quite faint! This requires another 40:1 of SNR. So we need at least log2(1600) ≈ 11 EV stops of SNR (or dynamic range in our imaging chain) - or an approximate noise level of 1/1600 = 6.25E-04, which is about what I see in APP for my last image. (So now I get the noise part 🙂)

About SNR: In APP I see for example, SNR-1 = 4.4975E+00. So are you saying this SNR is relative to background sky, not imaging noise?

Thanks!

PS: Is there a typo? Shouldn't twice the exposure time give twice the SNR?

Hi @rfe3, @astrogee, @ralph, @mestutters, @vincent-mod,

To clarify: APP counts signal relative to the measured sky background level of the entire field of view. So if you measure SNR in an image with a strong gradient... you will never be able to get a reliable SNR value for the entire image unless you first correct for the light pollution/gradient 😉 This is something I need to add in APP for sure: a tool that enables you to calculate the noise and SNR on integrations that are corrected for light pollution and gradients and also Star Color Calibrated (which will change things as well with regard to signal), and possibly only on a selected part of the field of view.

Proposal: I would like to make a tool for this, just like the star map tool, as an image viewer mode. So you load an image, and set the image viewer to noise/SNR analysis. The tool would then show a MultiResolution Support noise result, to show which pixels were used for the noise calculation. And the tool will show SNR and noise and would allow you to also perform it on a crop of the field of view quickly. Does this sound like a good tool to implement? I personally think it will be very useful for noise and SNR analysis and understanding.

Mabula


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @ralph

Thanks for the explanation Mabula!

A few words (for astrogee and mestutters) about "generic" SNR and how it scales. Very simply put, noise is composed of read-out noise of the detector and shot noise of the recorded signal (in electrons). Read-out noise is what you get when you read out the detector, for today's good detectors it is on the order of a few electrons, but can be more depending on the gain setting and type of detector. Shot noise is very simply the square root of the signal in electrons (after subtraction of bias signal but before subtraction of dark signal, light pollution, etc.).

Noise adds in square, so adding noise_A and noise_B together is done by taking the square root of (noise_A^2 + noise_B^2). This applies irrespective of where the noise comes from, as long as the noise terms are not correlated. That's where the factor square root of 2 comes from.

Now, how does noise scale with exposure time?

Doubling the exposure time doubles the recorded signal (including dark signal, sky background, and of course the useful signal, but bias signal is not doubled). If you compare, say, a 300s exposure versus a 600s exposure, both are read-out only once each, so the read-out noise is identical for both exposures.

Now we can have two regimes for the SNR: read-out noise dominated and shot noise dominated. In the shot noise dominated regime, the read-out noise is negligible (e.g. 1e6 recorded electrons, thus 1000 electrons shot noise, versus e.g. 3 electrons read-out noise). The combined noise in this case is 1000.0045 electrons, so basically just the 1000 electrons of shot noise. Doubling the exposure time doubles the signal, thus 2e6 electrons, and a combined noise of 1414.217 electrons, a factor sqrt(2) bigger. The SNR in the 300s exposure is (let's assume all electrons are useful signal) 1e6/1000.0045 = 1000 (rounded off), and for 600s we get 2e6/1414.217 = 1414.2, or a factor of 1.4142 bigger, which happens to be the square root of 2.

Now for the read-out noise dominated regime things are totally different. Imagine we have an extremely dark location, narrow band filters, a great cooled camera with negligible dark current, and we're imaging the blackest Bok globule we can find. And we have on average 0.5 electrons signal in the darkest pixels of the Bok globule in our 300s exposure. That means that the combined noise is sqrt(0.5 + 3^2) = 3.082, very close to the 3 electrons read-out noise. The SNR is 0.5/3.082 = 0.162. Now we double our exposure time, we get 1 electron signal, the combined noise is sqrt(1 + 3^2) = 3.162, and the SNR is 0.316. And here we see that the SNR almost doubled when we doubled the exposure time, as opposed to only a factor sqrt(2) increase for the shot noise dominated regime.

So to summarise: noise adds in square, and SNR increases by the square root of the exposure time in shot noise limited cases (SNR >> 1) and increases linearly with exposure time in read-out noise limited cases (SNR << 1).

(and then there are a million other noise sources to consider, but none really as dominant as these two)

Excellent @ralph, thank you 🙂 ! Very well explained, I have nothing to add here... 😉

 

 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @ralph

For any decent looking image you're going to be shot noise limited. Period. So don't ignore it if you want to understand noise 😉 .

Realising that shot noise is (with a few exceptions perhaps) your biggest noise source, it also makes sense to just collect as many photons as possible. Bigger aperture, longer exposure time. Gain settings and other tweaks are not going to affect your shot noise.

But do realise that shot noise is the noise caused by the entire (electron) signal, not just the useful part of it. So if you have much more light pollution than useful signal, the shot noise of the light pollution is going to swamp the shot noise of your useful signal. And in those cases the best noise reduction is just finding a way to lower the light pollution, e.g. by going narrow band or by driving to a darker place.

Hi @rfe3, @astrogee, @ralph, @mestutters, @vincent-mod,

Indeed! That is the reason why it is hard to detect faint signals with a lot of light pollution. You simply need to expose much longer for the faint signal to overcome the shot noise of the sky background. So on a severely light-polluted observing site, you will need a much longer total exposure time to get as deep as you might get on a darker observing site, and some details will still be buried in the shot noise of the light pollution and thus will not reveal themselves...

Mabula


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Yes, thanks @ralph, this is starting to click for me as well. So shot noise is apparently just something that happens with a given electrical signal? Do you happen to know why as well - maybe related to increasing magnetic fields on the sensor? I love the exact physics behind these things, but have no education in it, so bear with me. 😉


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: @vincent-mod

Yes, thanks @ralph, this is starting to click for me as well. So shot noise is apparently just something that happens with a given electrical signal? Do you happen to know why as well - maybe related to increasing magnetic fields on the sensor? I love the exact physics behind these things, but have no education in it, so bear with me. 😉

@vincent-mod, shot noise of the sky background is the natural stochastic noise of nature: photons don't arrive at regular intervals, nature has noise as well 😉. You need Poisson statistics to deal with that, and that means that the shot noise of the sky background will be the square root of the sky background level.

So even with the perfect camera without read noise and without dark signal, you still have to deal with shot noise of mother nature! And thus you still need more exposure time to reduce that noise contribution...
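This is easy to see in a one-off simulation: draw Poisson counts around a mean and the scatter comes out as the square root of that mean.

import numpy as np

rng = np.random.default_rng(0)
for mean_counts in (100, 10_000):
    counts = rng.poisson(mean_counts, size=1_000_000)
    print(f"mean {counts.mean():9.1f}  std {counts.std():7.1f}  "
          f"sqrt(mean) {np.sqrt(mean_counts):7.1f}")
# std matches sqrt(mean): ~10 for 100 counts, ~100 for 10,000 counts.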

Mabula

 


   
(@rfe3)
Brown Dwarf
Joined: 4 years ago
Posts: 5
Topic starter  

Wow... this has turned into a very enlightening thread... it still kind of beats around the bush of my initial question: how can I use the Signal & Noise numbers in APP to judge the quality and quantity of Signal vs. Noise in a single image? Do I want the widest deviation between Signal and Noise? Do I want a Signal ratio above X? Noise ratio below X?

Or is this the Key?

Posted by: @mabula-admin

HDU1 - NOTE-2 = 'NR = Noise Reduction'
HDU1 - NOTE-3 = 'medNR = noise in median frame / noise in integration'
HDU1 - NOTE-4 = 'refNR = noise in reference frame / noise in integration'
HDU1 - NOTE-5 = 'ideal noise reduction = square root of number of frames'
HDU1 - NOTE-6 = 'the realized/ideal noise reduction ratio should approach 1 ideally'
HDU1 - NOTE-7 = 'the effective noise reduction has a correction for'
HDU1 - NOTE-8 = 'dispersion change between the frame and the integration'
HDU1 - NOTE-9 = 'because dispersion and noise are correlated'

Posted by: @mabula-admin

HDU1 - ratNR-1 = ' 5,4576E-01' / realized/ideal noise reduction ratio, channel 1
HDU1 - ratNR-2 = ' 5,8171E-01' / realized/ideal noise reduction ratio, channel 2
HDU1 - ratNR-3 = ' 7,2569E-01' / realized/ideal noise reduction ratio, channel 3

 

@mestutters:

That SkyTools prog. looks really interesting. I'm also basically just trying to "Simplify" (as if anything in this Hobby is simple...) my shooting regime. I'm trying to narrow my ISOs to maybe 2 (Invariance Point and 1 other) and a couple of Exposure durations... I'm trying to find the point where I get the most "Bang for my Buck"... or Euro...

I finally got a few hours of Cloudless...ness... (if that's a real word) last night and the night before and was able to test my new Dual Rig... not too bad. On both nights I took a series of single images at the same ISO and different exposures to examine their values, and also a set of two images to stack using the same methods as before, to compare the results. I was really surprised at how clean the image from the 1000D-Mono was at ISO 200 and 1800 sec. With no calibration frames it was pretty good (I think). As soon as I get them all processed I'll post a screenshot (this goes back to my original premise of shooting at the ISO-invariant point of my cameras).

Thanks everyone for chiming in on this topic... I've learned lots of things that I didn't expect to... Now if I can only figure out how to use APP properly...

 

Rick           


   
(@mestutters)
Neutron Star
Joined: 7 years ago
Posts: 167
 

Hi,

Mabula suggested:

Proposal: I would like to make a tool for this, just like the star map tool, as an image viewer mode. So you load an image, and set the image viewer to noise/SNR analysis. The tool would then show a MultiResolution Support noise result, to show which pixels were used for the noise calculation. And the tool will show SNR and noise and would allow you to also perform it on a crop of the field of view quickly. Does this sound like a good tool to implement? I personally think it will be very useful for noise and SNR analysis and understanding.

-----------------------------------------------------------------------------------------------------------------

I think this would be a great idea, so a 'yes please' from me especially if it is not a too time consuming task given the other feature requests and ideas for improvement you have in mind.

 

I've read in the past a few texts on causes of noise and guidance for noise reduction and fully appreciate that it is a complicated and multidimensional subject.  In any quiz on the subject I doubt I would score more than 2/10!

I think I now have a slightly better  grasp of how noise is identified and the metrics calculated in APP though the mathematics beyond the very basics is way beyond me. 

I was curious to know why the SNR stats APP was displaying for my integrated images were so low despite having several tens of hours of exposure time. I now appreciate that my widefield shots of winter galaxies contain a lot of background and thus, overall, generate little signal above the substantial sky background.

I am still curious to understand better:

a) If/how APP uses the SNR and noise stats that it displays more than simply for ranking and weighting subs at the integration stage.

b) How, if at all, APP users might practically use the stats that APP reports - one of Rick's original questions.

I've enjoyed the discussion.  It has certainly made me think.

Cheers to noise reduction

Mike


   
(@astrogee)
Neutron Star
Joined: 6 years ago
Posts: 153
 
Posted by: @mabula-admin

Proposal: I would like to make a tool for this, just like the star map tool, as an image viewer mode. So you load an image, and set the image viewer to noise/SNR analysis. The tool would then show a MultiResolution Support noise result, to show which pixels were used for the noise calculation. And the tool will show SNR and noise and would allow you to also perform it on a crop of the field of view quickly. Does this sound like a good tool to implement? I personally think it will be very useful for noise and SNR analysis and understanding.

 

Yes, absolutely. I think it would be very good for understanding too. At the moment I'm at the point-and-shoot level of using APP but now as I've got most of my rig issues resolved, I'll be diving more into the processing. This subject piqued my curiosity in that respect 🙂


   
(@ralph)
Neutron Star
Joined: 5 years ago
Posts: 78
 

+1 on the noisemap tool!

@mabula-admin, if you need more inspiration on what to include there feel free to poke me! 😉


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

@mabula-admin Thanks! Yes, quantum fluctuations will always rule out zero noise anyway. 😉


   
(@ralph)
Neutron Star
Joined: 5 years ago
Posts: 78
 

Nothing to do with quantum fluctuations, just simple basic statistics.

Imagine an infinitely large bowl of marbles, 50% black, 50% white, all well mixed. The 50% can be compared with the "true" light flux we want to measure with our detector. Now grab a handful of marbles and calculate the "measured" black:white ratio. This will of course have an uncertainty coupled to the finite number of marbles you picked. The more marbles you pick and count, the better your determination of the ratio will be.

Same thing with counting photons (or electrons, for that matter), with the slight difference that we only count the white marbles and there are no black ones.
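The marble experiment is also easy to run in code: the measured black fraction scatters around the true 50%, and the scatter shrinks roughly as 1/sqrt(N) as the handful grows.

import numpy as np

rng = np.random.default_rng(7)
for handful in (10, 100, 10_000):
    # 10,000 repeated "grabs" of `handful` marbles, each marble black with p = 0.5
    black = rng.binomial(handful, 0.5, size=10_000)
    fractions = black / handful
    print(f"N = {handful:6d}: black fraction {fractions.mean():.3f} "
          f"+/- {fractions.std():.3f}")
# +/- 0.158, 0.050 and 0.005: the uncertainty falls roughly as 1/sqrt(N).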


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Yes, I know this wasn't fluctuations; that was just a way of saying noise will always be around. 😉

Love your explanation! It does really make it clear for me, thank you!


   