I am processing images from a QHY 600M camera, using darks, flats and bias frames as usual. This is one of the first times I have used the camera. As part of the calibration, APP makes a bad pixel map and I happened to click on it (BPM file in FITS format), and I get a HUGE number of dots on the screen.
I am not sure this shows very well, but here is a screen shot of a small portion of the frame.
It is a black background filled with green dots (looks like the Matrix!). The green dots are how FITS Liberator shows clipping to white, so I presume these are the bad pixels for this file??
The dots show some strong X, Y correlations.
Note that the file is too big to attach (60 MB). I could in principle crop it and post a portion of it.
At any rate, if the green dots are bad pixels, then there are hundreds of thousands to millions of bad pixels. Which seems very odd indeed.
It seems to me that there are three possibilities:
1. My camera has a terrible sensor and there are lots of genuinely bad pixels. It's a new camera that has hardly been used so that would be very strange, but not impossible, I guess.
2. APP is detecting too many bad pixels, perhaps classifying pixels as bad that aren't really bad. The most likely cause would be that I have some setting wrong; it may not be APP's fault.
3. The green dots should not be interpreted as bad pixels, and really mean something else. This seems very unlikely to me, but for completeness' sake I will mention it. Why call it the "bad pixel map" if it isn't what the name implies?
Does anybody know if this is a normal thing to find?
I would expect a bad pixel to be one that gives a fixed output regardless of the exposure or scene, but APP might define the bad pixels differently.
Nathan
I looked in more detail. The APP documentation says this:
The values in the bad pixel map can take on 3 values:
- 0 - cold pixel
- 127 - linear pixel (so correct)
- 255- hot pixel
Furthermore, you can find statistics on the amount of hot and cold pixels in the Fits Header.
So, I looked in the FITS header of the BPM and I found this
NPIX = 61171488 / raw number of pixels
HOTKAPPA= '3.00 ' / kappa value used for hot pixel determination
COLDFRAC= '0.10 ' / percentage used for cold pixel determination
NBADPIX = 5107716 / number of bad pixels
PBADPIX = '8.350 ' / percentage of bad pixels
NHOTPIX = 5107716 / number of hot pixels
PHOTPIX = '8.350 ' / percentage of hot pixels
NCOLDPIX= 0 / number of cold pixels
PCOLDPIX= '0.000 ' / percentage of cold pixels
NLINPIX = 56063772 / number of linear pixels
PLINPIX = '91.650 ' / percentage of linear pixels
NUMDARKS= 1568 / # of dark frames used in creation of the BPM
NUMFLATS= 0 / # of flat frames used in creation of the BPM
So, it does seem that APP thinks I have 5.1 million hot pixels.
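For anyone who wants to check their own map, the header counts can be reproduced directly from the BPM pixel values (0 / 127 / 255). A minimal sketch in Python with numpy, using a tiny synthetic array in place of the real 60 MB FITS image (which you would load with something like astropy.io.fits):

```python
import numpy as np

# Tiny synthetic stand-in for the real BPM image.
bpm = np.array([
    [127, 127, 255, 127],
    [  0, 127, 127, 127],
    [127, 255, 127, 127],
], dtype=np.uint8)

npix  = bpm.size                           # NPIX
nhot  = int(np.count_nonzero(bpm == 255))  # NHOTPIX
ncold = int(np.count_nonzero(bpm == 0))    # NCOLDPIX
nlin  = int(np.count_nonzero(bpm == 127))  # NLINPIX
nbad  = nhot + ncold                       # NBADPIX

print(npix, nbad, nhot, ncold, nlin)       # 12 3 2 1 9
```

On a real BPM the printed counts should match the NPIX, NBADPIX, NHOTPIX, NCOLDPIX and NLINPIX keywords in the header.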
The most interesting part of this is NUMFLATS = 0
I don't know how APP determines hot pixels, but I don't think it would list the number of flats unless it uses flats.
I am mystified as to why it wouldn't be using flats since I had flats as part of the calibration and it seemed that it did do normal flats processing. But somehow it seems flats were left out of the bad pixel processing.
Nathan
The bad pixel map above is from a session where I had darks at 8 different exposure times, so it says it used ~1500 dark files. That seems very strange to me because I thought that APP looked at the exposure and grouped images by exposure when making the master darks. It seems that is not true, or the bad pixel map is made across the entire session.
Here is the FITS header of the BPM for a single exposure time with 190 darks and 98 flats.
NPIX = 61651200 / raw number of pixels
HOTKAPPA= '3.00 ' / kappa value used for hot pixel determination
COLDFRAC= '0.10 ' / percentage used for cold pixel determination
NBADPIX = 2293266 / number of bad pixels
PBADPIX = '3.720 ' / percentage of bad pixels
NHOTPIX = 1820351 / number of hot pixels
PHOTPIX = '2.953 ' / percentage of hot pixels
NCOLDPIX= 472915 / number of cold pixels
PCOLDPIX= '0.767 ' / percentage of cold pixels
NLINPIX = 59357934 / number of linear pixels
PLINPIX = '96.280 ' / percentage of linear pixels
NUMDARKS= 190 / # of dark frames used in creation of the BPM
NUMFLATS= 98 / # of flat frames used in creation of the BPM
This is better than the one above, but there are still 1.8 million hot pixels according to this.
My conclusion is that APP classifies what seems intuitively to be a very excessive number of pixels as hot. Maybe there is a good reason for classifying 3% as hot, but it seems weird to me.
Also, note that the cold pixels above (472,915 of them!) seem to all be in the overscan area, so the way that APP classifies cold pixels does something odd with the overscan.
Nathan
CCD sensors are known to be quite high in noise and hot pixel counts, so 3% could actually be the case (I'll check with Mabula though). A BPM will, however, not be destructive to your data. A BPM can also be created by using 1 very noisy dark (long exposure, higher than normal temperature), so you could also give that a try to compare.
The QHY 600M is a recent CMOS sensor using the Sony IMX 455 chip. No, it cannot possibly have 3% hot pixels as a normal condition - that number would mean there is something deeply wrong with it.
I checked several dark frames. Short exposures (8 seconds) typically had ONE pixel that was saturated. Longer exposures (160 seconds) had about 246 saturated pixels. Across the 5 frames I checked, it was the same pixels. So these appear to be cases where the fixed pattern noise, plus the dark current noise (which is typically linear in exposure time), is enough to saturate the pixel.
246 out of 61 million pixels would be a rate of 0.0004%. A longer exposure would have higher dark current noise and one would expect to find more.
That is assuming that a "hot" pixel is one that is white (or otherwise very high value) even when there is no light.
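That kind of check is easy to script. A sketch in Python with numpy, using a synthetic 16-bit dark in place of a real frame (the 246 is just the count from my 160 s darks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 16-bit dark: low-level noise plus a fixed set of pixels
# stuck at the saturation value (65535), standing in for the ~246
# saturated pixels seen in the real 160 s darks.
dark = rng.normal(500, 20, size=(1000, 1000)).astype(np.uint16)
stuck = rng.choice(dark.size, size=246, replace=False)
dark.flat[stuck] = 65535

saturated = int(np.count_nonzero(dark == 65535))
print(saturated, f"{100 * saturated / dark.size:.4f}%")
```

On a real frame you would replace the synthetic array with the loaded FITS data; the definition of "saturated" may need adjusting if the camera's white level is below 65535.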
-------------
The problem seems to be that APP takes the view that on a dark frame the highest 3% of pixels, regardless of their actual values, ought to be classified as "hot".
It seems a little odd to me to call those "hot" pixels. You might call them the highest 3% of dark current and fixed pattern noise, but they are not actually hot pixels in the sense of being stuck at a high value.
Why such a large % ?
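For what it's worth, here is what I would guess HOTKAPPA = 3.00 means (this is my assumption about the algorithm, not anything from APP's documentation or source): flag any pixel whose master-dark value is above mean + kappa * sigma. On purely Gaussian noise a 3-sigma cut only catches about 0.13% of pixels, nowhere near 3%, so the dark statistics would need a very heavy high tail to reach that number:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure Gaussian "master dark" with no dark-current tail.
master_dark = rng.normal(500.0, 20.0, size=(2000, 2000))

kappa = 3.0
threshold = master_dark.mean() + kappa * master_dark.std()
hot_fraction = np.count_nonzero(master_dark > threshold) / master_dark.size

print(f"{100 * hot_fraction:.3f}%")   # ~0.135%, far below 3%
```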
-------------
If the "bad" pixel map is actually grabbing the worst 3% of fixed pattern noise, then that is how it should be described. But most imagers use dithering of the image to get rid of fixed pattern noise, and for that matter, dithering can also get rid of hot pixels (in combination with outlier removal during stacking).
-------------
Also, I don't understand what you mean by "A BPM will, however, not be destructive to your data". From what I understand, that is exactly what happens: the data from these 3% of "bad" pixels is discarded and replaced with a value interpolated from neighboring pixels.
Assuming this is correct, then yes it is destructive to 3% or so of the picture. They are discarded and do not appear in the calibrated images (assuming of course that APP does what its documentation says that it does).
If those pixels are actually bad, then it is helpful to discard them and replace them with the interpolation!
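For concreteness, here is the kind of replacement I understand this to be (my own sketch of neighbor interpolation, not APP's actual code): the flagged pixel's value is discarded and rebuilt from the median of its neighbors.

```python
import numpy as np

frame = np.array([
    [10., 12., 11., 13.],
    [11., 99., 12., 10.],   # 99. plays the part of a flagged hot pixel
    [12., 11., 10., 12.],
])

bad = frame == 99.
for yi, xi in zip(*np.nonzero(bad)):
    # 3x3 window around the bad pixel, excluding flagged values.
    window = frame[max(yi - 1, 0):yi + 2, max(xi - 1, 0):xi + 2]
    frame[yi, xi] = np.median(window[window != 99.])

print(frame[1, 1])   # 11.0 -- the original 99 is gone
```

In a real pipeline the mask would come from the BPM (value 255 for hot, 0 for cold) rather than from a magic pixel value.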
Look at the green dots in the screen shot in the original post. If that many pixels were hot - which usually means they are stuck at white - then the image would look terrible.
But if they are not actually bad then throwing them out is throwing out a lot of the image. That might be what is happening.
Nathan
Hi @nathanm,
The creation of the Bad Pixel Map (BPM) will use all the darks from all sessions, as you experienced. This is normal. In your case, I can only assume that the shorter exposure darks are somehow giving this very high count in that BPM. The best approach is to create one proper BPM for all your sessions using darks with rather long exposures, preferably uncooled. That way the bad pixels are very well detected from those darks (of course, those darks are only used for the BPM statistics). Why that first BPM did not use your flats is odd; they should have been taken into account.
Non-linear pixels are pixels that by their nature do not respond linearly to incoming photons. To be clear, not all hot pixels will show as hot in every one of your single darks, but this does not mean that they are good pixels. If the ADU level in such a pixel is too high too often across many darks, it will not be a well-behaving pixel for sure.
3% of hot pixels on any sensor is very normal in our experience. And this holds for both CCD and CMOS sensors, old and new. For older sensors, you should not be surprised to have much more than 3% of hot pixels especially on CCD sensors with long service.
With regard to where APP sees the hot pixels, simply check visually by loading some of your long exposure darks and the BPM in APP's image viewer. That should remove all confusion; you will see the bad pixels yourself if you load several of the darks that created the BPM. (Ideally, load long exposure darks of 10-15 minutes for a good BPM.)
Hope this clarifies it?
Mabula
Thanks for the reply. I seem to have posted above simultaneously with your reply. Some points.
- I thought that APP made a master dark per exposure, and matched them to the images with the same exposure. Is that not how it works? I thought that was the whole point of taking darks at the same exposures as the lights. Is that wrong?
- As per a post above I did redo the master dark with only one exposure. It still seems to be finding the top 3% of dark pixels and calling them hot.
- In what I have checked it seems to be the long exposure darks that have more bad pixels not the short ones.
- I don't understand how darks all taken at the same exposure could tell you which pixels are not linear. Sorry if I am being dense, but wouldn't you need to compare images taken at different exposures to determine linearity?
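To illustrate that last point (my own sketch, not APP's method): with frames at several exposure times you can fit ADU vs exposure per pixel and use the fit residual as a linearity measure, which a set of darks all at one exposure cannot give you.

```python
import numpy as np

exposures = np.array([1.0, 2.0, 4.0, 8.0])          # seconds (made up)

# Two example pixels: one linear, one that clips early at 250 ADU.
linear_pixel = 100.0 * exposures
clipped_pixel = np.minimum(100.0 * exposures, 250.0)

def nonlinearity(values, t):
    """RMS residual of a least-squares line through the origin."""
    slope = (values * t).sum() / (t * t).sum()
    return float(np.sqrt(np.mean((values - slope * t) ** 2)))

print(nonlinearity(linear_pixel, exposures))    # ~0: well-behaved
print(nonlinearity(clipped_pixel, exposures))   # large: non-linear
```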
Nathan
I have sent a dark file to QHY to see what they think from a hardware perspective.