[Sticky] DSLR RAW processing: why do the master calibration frames and the light frames have different image dimensions? or How does the raw conversion work?
The images/frames that your DSLR camera delivers are always crops of the full sensor area. All DSLR manufacturers crop the full sensor readout to present you with the image that you shot, so during raw conversion a number of rows and columns are removed from each border.
This cropping is done at the end of the RAW conversion of the data. The calibration frames, however, keep the image dimensions of the entire sensor, and that is why you see this difference.
What happens during calibration of a light frame, by example:
the light frame is reported as having image dimensions of 5184x3456, while the calibration frames have dimensions of 5344x3516
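To put numbers on the crop, a quick sketch (the totals follow from the example dimensions; how those totals are split across the four borders is camera-specific and not shown here):

```python
# Dimensions from the example above.
sensor_w, sensor_h = 5344, 3516   # full sensor (calibration frames)
image_w, image_h = 5184, 3456     # delivered light frame

cols_removed = sensor_w - image_w
rows_removed = sensor_h - image_h
print(cols_removed, rows_removed)  # 160 columns and 60 rows removed in total
```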
- your light frames are read in the raw converter, which reads the entire sensor, so the raw image raster of your light frame covers the entire sensor: 5344x3516
- this data is linear monochrome CFA (Color Filter Array) data, still undebayered.
- the calibration CFA frames are then applied in the right order to calibrate the light frame. So your light frame's raw raster data with dimensions 5344x3516 is calibrated with 5344x3516 calibration frames; calibration is done on the entire sensor.
- after all calibration frames have been applied, your data will automatically be debayered according to the settings in 0) RAW/FITS
- finally, the fixed sensor crop is performed on your light frames. Some data is removed from all 4 borders, which leaves you with the 5184x3456 image dimensions for your light frames.
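The steps above can be sketched in Python with NumPy. This is an illustrative toy, not APP's actual code: the array sizes are scaled down (standing in for the real 3516x5344 sensor and 3456x5184 image), the nearest-neighbour debayer and the centered crop are assumptions made just for this example.

```python
import numpy as np

# Toy dimensions standing in for the real ones from the example above
# (full sensor 3516x5344 would crop to the delivered 3456x5184).
SENSOR_SHAPE = (12, 16)   # rows, columns of the full sensor
CROP_SHAPE = (8, 12)      # rows, columns of the delivered image

def calibrate(light, master_dark, master_flat):
    """Step 1: calibrate the full-sensor CFA data (dark subtraction,
    then division by the master flat). All arrays have SENSOR_SHAPE."""
    return (light - master_dark) / master_flat

def debayer_nearest(cfa):
    """Step 2: toy nearest-neighbour debayer for an RGGB pattern.
    Real raw converters interpolate much more carefully."""
    r = cfa[0::2, 0::2]
    g = (cfa[0::2, 1::2] + cfa[1::2, 0::2]) / 2.0
    b = cfa[1::2, 1::2]
    half = np.dstack([r, g, b])                      # half-resolution RGB
    return half.repeat(2, axis=0).repeat(2, axis=1)  # back to full size

def sensor_crop(img, out_shape=CROP_SHAPE):
    """Step 3: the fixed sensor crop. The per-border split is
    camera-specific; a centered crop is assumed here."""
    top = (img.shape[0] - out_shape[0]) // 2
    left = (img.shape[1] - out_shape[1]) // 2
    return img[top:top + out_shape[0], left:left + out_shape[1]]

rng = np.random.default_rng(0)
light = rng.uniform(100.0, 1000.0, SENSOR_SHAPE)
dark = np.full(SENSOR_SHAPE, 50.0)
flat = np.full(SENSOR_SHAPE, 1.0)

# Order of operations: calibrate on the full sensor, then debayer, then crop.
cal = calibrate(light, dark, flat)   # still full sensor: (12, 16)
rgb = debayer_nearest(cal)           # full-sensor RGB: (12, 16, 3)
out = sensor_crop(rgb)               # delivered size: (8, 12, 3)
print(cal.shape, rgb.shape, out.shape)
```

The point of the sketch is the ordering: the crop happens last, so dark subtraction and flat division always operate on arrays of identical, full-sensor dimensions.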
So this means you have a 100% guarantee that this is done correctly: it will not cause trouble with dark subtraction or division by flat frames, because APP calibrates your data inside the raw converter, at the right moment. Other applications that use DCRAW (almost all other programs) simply can't do this, since they only receive the data after raw conversion and cropping.
Main developer of Astro Pixel Processor and owner of Aries Productions