[Sticky] Does Astro Pixel Processor apply the color matrix correction to digital camera raw data?


(@rudypohlgmail-com)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 26, 2017 19:21  

Hi Mabula:

Someone in my astro forum asked today whether "Astro Pixel Processor applies the color matrix correction to digital camera raw data". Honestly, I'm not even sure what this means, but I thought I would ask you.

Thanks,
Rudy

Note by Mabula: upgraded this excellent question to a Sticky.


(@jeroenm)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 27, 2017 14:44  

Hi Rudy,

sure it does. Just place a tick at "Force CFA" in tab 0 (zero) "RAW/FITS". APP will then assume that all frames have a CFA (color filter array, i.e. a Bayer matrix).

If I'm not mistaken, APP will apply the CFA interpolation just before the integration of the light frames.

Cheers,

Jeroen


(@rudypohlgmail-com)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 27, 2017 15:02  

Hi Jeroen,

Thanks so much for your input, this certainly is good news.

However, due to the importance of this question for some potential new users in our astro forum, I need to also hear Mabula's answer on this; I'm sure you understand. Thanks very much once again, and I'm sure we'll be touching base from time to time in this forum.

Best regards,
Rudy 


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 27, 2017 22:40  

Hi Rudy & Jeroen,

This is a very good and interesting question to say the least 😉

First of all, the question:

Does Astro Pixel Processor apply the color matrix correction to digital camera raw data? 

probably doesn't refer to the debayering process, I think. If it does, then Jeroen's answer is correct.

The debayering in APP is done automatically after the data calibration, so before 3) analyse stars. The data needs to be debayered for star analysis and the registration of the data. Without debayering, the star location calculations will suffer severely... and thus registration will suffer severely.

But as I mentioned, the color matrix correction probably doesn't refer to the debayering. If it doesn't, then I would first ask the asker of the question which color matrix correction he means.

If raw DSLR data is converted using a raw processor like Adobe Camera Raw, then several color matrix corrections will actually be applied.

Let me explain what normally happens when the raw data is converted using any raw converter meant for normal photography purposes.

  1. the sensor data is first multiplied with a 3-factor white balance setting and, in some cases, the black point is adjusted as well. The sensor data becomes camera data.
  2. the next step is debayering.
  3. then the first color correction matrix is applied: the camera data is multiplied with a 3x3, camera-specific(!) matrix to convert it to the neutral XYZ colorspace. The camera data becomes color-neutral data.
  4. Then the color-neutral data is again multiplied with a 3x3 colorspace matrix; this can be the well-known XYZ -> sRGB or XYZ -> Adobe 1998 matrix, for instance.
  5. Lastly, these sRGB (or Adobe 1998) pixels are corrected for the way our eyes and brain perceive incoming light/photons: the data undergoes a non-linear gamma conversion (with a gamma of about 2.2).

After these steps, your raw sensor data has been converted to the normal photographic image it is supposed to be.
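
Sketched in C for a single (already debayered) pixel, steps 1 and 3-5 could look roughly like this - the white-balance factors and the camera -> XYZ matrix are placeholders, not values for any real camera; only the XYZ -> sRGB matrix is the standard D65 one:

#include <math.h>

/* Hypothetical per-camera constants: real values come from a camera
   database; these are placeholders for illustration only. */
static const double wb[3] = { 2.0, 1.0, 1.5 };   /* step 1: white balance factors */
static const double cam2xyz[3][3] = {            /* step 3: camera -> XYZ (placeholder) */
    { 0.6, 0.3, 0.1 },
    { 0.2, 0.7, 0.1 },
    { 0.0, 0.1, 0.9 } };
static const double xyz2srgb[3][3] = {           /* step 4: standard XYZ -> sRGB (D65) */
    {  3.2406, -1.5372, -0.4986 },
    { -0.9689,  1.8758,  0.0415 },
    {  0.0557, -0.2040,  1.0570 } };

static void mat3_apply(const double m[3][3], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0]*in[0] + m[i][1]*in[1] + m[i][2]*in[2];
}

/* rgb: debayered sensor values normalised to [0,1]; result: display-ready sRGB */
void raw_to_srgb(const double rgb[3], double result[3])
{
    double cam[3], xyz[3], lin[3];
    for (int i = 0; i < 3; i++)                /* step 1: white balance */
        cam[i] = rgb[i] * wb[i];
    /* step 2 (debayering) is assumed to have happened already */
    mat3_apply(cam2xyz, cam, xyz);             /* step 3: camera -> XYZ */
    mat3_apply(xyz2srgb, xyz, lin);            /* step 4: XYZ -> linear sRGB */
    for (int i = 0; i < 3; i++)                /* step 5: gamma ~2.2 encoding */
        result[i] = pow(fmax(lin[i], 0.0), 1.0 / 2.2);
}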

Some remarks:

  • each of the steps 3-5 makes the data unsuitable for accurate astronomical processing. The data becomes non-linear, and noise from the separate RGB channels is injected into the other channels by the two 3x3 matrices involved.
  • applying camera white balance to linear data without steps 3-5 really has no sensible value; white balance needs both 3x3 matrix conversions and the gamma conversion as well to work the way it is meant to. (The gamma conversion influences the perceived colours too. For instance, monitor calibration is immediately off if you adjust the brightness/luminance of your monitor 😉)

Rudy, can you ask your fellow forum member what exactly he is referring to?

I hope this explains some details of how raw DSLR data works, and why it's essential that we don't apply steps 3-5 for accurate astronomical processing. The 3x3 matrices mix noise and signal. The gamma conversion is non-linear. And all of these steps will cause processing steps like:

  • data calibration,
  • data normalisation,
  • average/median integration,
  • outlier rejection during integration,
  • light pollution/gradient removal,
  • background calibration,
  • star color calibration

to work suboptimally; all of these steps assume that the data is linear.

A simple example: if we perform an average integration (essentially summing everything and dividing by the count) on non-linear data, we really can't calculate a correct average based on the amount of photons, and thus signal, that the sensor received. That information is lost in the non-linear conversions. The resulting average of non-linear data will not equal the stretched average of the linear data.
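
To make that concrete, a small numeric check in C (the pixel values are arbitrary):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* two linear pixel values (arbitrary) */
    double a = 0.01, b = 0.81;

    /* stretch the linear average vs. average the stretched values */
    double stretch_of_mean = pow((a + b) / 2.0, 1.0 / 2.2);
    double mean_of_stretch = (pow(a, 1.0 / 2.2) + pow(b, 1.0 / 2.2)) / 2.0;

    /* prints ~0.667 vs ~0.516: the photon-weighted average is lost
       once the data has been made non-linear */
    printf("stretch(mean) = %.3f, mean(stretch) = %.3f\n",
           stretch_of_mean, mean_of_stretch);
    return 0;
}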

Kind regards,

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@rudypohlgmail-com)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 27, 2017 23:08  

Hi Mabula:

Thanks for this very comprehensive response. I will post the link to this thread in our forum and encourage the original asker to come here, read it, and pass on to me whatever further questions or clarifications he might have.

This gentleman has been a professional scientific imaging specialist for many decades. He has a Ph.D. in Planetary Science from MIT, is now a senior scientist at the Planetary Science Institute, and has worked as a science team member and co-investigator on many collaborative research teams over the years, such as the Cassini mission, the Mars Reconnaissance Orbiter, NASA's Europa mission and others. Here's his About page on his website: http://www.clarkvision.com/rnc/index.html

This gentleman has been one of the key contributors for years on our astrophotography forum at DPReview, of which I am a member, and he generously gives his time to help any and all of us who are interested in learning the technical aspects of astrophotography and image processing.

Cheers,
Rudy


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 27, 2017 23:41  

Hi Rudy,

That's fine. I would actually love to discuss this topic with Roger Clark myself ;-), so I can perhaps join your forum to discuss this further, or invite Roger to this forum (I'll need to register Roger on my site then).

Kind regards,

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@rudypohlgmail-com)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 28, 2017 00:06  

Hi Mabula:

I sent Roger a private message at our forum inviting him to come here and read your reply. I'm sure he will be in touch, as he seems to really like discussing these sorts of things, most of which are completely over my head, but I work at them anyway.

Best regards,
Rudy


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 28, 2017 00:54  

Excellent Rudy,

I know Roger is very knowledgeable, so I would love to hear/read his views/opinions on these matters. It might perhaps lead to changes in APP that further improve its results and possibilities 😉 I am always open to new insights/techniques...

Kind regards,

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 28, 2017 14:25  

Hi Rudy,

I have read your topic on dpreview:

https://www.dpreview.com/forums/thread/4207826

And I have been reading part of Mark's (sharkmelley) topic on Cloudy Nights:

https://www.cloudynights.com/topic/529426-dslr-processing-the-missing-matrix/

I know how to do this; in fact, the code is already in APP. APP can show the DSLR raw data in non-linear sRGB or Adobe 1998 (camera-specific matrix -> XYZ -> sRGB/Adobe 1998) using the image viewer's "image" mode setting.

For all supported DSLR cameras, the camera-specific matrices are stored in APP. So I can easily build in the option to apply the conversion to sRGB or Adobe 1998 using the matrix conversion, without applying the gamma conversion that would render the data non-linear.

An example with a normal photograph:

linear image without the matrix conversion:

[screenshot: GUI imageViewerLinear]

and with matrix conversion and gamma 2.2 conversion:

[screenshot: GUI imageViewerImage]

I'll ask Roger and/or Mark for some extra information 😉

Mabula


Main developer of Astro Pixel Processor and owner of Aries Productions


(@sharkmelley)
White Dwarf Customer
Joined: 2 years ago
Posts: 14
September 29, 2017 00:29  

I noticed this conversation taking place, so I thought I would join in 🙂

What Mabula says above is absolutely correct. When processing a normal photo, the bias must be subtracted and the data must be white balanced; the next two operations applied are a camera-specific colour matrix and gamma. In fact, instead of a gamma it is an sRGB tone curve, but that is very similar to a gamma of 2.2. The matrix has an effect on colour and so does the gamma. But the gamma also has an effect on intensity - similar to a "curves" operation. For processing normal photos there is usually also some kind of exposure adjustment, to give the user control over which parts of the image are under-exposed or over-exposed.
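
For reference, the standard sRGB encoding curve (IEC 61966-2-1) next to the plain gamma-2.2 approximation, as a small C sketch:

#include <math.h>

/* the standard sRGB encoding curve: a linear segment near black, then a
   2.4-exponent power law; overall it tracks a plain gamma of ~2.2 */
double srgb_encode(double linear)              /* linear in [0,1] */
{
    if (linear <= 0.0031308)
        return 12.92 * linear;
    return 1.055 * pow(linear, 1.0 / 2.4) - 0.055;
}

/* the plain-gamma approximation */
double gamma22_encode(double linear)
{
    return pow(linear, 1.0 / 2.2);
}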

In the world of astro-processing this colour matrix is usually ignored, unless you let Photoshop (or similar) do the raw conversion. But a Photoshop raw conversion prevents you from using calibration frames - bias, flats and darks. Moreover, those who use a Photoshop-based workflow experience problems with the huge dynamic range in astro-images. That's why Roger has written his rnc-color-stretch software: to stretch the data much more than Photoshop allows whilst retaining star colour. Otherwise, what typically happens during a curves stretch is a loss of colour in the brighter objects - they get bleached to white.

For a long time I have been using a stretch known as the ArcSinh stretch which allows huge amounts of stretching whilst preserving colour because it deliberately preserves the original ratios of R,G&B in each pixel.  However, I found I couldn't match the colour calibrated "Photoshop Colours" because the camera's colour matrix was not being applied.  But you can't apply the colour matrix without the gamma otherwise the colours are still wrong.  But I don't want to apply the gamma because I want total control over the tone curve (the intensity curve).
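
A minimal C sketch of such a colour-preserving arcsinh stretch, in the spirit of Lupton et al. - the softening parameter beta and the use of the channel mean as the reference intensity are assumptions, not details of Mark's actual implementation:

#include <math.h>

/* Colour-preserving arcsinh stretch: stretch one reference intensity and
   scale R, G and B by the same factor, so their ratios (the colour) stay
   fixed. 'beta' controls the strength of the stretch. */
void arcsinh_stretch(double rgb[3], double beta)
{
    double I = (rgb[0] + rgb[1] + rgb[2]) / 3.0;   /* reference intensity */
    if (I <= 0.0)
        return;                                    /* leave background alone */
    double scale = asinh(beta * I) / (beta * I);   /* stretched / original */
    for (int i = 0; i < 3; i++)
        rgb[i] *= scale;
}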

The thread on Cloudy Nights documents this struggle. It's still at an experimental stage, but I'm almost at the point where I can apply the necessary "twist" to the colourspace without also applying the gamma intensity transformation. It isolates the colourspace "twist" that occurs during the combined operation of colour matrix and gamma. So the approach allows an arbitrary "curves" stretch to be applied to the intensity whilst preserving colour from the dimmest to the brightest object, and it also facilitates the colourspace "twist" that would match the colour-calibrated workflow to which Roger Clark often refers. And it allows proper calibration frames to be used.

One final part of the puzzle would be to generate the colour matrix for one-shot colour cameras that don't have a published matrix.  This could be done by imaging a colour chart and then solving for the matrix parameters.
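
One way such a chart fit could work, sketched in C: each row of the 3x3 matrix is a least-squares fit of the reference values against the measured camera values. The data layout and the Cramer's-rule solver are illustrative assumptions:

static double det3(const double m[3][3])
{
    return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}

/* solve the 3x3 system A x = b by Cramer's rule */
static void solve3(const double A[3][3], const double b[3], double x[3])
{
    double d = det3(A);
    for (int c = 0; c < 3; c++) {
        double Ac[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                Ac[i][j] = (j == c) ? b[i] : A[i][j];
        x[c] = det3(Ac) / d;
    }
}

/* cam[k]: measured camera RGB of patch k; ref[k]: known reference RGB.
   Fills matrix so that ref ~= matrix * cam for every patch (least squares). */
void fit_colour_matrix(int n, const double cam[][3], const double ref[][3],
                       double matrix[3][3])
{
    double AtA[3][3] = {{0.0}};
    for (int i = 0; i < 3; i++)                   /* normal equations: cam^T cam */
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < n; k++)
                AtA[i][j] += cam[k][i] * cam[k][j];

    for (int row = 0; row < 3; row++) {           /* one fit per matrix row */
        double Atb[3] = { 0.0, 0.0, 0.0 };
        for (int i = 0; i < 3; i++)
            for (int k = 0; k < n; k++)
                Atb[i] += cam[k][i] * ref[k][row];
        solve3(AtA, Atb, matrix[row]);
    }
}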

Mark


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 29, 2017 11:44  

Hi Mark,

Thank you very much for joining in 😉

Indeed, strictly speaking, the gamma curve is colorspace specific, so the curve for sRGB is slightly different than for Adobe 1998.

In the world of astro-processing this colour matrix is usually ignored, unless you let Photoshop (or similar) do the raw conversion. But a Photoshop raw conversion prevents you from using calibration frames - bias, flats and darks. Moreover, those who use a Photoshop-based workflow experience problems with the huge dynamic range in astro-images. That's why Roger has written his rnc-color-stretch software: to stretch the data much more than Photoshop allows whilst retaining star colour. Otherwise, what typically happens during a curves stretch is a loss of colour in the brighter objects - they get bleached to white.

Indeed, that's clear to me. We need a way to apply the color matrix without applying the gamma curve, keeping the data linear, so the user still has full control over the data.

For a long time I have been using a stretch known as the ArcSinh stretch which allows huge amounts of stretching whilst preserving colour because it deliberately preserves the original ratios of R,G&B in each pixel.  However, I found I couldn't match the colour calibrated "Photoshop Colours" because the camera's colour matrix was not being applied.  But you can't apply the colour matrix without the gamma otherwise the colours are still wrong.  But I don't want to apply the gamma because I want total control over the tone curve (the intensity curve).

Yes, I am aware of arcsinh stretching; I read the original article on it a couple of months ago. I remember that the stretch itself is discussed, as is a method to prevent white star cores. Do you happen to have a link to the original article? I seem to have lost the URL ;-(

What I would propose to implement in APP is the following: since I have written the raw conversion myself (APP doesn't rely on DCRAW), I have complete control. I can simply give the user the option to apply the color matrix conversion to either sRGB or Adobe 1998, through the neutral XYZ colorspace, from the camera/sensor colors (without applying the last step, the gamma curve). This will happen as a final step in image loading. Data calibration would not be affected, because this is done dynamically in the image loaders, immediately after the sensor colors are read from the raw file. Of course, the color matrix won't be applied to the calibration frames, only to the lights. I think this would solve all the problems you discuss.

So, are you writing your own raw converter so you can do this on your own data? Or do you get linear sensor data through dcraw and then manipulate it with your own code?

One final part of the puzzle would be to generate the colour matrix for one-shot colour cameras that don't have a published matrix.  This could be done by imaging a colour chart and then solving for the matrix parameters.

Indeed, I believe that's precisely how Dave Coffin of dcraw found the camera-specific matrices to XYZ. A matrix is found to go from camera colors to sRGB, for instance; since this matrix contains the well-known XYZ -> sRGB matrix (for the D50 or D65 illuminant), the camera-specific matrix from camera to XYZ can be factored out. If you dive into the dcraw code, you'll find these camera-to-XYZ matrices for all supported cameras 😉
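
That factoring, sketched in C: if a fitted matrix maps camera colours to linear sRGB, left-multiplying by the standard sRGB -> XYZ (D65) matrix, which is the inverse of XYZ -> sRGB, recovers the camera -> XYZ matrix. The cam2srgb input is assumed to come from a fit against known references:

/* the standard linear sRGB -> XYZ (D65) matrix, inverse of XYZ -> sRGB */
static const double srgb2xyz[3][3] = {
    { 0.4124, 0.3576, 0.1805 },
    { 0.2126, 0.7152, 0.0722 },
    { 0.0193, 0.1192, 0.9505 } };

static void mat3_mul(const double a[3][3], const double b[3][3], double out[3][3])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            out[i][j] = a[i][0]*b[0][j] + a[i][1]*b[1][j] + a[i][2]*b[2][j];
}

/* camera -> XYZ = (sRGB -> XYZ) * (camera -> sRGB) */
void camera_to_xyz(const double cam2srgb[3][3], double cam2xyz[3][3])
{
    mat3_mul(srgb2xyz, cam2srgb, cam2xyz);
}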

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@rudypohlgmail-com)
White Dwarf Customer
Joined: 2 years ago
Posts: 23
September 29, 2017 13:57  

Hi Mabula and Mark:

Although I hardly understand a word of what you are saying, I am so pleased that you two have connected and are having this important conversation. Most of us casual, relatively non-technical astrophotographers don't really need to know how or why some things work, but to know that they do work, and work well, is most encouraging.

There are many things which I really like about APP, the main one being its user-friendly, intuitive, step-by-step interface, plus its dynamic preview screen that gives you instant feedback as you are processing your image or selecting a file, or viewing the effects of a calibration, and more. It's great for people like me.

I made my living for the last 17 years in the area of software GUI design (recently retired), and user-friendly, intuitive interfaces and workflows were the key to getting people to use the software. I think APP is one of these programs, and in addition to having a great interface and workflow, I would really like to see it also excel in all the technical aspects, like the one you are discussing on color management, to make it a leading contender in the astrophotography processing world.

Thanks for all your efforts,
Rudy


(@sharkmelley)
White Dwarf Customer
Joined: 2 years ago
Posts: 14
September 30, 2017 00:40  

Hi Mabula,

Here are some quick comments.

The original ArcSinh stretch paper is by Lupton et al.  It can be found here: https://arxiv.org/abs/astro-ph/0312483

My experiments on the colour matrix are currently being done in PixInsight's environment using PixelMath.  Once the white balance has been performed, the best approach I've found so far is as follows:

1) Calculate the pixel luminance

2) Apply the colour matrix and gamma

3) Calculate the new pixel luminance

4) Multiply RGB values by original_luminance/new_luminance

The resulting pixel has the new colour but the original intensity.   Then any arbitrary colour preserving stretch (including arcsinh) can be applied. 

However, the above steps can only legitimately be done on data that has been background subtracted, i.e. data where the light pollution gradients have been subtracted - so it can't really be done at the raw conversion stage. The black point needs to be set correctly for the gamma to work as intended. So I do it on the stacked data, which has been background subtracted.
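
Mark's four steps, sketched in C. The Rec. 709 luminance weights and the plain 2.2 gamma are assumptions here; the exact luminance formula and tone curve in his PixelMath experiments may differ:

#include <math.h>

/* Rec. 709 luminance weights - an assumption, not Mark's stated formula */
static double luminance(const double rgb[3])
{
    return 0.2126*rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2];
}

/* matrix: the camera colour matrix; rgb: white-balanced and
   background-subtracted pixel, modified in place (steps 1-4 as above) */
void colour_twist(const double matrix[3][3], double rgb[3])
{
    double lum0 = luminance(rgb);                      /* 1) original luminance */
    if (lum0 <= 0.0)
        return;
    double out[3];
    for (int i = 0; i < 3; i++) {                      /* 2) colour matrix ... */
        out[i] = matrix[i][0]*rgb[0] + matrix[i][1]*rgb[1] + matrix[i][2]*rgb[2];
        out[i] = pow(fmax(out[i], 0.0), 1.0 / 2.2);    /* ... and gamma */
    }
    double lum1 = luminance(out);                      /* 3) new luminance */
    if (lum1 <= 0.0)
        return;
    for (int i = 0; i < 3; i++)                        /* 4) restore intensity */
        rgb[i] = out[i] * (lum0 / lum1);
}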

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Mark

(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 30, 2017 12:00  
Posted by: Rudy Pohl

Hi Mabula and Mark:

Although I hardly understand a word of what you are saying, I am so pleased that you two have connected and are having this important conversation. Most of us casual, relatively non-technical astrophotographers don't really need to know how or why some things work, but to know that they do work, and work well, is most encouraging.

There are many things which I really like about APP, the main one being its user-friendly, intuitive, step-by-step interface, plus its dynamic preview screen that gives you instant feedback as you are processing your image or selecting a file, or viewing the effects of a calibration, and more. It's great for people like me.

I made my living for the last 17 years in the area of software GUI design (recently retired), and user-friendly, intuitive interfaces and workflows were the key to getting people to use the software. I think APP is one of these programs, and in addition to having a great interface and workflow, I would really like to see it also excel in all the technical aspects, like the one you are discussing on color management, to make it a leading contender in the astrophotography processing world.

Thanks for all your efforts,
Rudy

Thank you very much for bringing this to my attention, Rudy 😉

We are always looking for ways to improve our data processing.

This aspect of color management for DSLR users, raised by Mark and Roger, is really very interesting.

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 30, 2017 12:21  
Posted by: Mark Shelley

Hi Mabula,

Here are some quick comments.

The original ArcSinh stretch paper is by Lupton et al.  It can be found here: https://arxiv.org/abs/astro-ph/0312483

My experiments on the colour matrix are currently being done in PixInsight's environment using PixelMath.  Once the white balance has been performed, the best approach I've found so far is as follows:

1) Calculate the pixel luminance

2) Apply the colour matrix and gamma

3) Calculate the new pixel luminance

4) Multiply RGB values by original_luminance/new_luminance

The resulting pixel has the new colour but the original intensity.   Then any arbitrary colour preserving stretch (including arcsinh) can be applied. 

However, the above steps can only legitimately be done on data that has been background subtracted, i.e. data where the light pollution gradients have been subtracted - so it can't really be done at the raw conversion stage. The black point needs to be set correctly for the gamma to work as intended. So I do it on the stacked data, which has been background subtracted.

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Mark

Thank you Mark, I have downloaded the PDF 😉

Okay, thank you for sharing your steps.

Is there a particular reason why you correct for the luminance? I read from your post on DPReview that the camera matrix has its value in delivering a full RGB spectrum of colors, since the filters leak into the other channels as well. But there must also be a correction in the matrix, which is sensor-specific, to correct for the sensor's relative sensitivity per wavelength, so I would think that correcting for the luminance is not needed (otherwise the matrix would already be adjusted?), but maybe your testing indicates otherwise?

I find it interesting that you do apply the gamma curve belonging to the colorspace and then use the arcsinh stretch for further stretching. I know the gamma curve, strictly speaking, is necessary to get the result intended by the camera white balance combined with the matrix conversions. If you don't apply the gamma curve and only the arcsinh stretch, is the result really much different?

I like your idea of applying the color matrix to the background-subtracted data; I think that really makes sense to get the best colors in the signal we are interested in. Excellent 😉

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Do you mean that the camera-specific matrices to go from camera colors to the XYZ colorspace are published/created by Adobe per camera model? I was under the impression that Dave Coffin actually found these matrices himself, by renting different camera models from shops and/or borrowing them from other photographers.

Kind regards,

Mabula


Main developer of Astro Pixel Processor and owner of Aries Productions


(@marc_theunissen)
Main Sequence Star Customer
Joined: 2 years ago
Posts: 26
September 30, 2017 13:21  

Very interesting discussion. I suppose the purpose of a color correction is to achieve more natural colors in the image at the sub level?

If this is true, why not implement the color correction as a color calibration, not at the sub level but at the stack level? I can imagine a kind of PhotometricColorCalibration method as in PI, in which case plate-solving would be a prerequisite?

I can also imagine that the latter could yield better results than adequate color correction of the subs?

(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
September 30, 2017 13:48  

Hi Marc,

I suppose the purpose of a color correction is to achieve more natural colors in the image at the sub level?

Yes, but this is a correction that is determined/fixed by how a specific camera is made. Each DSLR camera model has a specific 3x3 camera matrix that is needed to go from sensor/camera colors to the neutral XYZ colorspace, from which you can go to the non-linear sRGB and Adobe 1998 colorspaces using the XYZ -> sRGB/Adobe 1998 3x3 matrices. A raw converter for normal photography performs these conversions.

Conventional deep-sky linear data processing only uses the sensor/camera colors, so it neglects these two matrix conversions. As far as I understand, there is actually no deep-sky processing application with a GUI out there that uses these conversions.

The camera-specific matrix is needed to correct for the transparency of the CFA filters on the sensor and for the sensor's sensitivity per wavelength. Applying white balance to raw sensor data without these conversions, for instance, really doesn't make any sense. Furthermore, to correctly get to sRGB/Adobe 1998 colors, a colorspace-specific stretch (an S-curve, more or less gamma 2.2) is needed as well. Especially this gamma stretch/tone-curve adjustment causes practical problems in using these color corrections effectively for deep-sky image processing.

Roger N. Clark has made a command-line tool that does this, rnc-color-stretch. And Mark Shelley has been investigating this thoroughly using PixelMath in PI, as I understand.

The goal is to achieve more saturation and better colors after having applied the regular routines for background correction and star color calibration. The algorithms needed to correct the background and calibrate star colors are not the discussion here, although they are directly related of course, since we aim to get more color and better saturation 😉

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@sharkmelley)
White Dwarf Customer
Joined: 2 years ago
Posts: 14
October 1, 2017 00:25  

Hi Mabula,

Addressing some of your questions:

Is there a particular reason why you correct for the luminance?

The reason I adjust for luminance is that I don't actually want the gamma stretch to be performed.  I just want to isolate the effect that the gamma adjustment has on the RGB colour ratios in each pixel.  After applying the colour matrix, a gamma of 2.2 is necessary to obtain the correct colours.  But for a typical deep sky astro image, a gamma of 2.2 provides nowhere near enough stretching.  So further stretching is required but in my experience it is really difficult to stretch data that already has a gamma applied.  The results just never look right.  The purpose of my luminance adjustment is to remove the gamma stretch whilst preserving the effect the gamma stretch had on the RGB ratios.

I find it interesting that you do apply the gamma curve belonging to the colorspace and then use the arcsinh stretch for further stretching. I know the gamma curve, strictly speaking, is necessary to get the result intended by the camera white balance combined with the matrix conversions. If you don't apply the gamma curve and only the arcsinh stretch, is the result really much different?

That's a very important question!

 Assuming that the white balance is correct, the arcsinh stretch is far more important than the colour matrix.  More generally, it is crucial to have the ability to perform an arbitrary stretch on the data without diluting its colour or altering the RGB ratios in the pixel i.e. a stretch that preserves colour.  Ideally I would like to perform a "curves" type of stretch that preserves colour i.e. a stretch that affects luminance but keeps the RGB ratios fixed in each pixel.  The folk who use Photoshop curves to stretch astro images end up having to iteratively stretch then saturate, stretch then saturate etc. to stop their star colours being washed out.  Such an approach is nonsense.  It's far better not to lose the colour in the first place.  But there's no way to achieve it in Photoshop and that's why Roger wrote his rnc-colour-stretch.

To be honest, once you have the ability to perform colour-preserving non-linear stretches, the resulting colours are consistent across the whole dynamic range and are accurate enough for everyone except the most demanding users. For a typical astro-image, the twist to the colour space that comes from applying the colour matrix and gamma is very subtle. I only really notice it in areas of hydrogen emission - it turns them a bit more pink. So if we only apply an arcsinh stretch, the result isn't really much different.

Do you mean that the camera-specific matrices to go from camera colors to the XYZ colorspace are published/created by Adobe per camera model? I was under the impression that Dave Coffin actually found these matrices himself, by renting different camera models from shops and/or borrowing them from other photographers.

Inside the DCRAW code there is the following comment: "All matrices are from Adobe DNG Converter unless otherwise noted".  They are mostly the Adobe ColorMatrices for the D65 illuminant.  Inside a typical DNG file there are ColorMatrices and ForwardMatrices for 2 different illuminants - to allow interpolation between them for arbitrary illuminants.  But for some of the other cameras it might be the case that Dave Coffin generated the matrices himself.

Mark


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2081
October 2, 2017 18:20  

Hi Mark,

Thank you for your clear answers.

The reason I adjust for luminance is that I don't actually want the gamma stretch to be performed.  I just want to isolate the effect that the gamma adjustment has on the RGB colour ratios in each pixel.  After applying the colour matrix, a gamma of 2.2 is necessary to obtain the correct colours.  But for a typical deep sky astro image, a gamma of 2.2 provides nowhere near enough stretching.  So further stretching is required but in my experience it is really difficult to stretch data that already has a gamma applied.  The results just never look right.  The purpose of my luminance adjustment is to remove the gamma stretch whilst preserving the effect the gamma stretch had on the RGB ratios.

Excellent, yes, I understand what you are doing. I was already thinking I would need to write some regression algorithm to reproduce the same colors after an arbitrary stretch as after the colorspace-specific gamma curve, but your method might be much easier. I need to test it, though.

Assuming that the white balance is correct, the arcsinh stretch is far more important than the colour matrix.

Okay, I do see that the color preserving stretch produces a much more dramatic difference 😉

Inside the DCRAW code there is the following comment: "All matrices are from Adobe DNG Converter unless otherwise noted".  They are mostly the Adobe ColorMatrices for the D65 illuminant.  Inside a typical DNG file there are ColorMatrices and ForwardMatrices for 2 different illuminants - to allow interpolation between them for arbitrary illuminants.  But for some of the other cameras it might be the case that Dave Coffin generated the matrices himself.

Ok, you're totally right:

/*
   All matrices are from Adobe DNG Converter unless otherwise noted.
 */
void CLASS adobe_coeff (const char *make, const char *model)
{
  static const struct {
    const char *prefix;
    short black, maximum, trans[12];
  } table[] = {
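    /* each entry: camera name prefix, black level, saturation ("maximum"),
       and the matrix coefficients in trans[] (usually nine values, row-major,
       scaled by 10000 -- dcraw divides by 10000 when it loads them) */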
    { "AgfaPhoto DC-833m", 0, 0,    /* DJC */
    { 11438,-3762,-1115,-2409,9914,2497,-1227,2295,5300 } },
    { "Apple QuickTake", 0, 0,        /* DJC */
    { 21392,-5653,-3353,2406,8010,-415,7166,1427,2078 } },
    { "Canon EOS D2000", 0, 0,
    { 24542,-10860,-3401,-1490,11370,-297,2858,-605,3225 } },

etc...

Mabula


Main developer of Astro Pixel Processor and owner of Aries Productions


(@1llusiveastro)
Molecular Cloud Customer
Joined: 1 year ago
Posts: 3
March 14, 2018 05:18  

Hi,

I know you're probably working on a lot of things right now, but how does this feature fit into your timeline? Is it a high or low priority? I think it would be very desirable.

