[Sticky] Does Astro Pixel Processor apply the color matrix correction to digital camera raw data?

45 Posts
13 Users
15 Likes
19.5 K Views
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Mabula:

Someone in my astro forum today asked whether "Astro Pixel Processor applies the color matrix correction to digital camera raw data". Honestly, I'm not even sure what this means, but I thought I would ask you.

Thanks,
Rudy

Note by Mabula: upgraded this excellent question to a Sticky.

NOTE by Mabula: The next APP version, 2.0.0-beta15, will finally support data processing using the camera color conversion matrix 🙂 It will convert the sensor data to the linear RGB color space (the sRGB/Adobe 1998 gamma curve is not applied, to keep the data linear). Soon I will publish some results in the new release notes. I expect 2.0.0-beta15 to be released within a couple of days...

(@jeroenm)
Red Giant
Joined: 7 years ago
Posts: 31
 

Hi Rudy,

Sure it does. Just place a tick at "Force CFA" in tab 0 (zero), "RAW/FITS". APP will then assume that all frames have a CFA (a color filter array, i.e. a Bayer matrix).

If I'm not mistaken, APP applies the debayering just before the integration of the light frames.

Cheers,

Jeroen


   
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Jeroen,

Thanks so much for your input, this certainly is good news.

However, given how crucial this question is for some potential new users in our astro forum, I need to also hear Mabula's answer on this, I'm sure you understand. Thanks very much once again; I'm sure we'll be touching base from time to time in this forum.

Best regards,
Rudy 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Rudy & Jeroen,

This is a very good and interesting question to say the least 😉

First of all, the question:

Does Astro Pixel Processor apply the color matrix correction to digital camera raw data? 

probably doesn't refer to the debayering process. If it does, then Jeroen's answer is correct.

The debayering in APP is done automatically after the data calibration, so before 3) analyse stars. The data needs to be debayered for star analysis and for registration; without debayering, the star location calculations, and thus the registration, will suffer severely.

But as I mentioned, the color matrix correction probably doesn't refer to the debayering. If it doesn't, then I would first ask the person who posed the question which color matrix correction he means.

If raw DSLR data is converted using a raw processor like Adobe Camera Raw, several color matrix corrections are actually applied.

Let me explain what normally happens when the raw data is converted using any raw converter meant for normal photography purposes.

  1. the sensor data is first multiplied by a three-factor white balance setting and, in some cases, the black point is adjusted as well. The sensor data becomes camera data.
  2. the next step is debayering.
  3. then the first color correction matrix is applied: the camera data is multiplied by a camera-specific (!) 3x3 matrix to convert the camera data to the neutral XYZ colorspace. The camera data becomes color-neutral data.
  4. then the color-neutral data is multiplied by a second 3x3 colorspace matrix; this can be the well-known XYZ -> sRGB or XYZ -> Adobe 1998 matrix, for instance.
  5. lastly, these sRGB (or Adobe 1998) pixels are corrected for the way our eyes and brain perceive incoming light/photons, which means the data undergoes a non-linear gamma conversion (with a gamma of about 2.2).

After these steps, your raw sensor data has been converted into the normal photographic image it is supposed to be (a minimal sketch of the pipeline follows below).
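To make steps 1-5 concrete, here is a minimal numeric sketch in Python (an illustration only, not APP's actual code; the white balance factors and the camera-to-XYZ matrix are made-up placeholders, only the XYZ-to-sRGB matrix is the standard D65 one, and for simplicity it starts from already-debayered data):

import numpy as np

# Step numbers refer to the list above; all camera values are illustrative.
wb = np.array([2.0, 1.0, 1.5])                        # 1) white balance factors
CAM_TO_XYZ = np.array([[0.6, 0.3, 0.1],               # 3) hypothetical camera -> XYZ
                       [0.2, 0.7, 0.1],
                       [0.1, 0.1, 0.8]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],  # 4) standard XYZ -> sRGB (D65)
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def develop(cam_rgb):
    # cam_rgb: (H, W, 3) debayered sensor data in [0, 1] (step 2 done upstream)
    cam = cam_rgb * wb                                # 1) white balance
    xyz = cam @ CAM_TO_XYZ.T                          # 3) camera -> neutral XYZ
    srgb = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)     # 4) XYZ -> linear sRGB
    return srgb ** (1 / 2.2)                          # 5) non-linear gamma encoding

Dropping only the last line keeps the data linear, which is exactly the option discussed later in this thread.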

Some remarks:

  • any of the steps 3-5 makes the data unsuitable for accurate astronomical processing. The data becomes non-linear, and noise from each of the separate RGB channels is injected into the other channels by the two 3x3 matrices involved.
  • applying camera white balance to linear data without steps 3-5 really has no sensible value: white balance, in the way it works and is meant to work, needs both 3x3 matrix conversions and the gamma conversion as well to make sense. (The gamma conversion influences the perceived colours too. For instance, monitor calibration is immediately off if you adjust the brightness/luminance of your monitor 😉 )

Rudy, can you ask your fellow-forum member what exactly he refers to? 

I hope this explains some details of how raw DSLR data works and why it's essential that we don't apply steps 3-5 for accurate astronomical processing. The 3x3 matrices mix noise and signals, and the gamma conversion is non-linear. All of these steps will cause processing steps like:

  • data calibration,
  • data normalisation,
  • average/median integration,
  • outlier rejection during integration,
  • light pollution/gradient removal,
  • background calibration,
  • star color calibration

to work suboptimally, since all of these steps assume that the data is linear.

A simple example: if we perform an average integration (which is essentially a sum scaled by the number of frames) on non-linear data, we really can't calculate a correct average based on the number of photons, and thus the signal, that the sensor received. That information is lost in the non-linear conversions. The resulting average of non-linear data will not be equal to the stretched average of the linear data.
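A tiny numeric check of that statement (toy values only): averaging gamma-encoded values does not give the gamma-encoded average of the linear values.

# Illustration only: mean-then-gamma vs gamma-then-mean disagree.
linear = [100.0, 400.0]                        # hypothetical photon counts
gamma = lambda v: (v / 400.0) ** (1 / 2.2)     # toy gamma-2.2 encoding

mean_then_gamma = gamma(sum(linear) / len(linear))             # ~0.81
gamma_then_mean = sum(gamma(v) for v in linear) / len(linear)  # ~0.77
print(mean_then_gamma, gamma_then_mean)        # unequal: the average is biased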

Kind regards,

Mabula


   
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Mabula:

Thanks for this very comprehensive response. I will post the link to this thread in our forum and encourage the original asker to come here, read it, and pass on to me whatever further questions or clarifications he might have.

This gentleman has been a professional scientific imaging specialist for many decades. He has a Ph.D. in Planetary Science from MIT, is now a senior scientist at the Planetary Science Institute, and has worked as a science team member and co-investigator on many collaborative research projects over the years, such as the Cassini mission, the Mars Reconnaissance Orbiter, NASA's Europa mission and others. Here's his About page on his website: http://www.clarkvision.com/rnc/index.html

This gentleman has been one of the key contributors for years on our astrophotography forum at DPReview, of which I am a member, and he generously gives his time to help any and all of us who are interested in learning the technical aspects of astrophotography and image processing.

Cheers,
Rudy


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Rudy,

That's fine. I would actually love to discuss this topic with Roger Clark myself ;-), so I could perhaps join your forum to discuss this further, or invite Roger to this forum (I'll need to register Roger at my site then).

Kind regards,

Mabula


   
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Mabula:

I sent Roger a private message at our forum inviting him to come here and read your reply. I'm sure he will be in touch, as he seems to really like discussing these sorts of things, most of which are completely over my head, but I work at them anyway.

Best regards,
Rudy


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Excellent Rudy,

I know Roger is very knowledgeable, so I would love to hear/read his view/opinions on these matters. It might lead to changes in APP perhaps to further improve the results and possibilities with APP 😉 I am always open to new insights/techniques...

Kind regards,

Mabula


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Rudy,

I have read your topic on dpreview:

https://www.dpreview.com/forums/thread/4207826

And I have been reading part of Mark's (sharkmelley) topic on Cloudy Nights:

https://www.cloudynights.com/topic/529426-dslr-processing-the-missing-matrix/

I know how to do this; in fact, the code is already in APP. APP can show the DSLR raw data in non-linear sRGB or Adobe 1998, using the camera-specific matrix -> XYZ -> sRGB/Adobe1998 conversion, via the image viewer mode setting "image".

The camera-specific matrices for all DSLR cameras supported by APP are stored in APP. So I can easily build in the option to apply the conversion to sRGB or Adobe 1998 using the matrix conversion, without applying the gamma conversion, which would render the data non-linear.

An example with a normal photograph:

Linear image, without the matrix conversion:

[screenshot: image viewer in linear mode]

and with the matrix conversion and gamma 2.2 conversion applied:

[screenshot: image viewer in image mode]

I'll ask Roger and/or Mark for some extra information 😉

Mabula

 


   
(@sharkmelley)
White Dwarf
Joined: 7 years ago
Posts: 15
 

I noticed this conversation taking place, so I thought I would join in 🙂

What Mabula says above is absolutely correct. When processing a normal photo, the bias must be subtracted and the data white balanced; the next two operations applied are a camera-specific colour matrix and gamma. In fact, instead of a gamma it is an sRGB tone curve, but that is very similar to a gamma of 2.2. The matrix has an effect on colour, and so does the gamma. But the gamma also has an effect on intensity, similar to a "curves" operation. For processing normal photos there is usually also some kind of exposure adjustment to give the user control over which parts of the image are under-exposed or over-exposed.

In the world of astro-processing this colour matrix is usually ignored, unless you let Photoshop (or similar) do the raw conversion. But a Photoshop raw conversion prevents you from using calibration frames - bias, flats and darks. Meanwhile, those who use a Photoshop-based workflow experience problems with the huge dynamic range in astro-images. That's why Roger has written his rnc-color-stretch software: to stretch the data much more than Photoshop allows whilst retaining star colour. Otherwise, what typically happens during a curve stretch is a loss of colour in the brighter objects - they get bleached to white.

For a long time I have been using a stretch known as the ArcSinh stretch, which allows huge amounts of stretching whilst preserving colour, because it deliberately preserves the original ratios of R, G & B in each pixel. However, I found I couldn't match the colour-calibrated "Photoshop colours", because the camera's colour matrix was not being applied. But you can't apply the colour matrix without the gamma, otherwise the colours are still wrong. And I don't want to apply the gamma, because I want total control over the tone curve (the intensity curve).
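As a sketch of the idea (my illustration of a colour-preserving arcsinh stretch, not Mark's PixelMath code nor APP's implementation; the mean-based luminance proxy and the softening factor beta are arbitrary choices):

import numpy as np

def arcsinh_stretch(rgb, beta=50.0):
    # rgb: (H, W, 3) linear, background-subtracted data, values >= 0
    lum = rgb.mean(axis=-1, keepdims=True)               # simple luminance proxy
    new_lum = np.arcsinh(beta * lum) / np.arcsinh(beta)  # stretch the luminance
    scale = np.divide(new_lum, lum,
                      out=np.zeros_like(lum), where=lum > 0)
    return np.clip(rgb * scale, 0.0, 1.0)                # R:G:B ratios unchanged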

The thread on Cloudy Nights documents this struggle. It's still at an experimental stage, but I'm almost at the point where I can apply the necessary "twist" to the colourspace without also applying the gamma intensity transformation. It isolates the colourspace "twist" that occurs during the combined operation of colour matrix and gamma. So the approach allows an arbitrary "curves" stretch to be applied to the intensity whilst preserving colour from the dimmest to the brightest object, and it also provides the colourspace "twist" that would match the colour-calibrated workflow to which Roger Clark often refers. And it allows proper calibration frames to be used.

One final part of the puzzle would be to generate the colour matrix for one-shot colour cameras that don't have a published matrix.  This could be done by imaging a colour chart and then solving for the matrix parameters.
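A sketch of how that solve could look (all patch values here are invented; a real run would use measured, white-balanced camera readings and the chart's published XYZ values):

import numpy as np

camera_rgb = np.array([[0.42, 0.31, 0.20],     # linear camera readings of six
                       [0.12, 0.40, 0.35],     # chart patches (hypothetical)
                       [0.60, 0.55, 0.47],
                       [0.05, 0.08, 0.30],
                       [0.33, 0.22, 0.11],
                       [0.25, 0.45, 0.50]])
reference_xyz = np.array([[0.40, 0.35, 0.25],  # published XYZ values of the
                          [0.18, 0.35, 0.40],  # same patches (hypothetical)
                          [0.58, 0.60, 0.54],
                          [0.09, 0.07, 0.35],
                          [0.30, 0.25, 0.12],
                          [0.28, 0.40, 0.55]])
# Least squares: find M minimising ||camera_rgb @ M.T - reference_xyz||
M_T, *_ = np.linalg.lstsq(camera_rgb, reference_xyz, rcond=None)
M = M_T.T                                      # camera -> XYZ: xyz = M @ rgb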

Mark


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Mark,

Thank you very much for joining in 😉

Indeed, strictly speaking, the gamma curve is colorspace specific, so the curve for sRGB is slightly different than for Adobe 1998.

In the world of astro-processing this colour matrix is usually ignored, unless you let Photoshop (or similar) do the raw conversion. But a Photoshop raw conversion prevents you from using calibration frames - bias, flats and darks. Meanwhile, those who use a Photoshop-based workflow experience problems with the huge dynamic range in astro-images. That's why Roger has written his rnc-color-stretch software: to stretch the data much more than Photoshop allows whilst retaining star colour. Otherwise, what typically happens during a curve stretch is a loss of colour in the brighter objects - they get bleached to white.

Indeed, that's clear to me. We need to have a way to be able to apply the color matrix without applying the gamma curve, keeping the data linear, so the user still has full control over the data.

For a long time I have been using a stretch known as the ArcSinh stretch, which allows huge amounts of stretching whilst preserving colour, because it deliberately preserves the original ratios of R, G & B in each pixel. However, I found I couldn't match the colour-calibrated "Photoshop colours", because the camera's colour matrix was not being applied. But you can't apply the colour matrix without the gamma, otherwise the colours are still wrong. And I don't want to apply the gamma, because I want total control over the tone curve (the intensity curve).

Yes, I am aware of arc-sinh stretching. I did read the original article on it a couple of months ago. I remember that the stretch itself is discussed, as is a method to prevent white star cores. Do you happen to have a link where I can find the original article? I seem to have lost the URL ;-(

What I would propose to implement in APP is the following: since I have written the raw conversion myself (APP doesn't rely on DCRAW), I have complete control. I can simply give the user the option to apply the colour matrix conversion to either sRGB or Adobe 1998, through the neutral XYZ colorspace, from the camera/sensor colours, without applying the last step, the gamma curve. This will happen as a final step in image loading. Data calibration would not be affected, because calibration is done dynamically in the image loaders, immediately after the sensor colours have been read from the raw file. Of course, the colour matrix won't be applied to the calibration frames, only to the lights. I think this would solve all the problems you discuss.

So, are you writing your own raw converter so you can do this on your own data? Or do you get linear sensor data through dcraw and then manipulate it with your own code?

One final part of the puzzle would be to generate the colour matrix for one-shot colour cameras that don't have a published matrix.  This could be done by imaging a colour chart and then solving for the matrix parameters.

Indeed, I believe that's precisely how Dave Coffin of dcraw found the camera-specific matrices to XYZ. A matrix is found to go from camera colours to sRGB, for instance; since this matrix contains the well-known XYZ -> sRGB matrix (for the D50 or D65 illuminant), the camera-specific camera-to-XYZ matrix can be derived from it. If you dive into the dcraw code, you'll find these camera-to-XYZ matrices for all supported cameras 😉
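In matrix terms that derivation is just a composition; a sketch (the camera-to-sRGB matrix is hypothetical, the sRGB-to-XYZ matrix is the standard D65 one):

import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],   # standard sRGB -> XYZ (D65)
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
CAM_TO_SRGB = np.array([[ 1.9, -0.7, -0.2],         # hypothetical measured
                        [-0.2,  1.4, -0.2],         # camera -> sRGB matrix
                        [ 0.1, -0.5,  1.4]])
CAM_TO_XYZ = SRGB_TO_XYZ @ CAM_TO_SRGB              # camera -> XYZ by composition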

Mabula


   
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Mabula and Mark:

Although I hardly understand a word of what you are saying, I am so pleased that you two have connected and are having this important conversation. Most of us casual, relatively non-technical astrophotographers don't really need to know how or why some things work, but to know that they do work, and work well, is most encouraging.

There are many things which I really like about APP, the main one being its user-friendly, intuitive, step-by-step interface, plus its dynamic preview screen that gives you instant feedback as you are processing your image, selecting a file, viewing the effects of a calibration, and more. It's great for people like me.

I made my living for the last 17 years in the area of software GUI design (recently retired), and user-friendly, intuitive interfaces and workflows were the key to getting people to use the software. I think APP is one of these programs, and in addition to having a great interface and workflow, I would really like to see it also excel in all the technical aspects, like the color management you are discussing, to make it a leading contender in the astrophotography processing world.

Thanks for all your efforts,
Rudy


   
(@sharkmelley)
White Dwarf
Joined: 7 years ago
Posts: 15
 

Hi Mabula,

Here are some quick comments.

The original ArcSinh stretch paper is by Lupton et al.  It can be found here: https://arxiv.org/abs/astro-ph/0312483

My experiments on the colour matrix are currently being done in PixInsight's environment using PixelMath.  Once the white balance has been performed, the best approach I've found so far is as follows:

1) Calculate the pixel luminance

2) Apply the colour matrix and gamma

3) Calculate the new pixel luminance

4) Multiply RGB values by original_luminance/new_luminance

The resulting pixel has the new colour but the original intensity.   Then any arbitrary colour preserving stretch (including arcsinh) can be applied. 

However, the above steps can only legitimately be done on data that has been background subtracted, i.e. data where the light pollution gradients have been removed - so it can't really be done at the raw conversion stage. The black point needs to be set correctly for the gamma to work as intended. So I do it on the stacked data, which has been background subtracted.
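A sketch of those four steps (my reading of Mark's description, not his actual PixelMath; the camera-to-sRGB matrix is a made-up placeholder and a pure gamma 2.2 stands in for the exact sRGB tone curve):

import numpy as np

CAM_TO_SRGB = np.array([[ 1.8, -0.6, -0.2],    # hypothetical camera -> sRGB
                        [-0.3,  1.5, -0.2],    # matrix (each row sums to 1)
                        [ 0.0, -0.4,  1.4]])

def colour_twist(rgb):
    # rgb: (H, W, 3) linear, white-balanced, background-subtracted data
    lum0 = rgb.mean(axis=-1, keepdims=True)            # 1) original luminance
    twisted = np.clip(rgb @ CAM_TO_SRGB.T, 0.0, None)  # 2) colour matrix...
    twisted = twisted ** (1 / 2.2)                     #    ...and gamma
    lum1 = twisted.mean(axis=-1, keepdims=True)        # 3) new luminance
    scale = np.divide(lum0, lum1,
                      out=np.zeros_like(lum0), where=lum1 > 0)
    return twisted * scale                             # 4) restore intensity

The result keeps the colour "twist" of matrix-plus-gamma but the original linear intensity, so an arbitrary colour-preserving stretch can still be applied afterwards.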

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Mark

 

 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: Rudy Pohl

Hi Mabula and Mark:

Although I hardly understand a word of what you are saying, I am so pleased that you two have connected and are having this important conversation. Most of us casual, relatively non-technical astrophotographers don't really need to know how or why some things work, but to know that they do work, and work well, is most encouraging.

There are many things which I really like about APP, the main one being its user-friendly, intuitive, step-by-step interface, plus its dynamic preview screen that gives you instant feedback as you are processing your image, selecting a file, viewing the effects of a calibration, and more. It's great for people like me.

I made my living for the last 17 years in the area of software GUI design (recently retired), and user-friendly, intuitive interfaces and workflows were the key to getting people to use the software. I think APP is one of these programs, and in addition to having a great interface and workflow, I would really like to see it also excel in all the technical aspects, like the color management you are discussing, to make it a leading contender in the astrophotography processing world.

Thanks for all your efforts,
Rudy

Thank you very much for bringing this to my attention, Rudy 😉

We are always looking for ways to improve our data processing.

This aspect of color management for DSLR users by Mark and Roger is really very interesting.

Mabula


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: Mark Shelley

Hi Mabula,

Here are some quick comments.

The original ArcSinh stretch paper is by Lupton et al.  It can be found here: https://arxiv.org/abs/astro-ph/0312483

My experiments on the colour matrix are currently being done in PixInsight's environment using PixelMath.  Once the white balance has been performed, the best approach I've found so far is as follows:

1) Calculate the pixel luminance

2) Apply the colour matrix and gamma

3) Calculate the new pixel luminance

4) Multiply RGB values by original_luminance/new_luminance

The resulting pixel has the new colour but the original intensity.   Then any arbitrary colour preserving stretch (including arcsinh) can be applied. 

However, the above steps can only legitimately be done on data that has been background subtracted, i.e. data where the light pollution gradients have been removed - so it can't really be done at the raw conversion stage. The black point needs to be set correctly for the gamma to work as intended. So I do it on the stacked data, which has been background subtracted.

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Mark

 

 

Thank you Mark, I have downloaded the PDF 😉

Okay, thank you for sharing your steps.

Is there a particular reason why you correct for the luminance? I read in your post on DPReview that the camera matrix has its value in delivering a full RGB spectrum of colors, since the filters leak into the other channels as well. But there must also be a sensor-specific correction in the matrix for the sensor's relative sensitivity per wavelength, so I would think that correcting for the luminance is not needed (otherwise the matrix would already be adjusted?), but maybe your testing indicates otherwise?

I find it interesting that you apply the gamma curve belonging to the colorspace and then use the arcsinh stretch for further stretching. I know the gamma curve, strictly speaking, is necessary to get the result intended by the camera white balance combined with the matrix conversions. If you don't apply the gamma curve, only the arcsinh stretch, is the result really much different?

I like your idea of applying the color matrix to the background-subtracted data; I think that really makes sense to get the best colors in the signal we are interested in. Excellent 😉

By the way, I think the matrices embedded in the DCRAW code are probably the Adobe matrices. I went through the code once, trying to understand why DCRAW outputs only 12-bit data for Sony A7S compressed raw files instead of the 13 bits that are actually encoded, but that's another story.

Do you mean that the camera-specific matrices to go from camera colors to the XYZ colorspace are published/created by Adobe per camera model? I was under the impression that Dave Coffin actually found these matrices himself, by renting different camera models from shops and/or borrowing them from other photographers.

Kind regards,

Mabula

 


   
(@marc_theunissen)
Red Giant
Joined: 7 years ago
Posts: 26
 

Very interesting discussion. I suppose the purpose of a color correction is to achieve more natural colors in the image at the sub level?

If this is true, why not implement the color correction as a color calibration, not at the sub level but at the stack level? I can imagine a kind of PhotometricColorCalibration method as in PI, in which case plate-solving would be a prerequisite?

I can also imagine that the latter could yield better results than adequate color correction of subs?

 


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Marc,

I suppose the purpose of a color correction is to achieve more natural colors in the image at the sub level?

Yes, but this is a correction that is determined/fixed by how a specific camera is made. Each DSLR camera model has a specific 3x3 camera matrix that is needed to go from sensor/camera colors to the neutral XYZ colorspace, from which you can go to the non-linear sRGB and Adobe 1998 colorspaces using the XYZ -> sRGB/Adobe1998 3x3 matrices. A raw converter for normal photography performs these conversions.

Conventional deep-sky linear data processing only uses the sensor/camera colors, so it neglects these two matrix conversions. As I understand it, there actually is no deep-sky processing application with a GUI out there that uses these conversions.

The camera-specific matrix is needed to correct for the transparency of the CFA filters on the sensor and for the sensor's sensitivity per wavelength. For instance, applying white balance to raw sensor data without these conversions really doesn't make any sense. Furthermore, to correctly get to sRGB/Adobe1998 colors, a colorspace-specific stretch (an S-curve, more or less gamma 2.2) is needed as well. Especially this gamma stretch/tone curve adjustment causes practical problems in using these color corrections effectively for deep-sky image processing.

Roger N. Clark has made a command-line tool that does this, rnc-color-stretch. And Mark Shelley has been investigating this thoroughly using PixelMath in PI, as I understand it.

The goal is to achieve more saturation and better colors after having applied the regular routines for background correction and star color calibration. The algorithms needed to correct the background and calibrate star colors are not the discussion here, although they are directly related of course, since we aim to get more color and better saturation 😉

Mabula


   
(@sharkmelley)
White Dwarf
Joined: 7 years ago
Posts: 15
 

Hi Mabula,

Addressing some of your questions:

Is there a particular reason why you correct for the luminance?

The reason I adjust for luminance is that I don't actually want the gamma stretch to be performed.  I just want to isolate the effect that the gamma adjustment has on the RGB colour ratios in each pixel.  After applying the colour matrix, a gamma of 2.2 is necessary to obtain the correct colours.  But for a typical deep sky astro image, a gamma of 2.2 provides nowhere near enough stretching.  So further stretching is required but in my experience it is really difficult to stretch data that already has a gamma applied.  The results just never look right.  The purpose of my luminance adjustment is to remove the gamma stretch whilst preserving the effect the gamma stretch had on the RGB ratios.

I find it interesting that you apply the gamma curve belonging to the colorspace and then use the arcsinh stretch for further stretching. I know the gamma curve, strictly speaking, is necessary to get the result intended by the camera white balance combined with the matrix conversions. If you don't apply the gamma curve, only the arcsinh stretch, is the result really much different?

That's a very important question!

 Assuming that the white balance is correct, the arcsinh stretch is far more important than the colour matrix.  More generally, it is crucial to have the ability to perform an arbitrary stretch on the data without diluting its colour or altering the RGB ratios in the pixel i.e. a stretch that preserves colour.  Ideally I would like to perform a "curves" type of stretch that preserves colour i.e. a stretch that affects luminance but keeps the RGB ratios fixed in each pixel.  The folk who use Photoshop curves to stretch astro images end up having to iteratively stretch then saturate, stretch then saturate etc. to stop their star colours being washed out.  Such an approach is nonsense.  It's far better not to lose the colour in the first place.  But there's no way to achieve it in Photoshop and that's why Roger wrote his rnc-colour-stretch.

To be honest once you have the ability to perform colour preserving non-linear stretches, the resulting colours are consistent across the whole dynamic range and are accurate enough for  everyone except the most demanding users.  For a typical astro-image, the twist to the colour space that comes from applying the colour matrix and gamma is very subtle.  I only really notice it in areas of hydrogen emissions - it turns them a bit more pink.  So if we only apply an arcsinh stretch then the result isn't really much different.

Do you mean that the camera-specific matrices to go from camera colors to the XYZ colorspace are published/created by Adobe per camera model? I was under the impression that Dave Coffin actually found these matrices himself, by renting different camera models from shops and/or borrowing them from other photographers.

Inside the DCRAW code there is the following comment: "All matrices are from Adobe DNG Converter unless otherwise noted".  They are mostly the Adobe ColorMatrices for the D65 illuminant.  Inside a typical DNG file there are ColorMatrices and ForwardMatrices for 2 different illuminants - to allow interpolation between them for arbitrary illuminants.  But for some of the other cameras it might be the case that Dave Coffin generated the matrices himself.

Mark


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Mark,

Thank you for your clear answers.

The reason I adjust for luminance is that I don't actually want the gamma stretch to be performed.  I just want to isolate the effect that the gamma adjustment has on the RGB colour ratios in each pixel.  After applying the colour matrix, a gamma of 2.2 is necessary to obtain the correct colours.  But for a typical deep sky astro image, a gamma of 2.2 provides nowhere near enough stretching.  So further stretching is required but in my experience it is really difficult to stretch data that already has a gamma applied.  The results just never look right.  The purpose of my luminance adjustment is to remove the gamma stretch whilst preserving the effect the gamma stretch had on the RGB ratios.

Excellent, yes, I understand what you are doing. I was already thinking I would need to write some regression algorithm to obtain the same colors after an arbitrary stretch as after the colorspace-specific gamma curve. But your method might be much easier; I need to test it, though.

Assuming that the white balance is correct, the arcsinh stretch is far more important than the colour matrix.

Okay, I do see that the color preserving stretch produces a much more dramatic difference 😉

Inside the DCRAW code there is the following comment: "All matrices are from Adobe DNG Converter unless otherwise noted".  They are mostly the Adobe ColorMatrices for the D65 illuminant.  Inside a typical DNG file there are ColorMatrices and ForwardMatrices for 2 different illuminants - to allow interpolation between them for arbitrary illuminants.  But for some of the other cameras it might be the case that Dave Coffin generated the matrices himself.

Ok, you're totally right:

/*
   All matrices are from Adobe DNG Converter unless otherwise noted.
 */
void CLASS adobe_coeff (const char *make, const char *model)
{
  static const struct {
    const char *prefix;               /* camera make/model prefix */
    short black, maximum, trans[12];  /* black level, saturation level, matrix coefficients x 10000 */
  } table[] = {
    { "AgfaPhoto DC-833m", 0, 0,    /* DJC */
    { 11438,-3762,-1115,-2409,9914,2497,-1227,2295,5300 } },
    { "Apple QuickTake", 0, 0,        /* DJC */
    { 21392,-5653,-3353,2406,8010,-415,7166,1427,2078 } },
    { "Canon EOS D2000", 0, 0,
    { 24542,-10860,-3401,-1490,11370,-297,2858,-605,3225 } },
etc...

Mabula

 


   
(@1llusiveastro)
Brown Dwarf
Joined: 6 years ago
Posts: 4
 

Hi,

I know you're probably working on a lot of things right now, but how does this feature fit into your timeline? Is it a high or low priority? I think it would be very desirable.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi @1llusiveastro,

This feature will come soon 😉

APP 1.060 will have a new and upgraded calibration engine, which has the highest priority now.

Using the camera matrix while keeping the data linear is something that I believe no other application currently does, and my aim will be to do just that. The problem is getting the right colors with the camera matrix while not applying the corresponding gamma curve, which affects the colors as well.

An equally interesting feature will be color-preserving stretching, similar in properties to the arcsinh stretch, so that will come soon as well.

Kind regards,

Mabula


   
(@rudypohlgmail-com)
Main Sequence Star
Joined: 7 years ago
Posts: 23
Topic starter  

Hi Mabula,

Good for you for working so persistently on APP!

Best regards,
Rudy


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Rudy @rudypohlgmail-com,

Thank you. Your proposed changes for the colors with DSLR cameras will come soon after APP's next release 😉

1) applying the camera-model-specific matrix from camera colors to XYZ to sRGB or Adobe 1998, while keeping the data linear

2) rebuilding the preview filters to include color-preserving stretching

First, APP 1.060 will see a major upgrade in the calibration engine...

Cheers,

Mabula


   
(@jjsmith)
Hydrogen Atom
Joined: 5 years ago
Posts: 2
 

Hi, I recently signed up for a trial license and am using it to stack some DSLR (Canon 5DS) raw files, as I read this programme preserves the colour matrix. I understand from clarkvision.com that, to remain 'colour-accurate', the histogram should have separate RGB peaks rather than having them aligned. Is this possible in APP? If so, what should the appropriate settings be? I seem to get default normalized stacks, and the histogram is RGB-aligned into a single peak.

A second, related query is for my OSC ASI094 Pro: it uses the same CMOS sensor as the Nikon D810 (and the D810A too, I think). How should I set the RGB in tab 0 (RAW/FITS) to apply this colour matrix correctly, and also to process so that the final stack histograms are not auto-aligned (unless by coincidence)?

I hope the query is clear, as I am not very technical, so please feel free to ask if any clarification is needed. Many thanks!


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Just having the peaks aligned isn't necessarily "colour-correct"; nothing really is. 🙂 But we do know the wavelengths belonging to certain elements, and you can use those to get a kind of natural look. Still, people like to change that, simply because something else looks nicer to them. To get a reasonably accurate, natural starting point, you can use the color correction tool in the 'Tools' tab (9) and play a bit with that; it's applied at the end of the processing.

The second query I don't quite understand: does APP not automatically give you the correct pattern for that camera? If not, you would have to look up the pattern in the sensor details from the manufacturer and apply it in that tab.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi @jjsmith & @vincent-mod,

Hi, I recently signed up for a trial license and am using it to stack some DSLR (Canon 5DS) raw files, as I read this programme preserves the colour matrix. I understand from clarkvision.com that, to remain 'colour-accurate', the histogram should have separate RGB peaks rather than having them aligned. Is this possible in APP? If so, what should the appropriate settings be? I seem to get default normalized stacks, and the histogram is RGB-aligned into a single peak.

Of course, the information that you mention as provided by Clarkvision is in this case rather confusing and also rather incomplete, I would say. It really depends on the data, and to get accurate colors, several processing steps need to be taken as well.

If you have data without any nebulosity showing, of, for instance, a regular star cluster like the Double Cluster, then the statement "the histogram should have separate RGB peaks and not have them aligned" really is not very valid, I think.

In that particular case, with no nebulosity present, the sky background is represented by the peaks in your histogram, and for background calibration you really do expect the peaks to be aligned. For data with clear nebulosity, however, you really can't trust the location of the histogram peaks as an indication of proper background calibration.

Now, what happens in APP with normalization of color data is that, by default, the background is neutralized (you can easily switch this off). This will not harm your data, but it is also not proper background calibration. Background neutralization is a histogram operation, and for data with nebulosity you will see that it is not the same as background calibration. Proper background calibration can be done with a separate tool in APP, in menu 9).
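As a rough sketch of what such a background neutralization could look like (an assumption for illustration only, not APP's actual algorithm):

import numpy as np

def neutralize_background(rgb):
    # rgb: (H, W, 3) linear stack; shift each channel so that the background
    # levels (estimated here by the per-channel median) coincide
    bg = np.median(rgb, axis=(0, 1))      # per-channel background estimate
    return rgb - (bg - bg.mean())         # align the channel backgrounds only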

Furthermore, proper star colors will only be achieved after proper background calibration, and the Star Color Calibration module in APP will in that case even correct the sky background again.

So it actually is a lot more complicated than simply stating "the histogram should have separate RGB peaks and not have them aligned", because there is data that will actually need the peaks to be aligned for proper colors. And then there is the question: what is proper color? Is it the color that your camera sees (each sensor sees the world differently)? Is it the color that your eyes would see? In addition, the filters used will have effects that really change the color as perceived by your camera... Star Color Calibration needs to take the filters into account as well.

A second, related query is for my OSC ASI094 Pro: it uses the same CMOS sensor as the Nikon D810 (and the D810A too, I think). How should I set the RGB in tab 0 (RAW/FITS) to apply this colour matrix correctly, and also to process so that the final stack histograms are not auto-aligned (unless by coincidence)?

For the ASI094 Pro it depends on the capture software used what you need to do. In most cases, the capture software will not store metadata indicating that the data comes from a Bayer CFA sensor, so you will need to enable Force Bayer CFA in 0) RAW/FITS, and you will need to set the correct Bayer pattern as well to get proper data processing.

Please let me know if you have additional questions.

Mabula


   
(@jethro777)
Molecular Cloud
Joined: 5 years ago
Posts: 3
 

Just to follow up on this: is the ability to adjust for the camera's color response, as per sharkmelley's comments, now available?

Where, when and how is this done, assuming you have the 3x3 matrix available?


   
(@euripides)
Main Sequence Star
Joined: 4 years ago
Posts: 18
 

I was reading about the method at clarkvision.com and just found this topic here too 🙂

As far as I can see, it is a command-line tool and I need to test it - I am in the middle of an M31 integration at the moment - but it would be really interesting if this were already “part” of APP 🙂


   
(@theo950)
Hydrogen Atom
Joined: 3 years ago
Posts: 2
 
Posted by: @mabula-admin

Thank you. Your proposed changes for the colors with DSLR cameras will come soon after APP's next release

1) applying the camera-model-specific matrix from camera colors to XYZ to sRGB or Adobe 1998, while keeping the data linear

2) rebuilding the preview filters to include color-preserving stretching

Hi Mabula, I am relatively new to AP and have been using APP for (pre-)processing. Lately, I got interested in attempting to develop a colour calibrated workflow and found this thread (and the original one on CN).

Has the feature mentioned above been implemented in APP yet? 1) performing a colour matrix correction + gamma curve on the integrated data; 2) correcting for the effect of the gamma curve on the luminance (as Mark Shelley pointed out), to preserve the linearity of the data.

If so, when does it take place during the whole process? If I've understood correctly, CM + gamma should be applied after background extraction and calibration. Is this right? And is there a way to choose whether or not these corrections are performed on the data?

Best regards,

Théo


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Yes, they have been in APP since that release, basically, but applied internally in the algorithms. For the preview filter this is applied when loading a light frame to preview; for the others I'm not really sure, but likely early in the process as well. These have been working very reliably for quite a while now. Color correction is then best achieved with the color correction tool, given that you use broadband data (not narrowband).


   