4) REGISTER Tab 'same camera and optics' option doesn't seem to do anything for my data

(@skyguiderpro)
White Dwarf
Joined: 1 year ago
Posts: 5
Topic starter  

New to APP, so I apologize in advance if this was already covered.  (I did a quick search in the forum and didn't see this specific topic addressed.)

I ran test cases for which I had 119 light frames (and corresponding darks, flats, and bias frames). The data were captured with an unmodded Canon R5 OSC body (90-second exposures at ISO 800) and a Canon RF 100-500mm lens (at 363mm f/5.6, no filters), unguided on an iOptron SkyGuider Pro. The data were all captured in a single session under clear Bortle 7 skies. The target was the Flaming Star Nebula. I used APP 2.0.0-beta17 to process the data.

Registration scores dropped significantly with the 4) REGISTER Tab 'same camera and optics' option enabled.  Each of the permutations below had the same, low registration scores:

CASE 1: 4) REGISTER Tab with both 'same camera and optics' and 'use dynamic distortion correction' enabled

CASE 2: 4) REGISTER Tab with 'same camera and optics' enabled, but with 'use dynamic distortion correction' disabled

CASE 3: 4) REGISTER Tab with both 'same camera and optics' and 'use dynamic distortion correction' disabled

The one case where registration scores looked (what I will call) 'much improved' was with 4) REGISTER Tab 'use dynamic distortion correction' enabled, and with 'same camera and optics' disabled.  For consistency I will refer to this as CASE 4.

This seemed counter-intuitive to me. It made me wonder exactly what 'same camera and optics' is doing (or what it is intended to do, and when it is supposed to have an effect).

To assess the scores, I initially took screenshots of the graphical analytical results and pasted them into PowerPoint in order to do a rough comparison.  I used the page up/down keyboard buttons to emulate a 'blink comparator'.  The scores in the first 3 cases above all looked identical. 

Below is a screenshot of CASE 2 registration scores.  (The scores looked the same for CASE 1 & CASE 3 as well).

[screenshot: CASE 2 registration scores]

Below is a screenshot of CASE 4 registration scores.  They look much better.  The key difference is that only 'use dynamic distortion correction' was enabled.

[screenshot: CASE 4 registration scores]

I then exported the registration scores to Excel and numerically compared them.  I found the registration scores were indeed identical between CASE 1 & CASE 3.  (I didn't compare CASE 2 numerically in Excel.) 
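
(In case it helps anyone repeat this kind of check without Excel: below is a minimal Python sketch of the comparison I did. It assumes each run's scores were exported to a CSV file with a per-frame score column; the file names and column name are hypothetical placeholders, not something APP guarantees.)

```python
# Minimal sketch: numerically compare registration scores exported from two runs.
# Assumes each run's frame list was exported to CSV with a column holding the
# per-frame registration score; the file names and the column name below are
# hypothetical and need to match whatever APP actually writes out.
import csv

def load_scores(path, column="registration score"):
    """Read the per-frame scores from an exported CSV into a list of floats."""
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f)]

case1 = load_scores("case1_register.csv")
case3 = load_scores("case3_register.csv")

# Frame-by-frame difference; identical runs should give all zeros.
diffs = [abs(a - b) for a, b in zip(case1, case3)]
print(f"frames compared: {len(diffs)}")
print(f"max difference : {max(diffs):.6f}")
print(f"identical      : {all(d == 0.0 for d in diffs)}")
```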

This all left me scratching my head.  I asked myself why the 4) REGISTER Tab 'same camera and optics' option, as evidenced by the registration scores, did not appear to do anything, at least not with this data set. 

I also ran a number of integrations for which the only difference was the setting of these 2 registration options.  There was indeed a difference in the quality of the integration results.  CASE 4 results were visibly better (when zoomed in to 800% to inspect the stars--yes, guilty of pixel peeping).  CASE 4 stars were a little smaller and less fuzzy.  [I don't claim the differences were monumental, but I expect anyone reading this forum post would be able to pick out the 'better' result if they could see them on their monitor as I did using PowerPoint.]  The takeaway here is that in addition to the numeric registration scores looking worse, the integration results also looked (at least) a little worse.  So there was more to the story than just the scores looking different.

(I used the aforementioned PowerPoint approach to compare the integration results.  The difference was obvious when using the keyboard to page up and down in PowerPoint.  I pasted the zooms of the integration results below, but the forum's 'preview' feature suggested the differences may not clearly show up in the forum post.  You may have to take my word for it.) 

Below is a zoom for CASE 4 (i.e. only 'use dynamic distortion correction' enabled).  The registration scores were better for this run.

[zoomed crop of the CASE 4 integration]

Below is a zoom for CASE 2 (i.e. only 'same camera and optics' enabled).  The registration scores were worse for this run.

[zoomed crop of the CASE 2 integration]

At the end of the day, this all got me wondering: am I doing something wrong (perhaps there is another setting I should be using but am unaware of)? Or is that option simply not supposed to have any effect on the type of data I provided (Canon .CR3 RAW files)?

I confess I did not go back to prior versions of APP and try them to see if the behavior was ever different.  Nor did I try other file formats for my data.  I was hoping someone would have insight into this and share.  Perhaps others have done experimentation in the past and gained wisdom to help decide when to use this option (and what to expect when you do).

Thanks in advance.


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Just to be clear, you're only integrating a single FOV, right? Not a mosaic? Thanks for the very elaborate post, I'll forward this to Mabula as this goes a bit deeper into the algorithms.


   
(@skyguiderpro)
White Dwarf
Joined: 1 year ago
Posts: 5
Topic starter  

Yes, I am only integrating a single FOV; not a mosaic.  To be sure, my mount doesn't track perfectly, so there is some drift over the session.  But there is significant overlap in all frames from the start to end of the session.  Below is a screenshot of the first and last frame in the data set to give a sense of how much drift/overlap there is.

[screenshots: first and last frames of the session, showing the amount of drift/overlap]

I went back and re-reviewed the session notes. I did not do a meridian flip or stop the mount / re-frame during the session. I did change the camera battery once [about an hour into the session], after which I re-focused. Frame 38 was the first frame after the battery swap/re-focus. The registration scores changed abruptly at Frame 38 in CASES 1-3. You can kinda see that the registration scores are a bit lower for Frames 1-37 in CASE 4. Oh, and I was wrong in the original post when I said the mount was unguided; on this particular night I was autoguiding. I did not have dithering enabled. Sorry for fibbing about that.
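
(Side note, in case it is useful to anyone else: once the scores are exported, spotting a break point like this doesn't require eyeballing the plot. Below is a minimal Python sketch of the idea; the score values in it are made up for illustration, and in practice the list would come from whatever file the frame list exports to.)

```python
# Sketch: locate an abrupt change in per-frame registration scores (e.g. at a
# refocus point) by finding the largest jump between consecutive frames.
# The values below are illustrative only, not my actual exported scores.
scores = [0.70, 0.72, 0.71, 0.69, 0.73,   # frames before the refocus
          0.15, 0.14, 0.16, 0.13, 0.15]   # frames after the refocus

jumps = [abs(b - a) for a, b in zip(scores, scores[1:])]
i = jumps.index(max(jumps))               # jump occurs between frame i+1 and i+2 (1-based)
print(f"largest jump of {max(jumps):.2f} px starts at frame {i + 2}")
```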

It probably doesn't hurt for me to mention that other than the first 3 check boxes on the 1) LOAD Tab, and the 2 options I discussed in my original post [both on the 4) REGISTER Tab], all other settings up to and including those on the 4) REGISTER Tab were the defaults.  I can't swear that settings on the 6) INTEGRATE Tab were all at the defaults, but the registration scores are determined when I click on the 'start registration' button on the 4) REGISTER Tab, so I assume any settings on tabs past that one would not come into play.  (Maybe that's not a good assumption on my part, but I thought it worth explicitly stating.)

Hopefully that answers your question.  Thanks for your help.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Bob @skyguiderpro, thank you very much for your interesting question.

Okay, yes, what all of this is telling you is that this dataset clearly needs dynamic distortion correction. This is because the frames do not perfectly overlap due to the drift in the session. It is also due to your optics: regular photography lenses (even the most expensive ones) all have a degree of optical distortion, which becomes larger at shorter focal lengths. This is because the optics contain multiple pieces of glass, making it harder to project a perfect rectilinear view of reality. (Rest assured, even the most expensive telescopes with similar focal lengths have clear optical distortion as well, and it needs to be corrected to get better results.)

First, what are the actual registration scores shown in the frame list panel when dynamic distortion correction is disabled? For OSC data, I would expect the good values to be around 0.1-0.2 pixels and the worse ones to be around 0.5-0.75 pixels, give or take?

I agree that it is odd that disabling 'same camera and optics' gives slightly better results in this case. The explanation would be that the registration algorithms used without it are more flexible. Simply put, what happens is this: with 'same camera and optics' enabled, APP assumes an identical optical distortion correction model for all frames. With it disabled, APP assumes the frames come from different optical setups and calculates an optical distortion correction model for each individual frame. In this case, that does improve things, as you can see.
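
To make the idea concrete, here is a small toy sketch in Python. To be clear: this is not APP's actual registration code, just an illustration of fitting one shared distortion model versus one model per frame on simulated star positions; the real algorithms also solve per-frame alignment and are far more involved.

```python
# Toy illustration only (NOT APP's algorithm): fit a polynomial distortion
# model mapping each frame's star positions back to reference positions,
# either with ONE model shared by all frames or with a SEPARATE model per
# frame, and compare the residuals. The frames are simulated with slightly
# different radial distortion, so a single shared model cannot fit all of
# them perfectly.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.uniform(-1, 1, size=(200, 2))          # reference star positions (normalised coords)

def design(xy):
    """Third-order polynomial design matrix in x and y."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y,
                            x * x, x * y, y * y,
                            x**3, x**2 * y, x * y**2, y**3])

def distort(xy, k):
    """Simple radial distortion of strength k: xy -> xy * (1 + k * r^2)."""
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1 + k * r2)

# Three frames whose effective distortion solution is not identical.
frames = [distort(ref, k) for k in (0.010, 0.013, 0.016)]

def fit_rms(frame_list, shared):
    """Least-squares fit of the model(s); return RMS residual over all frames."""
    if shared:
        A = np.vstack([design(f) for f in frame_list])
        b = np.vstack([ref] * len(frame_list))
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = A @ coeffs - b
    else:
        parts = []
        for f in frame_list:
            A = design(f)
            coeffs, *_ = np.linalg.lstsq(A, ref, rcond=None)
            parts.append(A @ coeffs - ref)
        resid = np.vstack(parts)
    return np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))

print(f"one shared distortion model : RMS {fit_rms(frames, shared=True):.5f}")
print(f"one model per frame         : RMS {fit_rms(frames, shared=False):.5f}")
```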

Now, to make a bit more sense out of this, I think it is related to the quality of your optics. Your zoom images showing the difference illustrate it well: your optics seem to have clear coma aberration, which is very common with such lenses. See https://en.wikipedia.org/wiki/Coma_(optics)

The problem with coma is that the star centroid used to align the images picks up a structural error. And due to the drift between your images, that structural error is not the same for the same stars across frames, which can then explain why disabling 'same camera and optics' helps here. So I would not be surprised if, with coma-free optics, things behaved more as expected 😉

The coma causes any star centroid calculation to pick up a structural error, I think, because all those algorithms assume some sort of Gaussian-shaped intensity profile and not a smeared-out one. The intensity axis is clearly not perpendicular to the imaging plane (the sensor).
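
You can see the effect with a very simple one-dimensional toy example (again, not APP's centroiding code; it just uses an intensity-weighted centroid as a stand-in and adds a one-sided tail to a Gaussian to mimic the coma smear):

```python
# Toy 1D illustration: an intensity-weighted centroid is unbiased for a
# symmetric (Gaussian) star profile, but a one-sided coma-like tail pulls
# the measured centroid away from the true peak position.
import numpy as np

x = np.linspace(-10, 10, 2001)
sigma = 1.5

gaussian = np.exp(-0.5 * (x / sigma) ** 2)        # symmetric star, peak at x = 0

# Add a faint exponential tail on one side to mimic coma smearing the star.
coma_star = gaussian + 0.3 * np.exp(-x / 3.0) * (x > 0)

def centroid(profile):
    """Intensity-weighted centre of the profile."""
    return np.sum(x * profile) / np.sum(profile)

print(f"symmetric star centroid: {centroid(gaussian):+.4f} (true peak at 0)")
print(f"coma-like star centroid: {centroid(coma_star):+.4f} (pulled toward the tail)")
```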

Hope that clarifies the issue somewhat? It is really great that you stumbled on this and started to think about what is happening here 😉!

Mabula


   
(@skyguiderpro)
White Dwarf
Joined: 1 year ago
Posts: 5
Topic starter  

Thanks for the well-written explanation, Mabula. What you said all made sense to me. Now to answer your questions…

CASE 2 (only 'same camera and optics' enabled) frame list panel registration RMS scores were 0.13 – 0.74 pixels

CASE 4 (only 'use dynamic distortion correction' enabled) frame list panel registration RMS scores were 0.12 – 0.19 pixels

So your estimates were very (very) good indeed.

 

It occurs to me that a good ‘best practice’ here would be to start with ‘same camera and optics’ enabled and ‘use dynamic distortion correction’ disabled (which are the default settings). Then run up to and including registration, and examine the registration scores. If they look "good" in the frame list panel, then proceed to normalization.

[Perhaps something worth stating explicitly: jumping straight to 6) INTEGRATE and hitting the ‘integrate’ button is fine if you already know your registration scores are to your liking. But on the first run through a data set, stopping at 4) REGISTER, hitting the ‘start registration’ button, and checking the scores first is probably the better practice. Lesson learned for me, anyway. Another lesson I think I learned is to look at the registration scores in the frame list panel. The normalized registration scores in the analytical plot are helpful for seeing whether anything changed over the session, as happened in my data when I re-focused. However, the frame list panel registration scores would have done a better job of telling me that I could do better than the default settings for this particular data set. So both sources of information are useful, but in different ways. I got lucky because one eventually forced me to look at the other (because of your prompting) 😀.]

If after the initial registration you are not satisfied (or even if you just want to see if you can do better in terms of registration scores), disable ‘same camera and optics’ and enable ‘use dynamic distortion correction’, and re-run registration.  After re-running registration, decide which of the 2 permutations of these options offered the best result, then proceed to normalization. 
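
(For anyone who, like me, ends up exporting the scores anyway, here is a tiny sketch of that final "decide which run was best" step. The file names and the score column name are hypothetical placeholders for whatever the frame list export actually produces.)

```python
# Sketch of the "decide which permutation did best" step, assuming the frame
# list from each registration run was exported to CSV. File names and the
# score column name are hypothetical; adjust to whatever APP actually writes.
import csv
import statistics

def summarize(path, column="registration score"):
    """Return a small quality summary (median and worst score) for one run."""
    with open(path, newline="") as f:
        scores = [float(row[column]) for row in csv.DictReader(f)]
    return {"median": statistics.median(scores), "worst": max(scores)}

runs = {"same camera and optics only": summarize("case2_register.csv"),
        "dynamic distortion only":     summarize("case4_register.csv")}

for name, s in runs.items():
    print(f"{name:30s} median {s['median']:.2f} px, worst {s['worst']:.2f} px")

best = min(runs, key=lambda name: runs[name]["median"])
print(f"-> proceed to normalization with: {best}")
```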

Does that all sound reasonable so far?

 

To be sure I have my mind properly wrapped around this, I need to ask a few follow-on questions. 

Q1. Could there ever be a use case where you would want to disable both of these options?  If so, under what circumstances? 

Q2. Could there ever be a use case where you would want to enable both of these options?  If so, under what circumstances? 

What I’m poking at here is that we have 2 options, so we have 4 permutations of those 2 options: Off-Off, On-Off (default), Off-On, and On-On. I wonder if there can ever be a case where registration will be best with neither option selected, or with both options selected. Is it worth the time to try all 4 permutations (as opposed to just On-Off and Off-On), because for some data there may be even better registration to be had? Or perhaps you already answered this by saying, “For OSC data, I would expect the good values to be around 0.1-0.2 pixels”: if the registration scores in the frame list panel are already on the order of 0.1-0.2 pixels with OSC data, you are probably good enough, and there is probably not a lot of additional benefit to be gained by trying other permutations. Thoughts?

 

Thanks again for the prompt and well-explained responses.  I learned some good stuff here, and it will help me get even more benefit out of the software.

Bob K 


   
ReplyQuote
Share: