
May 4 2026: APP 2.0.0-beta44 has been released!

The new, improved internal memory controls should now work on all computers.

May 1 2026: APP 2.0.0-beta43 has been released!

Improved internal memory controls (much more stable and faster on big datasets), fixed the CPU image viewer, and fixed the narrowband extraction demosaic algorithms.

Apr 29 2026: APP 2.0.0-beta42 has been released!

New improved normalization engine, fixed random crashes in integration, fixed RGB Combine & Calibrate Star Colors, fixed narrowband extraction algorithms, new development platform with performance gains, bug fixes in the tools, and more.

Apr 14 2026: Google Pay, Apple Pay & WeChat Pay added as payment options

Update on the 2.0.0 release & the full manual

We are getting close to the 2.0.0 stable release and the full manual. The manual will soon become available on the website and in PDF format. Both versions will be identical and, once released, will follow the APP release cycle, so they will stay up to date with the latest APP version.

Once 2.0.0 is released, the price for APP will increase. Owner's license holders will not need to pay an upgrade fee to use 2.0.0, and neither will Renter's license holders.

 

APP 2.0.0 beta43 is slow on normalization

6 Posts
3 Users
0 Reactions
81 Views
(@philippe-bernhard)
Red Giant
Joined: 8 years ago
Posts: 62
Topic starter  

Hello

I am working on a MacBook Pro M4 Pro Max. While star analysis and registration are faster, normalization (non-overlap) is very slow compared to past versions.

I preprocessed close to 700 images (R, G, B, S, H, O) and normalization took much longer than before, roughly 1-2 seconds per image. In earlier versions, normalizing all 700 images took only a few seconds in total. The images are 100 Mpix (Moravian C5).

 

Anyway, there is something I don't understand in the reasoning:

Why is normalization tied to the reference image? If a red image is the reference, will all S, H, O frames be normalized against that red one?

 

The real (or at least optional) procedure should have each stack normalized against its own reference. Usually a green image ends up being set as the reference, with lots of stars and perhaps some gradients or strong signal in it. Take the OIII frames for example: their signal is very low, and I don't understand why they are normalized against the green reference.
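In rough Python/numpy terms, this is the kind of per-filter procedure I mean (just a sketch; the sky_level estimator and picking the first frame as reference are placeholders, not how APP actually chooses):

```python
import numpy as np
from collections import defaultdict

def sky_level(img):
    # placeholder sky estimate: median of the non-zero pixels
    return np.median(img[img > 0])

def normalize_per_filter(frames):
    """frames: list of (filter_name, 2-D array) pairs.
    Each filter's stack is normalized against its own reference
    instead of one global (e.g. red or green) reference."""
    groups = defaultdict(list)
    for filt, img in frames:
        groups[filt].append(img)
    normalized = {}
    for filt, imgs in groups.items():
        ref_level = sky_level(imgs[0])  # per-filter reference frame
        normalized[filt] = [img + (ref_level - sky_level(img)) for img in imgs]
    return normalized
```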

Please explain if I am wrong.

Best,

Philippe

 


(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5254
 

Hi Philippe @philippe-bernhard,

Thank you very much for sharing your concerns and feedback.

Please check the release notes for 2.0.0-beta42: in beta42, 5) Normalize was completely improved, which explains why it is a bit slower. It is slower because the actual calculations are now improved and more robust.

On my MacBook Pro M4 Max, the new normalization engine is also slightly slower, but still very fast I think. Your images being 100 Mpix explains why it is slower for you. The normalization parameters are now calculated using all non-zero pixels. The old engine always analysed only part of the image (using blocks spread over the data to be analysed), so the results were less robust but much faster.
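As a rough numpy illustration of that difference (only a sketch: the block size, stride and median/MAD estimators are simplified stand-ins for APP's internals):

```python
import numpy as np

def params_block_based(image, block=256, stride=1024):
    """Old engine (sketch): sample blocks spread over the image."""
    samples = [image[y:y + block, x:x + block].ravel()
               for y in range(0, image.shape[0] - block, stride)
               for x in range(0, image.shape[1] - block, stride)]
    values = np.concatenate(samples)
    location = np.median(values)
    return location, np.median(np.abs(values - location))

def params_full_image(image):
    """New engine (sketch): analyse all non-zero pixels of the image."""
    values = image[image > 0]
    location = np.median(values)
    return location, np.median(np.abs(values - location))
```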

I think the new engine should normalize all your R, G, B, S, H, O data better than pre-beta42 versions did, because both the location and the dispersion are calculated differently in a mathematical/statistical sense. The MRS Gaussian noise value is also more precise in the new engine. Did the normalization work okay with beta43? The new engine should work much better with bright/HDR objects.

I agree that normalizing data from different filters can be difficult, especially if some filters have strong gradients/light pollution. In those cases, normalization can be improved if you normalize without scale, so use add or multiply without scale. But ideally, the data would be corrected for the gradients before the scale is calculated. Then everything should work robustly, including normalizing between broadband and narrowband filters.
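For reference, here is roughly what "add" and "multiply" without scale mean (a minimal sketch with a placeholder sky estimator, not APP's actual code):

```python
import numpy as np

def sky_level(img):
    # placeholder sky/location estimate
    return np.median(img[img > 0])

def normalize_add(img, ref):
    """'add' without scale: only shift the sky background onto the
    reference; the dispersion of the pixel values is left untouched."""
    return img + (sky_level(ref) - sky_level(img))

def normalize_multiply(img, ref):
    """'multiply' without scale: match the sky level with a single factor."""
    return img * (sky_level(ref) / sky_level(img))
```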

What I want to implement to make this work better overall, and for your case, is to have Local Normalization Correction applied before 5) Normalize; then the gradients are gone and the dispersion measures are not affected by any gradients. Or, maybe better, the data to be normalized should be dynamically corrected for gradients (like LNC does) and the parameters should be calculated on those corrected images. I need to test which is the most efficient and fastest way to make this work and implement it in APP going forward.
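The order of operations would look roughly like this (a sketch only: the plane fit below is a very crude stand-in for what LNC does locally):

```python
import numpy as np

def remove_plane(img):
    """Fit and subtract a plane as a crude gradient model,
    keeping the overall level."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()

def dispersion_after_gradient_removal(img):
    """Correct gradients first, then measure the dispersion,
    so the dispersion is no longer inflated by the gradient."""
    flat = remove_plane(img)
    values = flat[flat > 0]
    location = np.median(values)
    return np.median(np.abs(values - location))
```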

Finally, if, as you suggest, each filter were normalized only internally and not all filters together, you would still need to normalize those filter stacks in some way before making a good composite.

If the new engine does not improve things with your data, then please share a dataset with me that clearly shows the problem 😉 It is in fact still on our issue list, because you have reported this in the past.

You can upload it here:

https://upload.astropixelprocessor.com/

username: uploadData

password: uploadTestData

Please make a folder with your name and issue like: Philippe-RGBSHO-normalization

and upload the data there and let me know once done.

 

https://www.astropixelprocessor.com/community/release-information/astro-pixel-processor-2-0-0-beta42-release-notes/#post-34416

  • 5) NORMALIZE upgraded! More robust and better with bright/HDR objects.

    The normalization engine has been upgraded. The calculations of the sky background (location) level and the dispersion (the spread of the pixel values relative to the sky background) have been improved. Previously we calculated the sky background and the dispersion in blocks spread over each image (or overlap area), so the full image/overlap area was not analysed. This was a fast and effective method. In the new normalization engine, however, the whole image or overlap area is analysed, not blocks. We use very fast sorting of all the pixel values and then aggressively clip the distribution on both the positive and negative side to get a very good sky background estimate. We then calculate the dispersion as MAD or BWMV on only the truncated distribution. Testing has shown that these methods are more robust on data with bright/HDR objects and strong gradients. Overall, the sky background/location and dispersion values will be more robust, leading to better image normalization and also better initial background neutralization by the normalization engine and the neutralize-BG option below the histogram. Initial normalization for mosaics with bright objects will be much better as a consequence. Previously we had two modes in the normalization engine: regular and advanced.

    The new engine still has two modes, but they work differently, so we have renamed them. The first mode is now called full-image; it is most similar to the previous regular mode. The second mode is called overlap area; it is more similar to the old advanced mode. Full-image always analyses each image fully, as the name indicates. Overlap area mode will only analyse the overlap area between the image to be analysed and its reference frame. If you run a mosaic and an image does not overlap with the main reference, another frame that does overlap with the image to be analysed will be used to calculate the overlap area. Simply put, for normal registration where most images overlap almost fully, either mode will give nearly identical results. If the images have some difference in field of view, overlap area will be more robust because the sky background and dispersion will be based on exactly the same field of view. If, however, you run a classic mosaic, like a 5x5 mosaic with only 10% overlap between the images, full-image mode will be best. All of this is also explained in new tooltips, the little pop-up menus attached to each option. The screenshot below shows the initial normalization of a 3x3 HDR mosaic of M42 with the new normalization engine in full-image mode. The sky backgrounds already match very well; LNC and MBB only need to do a little work to make it a seamless mosaic now. A condensed sketch of the new parameter calculation follows after the screenshots.

[Screenshot: the new normalization engine handling a bright/HDR object]

 The new tooltips are shown in the next screenshots:

[Screenshot: new tooltip for the normalization method]
[Screenshot: new tooltip for the normalization mode]
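Putting these notes together, a condensed numpy sketch of the new parameter calculation (the 25% clip fraction is my assumption, frames are assumed registered on the same pixel grid, and the mosaic fallback to another overlapping frame is omitted):

```python
import numpy as np

def normalization_params(image, reference=None, mode="full-image",
                         clip_fraction=0.25):
    """Sketch: select pixels by mode, sort them, aggressively clip both
    tails, then take the location and MAD (or BWMV) from the truncated
    distribution."""
    if mode == "overlap area" and reference is not None:
        values = image[(image > 0) & (reference > 0)]  # shared field of view
    else:  # "full-image"
        values = image[image > 0]
    values = np.sort(values)
    n = values.size
    # aggressive two-sided clip of the sorted distribution
    values = values[int(n * clip_fraction):int(n * (1 - clip_fraction))]
    location = np.median(values)                 # sky background estimate
    mad = np.median(np.abs(values - location))   # dispersion
    return location, mad
```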

 



   
(@philippe-bernhard)
Red Giant
Joined: 8 years ago
Posts: 62
Topic starter  

Thanks for your reply

OK, I understand the extra time for normalization. No problem. The result was fine.

I will run some different tests on my dataset and come back to you.

Best



   
(@minusman)
Black Hole
Joined: 9 years ago
Posts: 251
 

Posted by: @mabula-admin

Hi Philippe @philippe-bernhard,

Thank you very much for sharing your concerns and feedback.

[...]

I agree that normalizing data from different filters can be difficult, especially if some filters have strong gradients/light pollution. In those cases, normalization can be improved if you normalize without scale, so use add or multiply without scale. But ideally, the data would be corrected for the gradients before the scale is calculated. Then everything should work robustly, including normalizing between broadband and narrowband filters.

What I want to implement to make this work better overall, and for your case, is to have Local Normalization Correction applied before 5) Normalize; then the gradients are gone and the dispersion measures are not affected by any gradients. Or, maybe better, the data to be normalized should be dynamically corrected for gradients (like LNC does) and the parameters should be calculated on those corrected images. I need to test which is the most efficient and fastest way to make this work and implement it in APP going forward.

 

Hi Mabula, the option to dynamically correct for gradients, as LNC does, is the better one. A script called NormalizeScaleGradient (NSG) uses this method very effectively.

First, a noise calculation is performed for each image, then the gradients are corrected, and finally the scale is calculated. This also makes things easier for the rejection algorithm during integration.

With this method, a separate reference frame is defined for normalization (the one with the lowest noise and fewest gradients in the image, usually with the smallest median).

So, separate reference frames are used for registration and normalization.
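In rough Python terms, the order of operations is something like this (a sketch only: the plane fit and median-based scale are simplifications of NSG's actual noise and gradient models):

```python
import numpy as np

def remove_plane(img):
    """Crude gradient model: fit and subtract a plane, keeping the level."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()

def nsg_like_normalize(images):
    """Pick the normalization reference (smallest median as a proxy for
    lowest noise / fewest gradients), correct gradients, then compute the
    scale -- a reference chosen separately from the registration one."""
    ref_flat = remove_plane(min(images, key=np.median))
    out = []
    for img in images:
        flat = remove_plane(img)
        out.append(flat * (np.median(ref_flat) / np.median(flat)))
    return out
```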

Best regards, Henry.

 

 



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5254
 

Posted by: @philippe-bernhard

Thanks for your reply

OK, I understand the extra time for normalization. No problem. The result was fine.

I will run some different tests on my dataset and come back to you.

Best

Great, Philippe @philippe-bernhard,

That is good to know. Please share a dataset that shows this kind of problem if you find one.

Mabula

 

 



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5254
 

Posted by: @minusman

Hi Mabula, the option to dynamically correct for gradients, as LNC does, is the better one. A script called NormalizeScaleGradient (NSG) uses this method very effectively.

First, a noise calculation is performed for each image, then the gradients are corrected, and finally the scale is calculated. This also makes things easier for the rejection algorithm during integration.

With this method, a separate reference frame is defined for normalization (the one with the lowest noise and fewest gradients in the image, usually with the smallest median).

So, separate reference frames are used for registration and normalization.

Best regards, Henry.

Hi Henry @minusman,

Indeed, I think we need to pursue this approach. I have added it to my issue list and set it as a very high priority. I have some widefield mosaic data that was not corrected well for vignetting, and you get many issues if you do not correct for gradients first.

Mabula

 



   