
Registration iterations

10 Posts
2 Users
3 Reactions
4,062 Views
(@sil)
Main Sequence Star
Joined: 9 years ago
Posts: 17
Topic starter  

I'm just wondering how I can estimate remaining time; for the last 30 hrs my Register step has been at 80%. The output window is up to iteration 8. I assume it runs iterations until an RMS threshold is reached, or is there currently any way I can guess how long it's likely to take? I'm registering 301x 36MP raws as a mosaic, so I wasn't expecting it to be fast. I did notice, though, that at the start of this step the output said "registering 903 lights", so does it split each into 3 frames? Or is this an output bug?

On that note: search your output messages for 4) Register; you have misspelt "Simple" as "simpel" in there. Sorry, I didn't note the full message phrase.

steve



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi Steve,

I'm just wondering how I can estimate remaining time; for the last 30 hrs my Register step has been at 80%.

This can't be estimated; it's a regression and it will only stop when it converges to a solution.

But loading 301 raws for a mosaic will take forever I think ;-( ... it's not very efficient to make the mosaic that way, because APP needs to calculate parameters for all frames at once. In this case almost 6000 parameters need to be calculated, which is a very difficult task.

APP reports 903 frames because, in this case, it checks each frame for overlap with 3 other frames, so that's 301 x 3 = 903. It is by no means an output bug.

The most efficient and best way to make a mosaic is to first create the panels of the mosaic themselves. For instance, if you have a 3x3 mosaic with 10 frames per panel, you would have a total of 3x3x10 = 90 frames for the mosaic. The first step would be to make the 9 panels by integrating the 10 frames of each panel. Then compose the mosaic from the 3x3 = 9 panels. That usually shouldn't take very long at all, whereas the amount of time required to calculate a mosaic increases strongly with the number of frames.
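As a rough back-of-the-envelope sketch of why the panel-first approach scales better (the ~20 parameters-per-frame figure is only inferred from the "almost 6000 parameters for 301 frames" mentioned earlier in this thread, not a confirmed APP internal):

```python
# Rough illustration of why panel-first mosaics solve faster.
# Assumption: ~6000 parameters / 301 frames gives roughly 20
# registration parameters per frame (inferred, not an APP internal).
params_per_frame = 6000 / 301          # ~20

# All-at-once: one solve over all 3 x 3 x 10 = 90 frames.
frames_all_at_once = 3 * 3 * 10        # 90 frames
params_all_at_once = frames_all_at_once * 20   # ~1800 parameters in one solve

# Panel-first: nine 10-frame solves, then one 9-panel mosaic solve.
params_per_panel_solve = 10 * 20       # ~200 parameters per panel solve
params_mosaic_solve = 9 * 20           # ~180 parameters for the final solve

# The largest single solve shrinks from ~1800 to ~200 parameters.
```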

Let me know if this helps you with your mosaic, or if you need more help.

Cheers,

Mabula



   
(@sil)
Main Sequence Star
Joined: 9 years ago
Posts: 17
Topic starter  

Not really helpful; what determines the convergence of a solution? Is it an RMS value of 1.0? Or 0.1? Or something you don't display (why not)? There must be a criterion or threshold value, or this feature will be impossible to finish for anyone.

APP also doesn't seem to use or recognise/store metadata in the integration files when they are reloaded for panel mosaics, which would be nice to know. I'm not doing panel mosaics the way you want/expect, so alternative ideas aren't helpful either. Unless you can restore my arm so I can reuse my good setup, I HAVE to work the way I can with my physical limitations, and in this case I NEED to process ALL my raws into one. So if I did them in batches instead, what would be the limit of data to process in one batch?

It's really unhelpful to just respond with obvious "a lot of data will take a lot longer to process" style replies; please qualify with definitions of "a lot", and maybe use that knowledge to add a per-step time estimate to APP so it becomes actually useful.



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: Steve

Not really helpful; what determines the convergence of a solution? Is it an RMS value of 1.0? Or 0.1? Or something you don't display (why not)? There must be a criterion or threshold value, or this feature will be impossible to finish for anyone.

Indeed Steve, you are totally right. There are actually several convergence criteria that determine when a solution is reached. The main criterion is not an RMS value, but rather a measure of how the RMS value changes between consecutive iterations:

convergenceCriterium = abs((currentRMS - oldRMS) / currentRMS);

If convergenceCriterium drops below a certain value (1/10000), the criterion is met and the iterations stop.
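A minimal sketch of that stopping rule (the function and variable names here are hypothetical; only the criterion formula and the 1/10000 threshold come from the description above):

```python
def register_until_converged(solve_iteration, initial_rms, tol=1e-4, max_iters=100):
    """Run registration iterations until the relative change in RMS
    between consecutive iterations drops below `tol` (1e-4 = 1/10000,
    matching the threshold quoted above). `solve_iteration` is a
    hypothetical stand-in for one regression pass; it returns the
    RMS after that pass."""
    old_rms = initial_rms
    for i in range(1, max_iters + 1):
        current_rms = solve_iteration()
        criterium = abs((current_rms - old_rms) / current_rms)
        if criterium < tol:
            return i, current_rms   # converged
        old_rms = current_rms
    return max_iters, old_rms       # safety cap reached, no convergence
```

This also illustrates why no time estimate can be given up front: the loop only ends once the RMS stops improving, and how many iterations that takes depends entirely on the data.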

APP users have already created lots of mosaics using these criteria, I might add.

APP also doesn't seem to use or recognise/store metadata in the integration files when they are reloaded for panel mosaics, which would be nice to know.

Storing integration data in the FITS header is on my TODO list, so it will definitely be implemented 😉 Manipulating and reading the FITS header is no problem at all; it just needs to be implemented for the integration task.

I'm not doing panel mosaics the way you want/expect, so alternative ideas aren't helpful either. Unless you can restore my arm so I can reuse my good setup, I HAVE to work the way I can with my physical limitations, and in this case I NEED to process ALL my raws into one. So if I did them in batches instead, what would be the limit of data to process in one batch?

My sincere apologies Steve, I wasn't aware of the setup that you used to shoot your data. I assumed you created different panels, again, please accept my apologies.

If you have shot the 301 frames from a tripod that doesn't track the earth's rotation, then there is probably a much easier way to mosaic your data and get a good result with distortion correction.

Do the first frame and the last frame have some overlap with the frames halfway through your exposures?

If so, then the way to approach this is:

in 4) REGISTER:

  • set one of the frames in the middle of all 301 frames as the reference
  • keep scale start and stop at their defaults
  • use dynamic distortion correction
  • enable same camera and optics
  • leave registration mode at normal; you won't need the mosaic mode in this case

This is an example, shot from a non-rotating tripod with a 50mm objective attached to a Nikon D5100.

20 frames of 2.5-second exposures at ISO 3200. I have selected 20 frames from a series of 700, just to illustrate the situation. Is this similar to your data set?

[attached example integration: "St med 51.0s NR x 1.0 LZ3 NS full eq add sc BWMV adv AA RL MBB10 2ndLNC it3 St"]

If the first and last frame don't have overlap with one of the middle frames, then we can also solve this, but we will need to make a camera profile and save it first before running the mosaic mode. Is this the case? Let me know, and I'll walk you through how to make an accurate camera profile with optical distortion parameters.



   
(@sil)
Main Sequence Star
Joined: 9 years ago
Posts: 17
Topic starter  

That's close to what I have, but I don't always have a strip of data. Shooting from a tripod, I lock my trigger and it takes 100 frames before the camera buffer is full and halts. Then I repeat, but sometimes I first reframe my shot if I'm not certain I'm getting my target. My camera is in a "permanent" state for astrophotography, no other use, and I installed a light pollution filter in the body, so Live View doesn't show many stars to help me aim; with the filter, the optical viewfinder is blocked, so I keep its shutter closed too to prevent back light leaks. I can get 600 shots before the battery is low and the memory card is 95% full, so it works out well as a place for me to stop each session. Sometimes I only go to 400-500 and take darks and flats, and rarely bias too.

 

All this is to say I don't know, after all this, if I even got my target; it's often too faint to see in a JPEG or on the camera screen, but with hundreds of shots it reveals itself and I can usually bring it out nice and bright. I don't care about putting images online; this is for my own prints and portfolio, and I am painfully aware how much better they could be using gear I bought shortly before my stroke left me unable to use it. Bloody annoying.

So I want to be able to make full use of all the data I've got: plate solve and find my target (I do a lot of comet hunting, so they are very faint and hard to track down). If I register to one frame, that "window" of my data might not have my target; it might be 50 pixels to the left of it. And repeating for a variety of frames through the set is a long way around, and I still might miss it even if it's in there. By registering everything as a mosaic, I can know definitively whether it's even in the field of view of my data or not; if it was only in one frame, then it'll be in a noisy part of the edge, but if it's there, it'll have all the data and be the best of the data, so I can crop and process from there.

So far it's been slow, and I have limited time to experiment with settings to test alternatives. With PixInsight I usually do a fast test integration with a small set, find my target and crop wide around it, then throw everything at it with the cropped integration as my reference target. I should be able to do the same with APP, but again the mosaic registration would be ideal to just build a complete integration of ALL the data; then I could crop and process the target from there. Plus, with widefield, there are often multiple interesting things in the field of view, so I would only need to do the integration once and just crop out each item to process if wanted.

One small project I have is a gallery collection of Messier objects, so priority one for me is often "just getting the object"; after that it's about improving, getting more or better data, etc. Another project I'd like to do is a whole sky/Milky Way image. As I keep the integration file and work off copies, I was hoping the mosaic integration could take a heap of integration files and give me a whole sky image. I wouldn't care how long it took, as long as I was confident it would finish. I've got data over many years, with a variety of lenses and cameras, but it shouldn't be too much of a problem to deal with in software.

Any idea how long before some better docs are available? I'm sure I can find a way to get what I'm after from APP, but I don't know for certain whether I'm understanding what the options and the limitations are. Making full use of 100% of my data is always my goal, and I'm almost there with PI; mosaicing my data is problematic there, but APP is dead easy with small amounts. So today I'm still waiting for my ideal astrophotography processor to come along. Why do programmers limit software by CPU cores and RAM? Drive space is fast and cheap, and people like me are willing to wait for results; not everyone expects real-time results at the expense of quality. GPU and network processing would be useful too 🙂



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: Steve

That's close to what I have, but I don't always have a strip of data. Shooting from a tripod, I lock my trigger and it takes 100 frames before the camera buffer is full and halts. Then I repeat, but sometimes I first reframe my shot if I'm not certain I'm getting my target. My camera is in a "permanent" state for astrophotography, no other use, and I installed a light pollution filter in the body, so Live View doesn't show many stars to help me aim; with the filter, the optical viewfinder is blocked, so I keep its shutter closed too to prevent back light leaks. I can get 600 shots before the battery is low and the memory card is 95% full, so it works out well as a place for me to stop each session. Sometimes I only go to 400-500 and take darks and flats, and rarely bias too.

Hi Steve, okay, so in that way you're collecting overlapping strips of data (sort of). I don't think you need the mosaic feature to be able to register this data. Just make sure that one of the middle frames is chosen as the reference, and enable distortion correction and same camera and optics 😉 Does this work?

All this is to say I don't know, after all this, if I even got my target; it's often too faint to see in a JPEG or on the camera screen, but with hundreds of shots it reveals itself and I can usually bring it out nice and bright. I don't care about putting images online; this is for my own prints and portfolio, and I am painfully aware how much better they could be using gear I bought shortly before my stroke left me unable to use it. Bloody annoying.

Yes, that is the beauty of combining multiple exposures 😉

So I want to be able to make full use of all the data I've got: plate solve and find my target (I do a lot of comet hunting, so they are very faint and hard to track down). If I register to one frame, that "window" of my data might not have my target; it might be 50 pixels to the left of it. And repeating for a variety of frames through the set is a long way around, and I still might miss it even if it's in there. By registering everything as a mosaic, I can know definitively whether it's even in the field of view of my data or not; if it was only in one frame, then it'll be in a noisy part of the edge, but if it's there, it'll have all the data and be the best of the data, so I can crop and process from there.

Okay, I understand. To be absolutely clear regarding the mosaic function: it's only needed when you have frames in the dataset that don't have any overlap at all with the chosen reference frame 😉

So far it's been slow, and I have limited time to experiment with settings to test alternatives. With PixInsight I usually do a fast test integration with a small set, find my target and crop wide around it, then throw everything at it with the cropped integration as my reference target. I should be able to do the same with APP, but again the mosaic registration would be ideal to just build a complete integration of ALL the data; then I could crop and process the target from there. Plus, with widefield, there are often multiple interesting things in the field of view, so I would only need to do the integration once and just crop out each item to process if wanted.

One small project I have is a gallery collection of Messier objects, so priority one for me is often "just getting the object"; after that it's about improving, getting more or better data, etc. Another project I'd like to do is a whole sky/Milky Way image. As I keep the integration file and work off copies, I was hoping the mosaic integration could take a heap of integration files and give me a whole sky image. I wouldn't care how long it took, as long as I was confident it would finish. I've got data over many years, with a variety of lenses and cameras, but it shouldn't be too much of a problem to deal with in software.

Any idea how long before some better docs are available? I'm sure I can find a way to get what I'm after from APP, but I don't know for certain whether I'm understanding what the options and the limitations are. Making full use of 100% of my data is always my goal, and I'm almost there with PI; mosaicing my data is problematic there, but APP is dead easy with small amounts. So today I'm still waiting for my ideal astrophotography processor to come along. Why do programmers limit software by CPU cores and RAM? Drive space is fast and cheap, and people like me are willing to wait for results; not everyone expects real-time results at the expense of quality. GPU and network processing would be useful too 🙂

I am working hard on fixing known bugs/limitations and writing further documentation. Within a couple of weeks to months the documentation will be much more extensive and better. Improving the documentation is one of the main priorities currently.

APP is mostly limited by

  • the amount of memory that you have; this puts a limit on the size of the images that APP can process. You can find these limitations in the Quick Reference Guide.
  • your hard drive space and speed. APP uses your hard drive for the integration of data. The more space you have, the larger the integration can be, and if you work on a fast SSD, it will boost APP's speed significantly.

So there is no real limitation on CPU power for APP; it will just take longer with less CPU power.

GPU processing will come in APP soon. I have done the first tests already 😉

Let me know if you can now correctly register your data 😉 using my suggestion to not use the mosaic function.

Mabula

 



   
(@sil)
Main Sequence Star
Joined: 9 years ago
Posts: 17
Topic starter  

I've been ticked off with my home phone line dead, so no internet recently (still waiting for the telco), and I've been trying to get some info and screenshots to show on another forum where people are asking how APP compares in practice to the existing software options.

 

I did hit a snag I wasn't expecting with APP on the weekend, again with registration. I wrongly assumed I could do as I can in PixInsight and create a loosely cropped image to use as a registration target. E.g. in PI I would register and integrate three random frames to get a cleaner single frame in which to look for the target object I was interested in capturing; I would then crop widely around it and save that. Then I could load in ALL my frames, use this one as the registration target, and PI would register all the frames, cropping to match the target, and I would integrate afterwards. The benefit is I'm not wasting processing time and storage space dealing with areas of the frame I don't need.

Trying this in APP, though, it seems I can't work this way, so it's wasting time and space for me. Should it work this way, or is it at least on the TODO list? It also makes drizzle scaling worthwhile; there doesn't seem any point to me even looking at it at this point 🙁

 



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi Steve,

I did hit a snag I wasn't expecting with APP on the weekend, again with registration. I wrongly assumed I could do as I can in PixInsight and create a loosely cropped image to use as a registration target. E.g. in PI I would register and integrate three random frames to get a cleaner single frame in which to look for the target object I was interested in capturing; I would then crop widely around it and save that. Then I could load in ALL my frames, use this one as the registration target, and PI would register all the frames, cropping to match the target, and I would integrate afterwards. The benefit is I'm not wasting processing time and storage space dealing with areas of the frame I don't need.

Fortunately, this really should be possible in APP. At what point does it not work for you?

at 1) trying to make the new reference frame?

at 2) cropping the frames?

at 3) registering the cropped frames to the new reference? Cropping to the reference is done automatically if you set the composition mode to reference in 6), so it doesn't waste processing time and storage like you indicate.

It also makes drizzle scaling worthwhile; there doesn't seem any point to me even looking at it at this point

Can you clarify? I don't think I understand what you are referring to here. APP has a full and complete Drizzle implementation. On a side note, drizzle really shouldn't be used for scaling, if that's what you are looking for; simply scale the integration with the scale setting instead. The result is probably better (less noisy) than with a drizzle integration.

Drizzle is meant to increase sharpness in exchange for more noise, and can only be accomplished with:

1) dithered data

2) undersampled data

3) lots of data

Kind regards,

Mabula



   
(@sil)
Main Sequence Star
Joined: 9 years ago
Posts: 17
Topic starter  

Fortunately, this really should be possible in APP. At what point does it not work according to you?

I had my cropped registration target; in APP, on the register step, I chose it as my target there and it wouldn't proceed. Sorry, I didn't note the errors, I'm not really in a tester frame of mind. I think I eventually got an error related to the pixel dimensions of the target differing from those of the lights, but I don't recall if this was on pressing the register button, or if it first says the target also needs star analysis etc. and that's where it fails due to pixel dimensions. Either way, I'm not sure why it matters; the software should scale and crop the lights to match, whether or not the target is calibrated, normalised, in the same colour space, or anything. I proved this years ago to those who claimed panorama stitching was impossible using different cameras and focal lengths. It just adds a bit of work (something computers are meant to be good at).

Anyway, it seems APP again doesn't work as I expected, something the improved documentation could solve or have prevented. I don't hate APP, but I do think it was rushed to market, and I can't yet recommend it to others based on how it works out of the box. No software should require forum discussions for someone to be able to use it; it should be intuitive enough, with good documentation at the very least. That's my only real main criticism. Your algorithms seem as strong as, and in some cases even better than, what PixInsight currently uses, and some of my testing shows improvements, but these frustrations could have been addressed before release. What's done is done.



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi Steve,

I had my cropped registration target; in APP, on the register step, I chose it as my target there and it wouldn't proceed. Sorry, I didn't note the errors, I'm not really in a tester frame of mind. I think I eventually got an error related to the pixel dimensions of the target differing from those of the lights, but I don't recall if this was on pressing the register button, or if it first says the target also needs star analysis etc. and that's where it fails due to pixel dimensions. Either way, I'm not sure why it matters; the software should scale and crop the lights to match, whether or not the target is calibrated, normalised, in the same colour space, or anything. I proved this years ago to those who claimed panorama stitching was impossible using different cameras and focal lengths. It just adds a bit of work (something computers are meant to be good at).

Regarding your comment in bold: not being able to register due to different pixel dimensions between images will never happen in APP, so I guess you saw something else? APP really has no problem at all registering images from different sources, or with different image scales or dimensions.

To be able to register images to a reference, the reference needs to be star analysed. Only then does the software know the star locations in the reference frame, which is essential for registering any frame.

Just load your lights and the reference frame, select them all, and start star analysis. After star analysis, manually select your reference frame, disable it, and click on register. The frames should then be registered to your reference 😉

Either way, I'm not sure why it matters; the software should scale and crop the lights to match, whether or not the target is calibrated, normalised, in the same colour space, or anything. I proved this years ago to those who claimed panorama stitching was impossible using different cameras and focal lengths. It just adds a bit of work (something computers are meant to be good at).

Okay, I totally agree, and so does APP. Please have a look at this video tutorial:

https://www.astropixelprocessor.com/registration-normalization-integration-using-ddc-lnc-mbb/

You will clearly see that APP can handle even data flipped in the x- or y-axis, besides being able to register data of different image dimensions and image scales.

Furthermore, in 6) Integrate, the composition mode setting allows you to integrate to different fields of view: full will show all pixels of all frames, reference will only use the field of view of the reference, and you can even use a crop of the reference, for instance to drizzle only the main target 😉

Lastly, you can also directly scale the integration with the scale setting in 6).

My apologies for not having the documentation completed yet, Steve ;-( I am working on it, and it will be completed. That's a promise; I consider it very important myself.

Mabula



   