The other day I started a monster processing run of about 650 frames. It took nearly 36 hours to complete, which led me to believe that there must be a smarter way to do this. The stack of frames came from three different sessions and two different cameras/scopes, where one camera was monochrome and the other OSC. For the OSC a quad-band filter was used.
All processing up to integration worked as normal, but took many, many hours. That was expected due to the large stack. On the Integration tab I selected 1st-degree LNC, 3 iterations, because of the different cameras/scopes/sessions. I gave APP all of the computer's resources, but it still took about 24 hours to integrate the frames. That was not expected. (@Mabula, improving the speed of the LNC process would not hurt. 😉 )
So is there a smarter way to do this? For example, process each session or camera/scope separately and combine the results (using LNC) afterwards. Is that possible?
What would be a good process for this?
Well, good news: many parts of APP are being optimised for speed, and this should already be noticeable in 1.076. The statistics behind many of APP's processes are likely among the most advanced out there and hence require a lot of processing power. APP was made to take advantage of modern computers, and we simply try to implement the best methods; the downside is that this can take a lot of time. We're constantly trying to improve on this, of course.
And yes, splitting the sessions up and processing them first is a very good approach. The same goes for mosaics: first integrate the panels and fully process them, and only then start on the mosaic.
I too am expecting to be dealing with very high stack counts. I recently purchased a ZWO ASI6200MC Pro (OSC, with a full-frame BSI CMOS Sony sensor), and have recently learned how to calculate optimal sub-exposure times. For my fast reflective Takahashi astrographs, they turned out to be a LOT shorter than I expected; details here:
https://www.cloudynights.com/topic/690691-subexposure-length-calculations-for-my-new-asi6200mc-pro
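For reference, the usual rule of thumb behind such calculations is to expose long enough that sky-background shot noise swamps camera read noise. This is a minimal sketch of that rule, not necessarily the exact method from the linked thread; the function name and the 5% noise-increase target are my own assumptions:

```python
import math

def min_sub_exposure(read_noise_e, sky_rate_e_per_s, noise_increase_pct=5.0):
    """Shortest sub length (seconds) so that read noise adds at most
    noise_increase_pct extra noise over a pure sky-limited exposure.

    Per-sub noise is sqrt(S + RN^2) with S = sky electrons collected.
    Requiring sqrt(S + RN^2) <= (1 + p) * sqrt(S) gives
    S >= RN^2 / ((1 + p)^2 - 1), hence t >= C * RN^2 / sky_rate.
    For p = 5% the factor C is about 10 (the common "10x RN^2" rule).
    """
    p = noise_increase_pct / 100.0
    c = 1.0 / ((1.0 + p) ** 2 - 1.0)
    return c * read_noise_e ** 2 / sky_rate_e_per_s

# Example (illustrative numbers, not measured values):
# a low-read-noise CMOS camera (1.5 e-) under a sky producing 1 e-/s/pixel
t = min_sub_exposure(read_noise_e=1.5, sky_rate_e_per_s=1.0)
print(f"minimum sub length: {t:.1f} s")  # roughly 22 s
```

This is why fast optics and bright skies push the optimal sub length down so far: the sky rate appears in the denominator, so doubling the sky signal halves the required exposure.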
I am thrilled to hear about the speed-ups that the latest release offers! I need to upgrade my computer hardware, and since AMD has been very nice to me from an investment perspective, I am happy to use their high-end desktop processors such as the Threadripper 3970X. Can I presume that APP can scale up to and take full advantage of 32 cores/64 threads? I also see that APP can make use of a powerful GPU, unlike PixInsight. Can this be AMD as well, or is only Nvidia supported? I am tempted to wait for "Big Navi", perhaps mid this year, but maybe a Radeon VII or RX 5700 XT would do for now.
Any other hardware recommendations (DRAM, SSDs, HDDs, etc.) to best take advantage of APP's computational demands would be appreciated.
All the best,
Kevin
Yes! APP is multithreaded in many of its processing steps; of course we work on this constantly when we see bottlenecks, and try to make it as efficient as possible in every way. A huge dataset is still a huge dataset, and we can't break physics, so there will always be a penalty for using advanced algorithms. But APP is all about getting the best quality in the most efficient way possible. This is why it takes more development time compared to other packages, but I think it's worth it.
You can select the number of cores you want APP to take advantage of, as well as the amount of memory.
A future goal for APP is to use the GPU for some of its tasks. This is a big one, so it will take time, but it's something Mabula is working on.
General advice: buy all the memory you can, and use an SSD as the location for processing. Buy a good GPU to be future-proof. Also, faster is better. 😉