2023-03-15: APP 2.0.0-beta14 has been released!
IMPROVED FRAME LIST: sort by clicking a column header, and columns can now be reordered; the layout is preserved between restarts.
We are very close now to releasing APP 2.0.0 stable with a complete printable manual...
Astro Pixel Processor Windows 64-bit
Astro Pixel Processor macOS Intel 64-bit
Astro Pixel Processor macOS Apple M Silicon 64-bit
Astro Pixel Processor Linux DEB 64-bit
Astro Pixel Processor Linux RPM 64-bit
Can't register any of my 2000+ subs for M42.
I loaded the subs and calibration files, but I can't for the life of me stack (integrate) the subs in APP. All I get is a list of the subs and calibration files with the #stars, density, background & dispersion, SNR & noise, FWHM, quality score, and registration RMS columns all blank.
That is interesting. I do have to say that stacking that many at once isn't advised; I would divide the data into batches of maybe 200 at a time. With the proper calibration data they will be calibrated nicely, and the resulting integrations can then be stacked together again, which is much faster and less resource-heavy.
What happens if you load in 50 subs (just to test), go to tab 6 and press integrate?
As a Stellina user, it's common to have a lot of files. I was thinking about doing that approach (working with a smaller set at a time). My question is at what stage should I be telling it to create files? On the 5 Normalize step? Then when I have all the various normalized files from each session run 6 Integrate then? Just want to make sure I'm understanding the approach correctly.
You just need proper master calibration files that can be used for the images. If you integrate, say, 100 at a time (or more), you'll still get all the benefits of satellite rejection, dithering, etc. You create the various smaller integrations, and APP will have saved each integrated file already. You collect those and load them in again as lights. No calibration is needed anymore, of course, so you then simply integrate the integrations with each other.
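To see why stacking the batch integrations gives the same result as one big stack, here is a minimal NumPy sketch. This is purely illustrative, not APP's actual engine: the fake sub data, batch sizes, and plain mean combination (no outlier rejection or per-frame weighting) are all invented for the demonstration. The key point is that each batch mean must be weighted by the number of subs it contains.

```python
import numpy as np

# Generate 300 fake "subs" (tiny 4x4 frames) with random noise.
rng = np.random.default_rng(0)
subs = rng.normal(loc=100.0, scale=5.0, size=(300, 4, 4))

# Reference: integrate all subs at once with a plain mean.
full_stack = subs.mean(axis=0)

# Batch approach: integrate in three batches of 100, then combine the
# batch integrations, weighting each by its sub count.
batches = [subs[0:100], subs[100:200], subs[200:300]]
batch_means = np.array([b.mean(axis=0) for b in batches])
weights = np.array([len(b) for b in batches], dtype=float)
combined = np.average(batch_means, axis=0, weights=weights)

# The two results agree (up to floating-point rounding).
assert np.allclose(full_stack, combined)
```

With equal batch sizes the weights are equal, so a plain mean of the batch integrations would also work; unequal sessions are where the weighting matters. APP's real integration adds rejection and quality weighting on top of this idea.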
Ah, OK. So run 6 Integrate and save the integrated files (deleting the other bits like the final integrated image). Then do that for each session. When ready just load all the integrated files as lights with nothing else and just run.
Just making sure I have the sequence down.
I will reduce the data sets to no larger than 200 subs and give it a try. Yes, it is a resource-heavy endeavor: it has been >10 hours and I'm still on step 5) Normalize. I saw a comment that the APP 1.082 beta may have a regression, and some folks are having the same problem with integration, or not being able to integrate their subs.
It was recommended to load APP 1.083 Beta2 to fix the regression. I am currently running APP 1.083 with 1220 subs to see if they will stack and integrate. This is one helluva SW stress test. 🙄
Vincent, thank you for the advice.
Well, if you go to 6) and integrate immediately, there will be no other sub-integrations. The end result APP saves is the file you want: the actual integration. Beta2 does sometimes have an issue; Mabula is reverting that engine back to a previous, properly working one.
Just gave it a try and it was so much faster, night and day. Although I found I get better results by cropping the integrations from each session, which then reduces the cropping required in the final integration. The exposures don't always blend super well at the corners, though, but that could be how the MBB is doing its job, plus the differences between each set of integration files. Still, much faster.
How does this strategy affect drizzle and upsampling? Do you drizzle process each smaller set, integrate to a larger image, and then re-stack the new upscaled images? Won't that produce a blurrier image than drizzling all of the subs together? What's your suggested workflow?
Dithering etc. will work fine with these "smaller" datasets. Regarding drizzling, that's a good question, but it should still work fine on these "smaller" (so 100-200 sub) datasets. The requirements for a good drizzle result are undersampled data, good dithering, and lots of data if possible, and 100-200 subs is quite a lot of data, I'd say. So that should produce a nicely corrected, drizzled end result per sub-integration that can then still be combined with the other sub-sets.