

Can't register any of my 2000+ subs for M42.

(@kemosabe)
Molecular Cloud
Joined: 3 years ago
Posts: 2
Topic starter  

I loaded the subs and calibration files, but I can't for the life of me stack (integrate) the subs in APP. All I get is a list of the subs and calibration files with the #stars, density, background & dispersion, SNR & noise, FWHM, quality score, and registration RMS columns all blank.

(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

That is interesting. I do have to say that stacking that many at once isn't advised; I would divide the data into batches of maybe 200 at a time. With the proper calibration data each batch will be calibrated nicely, and the resulting integrations can then be stacked together again, which is much faster and less resource-heavy.

What happens if you load in 50 subs (just to test), go to tab 6 and press integrate?
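
For anyone who wants to organise the files outside APP first, a minimal sketch of splitting a big folder of subs into batches of ~200 could look like the Python below. The folder name, path, and batch size are placeholder assumptions; APP itself only needs you to load and integrate each batch separately.

```python
# Sketch only: split a folder of light frames into batch_01, batch_02, ...
# so each batch can be loaded and integrated in APP on its own.
from pathlib import Path

SRC = Path("M42/lights")   # placeholder folder containing the 2000+ subs
BATCH_SIZE = 200           # roughly the batch size suggested above

subs = sorted(SRC.glob("*.fits"))
for i in range(0, len(subs), BATCH_SIZE):
    batch_dir = SRC / f"batch_{i // BATCH_SIZE + 1:02d}"
    batch_dir.mkdir(exist_ok=True)
    for f in subs[i:i + BATCH_SIZE]:
        f.rename(batch_dir / f.name)   # move the sub into its batch folder

print(f"{len(subs)} subs split into {(len(subs) + BATCH_SIZE - 1) // BATCH_SIZE} batches")
```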


(@turtlecat1000)
Red Giant
Joined: 2 years ago
Posts: 63
 

As a Stellina user, it's common for me to have a lot of files. I was thinking about trying that approach (working with a smaller set at a time). My question is: at what stage should I be telling it to create files? At the 5) Normalize step? Then, when I have all the normalized files from each session, run 6) Integrate? Just want to make sure I'm understanding the approach correctly.


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

You just need proper master calibration files that can be used for the images. If you integrate, say, 100 at a time (or more) you'll get all the benefits of satellite rejection, dithering, etc. You create the various smaller integrations, and APP will have saved each integrated file already. You then collect those and load them in again as lights. No calibration is needed anymore, of course, so you simply integrate the integrations with each other.
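
As a simplified illustration of why this two-stage approach loses essentially nothing (assuming equal-sized batches with equal weights, which is a simplification of APP's quality-weighted integration), the mean of the batch means is identical to the mean of all the subs:

$$\bar{I}_\text{final} \;=\; \frac{1}{k}\sum_{j=1}^{k}\Bigl(\frac{1}{m}\sum_{i=1}^{m} x_{j,i}\Bigr) \;=\; \frac{1}{km}\sum_{j=1}^{k}\sum_{i=1}^{m} x_{j,i},$$

so with $N = km$ subs the noise still averages down as $1/\sqrt{N}$, just as if everything had been integrated in one go.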


(@turtlecat1000)
Red Giant
Joined: 2 years ago
Posts: 63
 

Ah, OK. So run 6) Integrate and save the integrated files (deleting the other bits like the final integrated image), then do that for each session. When ready, just load all the integrated files as lights, with nothing else, and run.

Just making sure I have the sequence down. 


(@kemosabe)
Molecular Cloud
Joined: 3 years ago
Posts: 2
Topic starter  

I will reduce the data sets to no larger than 200 subs and give it a try. Yes, it is a resource-heavy endeavor: it has been >10 hours and I'm still running at Step 5) Normalize. I saw a comment that APP 1.082 Beta may have a regression, and some folks are having the same problem with integration, or are not able to integrate their subs at all.

 

It was recommended to install APP 1.083 Beta2 to fix the regression. I am currently running APP 1.083 with 1220 subs to see if they will stack and integrate. This is one helluva SW stress test. 🙄

Vincent, thank you for the advice.


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Well, if you go to 6) and integrate immediately, there will be no other sub-integrations. The end result APP saves is the file you want, the actual integration. Beta2 does have an issue sometimes; Mabula is reverting that engine back to a previous, properly working one.


(@turtlecat1000)
Red Giant
Joined: 2 years ago
Posts: 63
 

Just gave it a try and it was so much faster, night and day. I also found I get better results by cropping the integrations from each session first, which reduces the cropping required in the final integration. The exposures don't always blend super well at the corners, though, but that could be how MBB is doing its job combined with the differences between each set of integration files. Still, much faster.


(@dsifry)
White Dwarf
Joined: 4 years ago
Posts: 8
 

How does this strategy affect drizzle and upsampling? Do you drizzle-process each smaller set, integrate to a larger image, and then re-stack the new upscaled images? Won't that produce a blurrier image than drizzling all of the subs together? What's your suggested workflow?


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Dithering etc. will work fine with these "smaller" datasets. Regarding drizzling, that's a good question, but it should still work fine on these "smaller" (so 100-200 sub) datasets. The requirements for a good drizzle result are undersampled data, good dithering, and loads of data if possible; 100-200 subs is quite a lot of data, I'd say. So that should produce a nicely corrected, drizzled end result per sub-integration that can then still be combined with the other sub-sets.
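
A rough rule-of-thumb check for whether data is undersampled (this is general sampling arithmetic, not an APP feature, and the numbers below are placeholder example values):

```python
# Sketch: estimate how many pixels the star FWHM spans.
pixel_size_um = 2.4       # camera pixel size in microns (example value)
focal_length_mm = 400     # telescope focal length in mm (example value)
seeing_fwhm_arcsec = 2.0  # typical star FWHM in arcseconds (example value)

image_scale = 206.265 * pixel_size_um / focal_length_mm   # arcsec per pixel
pixels_per_fwhm = seeing_fwhm_arcsec / image_scale

print(f'{image_scale:.2f}"/px, FWHM spans {pixels_per_fwhm:.1f} px')
# As a common rule of thumb, fewer than ~2 px per FWHM suggests undersampled
# data, which is where drizzle (with good dithering and many subs) helps most.
```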

