
[Sticky] Combining subs with multiple exposure times

15 Posts
7 Users
1 Likes
10.3 K Views
(@marc_theunissen)
Red Giant
Joined: 7 years ago
Posts: 26
Topic starter  

How can I best combine subs with multiple exposure times?

Say I have 50x30s subs with master calibration files and 20x180s subs (again with master calibration files). What would be the best strategy to combine them?


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Excellent question, Marc,

There are several ways that you can do this and it all depends on what you are trying to achieve.

If you still need to calibrate the data, then I would advise calibrating both sets of data and saving them in 2). If you have RGB data with chromatic aberration, save them with the "align channels" mode, which reduces the amount of chromatic aberration significantly in most cases.

Then proceed with your calibrated frames.

You can play with the weights in 6) INTEGRATE:

  • integrate with weights: equal, if you want all frames to be weighted equally. This will give you a result which is suboptimal in both SNR and sharpness.
  • integrate with weights: exposure, if you want the longer exposure frames, which should have higher SNR, to have more weight. The integration should have better SNR but less sharpness.
  • integrate with weights: quality, if you want to use APP's quality calculation. The quality parameter is based on noise, star density, and star size and shape. Usually this gives the best integration for noise and sharpness combined.
  • integrate with weights: SNR, if you want to use the SNR of the frames. This is a really dangerous method: any deviating gradients between the frames will make the SNR metric totally unreliable. Of all the settings, this is the least attractive one, and I wouldn't advise using SNR for weights. Bad frames with some clouds or poor transparency can also get higher SNR, strongly degrading the integration result.
  • integrate with weights: noise, if you aim to have the lowest noise in the end result.
  • integrate with weights: star density, if you want to give more weight to the frames with the highest star density. This is usually a very good indicator of good transparency.
  • integrate with weights: star shape, if you want the frames with the smallest and roundest stars to have the most weight. This will give you the sharpest integration, and it works really well. Star shape means both roundness and size: the smaller and rounder the stars, the higher the weight of that frame. Frames with stars that are NOT round are punished heavily, so they get little weight. This is a very nice setting if you have some frames without perfect guiding but still want those frames to help reduce the noise in the data.
  • Last option, you can make 2 integrations of the two datasets and combine them using the RGB Combine tool, in which you have full control over how the images are combined. You will need to register both integrations first, before loading them into the RGB Combine tool.
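To give an idea of what these weight settings do conceptually, a weighted integration is a per-pixel weighted mean of the registered frames. The sketch below is not APP's actual implementation (which also normalizes frames and rejects outliers); the frame sizes, pixel values and weights are made up for illustration:

```python
import numpy as np

def weighted_integration(frames, weights):
    """Combine registered frames as a per-pixel weighted mean
    (a simplified sketch of integration with per-frame weights)."""
    frames = np.asarray(frames, dtype=np.float64)    # shape: (n_frames, H, W)
    weights = np.asarray(weights, dtype=np.float64)  # shape: (n_frames,)
    weights = weights / weights.sum()                # normalize weights to sum to 1
    # Weighted mean per pixel: sum_i w_i * frame_i
    return np.tensordot(weights, frames, axes=1)

# Example: two 30s subs weighted 1x, two 180s subs weighted 6x
# (the exposure ratio 180/30), on tiny constant 2x2 "frames"
short = [np.full((2, 2), 10.0) for _ in range(2)]
long_ = [np.full((2, 2), 60.0) for _ in range(2)]
stack = weighted_integration(short + long_, [1, 1, 6, 6])
```

With "weights: equal" all entries of the weight vector would be the same; the other modes differ only in how the per-frame weights are derived (exposure, noise, star density, and so on).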

 

Scale-independent quality parameters in APP

APP has the parameters star density & relative FWHM, which are very helpful if you combine data of different scales. APP calculates the star size/shape and star density relative to the scale differences between the frames. The scale differences are calculated using the homographies (projective transformations) between the frames.
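For the curious, the relative scale encoded in a 3x3 homography can be approximated as below. This is only a sketch, not APP's code; it assumes the transform is close to affine, which holds near the image center. A frame's measured FWHM divided by this factor gives a scale-independent star size:

```python
import numpy as np

def relative_scale(H):
    """Approximate linear scale factor of a 3x3 homography.
    The determinant of the 2x2 linear part gives the area
    magnification; its square root is the linear scale."""
    H = np.asarray(H, dtype=np.float64)
    H = H / H[2, 2]      # normalize the projective matrix
    A = H[:2, :2]        # linear (rotation/scale/shear) part
    return np.sqrt(abs(np.linalg.det(A)))

# A pure 2x up-scaling homography should report a scale of 2
H2 = np.diag([2.0, 2.0, 1.0])
```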

 

 


(@marc_theunissen)
Red Giant
Joined: 7 years ago
Posts: 26
Topic starter  

Thanks for this explanation!


   
(@chrisjuh)
White Dwarf
Joined: 7 years ago
Posts: 7
 

Hi Mabula,

How do I combine DSLR images with multiple exposure times? Can I also use the RGB Combine tool?


(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

Hi Chrisjuh,

Yes, you can use the RGB Combine tool for combining 2 RGB images/stacks/integrations of different exposures, but you need to make sure that these images are registered to each other. The RGB Combine tool won't do this automatically for you yet (it's on my todo list).

If you have more than 2 frames, you should really use the normal procedure and play with the integration weights.

A real HDR combination will become possible in the near future using the RGB combine tool as well.
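For readers wondering what such an HDR combination would involve: conceptually, both linear stacks are brought onto a common flux scale, and the unsaturated exposure is preferred per pixel. This toy sketch is not APP's future implementation; the saturation threshold and arrays are purely illustrative:

```python
import numpy as np

def hdr_merge(short, long_, exposure_ratio, saturation=0.9):
    """Toy HDR merge of two linear, registered stacks. The long exposure
    is used where it is unsaturated; the short exposure, scaled up to the
    long exposure's flux level, fills in the clipped highlights."""
    short = np.asarray(short, dtype=np.float64)
    long_ = np.asarray(long_, dtype=np.float64)
    scaled_short = short * exposure_ratio      # bring short onto long's flux scale
    # Where the long exposure clips, trust the scaled short exposure instead
    return np.where(long_ < saturation, long_, scaled_short)

# A faint pixel keeps the long exposure; a clipped pixel is replaced
merged = hdr_merge(np.array([[0.08, 0.3]]), np.array([[0.5, 1.0]]), 6.0)
```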

Let me know if this answers your questions,

Kind regards,

Mabula


(@bengourben)
Molecular Cloud
Joined: 6 years ago
Posts: 3
 

Hello Mabula.

Can I ask: what if I'm combining DSLR subs of different lengths, but I'm shooting with an H-alpha clip filter? I know how to achieve the desired H-alpha finished stack with subs of equal length, but do I need to do something different if the lengths vary? E.g. I have 20x420s, 30x600s and 20x540s.

Thanks

Jason


(@paul-from-northern-mi)
Red Giant
Joined: 4 years ago
Posts: 60
 

I am fairly new to Astro Pixel Processor. I like much of the promise of this software, yet I don't understand many of the terms used in it. I have followed the steps from 0) to 9), typically exercising just the main action within each set of options, sometimes with good results and sometimes not.

I have an unmodified DSLR for my camera. I did try doing a combination of shorter frames with longer ones; I wanted to try limiting the blowout of nebulosity within the Orion Nebula. This was done with a limited number of frames. It is possible that I did not follow your instructions above as you intended; yet it isn't really clear what I should and shouldn't do.

I do have one immediate question. When you stated above:

Posted by: @mabula-admin

If you still need to calibrate the data, then I would advise calibrating both sets of data and saving them in 2).

 

In doing this, are you saving these as 2 separate projects, or are you just piling all of the light exposures into one project? (I assumed here that I would just add all of my multiple-exposure light frames into one project; I did this and chose the weights by quality option.)

It looks like later you are suggesting a different alternative: to make 2 separate integrations and combine them later.

Posted by: @mabula-admin

Last option, you can make 2 integrations of the two datasets and combine them using the RGB Combine tool, in which you have full control over how the images are combined. You will need to register both integrations first, before loading them into the RGB Combine tool.

I don't know how this would apply with regard to the RGB Combine tool, or what this workflow might look like.

I am sorry for all of my questions; I guess my goal is to get the best benefit from Astro Pixel Processor, but as in the case of combining multiple exposure times, sometimes the best route isn't obvious.

Is it possible that someone will be doing a basic DSLR tutorial for combining exposure lengths?

 


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

@bengourben

No, you don't; you can combine them all into one integration. It won't be true HDR though, if that's what you're after; that's something to come in a later version.

 


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 
Posted by: @paul-from-northern-mi

In doing this are you saving this as 2 separate projects or are you just piling all of the light exposures in one project?  (I assumed here that I would just add all of my multiple exposure light frames into 1 project; I did this and chose the weights by quality option.)

I think he means here to calibrate the frames of the different sessions with their own calibration data. Flats are usually taken after each session and should be used on that particular session. After that, you can continue combining them with the other sessions.
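The per-session calibration Vincent describes follows the standard formula: subtract the master bias, then divide by the normalized, bias-subtracted master flat taken during that same session. A minimal sketch, not APP-specific (APP also handles darks and bad pixel maps):

```python
import numpy as np

def calibrate_session(lights, master_bias, master_flat):
    """Calibrate one session's light frames with that session's masters:
    calibrated = (light - bias) / normalized(flat - bias)."""
    flat = master_flat - master_bias
    flat = flat / flat.mean()   # normalize flat to mean 1
    return [(light - master_bias) / flat for light in lights]

# Tiny illustration: a perfectly flat field leaves only the bias removed
bias = np.full((2, 2), 100.0)
flat = np.full((2, 2), 600.0)
cal = calibrate_session([np.full((2, 2), 300.0)], bias, flat)
```

Each session is calibrated with its own masters first; only then are the calibrated frames from all sessions combined in one integration.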

A proper manual is in the works, unfortunately I have no ETA myself.

 


(@paul-from-northern-mi)
Red Giant
Joined: 4 years ago
Posts: 60
 
 

Posted by: @vincent-mod

A proper manual is in the works, unfortunately I have no ETA myself.

 

Vincent: That could be helpful, yet on the other hand it might be hard to communicate well. Sometimes terms and concepts are easy for the writer to understand but not for someone who is unfamiliar with the basic techniques and how they influence the linear flow.

I have been listening through some of the video tutorials, which are very good in themselves, but sometimes they aren't 100% applicable to what I am looking to do, and other times they just bring up more questions. I've noticed strange differences in workflow; sometimes they are due to the release levels, sometimes not.

I am thinking that the video tutorial for DSLR Mosaics might be the most applicable for processing the main thread topic (multiple exposure times), yet it was done many versions ago, so the question is whether the workflow is still the same. There are also some oddities that I've noticed; for instance, taken from the tutorial:

The different mosaic panels:

135mm:

Panel 1: 7x100s, 3x500s
Panel 2: 5x500s
Panel 3: 6x500s
Panel 4: 5x500s
Panel 5: 1x300s, 5x500s

All 135mm data was calibrated with a Bad Pixel Map and a MasterBias

400mm:

Panel 6: 10x100s, 5x300s, 5x600s
Panel 7: 10x300s, 8x600s
Panel 8 : 4x600s
Panel 9 : 12x600s
Panel 10: 7x600s
Panel 11: 3x600s

All 400mm data was calibrated with a Bad Pixel Map, MasterBias, and a MasterFlat

 

I am wondering why the 135mm data doesn't require a MasterFlat while the 400mm data does require one? There may be a simple reason for this, but I just don't understand the intent.

 

I have been putting together a chart with a Mac application called Mind Node, just to get a better understanding of what is necessary within the workflow. This chart branches off into different decisions and workflows, such as whether you are developing a completely new stack without the masters, or whether you already have the masters. To some people this type of branching decision map is helpful, yet to others it might not be. I am also mapping numerous questions encountered within specific areas of the workflow. It is a work in progress, but if it would be helpful in any way I could share it.

The Astro Pixel Processor software is somewhat of a manual itself, as it has pop-up boxes that describe what a particular function does, but perhaps not whether the function is helpful in a given circumstance. It doesn't capture the essence or intuition of what a certain workflow might give you for a certain set of data.

 

 

 


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

If it's helpful to you, it might be helpful to others. You can always send it to Mabula (mabula@astropixelprocessor.com) for him to have a good look at. In the above example, why there wasn't a masterflat for the 135mm data... no idea really. I would always advise using one though. Perhaps there simply wasn't any for that data; it happens, and that's why processing workflows can be different.


(@paul-from-northern-mi)
Red Giant
Joined: 4 years ago
Posts: 60
 

Thank you Vincent,

I will send this to Mabula as a PDF and see where that goes.  

In general, I'd like to gain a better understanding of the APP workflow to get better results as the details change. If this could be transmitted in some way to help others, that would only be good.

Specifically, of course, I have been looking to see if APP has a recommended workflow for processing multi-length exposures. (I've stalled out with an issue, but once it's resolved I intend to continue.)

The way I started my process since reading this forum thread was to go all the way to integration with each individual set of exposure times. Once I have all of those, I would start a new project to combine the different integrated images. This is just how I am understanding the information given here; I might be interpreting it incorrectly.


(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Ok, great. Multiple exposure times are possible, but APP doesn't yet have HDR processing (it's on the list though).


(@euripides)
Main Sequence Star
Joined: 4 years ago
Posts: 18
 

So, the first pic is the result of Ha subs (2m, 30s & 10s) integrated with weight = quality, and of course the Trapezium is not visible. But I have data from the core and the Trapezium in my 10s and 30s subs.

If I got it right, to achieve a better result I should follow the same steps, change my Integrate --> Multi-Session option to integrate per session, and then combine those 3 FITS with the Combine RGB tool?

 

[attached images: combined, 10s]

(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Good question. A true HDR option is not in yet, but mixing the Ha with the color might give a slightly better result; I never tried it for that purpose specifically though. What I would do is split the RGB into separate channels first, register and normalize them with the Ha, finishing by loading the channels into the RGB Combine tool and adding the Ha to the red channel, but mildly so (this needs experimenting). You may also try to use the Ha as a luminance layer to get more detail out of the color image.
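The "mildly so" step can be sketched as a simple weighted mix of the registered, normalized Ha into the red channel. The mixing factor below is an arbitrary starting point to experiment with, not an APP setting:

```python
import numpy as np

def blend_ha_into_red(red, ha, mix=0.2):
    """Blend a registered, normalized Ha stack into the red channel.
    mix=0 keeps the original red; mix=1 replaces it entirely with Ha."""
    red = np.asarray(red, dtype=np.float64)
    ha = np.asarray(ha, dtype=np.float64)
    return (1.0 - mix) * red + mix * ha

# Example: a mid-gray red pixel nudged toward a bright Ha pixel
new_red = blend_ha_into_red(np.array([0.5]), np.array([1.0]), mix=0.2)
```

The blended channel is then loaded as the red input in the RGB Combine tool; increasing `mix` strengthens the Ha contribution.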


   