

How do I integrate data from a dual rig and two cameras?

10 Posts
4 Users
12 Likes
2,366 Views
(@tooth_dr)
White Dwarf
Joined: 6 years ago
Posts: 13
Topic starter  

I'm looking for advice on the simplest way to integrate all my data together.

I have 2 x ED80s on a dual rig, with an Atik 383L+ on one and a QHY9M on the other.

Both cameras have the Kodak KAF-8300 sensor.

I have the complete set of calibration data for each individual camera - DARKS, FLATS, DARK FLATS, BIAS.

I also like to do mosaics, and use the fantastic mosaic tool to join the panels. Again, I would like to integrate all my data at the same time.

 

Any ideas are welcome; please provide details or settings.

 

Many thanks

Adam


   
 Sara
(@swag72)
Neutron Star
Joined: 7 years ago
Posts: 67
 

Hi Adam,

I use the 'multi session processing' in the load menu.

1) Load in the lights from camera 1 and assign the session as camera 1.

2) Load in the calibration files from camera 1 and assign them to camera 1.

3) Load in the lights from camera 2 and assign the session as camera 2.

4) Load in the calibration files from camera 2 and assign them to camera 2.

You will then be able to integrate them together. I find this works best when each mono channel is integrated separately.
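The session-tagging idea behind those four steps can be sketched in a few lines of Python. This is purely a hypothetical illustration (the folder names, `assign_sessions` function and layout are made up, not APP's internals): every frame carries a session label, so calibration frames are only ever matched to lights from the same camera.

```python
from pathlib import Path

def assign_sessions(files):
    """Group frame paths into sessions keyed on the camera folder name.

    Mirrors the multi-session idea: frames tagged 'camera1' never mix
    with calibration data tagged 'camera2'.
    """
    sessions = {}
    for f in map(Path, files):
        camera = f.parts[0]       # e.g. 'camera1' or 'camera2' (session tag)
        frame_type = f.parts[1]   # 'lights', 'darks', 'flats', 'bias', ...
        sessions.setdefault(camera, {}).setdefault(frame_type, []).append(f.name)
    return sessions

# Hypothetical folder layout for the dual rig described in this thread.
frames = [
    "camera1/lights/L_001.fit", "camera1/darks/D_001.fit",
    "camera2/lights/L_001.fit", "camera2/flats/F_001.fit",
]
by_session = assign_sessions(frames)
```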

Hope that helps.

 


   
(@tooth_dr)
White Dwarf
Joined: 6 years ago
Posts: 13
Topic starter  
Posted by: Sara

I use the 'multi session processing' in the load menu. [...]

 

Thanks Sara 👍🏼

What settings would you suggest for the integration stage in terms of outlier rejection, or is it no different from stacking a single-camera data set?


   
 Sara
(@swag72)
Neutron Star
Joined: 7 years ago
Posts: 67
 

Which rejection process I use depends on the number of lights. If it's more than 20, I use Winsorized sigma clipping and keep the rest of the settings at their defaults.
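For anyone curious why this rejection method wants a decent number of frames, here is a toy numpy sketch of Winsorized sigma clipping. The parameter names, iteration count and single-threshold logic are illustrative only, not APP's actual implementation; the point is that pulling extreme samples back to a boundary (rather than dropping them) gives a stable sigma estimate even with a strong outlier in the stack.

```python
import numpy as np

def winsorized_sigma_clip(stack, kappa=3.0, winsor_kappa=1.5, iters=3):
    """Toy per-pixel outlier rejection across a stack of aligned frames.

    stack: array of shape (n_frames, h, w).
    Returns a boolean mask of accepted samples, same shape as stack.
    """
    data = stack.astype(float).copy()
    for _ in range(iters):
        med = np.median(data, axis=0)
        sigma = np.std(data, axis=0)
        lo = med - winsor_kappa * sigma
        hi = med + winsor_kappa * sigma
        # Winsorize: clamp extremes to the boundary instead of dropping
        # them, which keeps the sigma estimate from being blown up.
        data = np.clip(data, lo, hi)
    med = np.median(data, axis=0)
    sigma = np.std(data, axis=0)
    # Final rejection uses the robust statistics against the raw data.
    return np.abs(stack - med) <= kappa * sigma

# 25 simulated lights with one frame hit by a satellite trail / hot pixel.
stack = np.full((25, 4, 4), 100.0)
stack += np.random.default_rng(0).normal(0, 1, stack.shape)
stack[3, 2, 2] = 5000.0
mask = winsorized_sigma_clip(stack)   # the 5000.0 sample gets rejected
```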


   
(@itarchitectkev)
Neutron Star
Joined: 6 years ago
Posts: 111
 

I have a similar request, but keep running into issues.

My setup is two different scopes, so two different FOVs, with a mono camera on one (ASI1600MM) and an OSC colour camera on the other (ASI294MC).

I seem to stumble at step 0), because to process the colour images I need to Force Bayer CFA, and obviously I don't need to do that for the mono data. Would I have to process these in APP independently and then combine them in Photoshop, for example (i.e. add Ha to RGB, or add a luminance layer to RGB), or can I do this all in APP? The advantage would be that it solves the FOV difference between the two scopes: my longer focal length basically "adds more detail" and I crop out the wider FOV shot.

Kev


   
(@tooth_dr)
White Dwarf
Joined: 6 years ago
Posts: 13
Topic starter  
Posted by: itarchitectkev

I have a similar request, but keep running into issues. [...]

 

Hi Kev

I also combine data from a mono CCD and a DSLR. I stack the mono data first, then I stack the DSLR data separately. When I'm working with the DSLR data I include one mono frame in the file list. I keep it checked and go as far as the end of stage 3. At stage 4 (register) I select the mono frame as the reference frame, and uncheck it. I then register all the DSLR lights to this mono light, and continue on to the integration stage. What you then get is both data sets stacked and ready to layer in PS.
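The reference-frame idea in this trick can be illustrated outside APP with a toy phase-correlation alignment. This is a numpy-only stand-in, not what APP does (APP uses proper star-based registration), and it only recovers integer translations, but it shows how picking one mono frame as the reference lets you solve the shift that maps the other data set onto it.

```python
import numpy as np

def phase_correlate(reference, target):
    """Return the (dy, dx) integer shift that aligns target to reference."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(target)
    # Normalised cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret shifts past the midpoint as negative (FFT wrap-around).
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# Synthetic 'mono reference' frame and a shifted 'DSLR' frame.
rng = np.random.default_rng(1)
mono = rng.random((64, 64))
dslr = np.roll(mono, shift=(5, -3), axis=(0, 1))
shift = phase_correlate(mono, dslr)
# Applying the recovered shift to the DSLR frame brings it back onto
# the mono reference: np.roll(dslr, shift, axis=(0, 1)) == mono.
```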


   
(@itarchitectkev)
Neutron Star
Joined: 6 years ago
Posts: 111
 

@tooth_dr - that's it. That's the part I was struggling with. I used this technique when I just had the OSC camera and was shooting Ha with it, but this trick allows the FOV to be solved so I can combine these in post-processing! Thank you.


   
(@tooth_dr)
White Dwarf
Joined: 6 years ago
Posts: 13
Topic starter  
Posted by: itarchitectkev

@tooth_dr - that's it. That's the part I was struggling with. [...]

No problem.  I almost bought Registar, but when I discovered it could be done in APP, I didn't need to buy any other software.


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 
Posted by: AstroCookbook Kev

I have a similar request, but keep running into issues. [...]

Hi Kev @itarchitectkev and @tooth_dr,

Indeed, the best way for now is what tooth_dr recommends:

I stack the mono data first. [...] At stage 4 (register) I select the mono frame as the reference frame, and uncheck it. I then register all the DSLR lights to this mono light, and continue on to the integration stage.

This way you can be sure that the data is registered to the reference.

I have it on my to-do list to make this all possible automatically (meaning you could load all the data at once), but it is a bit complicated: APP will need to differentiate between the monochrome and RGB (multi-channel) data so that normalization still works, and you would end up with both a monochrome and an RGB integration, registered and normalized to each other.

Furthermore, as you experienced, the 0) Raw/FITS settings can differ between the loaded images, which needs a further upgrade of the current implementation...

It's on my ToDo 😉

Mabula


   
(@mabula-admin)
Universe Admin
Joined: 7 years ago
Posts: 4366
 

@tooth_dr, thanks for helping out 😉 !


   