
The problem with the data upload limit for attachments has been fixed; I have restored it to 30 MB. A recent forum software upgrade was responsible for the changed limit. Please accept my apologies for missing this when upgrading the forum software. Mabula

How do I integrate data from a dual rig and two cameras?  


(@tooth_dr)
Brown Dwarf Customer
Joined: 2 years ago
Posts: 9
29/11/2018 5:00 pm  

I'm looking for advice on the simplest way to integrate all my data together.

I have 2 x ED80s on a dual rig, with an Atik 383L+ on one and a QHY9M on the other.

Both cameras have the Kodak KAF-8300 sensor.

I have the complete set of calibration data for each individual camera - DARKS, FLATS, DARK FLATS, BIAS.

I also like to do mosaics, and use the fantastic mosaic tool to join the panels. Again, I would like to integrate all my data at the same time.

 

Any ideas welcomed, please provide details or settings.

 

Many thanks

Adam


 Sara
(@swag72)
Red Giant Customer
Joined: 2 years ago
Posts: 67
29/11/2018 7:53 pm  

Hi Adam,

I use the 'multi session processing' in the load menu.

1) Load in lights from camera 1 - Assign the session as camera 1

2) Load in calibration files from camera 1 and assign them to camera 1

3) Load in lights from camera 2 - assign as camera 2

4) Load in calibration files from camera 2 and assign to camera 2

You will then be able to integrate them together. I find it works best to integrate each mono channel separately.
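If it helps to see the idea outside of APP, here is a rough sketch of the grouping logic in Python. This is not APP's code; the session names and numbers are made up, and it only shows that calibration stays per camera while the integration runs over everything at once:

```python
# Sketch only: calibrate each camera's lights with that camera's own masters,
# then integrate all of it as a single stack. Synthetic data, not APP code.
import numpy as np

rng = np.random.default_rng(0)

def fake_lights(n, level):
    """Stand-in for loading one camera's light frames."""
    return [rng.normal(level, 10.0, (100, 100)) for _ in range(n)]

# One entry per session, i.e. per camera, each with its own calibration masters.
sessions = {
    "camera1_atik383": {"lights": fake_lights(10, 1000.0),
                        "master_dark": np.full((100, 100), 100.0),
                        "master_flat": np.full((100, 100), 1.0)},
    "camera2_qhy9m":   {"lights": fake_lights(10, 1005.0),
                        "master_dark": np.full((100, 100), 120.0),
                        "master_flat": np.full((100, 100), 1.0)},
}

# Calibration stays strictly inside its own session...
calibrated = []
for name, s in sessions.items():
    for light in s["lights"]:
        calibrated.append((light - s["master_dark"]) / s["master_flat"])

# ...but the integration runs over all sessions at once.
integration = np.median(np.stack(calibrated), axis=0)
print(integration.shape)  # (100, 100)
```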

Hope that helps.

 


(@tooth_dr)
Brown Dwarf Customer
Joined: 2 years ago
Posts: 9
29/11/2018 8:56 pm  
Posted by: Sara

Hi Adam,

I use the 'multi session processing' in the load menu.

1) Load in lights from camera 1 - Assign the session as camera 1

2) Load in calibration files from camera 1 and assign them to camera 1

3) Load in lights from camera 2 - assign as camera 2

4) Load in calibration files from camera 2 and assign to camera 2

You will then be able to integrate them together. I find it works best to integrate each mono channel separately.

Hope that helps.

 

Thanks Sara 👍🏼

What settings would you suggest for the integration stage, in terms of outlier rejection? Or is it no different from stacking a single-camera data set?


 Sara
(@swag72)
Red Giant Customer
Joined: 2 years ago
Posts: 67
30/11/2018 10:28 am  

It depends on the number of lights I'm using as to which rejection process I use. If it's more than 20, I use LAN Winsor and keep the rest of the settings at their defaults.
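For anyone curious what that sort of rejection actually does, here is a generic Winsorized sigma clipping sketch in Python. It illustrates the general algorithm only, not APP's exact implementation or its default parameters:

```python
# Generic Winsorized sigma clipping, illustration only (not APP's code).
import numpy as np

def winsorized_sigma_clip(stack, kappa=3.0, winsor_k=1.5, iterations=5):
    """stack: (n_frames, H, W). Returns a boolean mask of samples to keep."""
    center = np.median(stack, axis=0)
    sigma = np.std(stack, axis=0)
    work = stack.copy()
    for _ in range(iterations):
        lo = center - winsor_k * sigma
        hi = center + winsor_k * sigma
        work = np.clip(work, lo, hi)          # pull outliers in to the boundaries
        center = np.mean(work, axis=0)        # re-estimate robust statistics
        sigma = 1.134 * np.std(work, axis=0)  # factor compensates for the clipping
    # Reject the original samples that sit outside kappa sigma of the robust centre.
    return np.abs(stack - center) <= kappa * sigma

# Example: mean-combine 25 synthetic frames with rejection.
rng = np.random.default_rng(1)
frames = rng.normal(1000.0, 10.0, (25, 64, 64))
frames[3, 10, 10] = 60000.0                   # a hot pixel / satellite hit
keep = winsorized_sigma_clip(frames)
counts = np.maximum(keep.sum(axis=0), 1)
result = (frames * keep).sum(axis=0) / counts
```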


(@itarchitectkev)
Main Sequence Star Customer
Joined: 2 years ago
Posts: 34
31/01/2019 2:06 pm  

I have a similar request, but keep running into issues.

My setup is two different scopes, so two different FOVs, with a mono camera on one (ASI1600MM) and an OSC colour camera on the other (ASI294MC).

I seem to stumble on step 0) because, to process the colour images, I need to Force Bayer CFA, which obviously I don't need for the mono data. Would I have to process these in APP independently and then combine them in Photoshop (e.g. add Ha to RGB, or add a luminance layer to RGB), or can I do this all in APP? The advantage is that APP solves the FOV difference between the two scopes (my longer focal length basically "adds more detail" and I crop out the wider-FOV shot).

Kev


(@tooth_dr)
Brown Dwarf Customer
Joined: 2 years ago
Posts: 9
31/01/2019 2:17 pm  
Posted by: itarchitectkev

I have a similar request, but keep running into issues.

My setup is two different scopes, so two different FOVs, with a mono camera on one (ASI1600MM) and an OSC colour camera on the other (ASI294MC).

I seem to stumble on step 0) because, to process the colour images, I need to Force Bayer CFA, which obviously I don't need for the mono data. Would I have to process these in APP independently and then combine them in Photoshop (e.g. add Ha to RGB, or add a luminance layer to RGB), or can I do this all in APP? The advantage is that APP solves the FOV difference between the two scopes (my longer focal length basically "adds more detail" and I crop out the wider-FOV shot).

Kev

 

Hi Kev

I also combine data from mono CCD and DSLR.  I stack the mono data first.  Then I combine the DSLR data separately.  When I'm working with the DSLR data I include one mono frame in the file list.  I keep it checked and go as far as the end of stage 3.  On stage 4 (register) I select the mono frame as the reference frame, and uncheck it.  I then register all the DSLR lights to this mono light, and continue on to the integration stage.  What you then get is both data sets stacked and ready just to layer in PS.
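Outside of APP, the same register-to-a-reference trick looks roughly like this. The astroalign package just stands in here for APP's star-based registration, and the file names are placeholders, so treat it as a sketch of the idea rather than the actual workflow:

```python
# Sketch: register the integrated DSLR/OSC stack onto the integrated mono stack
# so the two line up pixel for pixel. astroalign stands in for APP's own
# registration; file names are placeholders.
import astroalign as aa
from astropy.io import fits

mono = fits.getdata("mono_integration.fits").astype(float)   # reference frame
dslr = fits.getdata("dslr_integration.fits").astype(float)    # frame to align

# astroalign matches star patterns and fits a transform (shift, rotation and
# scale), so different focal lengths / FOVs are handled. A colour stack would
# be registered channel by channel.
aligned, footprint = aa.register(dslr, mono)

fits.writeto("dslr_registered_to_mono.fits", aligned, overwrite=True)
```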


(@itarchitectkev)
Main Sequence Star Customer
Joined: 2 years ago
Posts: 34
31/01/2019 4:04 pm  

@tooth_dr - that's it. That's the part I was struggling with. I used this technique when I just had the OSC camera and was using Ha with it, but this trick allows the FOV to be solved so I can combine these in post-processing! Thank you.


(@tooth_dr)
Brown Dwarf Customer
Joined: 2 years ago
Posts: 9
31/01/2019 4:58 pm  
Posted by: itarchitectkev

@tooth_dr - that's it. That's the part I was struggling with. I used this technique when I just had the OSC camera and was using Ha with it, but this trick allows the FOV to be solved so I can combine these in post-processing! Thank you.

No problem. I almost bought Registar, but when I discovered it could be done in APP, I didn't need to buy any other software.


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2111
09/02/2019 6:14 pm  
Posted by: AstroCookbook Kev

I have a similar request, but keep running into issues.

My setup is two different scopes, so two different FOVs, with a mono camera on one (ASI1600MM) and an OSC colour camera on the other (ASI294MC).

I seem to stumble on step 0) because, to process the colour images, I need to Force Bayer CFA, which obviously I don't need for the mono data. Would I have to process these in APP independently and then combine them in Photoshop (e.g. add Ha to RGB, or add a luminance layer to RGB), or can I do this all in APP? The advantage is that APP solves the FOV difference between the two scopes (my longer focal length basically "adds more detail" and I crop out the wider-FOV shot).

Kev

Hi Kev @itarchitectkev and @tooth_dr,

Indeed, the best way for now is what tooth_dr recommends:

Hi Kev

I also combine data from mono CCD and DSLR. I stack the mono data first. Then I combine the DSLR data separately. When I'm working with the DSLR data I include one mono frame in the file list. I keep it checked and go as far as the end of stage 3. On stage 4 (register) I select the mono frame as the reference frame, and uncheck it. I then register all the DSLR lights to this mono light, and continue on to the integration stage. What you then get is both data sets stacked and ready just to layer in PS.

In this way, you are ensured that the data is registered to the reference.

It's on my to-do list to make this all possible automatically (meaning you could load all the data at once), but it is a bit complicated: APP will need to differentiate between the monochrome and RGB (multi-channel) data so that normalization still works and you end up with both a monochrome and an RGB integration, registered and normalized to each other.
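To sketch what that differentiation would boil down to (illustration only, not APP internals):

```python
# Illustration only, not APP internals: mono frames arrive as 2-D arrays,
# debayered OSC/RGB frames carry 3 channels, and normalization has to be done
# per channel before both can sit in one registered, normalized result.
import numpy as np

def normalize_to_reference(frame, ref_median=1000.0):
    """Scale each channel so its median matches a common reference level."""
    if frame.ndim == 2:                          # monochrome data
        return frame * (ref_median / np.median(frame))
    out = np.empty_like(frame, dtype=float)      # multi-channel (RGB) data
    for c in range(frame.shape[-1]):
        out[..., c] = frame[..., c] * (ref_median / np.median(frame[..., c]))
    return out
```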

Furthermore, as you experienced, the 0) Raw/FITS settings can differ between the loaded images, which needs a further upgrade of the current implementation...

It's on my ToDo 😉

Mabula

Main developer of Astro Pixel Processor and owner of Aries Productions


(@mabula-admin)
Quasar Admin
Joined: 2 years ago
Posts: 2111
09/02/2019 6:15 pm  

@tooth_dr, thanks for helping out 😉 !

Main developer of Astro Pixel Processor and owner of Aries Productions

