
Processing images from two setups, OSC (1st cam) + L (2nd cam): is it possible, and how?

7 Posts
4 Users
1 Likes
1,659 Views
(@grzegorz-zwierzchowski)
Brown Dwarf
Joined: 5 years ago
Posts: 3
Topic starter  

Hi!

I want to use my dual-camera setup (the same session on one mount): OSC (ZWO ASI071) with a Canon 200mm for color data, and an ASI1600 with a Samyang 135mm for luminance. The FOV and scale are slightly different: 4.92''/pix for color and 5.8''/pix for luminance.

How do I combine this for best results (OSC+L)?

Or should I try to adjust the scale "by hardware"? For example, using different focal lengths: ASI071 + 250mm = 3.94''/pix (for color) and ASI1600 + 200mm = 3.91''/pix (for L data).

Thank you and best wishes!

PS. Is this kind of setup promising in terms of quality vs. time? (acquiring color + L at the same time)
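For reference, the pixel scales quoted above follow from the standard formula: scale (''/pix) = 206.265 × pixel size (µm) / focal length (mm). A quick sketch, using the published pixel sizes of the two sensors (4.78 µm for the ASI071, 3.8 µm for the ASI1600); tiny differences from the figures quoted in the post are just rounding:

```python
# Pixel scale ("/pix) = 206.265 * pixel_size_um / focal_length_mm
# Pixel sizes below are the published specs for the ZWO ASI071 (4.78 um)
# and ASI1600 (3.8 um).

def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Arcseconds per pixel for a given sensor/lens combination."""
    return 206.265 * pixel_size_um / focal_length_mm

print(round(pixel_scale(4.78, 200), 2))  # ASI071 + 200mm  -> 4.93
print(round(pixel_scale(3.8, 135), 2))   # ASI1600 + 135mm -> 5.81
print(round(pixel_scale(4.78, 250), 2))  # ASI071 + 250mm  -> 3.94
print(round(pixel_scale(3.8, 200), 2))   # ASI1600 + 200mm -> 3.92
```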


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

A dual setup is always great, especially when acquiring 2 objects at the same time. APP is made to just work for combining different setups, even when the FOVs are quite different. But if they're close, that's probably better: you can integrate the data from the 2 setups like you normally do and then combine those results again with each other.


   
(@copper280z)
White Dwarf
Joined: 3 years ago
Posts: 6
 
Posted by: @vincent-mod

A dual setup is always great, especially when acquiring 2 objects at the same time. APP is made to just work for combining different setups, even when the FOVs are quite different. But if they're close, that's probably better: you can integrate the data from the 2 setups like you normally do and then combine those results again with each other.

How exactly do you do the bolded part? I'd like to do this, and I think I know what I want to happen, but I'm not sure it's the correct way to do it. I would like to use the luminance captured in the RGB frames in combination with the mono-L data.

 

I think what I want to do is split my OSC data into a synthetic L channel + RGB at the individual-sub level, then integrate the mono-L subs together with the synthetic-L subs. Integrate the RGB subs as normal, then use the LRGB Combine tool at the end. However, it appears that this workflow is not possible: the "Split channels" tool in the Calibrate tab only spits out RGB, which makes me think this is not the intended workflow.
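The "synthetic L at the sub level" idea above can be sketched as follows. This is a hypothetical illustration of collapsing a debayered OSC sub into a luminance plane, not APP's actual Split Channels implementation; the equal-weight average is just one common choice:

```python
import numpy as np

def osc_to_synthetic_l(rgb_sub: np.ndarray) -> np.ndarray:
    """Collapse a debayered H x W x 3 OSC sub into a single
    luminance plane by averaging the three color channels."""
    return rgb_sub.mean(axis=2)

# A synthetic-L "sub" like this could then, in principle, be integrated
# together with real mono-L subs, assuming both are registered to a
# common reference frame.
```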

 

Does the LRGB combine tool automatically create a synthetic-L channel from the RGB data, and add it to the provided L data?


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

For regular data, you can just load those final integrations in again and then register, normalize, etc. like usual. You may want to manually set the reference frame in such a case.

Regarding your specific case, I wonder if combining synthetic luminance with actual luminance is a good idea; real luminance data is likely better. I'll have to ask Mabula to be sure. But normally the split channels option is there for RGB data, to be able to combine that type of data with L, Ha, OIII, etc.

For what you want, here is a small explanation from Mabula (pretty old already, but it still applies I think):

First of all, you can easily create a synthetic lum of your data, there are several ways to accomplish this:

  1. register, normalize and integrate all subs of all channels into 1 integration (tab 6, top option)
  2. use the RGB combine tool to create a Luminance layer from the separate channels by giving the channels separate luminance weights and different multipliers. Usually, you need to multiply H-alpha very strongly compared to the broadband channels. A multiplier of 5 is very normal.

Save this luminance to a separate file.

Then create a new composite, like an LHaRGB, and use the luminance channel that you just created.
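Mabula's second option above can be sketched in code. This is a hypothetical illustration, not APP's actual implementation: the channel names, luminance weights, and the ×5 Ha multiplier are example settings standing in for the per-channel controls in the RGB Combine tool.

```python
import numpy as np

def synthetic_lum(channels: dict,
                  weights: dict,
                  multipliers: dict) -> np.ndarray:
    """Build a synthetic luminance as a weighted sum of channel
    integrations, each first scaled by its multiplier, then
    normalized by the total luminance weight."""
    total_w = sum(weights.values())
    lum = sum(weights[c] * multipliers[c] * channels[c] for c in channels)
    return lum / total_w

# Example: broadband R/G/B plus a strongly multiplied H-alpha channel.
h, w = 100, 100
chans = {c: np.random.rand(h, w).astype(np.float32)
         for c in ("R", "G", "B", "Ha")}
L = synthetic_lum(chans,
                  weights={"R": 1, "G": 1, "B": 1, "Ha": 1},
                  multipliers={"R": 1, "G": 1, "B": 1, "Ha": 5})
```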


   
(@steveibbotson)
Red Giant
Joined: 7 years ago
Posts: 31
 

The way I have done it is to put all of the data from one rig in session 1 and all the data from the other rig in session 2.

I have a permanent setup, so unless I alter things or there has been a problem, the calibration frames come from the library.

 


   
(@copper280z)
White Dwarf
Joined: 3 years ago
Posts: 6
 
Posted by: @vincent-mod

For regular data, you can just load those final integrations in again and then register, normalize, etc. like usual. You may want to manually set the reference frame in such a case.

Regarding your specific case, I wonder if combining synthetic luminance with actual luminance is a good idea; real luminance data is likely better. I'll have to ask Mabula to be sure. But normally the split channels option is there for RGB data, to be able to combine that type of data with L, Ha, OIII, etc.

For what you want, here is a small explanation from Mabula (pretty old already, but it still applies I think):

First of all, you can easily create a synthetic lum of your data, there are several ways to accomplish this:

  1. register, normalize and integrate all subs of all channels into 1 integration (tab 6, top option)
  2. use the RGB combine tool to create a Luminance layer from the separate channels by giving the channels separate luminance weights and different multipliers. Usually, you need to multiply H-alpha very strongly compared to the broadband channels. A multiplier of 5 is very normal.

Save this luminance to a separate file.

Then create a new composite, like an LHaRGB, and use the luminance channel that you just created.

I got some L data and tried this; it worked pretty well. Finding a good weighting factor is tricky. I tried SNR, quality, noise, and equal weighting, and noise ended up looking the nicest, so that's what I used. I had some pretty heavy walking noise in one of the OSC datasets, which may have thrown off the SNR calculation (my first try). I suspect, but am not positive, that integrating both datasets at the same time would make weighting based on quality/SNR/noise/etc. easier, or at least less subjective.

In my case I have an APS-C mirrorless camera (Fuji) and an ASI183MM, all on the same telescope. The mono data is much better on a per-time basis, but being able to combine them is nice: I get the big FoV of the APS-C and can use the mono data to clean up areas as I deem necessary. This is an awesome capability to have.

 


   
(@vincent-mod)
Universe Admin
Joined: 7 years ago
Posts: 5707
 

Nice! So I talked with Mabula, and he mentions that whether to mix synthetic with real luminance kind of depends. But it can certainly help; as long as you see a noise improvement, it should be fine to do.


   