
2022-05-29: APP 2.0.0-beta2 has been released!

Release notes

Download links per platform:

Windows 2.0.0-beta2

macOS x86_64 2.0.0-beta2

macOS arm64 M1 2.0.0-beta2

Linux DEB 2.0.0-beta2

Linux RPM 2.0.0-beta2

Multiple questions on Workflows with OSC camera and different filters


(@mfc)
Molecular Cloud Customer
Joined: 2 years ago
Posts: 5
Topic starter  

Hi, and thanks for the continual improvements to APP – I am now using it for almost all my processing 👍. Learning how to use MBB & LNC to fix banding issues from different sessions made a huge improvement, and Star Reducer is fantastic! I still have a series of questions, however, and there is so much information in the Forum that I haven’t been able to locate the answers, so I’ve listed them below 😉. I hope you have time to point me to where I can find answers/tutorials for the following:

  1. I use an OSC camera (ASI071MC Pro), normally with an IR filter. I sometimes instead use an L-Pro filter to reduce moonlight or light from my cottage. How should I load the different frames? All as RGB, or the L-Pros as something else? What is the workflow afterwards? I tried loading the L-Pro shots as “LPS” (assuming this means Light Pollution Suppression), but then APP won’t stack these together with the RGB files – instead giving me a separate LPS stack. How do I best process them further?
  2. When doing the above, do I need to process the frames from the OSC as individual channels or can I use the processed multi-channel stacked image in Combine RGB by loading it as all of R, G & B?
  3. I use a manual filter drawer, so assume the FITS data won’t be updated with filter information. Or is there some way I can register it?
  4. I also have data taken without filters – so the stars are noticeably more bloated. Can I process these together with the above, or should I e.g. process them prior, and perform a ‘star reducer’ process on them to make the star size comparable to the L-Pro or IR-filtered ones, then use Combine RGB? Or just throw them away?
  5. How do I decide when to throw data away – e.g. oblong stars due to wind in a nebula shot? Is it better to ditch these or let APP use them in order to utilise the extra data for the nebula?
  6. Where can I find out more about the Integration Output Map functions – especially the Drizzle/MBB weight map?
  7. I use the ASI Air Pro auto flat function for taking flats and dark flats – but notice that it gives me around 30-40K ADU, and when processed I get very light corners – it looks like reverse vignetting. Am I correct in assuming that I need to reduce the exposure to get closer to a neutral-vignette result? Or is it just as easy to accept the default values and correct vignetting in post-processing?
  8. My scopes are a Skywatcher ED80 refractor and a 9.25” SCT. If I mix data from these (more likely after I get a Hyperstar, so their FOV will be fairly similar), is that when I need to select ‘Flip descriptors x/y’ & DDC?

(@vincent-mod)
Quasar Admin
Joined: 5 years ago
Posts: 4830
 
Posted by: @mfc
  1. I use an OSC camera (ASI071MC Pro), normally with an IR filter. I sometimes instead use an L-Pro filter to reduce moonlight or light from my cottage. How should I load the different frames? All as RGB, or the L-Pros as something else? What is the workflow afterwards? I tried loading the L-Pro shots as “LPS” (assuming this means Light Pollution Suppression), but then APP won’t stack these together with the RGB files – instead giving me a separate LPS stack. How do I best process them further?

An L-Pro is basically a broadband filter: it blocks certain parts of the spectrum but still passes quite large parts around the wavelengths of Ha, OIII, SII etc. These can be loaded in regularly and you could indeed tag them as LPS (it's just a tag anyway). If you want to stack them together with regular RGB, you can also tag them as that, or treat them as different data and make separate integrations for both. You can then load those integrations in again as lights and integrate them into one. I think I would just use them as regular RGB as well.
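Conceptually, that "integrate the integrations" step is just a weighted per-pixel mean of the two stacks. A minimal sketch of the idea (weighting by total integration time here is an assumption for illustration; APP's actual integration uses its own noise/quality-based weighting):

```python
# Toy illustration only: combine two aligned stacks into one by weighting
# each with its total integration time. Not APP's real algorithm.

def combine_integrations(stack_a, time_a, stack_b, time_b):
    """Weighted per-pixel mean of two aligned stacks (flat lists of pixel values)."""
    w_a = time_a / (time_a + time_b)
    w_b = time_b / (time_a + time_b)
    return [w_a * pa + w_b * pb for pa, pb in zip(stack_a, stack_b)]

# e.g. 3 hours of IR-filtered data and 1 hour of L-Pro data:
# the longer stack dominates the result 3:1
combined = combine_integrations([0.4, 0.8], 3.0, [0.2, 0.6], 1.0)
```

The point is just that the deeper stack should contribute more, which is why separate integrations can still be merged into one final image afterwards.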

  2. When doing the above, do I need to process the frames from the OSC as individual channels or can I use the processed multi-channel stacked image in Combine RGB by loading it as all of R, G & B?

In that case you want to add narrowband data, I assume? Because that's what Combine RGB is for. The L-Pro isn't narrowband (unless I'm mistaken here).

  3. I use a manual filter drawer, so assume the FITS data won’t be updated with filter information. Or is there some way I can register it?

I think you can normally set the filter in the capture software; if you do that, the software will (or should – not all do) add the filter tag to the header. With a manual filter drawer you'll have to set that manually as well.

  4. I also have data taken without filters – so the stars are noticeably more bloated. Can I process these together with the above, or should I e.g. process them prior, and perform a ‘star reducer’ process on them to make the star size comparable to the L-Pro or IR-filtered ones, then use Combine RGB? Or just throw them away?

You can combine any data you want. The workflow becomes a bit more complex when adding narrowband data, but it's perfectly possible. Whether that data helps you get a nice result is something you'll have to test, I think.

  5. How do I decide when to throw data away – e.g. oblong stars due to wind in a nebula shot? Is it better to ditch these or let APP use them in order to utilise the extra data for the nebula?

Normally, given I have enough data (so at least 40 frames or so), I just stack everything in APP and, if I know there is bad data in it, tell it to integrate maybe the best 90-95% (also depending on how much you think is necessary).
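Integrating "the best 90-95%" boils down to ranking frames by a quality score and dropping the tail. A rough sketch of that selection step (the quality scores here are invented; APP derives its own scores from star shape, noise etc.):

```python
# Hypothetical sketch: keep only the top fraction of subs ranked by a
# quality score (higher = better). APP computes real quality metrics;
# the scores below are made up for illustration.

def select_best_frames(frames, scores, keep_fraction=0.95):
    """Return the top keep_fraction of frames, ranked by quality score."""
    ranked = sorted(zip(scores, frames), key=lambda p: p[0], reverse=True)
    n_keep = max(1, round(len(ranked) * keep_fraction))
    return [frame for _, frame in ranked[:n_keep]]

# 40 subs, keep the best 90% -> the 4 worst-scoring subs are dropped
frames = [f"sub_{i:03d}.fits" for i in range(40)]
scores = [100 - i for i in range(40)]   # pretend earlier subs scored better
best = select_best_frames(frames, scores, keep_fraction=0.90)
```

With enough frames, a mildly elongated sub that survives the cut still gets outvoted in the rejection stage, which is why stacking everything and trimming a few percent often beats culling by hand.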

  6. Where can I find out more about the Integration Output Map functions – especially the Drizzle/MBB weight map?

@mabula-admin is going to start on a manual where things like this will be explained as well. That manual is a long time in the making, but it will start taking shape in the coming months. If you really need more info on it now I can ask Mabula to answer here as well.

  7. I use the ASI Air Pro auto flat function for taking flats and dark flats – but notice that it gives me around 30-40K ADU, and when processed I get very light corners – it looks like reverse vignetting. Am I correct in assuming that I need to reduce the exposure to get closer to a neutral-vignette result? Or is it just as easy to accept the default values and correct vignetting in post-processing?

Taking flats can be a bit complex; I'd advise starting a new thread on the forum specifically for this question, showing some examples and adding the info on how exactly you're taking the flats etc. Thanks.
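For context on the symptom itself: flat correction divides each light by the flat normalised to its mean, so if the flat's vignetting profile is stronger than the light's (one possible cause being flats exposed into a nonlinear part of the sensor response – an assumption, not a diagnosis), the division overcorrects and the corners come out bright. A toy 1-D illustration with invented numbers:

```python
# Toy illustration of flat-field correction and overcorrection.
# All pixel values are invented; frames are 1-D: [corner, centre, corner].

def apply_flat(light, flat):
    """Divide a light frame by the mean-normalised flat (flat-field correction)."""
    mean_flat = sum(flat) / len(flat)
    return [l / (f / mean_flat) for l, f in zip(light, flat)]

light = [800, 1000, 800]   # light frame: mild vignetting (80% in the corners)
flat  = [ 60,  100,  60]   # flat claims stronger falloff (60% in the corners)
corrected = apply_flat(light, flat)
# the corners are boosted more than the vignetting warrants,
# so they end up brighter than the centre: "reverse vignetting"
```

If the flat faithfully matched the light's falloff (same 80% profile), the corrected frame would come out uniform instead, which is the whole point of the flat.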

  8. My scopes are a Skywatcher ED80 refractor and a 9.25” SCT. If I mix data from these (more likely after I get a Hyperstar, so their FOV will be fairly similar), is that when I need to select ‘Flip descriptors x/y’ & DDC?

Not necessarily; APP is able to match very different FOVs and still integrate them – they don't even have to be very similar. But for cropping purposes etc. it is nice to have, I think.

 

Hope that answers a bit? 🙂


(@mfc)
Molecular Cloud Customer
Joined: 2 years ago
Posts: 5
Topic starter  

@vincent-mod Thanks Vincent, as usual your answers are both comprehensive and understandable. It sounds like I am not too far off the mark in my approach - all I need is practice (and some talent!). 😉

I'll pursue the Flats issue on the ASI Air FB forum as I am sure I'm not the only one experiencing it. Once I get something worthwhile I will update here. 

Rgds, Mark

