Best method for upscaling

13 Posts
2 Users
5 Reactions
10.8 K Views
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Hi Mabula

I have some data of M13 I shot with my 80ED and Nikon D5300. I would like to upscale it to increase the image scale a bit and was wondering if you could suggest the best method of doing so in APP. 

A Drizzle with x2 resize perhaps? Any other settings to think about? What about Bayer Drizzle? 

I presume any of these would be better than simply doing a bicubic resize? 



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi @xiga,

Simple upscaling should be done at integration with the scale parameter. Increase it to 1.3-1.6 to start with for more resolution in the integration. Upscaling really needs to be done with the Lanczos filter. Using another data-resampling/interpolation filter will only make the result less sharp than it could be with Lanczos.
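For anyone curious what the Lanczos filter actually computes, here is a minimal sketch in Python/NumPy of a separable Lanczos-3 upscale. This is an illustration of the general technique only, not APP's actual implementation, and the function names are mine:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    # np.sinc is the normalised sinc: sin(pi x)/(pi x); windowed to |x| < a
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_resize_1d(data, new_len, a=3):
    """Resample axis 0 of `data` to `new_len` samples with a Lanczos-a kernel."""
    old_len = data.shape[0]
    scale = new_len / old_len
    # source-space coordinate of every output sample (pixel-centre convention)
    xs = (np.arange(new_len) + 0.5) / scale - 0.5
    base = np.floor(xs).astype(int)
    tail = (1,) * (data.ndim - 1)                 # broadcast weights over other axes
    out = np.zeros((new_len,) + data.shape[1:])
    norm = np.zeros((new_len,) + tail)
    for k in range(-a + 1, a + 1):                # the 2a taps around each sample
        idx = np.clip(base + k, 0, old_len - 1)   # clamp taps at the image borders
        w = lanczos_kernel(xs - (base + k), a).reshape((-1,) + tail)
        out += w * data[idx]
        norm += w
    return out / norm                             # normalise so flat fields stay flat

def lanczos_upscale(img, factor):
    """Separable Lanczos-3 upscale of a 2-D image: rows first, then columns."""
    tmp = lanczos_resize_1d(img, round(img.shape[0] * factor))
    return lanczos_resize_1d(tmp.T, round(img.shape[1] * factor)).T

img = np.outer(np.hanning(50), np.hanning(60))
up = lanczos_upscale(img, 1.6)
print(up.shape)  # (80, 96)
```

The kernel's negative lobes are what keep edges and stars sharp compared to bilinear/bicubic interpolation; the per-sample weight normalisation keeps flat backgrounds flat.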

Regarding Drizzle or Bayer Drizzle: only use these if your data meets the following criteria:

  1. Lots of data!!! To deal with the noise inherent in drizzle integration (drizzle is a big noise injector)
  2. Your data is undersampled
  3. Your data is well dithered per frame

If your data meets these criteria, by all means try drizzle/Bayer drizzle; otherwise:

  • it will probably not accomplish more than regular upscaling with Lanczos interpolation in terms of sharpness, and
  • it will definitely be noisier due to the drizzle technique.

 

It's safe to say that drizzle is used far too often in cases where the above criteria are not satisfied, resulting in integrations that are not as good as they could be.
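To see why drizzle demands lots of well-dithered frames, here is a toy point-kernel drizzle in Python/NumPy (the small-droplet limit; a sketch of the general idea, not APP's drizzle code):

```python
import numpy as np

def drizzle_stack(frames, offsets, scale=2):
    """Toy point-kernel drizzle (the pixfrac -> 0 limit): every input pixel is
    dropped into the single nearest pixel of a `scale`-times finer grid, at the
    position given by its frame's dither offset. Returns the weighted result
    and the fraction of fine-grid pixels that received any signal."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    wgt = np.zeros_like(out)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, offsets):
        oy = np.clip(np.round((yy + dy) * scale).astype(int), 0, h * scale - 1)
        ox = np.clip(np.round((xx + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(out, (oy, ox), frame)   # droplets accumulate ...
        np.add.at(wgt, (oy, ox), 1.0)     # ... with unit weights
    filled = wgt > 0
    out[filled] /= wgt[filled]
    return out, filled.mean()

frames = [np.ones((8, 8))] * 4
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]  # an ideal dither pattern
_, cov4 = drizzle_stack(frames, offsets)
_, cov1 = drizzle_stack(frames[:1], offsets[:1])
print(cov4, cov1)  # 1.0 0.25
```

With an ideal set of four sub-pixel dither offsets the 2x fine grid is fully covered; a single frame covers only 25% of it. Sparse coverage is exactly where the extra noise and artefacts come from when the criteria above aren't met.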

So indeed, don't do a bicubic resize; bicubic won't give you the sharpest result.

Let me know if this helps 😉

Mabula



   
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Thank you Mabula.

My data is 20 x 7-min subs, all dithered aggressively. Image scale is 1.57" so I might not get much out of a Drizzle or Bayer Drizzle stack. I was thinking of using it just for the core though, where the signal is extremely high, so I might still give it a go.

PS - any idea when we are likely to see the colour-preserving stretch functionality being added? Also, is it possible to save an image with no stretch but with some saturation applied?



   
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Hi Mabula

I think I've spotted something a bit strange. Maybe it's just something simple I have done wrong, or perhaps I've discovered something more unusual, I don't know, but I'll let you decide that 🙂

So I went ahead and did another stack using a Scale Factor of 1.6. Everything looks fine in APP; when I view both the original stack and the scaled one, they look identical. But when I save both files and then move over to Maxim DL (my only option at the moment for doing deconvolution - but I will move to doing it in APP whenever it gets added), only the non-scaled one is usable.

So what is happening is: the non-scaled one opens in Maxim fine, looks just as it did in APP, and I can run deconvolution. But the scaled version, when opened in Maxim, now has a much wider histogram, and consequently the stars are totally blown out and I can no longer run deconvolution. It's definitely not a screen-stretch issue either, as I have it set to the maximum amount, just as with the non-scaled stack.

So is this normal behaviour or not? I would have expected the scaling to only affect the resolution, but it appears that it has affected the data as well. Or have I done something stupid here? (probably the most likely option, lol)

I will show some screen grabs below, and link the 2 stacks for you to look at as well. Apologies in advance for the size of the files! :-0

PS - I didn't use the exact same settings in both stacks (by mistake). In the scaled stack, I think I turned off LNC, as it was adding a lot of time to the stacking. Hopefully that doesn't have anything to do with it.

Scaled
Non Scaled

https://1drv.ms/u/s!AhhWC3D3zU7BnjLcHTIm0XAhjCLT

https://1drv.ms/u/s!AhhWC3D3zU7BnjMj6WfH4TnYmQ6V



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: xiga

Thank you Mabula.

My data is 20 * 7min subs, all dithered aggressively. Image scale is 1.57" so I might not get much out of a Drizzle or Bayer Drizzle stack. I was thinking of using it just for the core though, where the signal is extremely high, so I might still give it a go.

Ps - any idea when we are likely to see the colour preserving stretch functionality being added? Also, is it possible to save an image with no stretch but with some saturation applied? 

Hi @xiga,

Sure, you can always try 😉 It always depends on the actual data, so it's hard to tell whether you will benefit from it. With only 20 frames, I would suggest not using too small drizzle droplets, because then the result will probably be very noisy.

Color-preserving stretching will be added soon. Right now I am working on image drawing using OpenGL, so directly using the GPU of the video card if available. Once that is finished, I intend to upgrade the preview filters with a color-preserving mode and also implement it using OpenCL, again on the GPU.

Indeed, a separate saturation module should be added; right now it's only possible if you stretch.

Cheers,

Mabula



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: xiga

Hi Mabula

I think i've spotted something a bit strange. Maybe it's just something simple i have done wrong, or perhaps i've discovered something more unusual, i don't know, but i'll let you decide that 🙂

...

Hi @xiga,

I am downloading the integration right now.

LNC shouldn't affect deconvolution at all, so that is not an explanation for MaximDL failing on the scaled version.

The data resampling algorithm might influence it, but I can see that you used Lanczos3, so I think that shouldn't be a problem either.

Any scale operation will influence noise; noise is normally decreased in the result if you downscale, for example. Perhaps this is somehow upsetting MaximDL's implementation; I can only guess at this point, because I don't use/have the application.
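As a quick numeric illustration of scale operations influencing noise, here is a sketch using plain 2x2 binning (the simplest possible downscale, not APP's resampling):

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 1.0, size=(200, 200))   # pure read-noise-like frame

# A 2x downscale by 2x2 binning averages 4 pixels per output pixel,
# so the per-pixel standard deviation drops by sqrt(4) = 2.
binned = noise.reshape(100, 2, 100, 2).mean(axis=(1, 3))
print(round(noise.std(), 3), round(binned.std(), 3))  # ~1.0 vs ~0.5
```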

I will look at both integrations in APP though, and verify that they compare as expected. I would expect the results to look the same with the same stretch applied, differing only in noise; the scaled version has 1.6x more pixels in both width and height, so it will probably have slightly nicer stars.

Mabula

 



(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi @xiga,

In APP everything looks as expected. I can only assume this has to do with how MaximDL works internally.

In APP I see this with DDP on auto; the two look almost identical:

scale 1.0 zoom
scale 1.6 zoom

Several things affect how the integrations will look:

  • The LNC function has an effect because it attempts to make the illumination locally consistent across all layers of the stack. This influences the background level of the integration.
  • The scaling has its own, but small, effect.
  • The outlier rejection has an effect. I can see that you used 5 iterations with a kappa of 2; this is very aggressive and very likely destructive for your data. With these settings you will possibly remove 25% of the good signal as well, besides the outliers... so I recommend using a higher kappa and definitely fewer iterations. The result should then have significantly lower noise.
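To make the kappa/iterations trade-off concrete, here is a small kappa-sigma clipping sketch in Python/NumPy. It is a generic illustration on synthetic data, not APP's actual rejection code:

```python
import numpy as np

def kappa_sigma_mean(stack, kappa=3.0, iterations=1):
    """Iterative kappa-sigma clipped average over the frame axis (axis 0).
    Samples deviating more than kappa*sigma from the per-pixel mean are
    masked, then the statistics are recomputed for the next pass.
    Returns the clipped mean and the overall fraction of samples rejected."""
    data = np.ma.masked_invalid(np.asarray(stack, dtype=float))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0), 1.0 - data.count() / data.size

rng = np.random.default_rng(1)
stack = rng.normal(100.0, 5.0, size=(20, 64, 64))   # 20 clean subs, as in this thread
stack[3, 30:34, :] += 400.0                          # bright satellite trail in one sub

gentle, rej_gentle = kappa_sigma_mean(stack, kappa=3.0, iterations=1)
harsh, rej_harsh = kappa_sigma_mean(stack, kappa=2.0, iterations=5)
print(round(rej_gentle, 4), round(rej_harsh, 4))  # harsh settings mask far more data
```

With kappa 3 and a single pass the simulated trail is still fully rejected, because its deviation dwarfs the frame noise, while well under 1% of the good samples are lost; kappa 2 with 5 iterations clips a progressively larger slice of perfectly good data on every pass.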

Let's look at a big star at a very big zoom as well. Clearly the upscaling works as expected: the stars are rounder, but nothing like the star's absolute size or intensity profile has changed; we see more resolution and that's it:

scale 1.0 zoom starShape
scale 1.6 zoom starShape

I also did a star analysis on both integrations (load as lights, or use the star map image viewer mode). I would expect the FWHM average star size in the scaled version to be 1.6 times the FWHM average star size in the unscaled integration... which it clearly is, also showing that APP has no problem with star analysis or determination of the Point Spread Function (PSF, which would be needed for an actual deconvolution) in either version:

left: scale 1.0x

right: scale 1.6x

scale 1.0 starMap
scale 1.6 starMap
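That 1.6x FWHM ratio is easy to verify numerically. Here is a sketch using a simple moment-based FWHM estimator of my own (not APP's star analysis), sampling the same Gaussian star at the two pixel scales:

```python
import numpy as np

def fwhm_moments(img):
    """FWHM from intensity-weighted second moments, assuming a roughly
    Gaussian profile (FWHM = 2*sqrt(2*ln 2) * sigma)."""
    y, x = np.indices(img.shape)
    tot = img.sum()
    cy, cx = (img * y).sum() / tot, (img * x).sum() / tot
    var = (img * ((y - cy) ** 2 + (x - cx) ** 2)).sum() / tot / 2.0
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(var)

def gaussian_star(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

# The same star sampled at a 1.0x and a 1.6x pixel scale:
f1 = fwhm_moments(gaussian_star((64, 64), sigma=2.0))
f2 = fwhm_moments(gaussian_star((102, 102), sigma=3.2))
print(round(f2 / f1, 3))  # ~1.6: FWHM in pixels scales with the upscale factor
```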

Having seen this, it's clear that you need to apply an auto stretch function to look at both stacks. The exact same stretch will not work, since both stacks have very different background values, due to the mentioned factors.

So if you apply the exact same stretch in MaximDL to both stacks, I am not surprised one looks very odd.

But for deconvolution it shouldn't matter: deconvolution should be done on the unstretched/linear data. If MaximDL can perform deconvolution on the 1.0x version and not on the 1.6x version, I can only assume this is a MaximDL implementation setting or issue, because visually everything looks perfect, analytically the FWHMs are what we expect, and APP has no problem with the analysis.
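For reference, the reason deconvolution wants linear data is visible in the algorithm itself: Richardson-Lucy, the classic example, models the observation as PSF-convolved linear flux, and any non-linear stretch breaks that model. A minimal generic sketch follows (not Maxim's implementation; it assumes a symmetric, normalised PSF so the mirrored-PSF correlation step reduces to another convolution):

```python
import numpy as np

def fft_convolve(img, psf):
    # Circular convolution; the PSF is given centred and shifted to the origin.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution for a symmetric, normalised PSF."""
    est = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        model = fft_convolve(est, psf)
        est = est * fft_convolve(observed / np.maximum(model, 1e-12), psf)
    return est

# Build a symmetric Gaussian PSF and a blurred point source, then deconvolve.
n, sigma = 64, 2.0
y, x = np.indices((n, n))
psf = np.exp(-((y - n // 2) ** 2 + (x - n // 2) ** 2) / (2.0 * sigma ** 2))
psf /= psf.sum()
truth = np.full((n, n), 1.0)
truth[n // 2, n // 2] += 100.0
observed = fft_convolve(truth, psf)
restored = richardson_lucy(observed, psf)
print(observed.max(), restored.max())  # the restored peak is far sharper
```

Note that the multiplicative update conserves total flux, which only makes sense on linear data.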

Kind regards,

Mabula



   
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Thank you Mabula for this very descriptive and analytical response! 

I totally agree that in APP everything looks fine when viewing the scaled stack. That matches what I was seeing too. Unfortunately it would appear that the problem lies with Maxim (hardly your fault!), as I wasn't applying any screen stretch at all to either stack in Maxim (I had it set to Max Val, which should cover the entire linear range with no stretching at all). Somehow Maxim just doesn't like the scaled-up stack, for whatever reason 🙁

Unfortunately I can't give up deconvolution, it's just too important. So tell me: what if I were to stack normally in APP (no scaling), run deconvolution on it in Maxim, and then import it back into APP and use the 'Batch Resize' option under the Tools section to do a Lanczos-3 scaling of the deconvolved stack? Would this get me to roughly the same place, or does the scaling need to be done during the stacking for it to work properly?

PS - The reason I used such a strong level of outlier rejection was because a few of the subs had some particularly bright satellite trails. I tried various combinations of iterations and kappa, but anything less than 5 and 2 did not remove the trails effectively; they were always very visible. Tbh, even with 5 and 2 I could still see a couple of them faintly, and I had to manually fix them in post-processing. So if you know of any other tricks I could try I'd be very interested to hear them 🙂



   
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Btw - here is the final processed version, which used no scaling. I would like to make use of the scaling though, as based on your post above it would appear that I could definitely gain some resolution by doing so (and even better if I could run deconvolution on the scaled version and then downscale it afterwards):

M13 v4


   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: xiga

Thank you Mabula for this very descriptive and analytical response! 

...

Unfortunately i can't give up Deconvolution, it's just too important. So tell me, what if i were to just stack normally in APP  (no scaling), then run Deconvolution on it in Maxim, and then import it back into APP and use the 'Batch Resize' option under the Tools section to do a lanczos-3 scaling of the deconvolved stack. Would this get me to roughly the same place, or does the scaling need to be done during the stacking for it to work properly?

Well, it could get you to almost the same place I guess, but that will depend on the quality of the deconvolution in Maxim, I think. You can certainly try it. If the deconvolution has artefacts, then the upscaling won't be nice. If there are no artefacts, then the upscaling should be nice as well 😉

ps - The reason i used such a strong level of outlier rejection was because a few of the subs had some particularly bright satellite trails. I tried various combinations of iterations and kappa, but anything less than 5 and 2 did not remove the trails effectively, they were always very visible. Tbh, even with 5 and 2 i could still see a couple of them faintly, and i had to manually fix them in post-processing. So if you know of any other tricks i could try i'd be very interested to hear them 🙂

The issue is that 20 frames is quite few to perform outlier rejection well; with 40 or more frames it already works much better. What you can try is integrating with median instead of average. That should work better for you in this case.
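The median suggestion is easy to demonstrate on synthetic numbers (a toy illustration, not real data):

```python
import numpy as np

rng = np.random.default_rng(2)
subs = rng.normal(100.0, 5.0, size=(20, 32, 32))   # 20 subs, as in this thread
subs[5, 10, :] += 500.0                             # bright satellite trail in one sub

mean_stack = subs.mean(axis=0)
median_stack = np.median(subs, axis=0)

# The trail survives the plain average (+500/20 = +25 on the trail row),
# but the median of 20 samples ignores a single outlier almost entirely.
print(round(mean_stack[10].mean(), 1), round(median_stack[10].mean(), 1))
```

The flip side is that on outlier-free Gaussian noise the median of n frames is roughly 25% noisier than the mean, which is why average plus rejection is preferred once you have enough subs.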

I will look into further improving the outlier rejection algorithms as well 😉

Cheers,

Mabula



   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 
Posted by: xiga

Btw - here is the final processed version, which used no scaling. I would like to make use of the scaling though, as based on your post above it would appear that i could definitely gain some resolution by doing so (and even better if i could run deconvolution on the scaled version and then downscale it afterwards):

M13 v4

@xiga,

That's a great Hercules Globular Cluster 😉 ! Excellent, I like it; nice colors and nice details.

Did you try the deconvolution and afterwards the upscaling?

Mabula



   
 xiga
(@xiga)
Red Giant
Joined: 8 years ago
Posts: 34
Topic starter  

Thanks Mabula.

I didn't try deconvolving and then upscaling. After thinking about it a bit more, I don't think it made much sense. It really needs to be done in the order Upscale -> Deconvolve -> Downscale to see any real improvement.

But nonetheless it is still interesting to see the benefits of just doing a Lanczos3 upscaling, as your post above showed. I was really surprised to see the improvement in resolution at close quarters, especially in the shape of the stars, so I think I will be a bit braver now in shooting objects that need a bit more FL than my 80ED can provide. Up to now I haven't really bothered, but in future I will still have a go anyway, and try upscaling the stack to gain a bit of extra reach and resolution, so thanks for showing me this!

Oh, I also realised how to fix the upscaled stack not showing correctly in Maxim. The non-scaled stack showed a Maximum Value of about 0.69 in the Stretch Screen, whereas the scaled stack was showing a Maximum Value of about 0.32. So I just manually overwrote the value back to 0.69 and it brought it back to normal. Of course, I then realised that Maxim crashes when trying to do deconvolution on an upscaled DSLR image (which is a very big image, in fairness), so I'm back to the drawing board in any case!

Just as a side note, I seem to recall that DSS's default Kappa Sigma stacking setting might be 5 iterations and a kappa of 2, which according to you is quite strong and will hurt good data. I can totally understand that from your perspective you want to help us preserve our data as much as possible, hence the desire to use lower settings. However, speaking as someone who rarely achieves 40 subs on any target (I usually only manage a maximum of 20, if I'm lucky), I think you should have another look at the advice tip which suggests needing over 20 subs in order to use Average over Median. I think it should definitely be more than 20, possibly somewhere between 20 and 40, but you're the expert so I'll leave that to you 😉

PS - The image above didn't use any rescaling. I did, however, improve on it slightly, so here is the actual final version:

M13 v5


   
(@mabula-admin)
Universe Admin
Joined: 9 years ago
Posts: 5056
 

Hi @xiga,

But nonetheless it is still interesting to see the benefits of just doing a Lanczos3 upscaling, as your post above showed. I was really surprised to see the improvement in resolution at close quarters, especially in the shape of the stars, so i think i will be a bit braver now in shooting objects that need a bit more FL than my 80ED can provide. Up to now i haven't really bothered, but i think in future i will still have a go anyway, and try upscaling the stack to gain a bit of extra reach and resolution, so thanks for showing me this!

Yes, if you don't have a lot of frames, I would always use regular upscaling with Lanczos over drizzle, due to the noise injection from drizzle integration. Of course, if your data meets the drizzle requirements:

  1. Lots of data
  2. well dithered
  3. under-sampled

then by all means try drizzle, because it should give you a sharper result. If you don't meet the drizzle requirements, then don't use it, because you will only get a noisier integration compared to Lanczos upscaling.

Oh, i also realised how to fix the upscaled stack not showing correctly in Maxim. The non-scaled stack showed as having a Maximum Value of about 0.69 in the Stretch Screen, whereas the scaled stack was showing a Maximum Value of about 0.32. So i just manually over-wrote the value back to 0.69 and it brought it back to normal. Of course, i then realised that Maxim crashes when trying to do Deconvolution on an upscaled DSLR image (which is a very big image in fairness) so i'm back to the drawing board in any case!

Excellent! Won't Maxim load the data just as it is? Do you always need to fiddle with stretch parameters then? The integrations from APP are linear, provided linear data was integrated, so you shouldn't apply any operation on the data before performing deconvolution. Perhaps Maxim will not adjust the scale automatically, as you indicate, but I can only guess here.

Just as a side note, i seem to recall that DSS's default Kappa Sigma stacking setting might be 5 iterations and a Kappa of 2, which according to you is quite strong and will overly hurt good data. I can totally understand that from your perspective you want to help us preserve our data as much as possible, hence the desire to use lower settings, however, speaking as someone who rarely achieves 40 subs on any target (i usually only manage a maximum of 20, if i'm lucky) i think you should have another look at the advice tip which suggests needing over 20 subs in order to use Average over Median. I think it should definitely be more than 20, possibly somewhere between 20 and 40, but you're the expert so i'll leave that to you 😉

Okay, thank you for the feedback. I will have a look at the information in the tooltips 😉

Excellent Hercules Globular cluster and pretty nice detail in the smaller galaxy in the center as well,

Mabula



   