
What is the best way to combine multiple sessions over time, other than redoing everything every time?


(@lanties)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 43
Topic starter  

I know APP makes it very easy to throw everything into the "pot" at once and get a great outcome. This question also comes from my very limited experience and even less understanding of the recipes I am following.

I have been doing multiple sessions of multiple targets over the last couple of weeks. Every morning I want to see, for example, what the additional Ha frames in B33 have added to the stack, so every day I redo everything, adding more and more sessions. I know I can reuse the master flats and dark-flats created for the different sessions.

But what would be the best way to achieve the best results in the shortest possible time?

 

This topic was modified 4 years ago by lanties

(@elgol)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 83
 

I would stack the frames of one setting, then stack these so-called panels each time for an intermediate result. You need not redo them every time, just add the new stacked frames. Works great. The processing is different though; Mabula explains this in his latest mosaic tutorial, see parts 1 to 3.


(@lanties)
Main Sequence Star Customer
Joined: 5 years ago
Posts: 43
Topic starter  

Thank you @elgol. Will go through the tutorial 


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi @lanties & @elgol,

Yes, I would agree: for intermediate results, simply stack the integrations per session. That should show you how the noise drops as you add more frames.
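(As a rough rule of thumb, for uncorrelated noise the signal-to-noise ratio grows with the square root of the total number of frames, so doubling the number of frames gains you about a factor of 1.4 in SNR.)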

To make your final integration of a certain field of view (so not a mosaic), it is advised to stack all of the individual calibrated frames though 😉 since that will improve the result.

Mabula

This post was modified 4 years ago by Mabula-Admin

(@dciobota)
Brown Dwarf Customer
Joined: 4 years ago
Posts: 13
 

Mabula, I have a question about this. I am trying to integrate 370 frames, and even though APP reports that I have enough memory and disk space, I get a Java buffer underflow error during the pixel integration step. I didn't capture the screen at that point, sorry.

So I am trying to do what was suggested above: integrate each night together with the integrations from the previous nights. However, you said above that for best results I need to integrate all lights at once, which apparently I cannot do. Can you explain the difference between integrating the results each night and then integrating all of those together, versus integrating all frames at once? How does all at once improve the results?

Thanks in advance.

PS: I hope that in the future there will be more explanatory error messages rather than raw Java errors. I currently have no idea what causes it, and it takes many hours of integrating all those frames to reach the failure point.

 


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi @dciobota,

Apologies for the inconvenience.

Let me guess, are you on MacOS?

On what kind of drive are you working? Internal or external? And with what partition format (NTFS, FAT32, ExFAT, MacOS file system)?

I do know of one case where this error will be thrown on MacOS:

It will happen if you are integrating on an external drive with a partition table like NTFS or ExFAT.

MacOS natively can't deal flawlessly with these partition tables, unlike what Apple claims, and you will run into this error unfortunately.

If this is the case for you, then you have several options to hopefully prevent this issue from coming up:

1) Use dedicated software to mount NTFS/ExFAT drives, such as

Microsoft NTFS for Mac by Paragon

https://www.paragon-software.com/home/ntfs-mac/#

This will enable you to properly read/write to NTFS drives from MacOS; MacOS itself can't do this properly.

2) Reformat your drive with the MacOS partition table format. Then the issue will definitely not happen again.

 

Please let me know your drive type and partition format, and whether you have encountered the same problem that I have described here.

To be clear, the issue that I have described here is on the Operating System level and not in APP's code.

Kind regards,

Mabula

This post was modified 4 years ago 2 times by Mabula-Admin

(@dciobota)
Brown Dwarf Customer
Joined: 4 years ago
Posts: 13
 

Hi,

Actually, the PC I use is a Windows 10 Pro machine. The hard drive is an external USB SSD with NTFS as the file system. So I'm at a loss; the suggestions do not apply, as I think I am already using what you suggest.

But can you answer my other question, as to why it's not as good to integrate separate nights and then integrate those together? Because of the volume of subs I take, I know I will hit a limit no matter how much memory or disk space I have. So I will be forced to integrate "chunks" of frames and then integrate those chunks together.

Also, I watched the mosaic tutorial and I'm still confused as to how APP autodetects previous integrations. The only way I've been able to integrate multiple nights' integrations is to disable "autodetect masters and integrations". I thought enabling that (which is the default) was how you integrate multiple nights.

So anyway, my big question is this: what am I losing in terms of final image quality if I integrate batches of subs first and then integrate those results together, versus integrating all the subs at once (which in my case doesn't always seem to be possible)?

Thank you for your help.

Daniel


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi @dciobota,

The hard drive is an external USB SSD with NTFS as the file system. So I'm at a loss; the suggestions do not apply, as I think I am already using what you suggest.

Hmm, that's nasty indeed. I think the failure occurs because there is a connection problem between Windows and the external drive for some reason. Can you share which brand and model the external SSD is? Is it by any chance an encrypted drive; do you have encryption enabled?

I have an external NTFS SSD drive myself, so I will test a large integration on that external drive.

Regarding your other question,

Also, I watched the mosaic tutorial and I'm still confused as to how APP autodetects previous integrations. The only way I've been able to integrate multiple nights' integrations is to disable "autodetect masters and integrations". I thought enabling that (which is the default) was how you integrate multiple nights.

So anyway, my big question is this: what am I losing in terms of final image quality if I integrate batches of subs first and then integrate those results together, versus integrating all the subs at once (which in my case doesn't always seem to be possible)?

Please check this answer in another thread:

https://www.astropixelprocessor.com/community/main-forum/archiving-subs/#post-5006

Kind regards,

Mabula


(@dciobota)
Brown Dwarf Customer
Joined: 4 years ago
Posts: 13
 

Hi there, apologies for the late reply.

The disk drive is a Micron, this is exactly the one:

https://www.amazon.com/gp/product/B01LB05TOO/ref=oh_aui_search_asin_title?ie=UTF8&psc=1

It's housed in an Innateck USB housing:

https://www.amazon.com/gp/product/B00JQTO8TU/ref=oh_aui_search_asin_title?ie=UTF8&psc=1

I've never had communication or transfer issues with that drive; it's been my main drive for some time now. Oh, also, it's not encrypted and not indexed, but I do have Windows compression turned on. That shouldn't make any difference, tbh. Btw, I do Java programming for a living myself, if you need any kind of help on my side.

So, I also looked at the other thread that explains the issues when using too few frames to integrate and the effect on the outlier rejection algorithm. This shouldn't be a problem in my case, as I take dozens of frames every night. The process I've come up with is to integrate every night separately, then integrate those integration results into one final integration. It currently seems like the only viable way for the number of frames I have.

But it didn't answer the other question I had, which is how to get APP to recognize previous integrations as part of the stacking. If I choose the default to autodetect integrations, this doesn't seem to work. But if I turn the autodetect off, it does. Am I doing something wrong, or is this the way to do it?

Thank you for your help.

Daniel


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi Daniel @dciobota,

The disk drive is a Micron, this is exactly the one:

https://www.amazon.com/gp/product/B01LB05TOO/ref=oh_aui_search_asin_title?ie=UTF8&psc=1

It's housed in an Innateck USB housing:

https://www.amazon.com/gp/product/B00JQTO8TU/ref=oh_aui_search_asin_title?ie=UTF8&psc=1

I've never had communication or transfer issues with that drive; it's been my main drive for some time now. Oh, also, it's not encrypted and not indexed, but I do have Windows compression turned on. That shouldn't make any difference, tbh. Btw, I do Java programming for a living myself, if you need any kind of help on my side.

Thank you for sharing the details.

Yes, I agree, Windows compression will not be the issue. I still suspect the issue is I/O related at the Operating System level. Maybe there is a way in Java to prevent a problem like this; I will need to check.

Since you are a Java programmer yourself, for your information: the integration data is written and read using ByteBuffers. I have written my own MemoryToFileMapper. Now, the buffer underflow exception:

https://docs.oracle.com/javase/8/docs/api/java/nio/BufferUnderflowException.html

is thrown when a get reaches the buffer's limit. This should never happen in my implementation, because the buffer sizes are set before the integration starts, based on available memory, and then they remain fixed. The lengths of the gets are also fixed to the lengths of these buffers. It really seems that the OS has messed with the buffers when it shouldn't.
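To make that concrete for other readers, here is a tiny generic Java example (not APP's MemoryToFileMapper, just standard ByteBuffer behaviour) showing when the exception is thrown:

import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class UnderflowDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16); // capacity 16 bytes
        buf.putLong(42L);                         // only 8 bytes written
        buf.flip();                               // limit = 8, position = 0

        buf.getLong();                            // fine: exactly 8 bytes remain
        try {
            buf.getLong();                        // relative get past the limit
        } catch (BufferUnderflowException e) {
            System.out.println("underflow: fewer than 8 bytes remaining");
        }
    }
}

So as long as the limits and the lengths of the gets stay fixed, the exception should indeed be impossible in normal operation.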

I have run very long integrations, and others have as well, on internal drives without ever seeing the buffer underflow exception. It only seems to happen with external USB drives as far as I know, so I think our investigation should focus in that direction to solve the problem.

I have an external Samsung SSD drive of 2TB. I will try a big integration on that drive tomorrow and I will report back 😉

Do you have experience with ByteBuffers in Java?

But it didn't answer the other question I had, which is how to get APP to recognize previous integrations as part of the stacking. If I choose the default to autodetect integrations, this doesn't seem to work. But if I turn the autodetect off, it does. Am I doing something wrong, or is this the way to do it?

No, that is exactly right: to load integrations as lights, you need to turn that autodetect off. I don't like this myself either, so I will try to make it easier and more user-friendly in the next release.

Mabula


(@dciobota)
Brown Dwarf Customer
Joined: 4 years ago
Posts: 13
 

Hi Mabula,

Thank you about the integrations part, I'm glad I'm doing that correctly.

As to the ByteBuffer, from past experience I have noticed that, depending on the OS, a file is sometimes padded with bytes to align with certain byte boundaries. Usually it's 4 bytes, sometimes it's 8 bytes. It could be more, I don't know, but I would assume a safe and sane limit would be 32 bytes. So, when you calculate the buffer required, just adjust the size so it's a multiple of 32. Maybe that would fix the underflow issue. I sure hope so.
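Something like this hypothetical helper is what I mean (the names are just for illustration):

public class BufferPad {
    // Round a requested buffer size up to the next multiple of 32 bytes.
    static int roundUpTo32(int requestedBytes) {
        return (requestedBytes + 31) & ~31;
    }

    public static void main(String[] args) {
        System.out.println(roundUpTo32(1000)); // 1024
        System.out.println(roundUpTo32(1024)); // 1024 (already aligned)
    }
}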

Another thing: I noticed you mentioned issues with MacOS. Something else you may already be aware of is whether the system is big-endian or little-endian. I don't know if you account for that, and I don't know if there is any endianness difference between, say, Win10 32-bit, Win10 64-bit and MacOS.
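For what it's worth, this is how byte order shows up on a plain Java ByteBuffer (just an illustration; I don't know how APP handles it):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianCheck {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(4096);
        System.out.println(ByteOrder.nativeOrder()); // LITTLE_ENDIAN on x86 Windows and Intel Macs
        System.out.println(buf.order());             // BIG_ENDIAN: the Java default on every platform
        buf.order(ByteOrder.LITTLE_ENDIAN);          // pin it explicitly if the file layout must match
    }
}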

Hope this helps.

Daniel


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi Daniel @dciobota,

Thank you for your thoughts on this. I am using 4k multiples for the byte buffers. I will double-check to make sure that they are multiples of 32, but I think they already are.

If endianness were the issue, then I would think it would always fail and not depend on whether the drive is external or not. The actual writing is done with Java 8 put()s and get()s as well.
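Roughly this kind of pattern, although this is only a generic sketch and not my actual MemoryToFileMapper code:

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import static java.nio.file.StandardOpenOption.*;

public class BlockIoSketch {
    public static void main(String[] args) throws Exception {
        final int BLOCK = 64 * 4096;                 // fixed block size, a 4 KiB multiple
        ByteBuffer buf = ByteBuffer.allocate(BLOCK); // size is set once and never changes

        try (FileChannel ch = FileChannel.open(Paths.get("integration.tmp"),
                                               CREATE, READ, WRITE)) {
            while (buf.hasRemaining()) buf.putFloat(0.0f); // put() until the block is full
            buf.flip();
            while (buf.hasRemaining()) ch.write(buf);      // write the whole block to disk

            buf.clear();
            ch.position(0);
            while (buf.hasRemaining() && ch.read(buf) != -1) { /* read the block back */ }
            buf.flip();
            System.out.println(buf.getFloat());            // get() stays within the fixed limit
        }
    }
}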

Anyway, thanks for the suggestions. I will look at my code in detail to make sure I am not missing something here.

Mabula

 


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi Daniel @dciobota,

I am working on your Buffer Underflow issue at the moment to get this solved properly. I am running a big integration now (400 OSC frames) on MacOS on an external ExFAT SSD drive (Samsung T5 SSD) to see if I can trigger the buffer underflow exception.

Regarding the ByteBuffers, I am using direct ByteBuffers here. I will try non-direct ByteBuffers and will also test whether there is a significant speed difference in my implementation. My current suspicion is that direct ByteBuffers could be the issue here. Direct ByteBuffers live outside the Java heap and need a block of memory at the Operating System level. Perhaps in some cases the Operating System will alter/destroy/fragment this memory block, which I think would lead to this error.

If non-direct buffers are used, the JVM will still use cached direct ByteBuffers internally, but the JVM will manage them and make sure they are okay, as I understand it.

So if non-direct buffers don't come with a clear performance penalty, I will switch to non-direct buffers in the next version to see if that solves the problem.
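In code terms the change I am describing comes down to one allocation call (a generic example, not the actual APP code):

import java.nio.ByteBuffer;

public class DirectVsHeap {
    public static void main(String[] args) {
        ByteBuffer direct    = ByteBuffer.allocateDirect(64 * 4096); // off-heap block provided by the OS
        ByteBuffer nonDirect = ByteBuffer.allocate(64 * 4096);       // heap-backed, managed by the JVM/GC
        System.out.println(direct.isDirect());    // true
        System.out.println(nonDirect.isDirect()); // false
    }
}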

Anyway, I hope I can at least trigger the bug myself today or tomorrow, because up until now I have never encountered the Buffer Underflow exception, which led me to think that the issue is not part of APP but of the OS of the user who encounters it.

Can you share your thoughts on this?

Oh, I think I can definitely increase the buffers by a few multiples of 32 bytes to make sure that we don't have an issue there.

Update: initial tests indicate that non-direct buffers perform exactly the same in my integration implementation. Perhaps this is not strange, since I am processing the data in blocks inside the JVM. So the read/write operations are not done completely outside of the heap: memory from the JVM needs to be written to the disk, and read operations from the disk need to be stored in JVM memory again...

Thanks,

Mabula

This post was modified 3 years ago by Mabula-Admin

(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Dear Daniel @dciobota,

Yesterday and today I have run several very big integrations on an external SSD drive; the biggest was about 700 OSC frames and represented an integration of nearly 400 GB of data. I did not encounter the BufferUnderflowException once. I really think the error could occur due to something happening in memory at the Operating System level. Therefore I have made several adjustments:

  1. The ByteBuffers have been changed to non-direct ByteBuffers instead of direct ByteBuffers. There is no performance penalty for this in my code 😉 and I think this could prevent the error from happening. A direct ByteBuffer needs a contiguous block of bytes in OS memory, and I suspect this sometimes causes a problem, for instance when the integration task takes a long time and you are working on other things while you let APP run.
  2. On direct put() and get() operations on the read and write buffers, I have now added code to properly catch possible BufferUnderflow & BufferOverflow exceptions in I/O operations on the ByteBuffers, which should ensure that you are not bothered by those again and that the integration task can still continue and finish properly (see the sketch below).

So if measure 1) was not a solution, measure 2) will still catch the bug and your integration should still continue and finish without errors 😉
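To give an idea of measure 2), the defensive catch looks roughly like this (a simplified sketch, not the actual APP code):

import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class GuardedRead {
    // If the buffer unexpectedly has too few bytes left, log and fall back
    // instead of aborting the whole integration task.
    static float guardedGetFloat(ByteBuffer buf, float fallback) {
        try {
            return buf.getFloat();
        } catch (BufferUnderflowException e) {
            System.err.println("buffer underflow caught, continuing: " + e);
            return fallback;
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(2); // deliberately too small for a 4-byte float
        System.out.println(guardedGetFloat(buf, 0.0f)); // prints 0.0 after the catch
    }
}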

https://www.astropixelprocessor.com/community/release-information/astro-pixel-processor-1-072-preparing-next-release/

Kind regards,

Mabula

This post was modified 3 years ago 3 times by Mabula-Admin

(@dciobota)
Brown Dwarf Customer
Joined: 4 years ago
Posts: 13
 

Hi Mabula,

Thank you for unlocking my account; my memory is not what it used to be, lol. Great news about the updates you've made, I very much look forward to the next releases.

Keep up the good work.

Daniel


(@mabula-admin)
Quasar Admin
Joined: 5 years ago
Posts: 3180
 

Hi Daniel @dciobota,

APP 1.072 has been released 😉 Can you let me know if the Buffer Underflow error is now completely gone when you integrate on your external drive?

The error was apparently impossible to duplicate with my hardware, which does indicate that it is due to hardware or an I/O/memory/driver problem at the OS level. Anyway, I have taken steps to make sure that the error, if it still happens, is properly caught.

Kind regards,

Mabula

