Can APP use external disk drives instead of the system's HD?
I bought a relatively small HD in my Mac because I mostly work in the cloud.
I'm having trouble freeing up enough space on my Mac for APP, because I don't want to disturb my other work on the computer too much.
There are very fast external SSDs nowadays. Would it be possible, in the near future, to work with APP on an external SSD?
You can already, but how well it works will depend on the drive. At minimum, the external drive should have a suitable filesystem. Memory is still needed for APP though; extensive processing requires quite a bit of RAM, so I wouldn't advise doing a lot of other work during processing.
I have a MacBook with more than enough disk space. Still, I use an SSD on my Linux system and the images are written directly to that. Then I disconnect the SSD, connect it to my MacBook, and process the images directly on it. Works like a charm. I am not sure if APP uses swap space on the SSD or still on the "local" hard disk though. I guess either Vincent @vincent-mod or Mabula @mabula-admin would be able to tell us.
FWIW I never got FUSE working properly on my MacBook, so I formatted the SSD with HFS and disabled journaling on it. Then Linux can mount it and write to it as well.
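For anyone wanting to try the same setup, the macOS side can be done with `diskutil`. A rough sketch follows; the disk identifier (`disk2`) and volume name (`APPSSD`) are examples, not the poster's actual values, so verify with `diskutil list` before erasing anything:

```shell
# List attached disks to find the external SSD's identifier
# (disk2 below is only an example; erasing the wrong disk destroys data!)
diskutil list

# Format the external SSD as plain (non-journaled) HFS+
# so Linux can mount it read/write as well
diskutil eraseDisk HFS+ APPSSD disk2

# Or, if the volume is already journaled HFS+ (JHFS+),
# just turn journaling off in place
diskutil disableJournal /Volumes/APPSSD
```

On the Linux side the kernel's `hfsplus` driver mounts non-journaled HFS+ read/write out of the box, which is why disabling journaling matters.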
As far as I've noticed, it writes a *.dat file (I don't remember if the extension is really *.dat or something else) in the working directory.
I also usually load images from a NAS, but I set the working directory locally on my PC.
@vincent-mod: do you think a RAM disk set as the working directory would speed up the process a little?
@lvigano I never used a RAM disk, to be honest. If it's stable and seen as a regular disk by the system, it might work. But I can imagine it being less reliable; it would be nice to test it and also to check the speed versus an SSD.
Hi @vincent-mod: I'm quite used to it in a work environment (and also with PI at home) and it's really stable ... of course you have to move your files to a physical HDD before shutting down the system 🙂
As soon as I have time I'll power up a VM on AWS or Google Cloud with 96 GB of memory and test the I/O of the tmpfs.
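For reference, a tmpfs like the one discussed can be created on Linux as follows. The mount point and size are examples (mounting needs root); on most distributions `/dev/shm` is already a tmpfs you can write to without root:

```shell
# Create a 40 GB tmpfs and point APP's working directory at it
# (requires root; path and size are just examples)
sudo mkdir -p /mnt/app-workdir
sudo mount -t tmpfs -o size=40G tmpfs /mnt/app-workdir

# Everything written here lives in RAM and vanishes on unmount/reboot,
# so copy results to a physical disk before shutting down
cp -r /mnt/app-workdir/results ~/astro/results

# Alternatively, /dev/shm is usually a ready-made tmpfs (no root needed)
mkdir -p /dev/shm/app-workdir
```

Note that tmpfs contents count against RAM (and swap), so the size chosen directly reduces memory available to applications.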
I would avoid a RAM disk. A RAM disk is very fast, temporary storage utilising the RAM, taking that memory away from the applications that need it. APP is a Java application, and Java likes RAM. You are better off giving APP more RAM than using a RAM disk for your Working Directory. It doesn't scale well either: I've iterated through many disks and recently upgraded to a 1 TB NVMe because recent integrations went over 512 GB! That's not tenable for a RAM disk.
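On the point of giving APP more RAM: the general Java mechanism is the JVM's maximum heap flag, `-Xmx`. A sketch of what that looks like follows; the jar name is a placeholder, not APP's actual launcher, and in practice you would use whatever memory setting APP's own startup offers rather than this exact invocation:

```shell
# Generic Java heap sizing: -Xmx sets the maximum heap the JVM may use.
# "AstroPixelProcessor.jar" is a placeholder name for illustration only.
java -Xmx16g -jar AstroPixelProcessor.jar
```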
Adding a USB 3 disk is a good approach if opening up a PC/Mac isn't feasible. Adding NVMe internally is better if you can. I'd avoid USB 2.0-based SSDs; they would obviously still work, just without utilising the full speed of the SSD, choked by USB 2's bandwidth.
Overall though, whilst the jump from spinning disks to solid state is noticeable, the biggest bottleneck becomes the CPU: processing images is CPU intensive, and plenty of RAM for the processing is your friend there.
Ah yes, I overlooked the scaling part, that is a good point indeed.
Finally I managed to create a VM on AWS with these specifications:
- m5.8xlarge with 128 GB RAM and 32 cores (Intel Xeon Platinum 8175M CPU @ 2.50 GHz), plus SSD storage;
- it's a basic EC2 instance, so it's not using a 2nd or 3rd generation Xeon;
- Debian 10 with KDE Plasma and XRDP (Linux because it's cheaper and doesn't have fragmentation problems);
- it costs about $1.50/hour.
Basically, the output of the tests is the following (read/write of a 5 GB file using the Linux tool "fio"):
TEST: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [m(1)][13.1%][r=4572KiB/s,w=4656KiB/s][r=1143,w=1164 IOPS][eta 00m:53s]
Jobs: 1 (f=1): [m(1)][23.0%][r=4164KiB/s,w=3864KiB/s][r=1041,w=966 IOPS][eta 00m:47s]
Jobs: 1 (f=1): [m(1)][32.8%][r=4040KiB/s,w=3980KiB/s][r=1010,w=995 IOPS][eta 00m:41s]
Jobs: 1 (f=1): [m(1)][42.6%][r=3672KiB/s,w=3836KiB/s][r=918,w=959 IOPS][eta 00m:35s]
Jobs: 1 (f=1): [m(1)][52.5%][r=3632KiB/s,w=3520KiB/s][r=908,w=880 IOPS][eta 00m:29s]
Jobs: 1 (f=1): [m(1)][62.3%][r=3804KiB/s,w=3724KiB/s][r=951,w=931 IOPS][eta 00m:23s]
Jobs: 1 (f=1): [m(1)][72.1%][r=4036KiB/s,w=3396KiB/s][r=1009,w=849 IOPS][eta 00m:17s]
Jobs: 1 (f=1): [m(1)][82.0%][r=3612KiB/s,w=3716KiB/s][r=903,w=929 IOPS][eta 00m:11s]
Jobs: 1 (f=1): [m(1)][91.8%][r=3576KiB/s,w=3576KiB/s][r=894,w=894 IOPS][eta 00m:05s]
Jobs: 1 (f=0): [f(1)][100.0%][r=3512KiB/s,w=3604KiB/s][r=878,w=901 IOPS][eta 00m:00s]
TEST: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [m(1)][80.0%][r=568MiB/s,w=567MiB/s][r=145k,w=145k IOPS][eta 00m:02s]
Jobs: 1 (f=1): [m(1)][100.0%][r=568MiB/s,w=568MiB/s][r=145k,w=145k IOPS][eta 00m:00s]
The tmpfs was about 40 GB, so there was still plenty of RAM left for the system.
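The exact fio invocation isn't shown above, but a mixed random read/write test over a 5 GiB file with 4 KiB blocks, which would produce status lines in that `[m(1)] ... IOPS` format, looks roughly like this (job name and directory are examples, not the poster's actual command):

```shell
# Mixed 50/50 random read/write over a 5 GiB file, 4 KiB blocks.
# --directory points at the filesystem under test
# (e.g. the instance's SSD volume first, then the tmpfs mount).
fio --name=TEST --directory=/mnt/test \
    --rw=randrw --rwmixread=50 \
    --bs=4k --size=5G \
    --ioengine=libaio --iodepth=4 --direct=1
```

One caveat when repeating this on a tmpfs: drop `--direct=1`, since tmpfs does not support O_DIRECT and fio will refuse to run with it.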
@itarchitectkev and @vincent-mod: I agree with you about the scalability issue when creating a tmpfs, but please consider that NVMe is not a really cheap option if you don't have a motherboard with NVMe support. Moreover, as with any SSD, you are still limited by the maximum write cycles the drive can sustain (for example, the Samsung 970 EVO is guaranteed for 5 years or 150 TBW).
In my humble opinion, there is no single solution that fits every case :-); in my case, with a maximum of 20 GB required for integration, I think the tmpfs will do its job very well and, let me say, a VM on AWS is really the winning choice!
Moreover, it would be very interesting to see the results of the same fio command on an NVMe SSD ... but I don't have one 🙂
True, there is no single solution so it's very nice to see your efforts here, thanks for sharing!