2023-09-28: APP 2.0.0-beta24 has been released!
Improved application startup, fixed application startup issues, and upgraded the development platform to Oracle GraalVM JDK 21.
We are very close now to releasing APP 2.0.0 stable with a complete printable manual...
Astro Pixel Processor Windows 64-bit
Astro Pixel Processor macOS Intel 64-bit
Astro Pixel Processor macOS Apple M Silicon 64-bit
Astro Pixel Processor Linux DEB 64-bit
Astro Pixel Processor Linux RPM 64-bit
Hi there - has anyone out there had success in installing the .deb or .rpm files onto a VM in Google Cloud Platform or indeed any cloud provider? I'm keen to understand what VM config you use, e.g. which OS build (Ubuntu, Red Hat, etc.), and whether there are any instructions on how to install the software once you have the VM stood up.
Ciao, successfully run on GCP with the following specification:
- e2-standard-32 (32 vCPUs, 128 GB memory)
- one persistent HDD, debian-10-buster-v20201216 (50 GB, for OS and programs)
- one persistent SSD (400 GB, for processing)
- one Cloud Storage bucket (always on) for uploading lights, bias frames and so on
Please consider that I use the VM only for stacking images, and I start it only when I need it. The rest of the processing (like light pollution correction, color calibration and so on) is performed on my home PC.
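For anyone who wants to script that setup, a VM along these lines could be created with the google-cloud-compute Python client. The sketch below is only an approximation of the configuration above: project, zone and instance name are placeholders, the "debian-10" image family stands in for the specific debian-10-buster image, and the extra 400 GB SSD and the bucket would be added separately.

from google.cloud import compute_v1

def create_stacking_vm(project_id: str, zone: str, name: str = "app-stacking-vm"):
    # 50 GB Debian 10 boot disk, similar to the setup described above.
    boot_disk = compute_v1.AttachedDisk()
    params = compute_v1.AttachedDiskInitializeParams()
    params.source_image = "projects/debian-cloud/global/images/family/debian-10"
    params.disk_size_gb = 50
    boot_disk.initialize_params = params
    boot_disk.boot = True
    boot_disk.auto_delete = True

    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"

    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/e2-standard-32"
    instance.disks = [boot_disk]
    instance.network_interfaces = [nic]

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project_id, zone=zone, instance_resource=instance)
    operation.result()  # recent google-cloud-compute versions: block until creation finishes
    return client.get(project=project_id, zone=zone, instance=name)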
Best,
Luciano
I'm using APP successfully in AWS, using the default Amazon Linux 2 machine image. I'm currently working on a tech presentation about that; once it is done I'm happy to share my results. I also did some testing to find out how processing times compare on different machine types and sizes, to be able to optimize the costs. It seems that at a certain point additional vCPUs do not bring any benefit, as processing times even get longer instead of shorter.
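If anyone wants to repeat that kind of comparison, starting the different instance sizes can be scripted with boto3. This is just a sketch of the idea; the AMI id, key pair, region and instance types are placeholders, not the ones I used.

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # example region

# Placeholders: the Amazon Linux 2 AMI id for your region, your own key pair,
# and whatever instance sizes you want to benchmark against each other.
AMI_ID = "ami-xxxxxxxxxxxxxxxxx"
KEY_NAME = "my-keypair"
INSTANCE_TYPES = ["m5.2xlarge", "m5.4xlarge", "c5.9xlarge"]

for itype in INSTANCE_TYPES:
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=itype,
        KeyName=KEY_NAME,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Started benchmark instance for {itype}: {instance_id}")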
Another problem that you need to solve is the upload of the images. If you have several GB of data that take hours to upload, you may not get any benefit out of the faster processing because of the long upload. I was able to create some scripts to upload my images (captured with astroberry.io on a Raspberry Pi 4 with my EQ mount and an ASI camera) during the imaging session, using the scripting possibilities of KStars/INDI. But since November I have had no night of clear skies, so I'm still waiting to test this setup.
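The upload side can be kept fairly simple. Here is a rough sketch of the idea (not my actual script; the bucket name, capture folder and file extension are placeholders) that pushes every new frame to an S3 bucket and could be triggered from a KStars/INDI post-capture hook or a cron job:

import glob
import os

import boto3

s3 = boto3.client("s3")
BUCKET = "my-astro-raw-frames"            # placeholder bucket name
LOCAL_DIR = "/home/astroberry/captures"   # placeholder capture folder

# List what is already in the bucket so frames are only uploaded once.
existing = set()
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        existing.add(obj["Key"])

for path in glob.glob(os.path.join(LOCAL_DIR, "*.fits")):
    key = os.path.basename(path)
    if key not in existing:
        s3.upload_file(path, BUCKET, key)
        print("uploaded", key)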
Once everything works and is shareable I'll put everything in a GitHub repo under an open source license.
> Once everything works and is shareable I'll put everything in a GitHub repo under an open source license.
That's awesome, I love those special GitHub projects.
@vincent-mod speed is subjective 😆 😆
I just started a run, so I can give you some info with some real numbers.
Maybe it would be nice to have some sort of benchmark tool inside APP (like some other software has 😉)
Best,
Luciano
> Maybe it would be nice to have some sort of benchmark tool inside APP (like some other software has 😉)
Maybe we can just put together some data sets as a community (looking for users here to share their data) for some "benchmarking".
Like:
- a dataset with 50 lights and 10 calibration frames, all FITS images
- a dataset with 200 lights and 20 calibration frames, all FITS images
- a dataset with multi-session data
- datasets with images in various camera RAW formats
We can create an open spreadsheet in Google Sheets aggregating the processing times together with the specs of the computers. I already started this for the AWS EC2 instances I was testing.
But having dedicated datasets would help make things more comparable.
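To keep the entries consistent, every tester could generate their line for the sheet with a small helper like the one below. This is only a sketch: the column layout is my assumption, not an agreed format, and the specs it collects are just what the Python standard library exposes.

import csv
import os
import platform

def append_benchmark_row(csv_path, dataset, elapsed, app_version, notes=""):
    # One benchmark result plus basic machine specs, appended to a local CSV
    # that can later be pasted into the shared spreadsheet.
    row = {
        "dataset": dataset,            # e.g. "200 lights / 20 calibration frames"
        "elapsed": elapsed,            # e.g. "1h 38min"
        "app_version": app_version,    # e.g. "2.0.0-beta24"
        "machine": platform.node(),
        "os": platform.platform(),
        "cpu": platform.processor(),
        "logical_cpus": os.cpu_count(),
        "notes": notes,                # e.g. instance type, RAM, disk type
    }
    write_header = not os.path.exists(csv_path)
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Example:
# append_benchmark_row("app_benchmarks.csv", "Canon 6D set (247 lights)",
#                      "1h 38min", "2.0.0-beta24", notes="e2-standard-32")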
@mkeller0815 I totally agree with you.
As soon as the current processing run is finished I can share bias (30) + darks (51) + lights (247) gathered with a Canon 6D (1.30 exposure time / ISO 800).
Where can we put all the data sets?
Best,
L.
You can put them on a Google Drive folder and create a "readonly" link for sharing. We can then collect all the links to the datasets in a separate thread together with the spreadsheet.
A benchmark inside APP: not sure if that can be made super reproducible, but the total time taken for each step might be nice already.
@mkeller0815 Here is the link:
https://www.dropbox.com/sh/3cpv3n2tsv2dqed/AABSvemwxEBxDWZktfvCes_Na?dl=0
bias (30) + darks (51) + lights (247 - exp. time 1.30'')
Camera: Canon 6D
File format: RAW (CR2)
Elaboration time: 1h 38min
Let me know if you need more details about the VM.
@vincent-mod is there a way to save the complete log file? I just tried to copy the content of the "console" window but it's truncated... just to be sure that the start and end times are calculated in the same way for everyone 🙂
Best,
Luciano
> If you have several GB of data that take hours to upload, you may not get any benefit out of the faster processing because of the long upload. I was able to create some scripts to upload my images (captured with astroberry.io on a Raspberry Pi 4 with my EQ mount and an ASI camera) during the imaging session, using the scripting possibilities of KStars/INDI.
That is really interesting 🙂
I manage the upload step in the field with a simple Python script:
from google.cloud import storage
import glob
import os
import time
from tqdm import tqdm

# Service account credentials and the local folder holding the captured frames
# ("XYZ.json" and "blablabla" are placeholders).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "XYZ.json"
path_file = "blablabla"

client = storage.Client(project="My Project XYZ")
# Bucket names cannot contain "/", so the target folder goes into the blob name below.
bucket = client.get_bucket("raw-datastore")

number_of_files = len(os.listdir(path_file))
# Rough progress bar total, assuming ~26 MB per file.
total_mb = 26 * number_of_files
print("Uploading " + str(number_of_files) + " files")

def upload_files():
    # "File_Extension" is a placeholder for the glob pattern, e.g. "*.CR2".
    for my_file in glob.glob(os.path.join(path_file, "File_Extension")):
        print('\n' + 'Uploading file: ' + my_file + '\n')
        blob = bucket.blob("folderA/" + os.path.basename(my_file))
        blob.upload_from_filename(my_file)
        progress_bar.update(26)
        time.sleep(1)

with tqdm(total=total_mb, unit='MB', unit_scale=True, unit_divisor=1024) as progress_bar:
    upload_files()

print('###### DONE ######')
> @vincent-mod is there a way to save the complete log file? I just tried to copy the content of the "console" window but it's truncated... just to be sure that the start and end times are calculated in the same way for everyone 🙂
Best,
Luciano
Should be possible to select everything, maybe when it's busy it's not? I'll talk with Mabula (when he's less busy with the next version) to have an export button.
PS: In your Python code (I have no experience with it yet), you import Google storage... is this Firebase related?
I just let the integration finish and then try to select... maybe the window has some "circular scrolling buffer" and with more than 200 lights it ends up saturated 🤔
Anyway, simple solution: do one step at a time and grab the output. No problem at all 🙂
Google storage is imported because I need to connect to the Google Cloud Storage bucket, which is the always-on "disk"... please consider that I'm not a developer at all (I'm a network engineer and I do know Python, but I don't know all the classes I imported), so it was more of a "try and try again" process to make it work 🙂
Well, still great stuff here. 🙂 So yeah, I'll mention to Mabula it would be nice to have a better export feature for the log. Thanks!
I think for a start it would be enough to have the start and the end time. When you start the process, you see the time in the log. I usually just note it somewhere, and then at the end of the process I check the log for the time of the last action to get my end time.
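Until there is an export button, the elapsed time can also be computed from those two noted timestamps; here is a tiny sketch (the timestamp format is an assumption, adjust it to whatever the APP console actually prints):

from datetime import datetime, timedelta

FMT = "%H:%M:%S"  # assumed format of the timestamps noted from the console

def elapsed(start_str: str, end_str: str) -> str:
    start = datetime.strptime(start_str, FMT)
    end = datetime.strptime(end_str, FMT)
    delta = end - start
    if delta.total_seconds() < 0:      # the run crossed midnight
        delta += timedelta(days=1)
    minutes, seconds = divmod(int(delta.total_seconds()), 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours}h {minutes:02d}min {seconds:02d}s"

print(elapsed("21:14:05", "22:52:31"))  # -> 1h 38min 26s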
For comparable benchmark results we should also note the settings to be used for each specific dataset. They do not need to produce the best results, but should exercise typical functions that utilize the computer. It might take some days, because I need to finish some other tasks, but I would start testing with Luciano's data on my local machines and also on some AWS VMs. Once I get that sorted out I would start a new thread for sharing the test results and providing some documentation on how to do the tests.