Search results for: GPU
Page 1 / 2
| Post Title | Result Info | Date | User | Forum |
| RE: Windows 11 crash at final integration step APP 2.0.0 beta 38 | 45 Relevance | 9 months ago | George | Microsoft Windows |
... Java VM: Java HotSpot(TM) 64-Bit Server VM Oracle GraalVM 24.0.1+9.1 (24.0.1+9-jvmci-b01, mixed mode, tiered, jvmci, jvmci compiler, compressed class ptrs, g1 gc, windows-amd64)
# Problematic frame:
# J 20058 jvmci com.ariesproductions.statistics.b.a(IIII)[D astropixelprocessor@2.0.0-beta38 (660 bytes) @ 0x0000027b0807e6a8 [0x0000027b0807d200+0x00000000000014a8]
#
# No core dump will be written. Minidumps are not enabled by default on client versions of Windows
#
# If you would like to submit a bug report, please visit:
#
# --------------- S U M M A R Y ------- ...
| Upgrade of APP to use the Vulkan (Khronos) and Metal (macOS) APIs | 41 Relevance | 7 years ago | Mabula-Admin | Development priorities |
... group for both the graphics (showing your images) in APP and also for performing parallel calculations on the GPU. In addition, Apple has already stopped supporting OpenCL as of macOS Mojave. OpenCL is a language for performing calculations directly on the GPU, which I have been testing for some months now. The testing has made it very clear that we need to start using the GPU as much as we can to speed up processing significantly in APP, in all tasks that can be separated into multiple sub-tasks performed in parallel. In complicated calculations, ...
| GPU is not being utilized | 26 Relevance | 4 years ago | Dane Roemer | General |
I have a GTX 1070 GPU in my desktop computer, which is running APP. In the NVIDIA Control Panel I have created a profile for APP and forced the OpenGL renderer to my 1070 GPU. Despite this, APP will not use the GPU. In APP, at the top left, I see "openGL4" in green letters. How do I get it to use the GPU? Attachment: APP not using GPU.JPG Attachment: APP not using GPU (2).JPG Attachment: APP has GPU but not using it.JPG
| RE: Is there GPU rendering implemented yet? Considering an eGPU | 21 Relevance | 6 years ago | Anonymous 174 | General |
Not yet, no; this is a big operation. We are first switching to a new Java version. Vulkan support is something we want to have, but that's rather complex. So for now a beefy GPU doesn't add a lot of speed, and I also think that by the time we have that support, a regular GPU will already prove to be faster. So my suggestion would be to hold off on that investment. Please also check with us first, by then, whether an external GPU is possible, just to avoid issues there.
| RE: Use GPU for heavy graphical calculations | 21 Relevance | 6 years ago | samuel dysart | RFCs - Request for changes |
I have an old bitcoin miner with multiple GPUs. Will the GPU update support multi-GPU processing?
| RE: Recommendation for a new setup | 18 Relevance | 3 years ago | Mabula-Admin | Apple MacOS |
Hi Joerg @einneuer, The GPU definitely helps with the OpenGL hardware-accelerated image viewer 😉 Data processing in parallel using the GPU will come for sure as well. This is not implemented yet, but we have already done testing with certain GPU calculations. I cannot tell yet when the first modules with GPU calculations will be introduced into APP, but it should not take more than a year from now. We really need to have this as part of APP going forward. Mabula
| RE: What would be the best hardware for APP? | 18 Relevance | 7 years ago | Chas | General |
Mabula - if you implement this, can you give us an option to disable GPU processing? I'm very intrigued by the remote Linux server build strategy for an APP processing server, but I don't want a big GPU card in that remote server. So if you implement this without deterministic logic for whether a GPU on the machine can support it, that could be problematic, right?
| RE: APP no longer starts | 36 Relevance | 3 years ago | Mabula-Admin | Microsoft Windows |
Let us first solve this startup problem 😉 In addition, rewriting things in CUDA would take a lot of work, and it is definitely not friendly to anyone who does not have an Nvidia GPU. I feel it is better to use a GPU method that will work on all GPUs; then everyone can profit from the work and time invested in GPU processing. Or does CUDA work these days on non-Nvidia chips? GPU processing should also work on GPUs integrated with the CPU, I feel, as on Intel CPUs. We did tests with Aparapi and we know it will speed certain tasks up greatly once ...
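For context on the Aparapi approach mentioned in this post: Aparapi translates the bytecode of a Java `Kernel.run()` method to OpenCL at runtime and falls back to a Java thread pool when no OpenCL device is available, which is what makes it vendor-neutral. Below is a minimal, illustrative sketch assuming the `com.aparapi` library; the class name, array size, and the per-pixel operation are invented for the example and are not APP's actual code.

```java
import com.aparapi.Kernel;
import com.aparapi.Range;

// Minimal sketch of the Aparapi model: a data-parallel Java loop expressed
// as a Kernel, which Aparapi translates to OpenCL at runtime and runs on
// any OpenCL-capable GPU (falling back to a thread pool otherwise).
public class PixelScaleDemo {
    public static void main(String[] args) {
        final int n = 1_000_000;          // hypothetical pixel count
        final float[] in = new float[n];
        final float[] out = new float[n];
        final float gain = 1.5f;          // hypothetical per-pixel gain

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int i = getGlobalId();    // one work-item per pixel
                out[i] = in[i] * gain;
            }
        };
        kernel.execute(Range.create(n));  // dispatch n work-items
        kernel.dispose();                 // release OpenCL resources
    }
}
```

The same kernel runs unchanged on Nvidia, AMD, and Intel GPUs, which matches the post's preference for an approach that is not tied to CUDA.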
| RE: Use GPU for heavy graphical calculations | 35 Relevance | 9 years ago | Mabula-Admin | RFCs - Request for changes |
Thank you Rob, The necessary libraries for GPU calculations on the GPU cores besides the CPU cores are already compiled into APP, so work on this will start soon 😉 For some calculations I intend to spread the work over the GPUs & CPUs, but I'll need to make smart concurrency implementations to get this working correctly on all systems. It would enable image drawing and the preview filter to work much more smoothly and hopefully faster. Multi-core tasks like registration, star analysis, normalization, integration, and data interpolation (Lanczos etc.) could ge ...
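A rough illustration of the "spread the work over the GPUs & CPUs" idea from this post: partition the pixel range and let a GPU path and a CPU path work on their slices concurrently. Everything here (class names, the 50/50 split, the stand-in GPU method) is a hypothetical sketch, not APP's implementation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: split one pixel range between a GPU worker and a CPU worker.
public class HybridSplitDemo {
    static void processOnCpu(float[] in, float[] out, int from, int to) {
        for (int i = from; i < to; i++) out[i] = in[i] * 1.5f;
    }

    // Placeholder for a real GPU path, e.g. the Aparapi kernel sketched
    // earlier; here it just reuses the CPU path so the demo runs anywhere.
    static void processOnGpu(float[] in, float[] out, int from, int to) {
        processOnCpu(in, out, from, to);
    }

    public static void main(String[] args) throws Exception {
        int n = 1_000_000;
        float[] in = new float[n], out = new float[n];
        int split = n / 2;                // assumed 50/50 work split

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<?> gpu = pool.submit(() -> processOnGpu(in, out, 0, split));
        Future<?> cpu = pool.submit(() -> processOnCpu(in, out, split, n));
        gpu.get();                        // wait for both slices to finish
        cpu.get();
        pool.shutdown();
    }
}
```

In practice the split ratio would have to be tuned per machine, which is presumably part of the "smart concurrency implementations" the post refers to.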
| RE: M1 Native Support | 18 Relevance | 4 years ago | jochen scharmann | Apple MacOS |
Hi, Congratulations on your M1 Max, this really flies. I have the binned 24-core GPU version, which seems to be a sweet spot. Even GPU-optimized software rarely loads the 24 GPU cores to 100%. From other threads here I picked up that the APP code would need to be optimized for GPU acceleration (on all platforms), which may only target simple but numerous simultaneous calculations. This is not coded yet, but as stated in the thread, it might be one of the next steps on Mabula's list (to be confirmed). I'd also be keen to see APP optimized for ANE use as well ...
| RE: Very basic problem with interface - AMD RYZEN incompatibility ? | 18 Relevance | 5 years ago | Stephen T | General |
OK, apologies; my mistake... I thought this issue was limited to systems running NVIDIA RTX cards.
| RE: State of GPU acceleration implementation? And also Star removal... | 17 Relevance | 3 years ago | Jarno | General |
I agree on the GPU acceleration; pretty much every program that does anything with images uses GPU acceleration. Why would you leave all that computing power unused?
| RE: State of GPU acceleration implementation? And also Star removal... | 17 Relevance | 3 years ago | George | General |
The premium price is why I insist. I enjoyed the trial, but the processing took about as long as, and in some cases longer than, DSS. Once CUDA or GPU acceleration for processing images, not just star removal, becomes a standard supported feature, I will definitely make the purchase.
| State of GPU acceleration implementation? And also Star removal... | 17 Relevance | 4 years ago | George | General |
Hi. I'm new to this forum, but I've used the APP trial. It took a bit of learning, but I did like it. I noticed it brought out more signal in the final image than DSS did. I want to purchase it, but I really want GPU acceleration to work before I jump in. I also saw some mentions of an internal star detection algorithm. Could that be used to remove stars like Starnet++ or StarXTerminator, and could all that be sent to the GPU as well?
| RE: Very basic problem with interface - AMD RYZEN incompatibility ? | 18 Relevance | 5 years ago | bcolyn | General |
@stephent All button drawing is faster than I can see on my (desktop) GTX 1070, regardless of whether the Direct3D pipeline is used or not. Most laptops have an integrated and a discrete GPU, with the discrete GPU rendering indirectly (via the iGPU). On my laptop (which does not glitch, btw; Intel CPU and GTX 1060), APP uses the integrated graphics by default unless explicitly overridden in the Nvidia control panel to force the dGPU (the default iGPU works fine, btw). Disabling GPU acceleration for the UI (Java2D) does not impact the actual image processing speed, that ...
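For readers wondering what "disabling GPU acceleration for the UI (Java2D)" means in practice: Java2D selects its rendering pipeline (Direct3D on Windows, optionally OpenGL) via standard JVM system properties. The sketch below is illustrative only; the class name and window are made up, and a packaged application like APP would normally receive these as launcher/JVM options rather than set them in code.

```java
// Illustrative sketch of the standard Java2D pipeline switches.
// The properties must be set before any AWT/Swing class initializes,
// which is why they are usually passed on the command line, e.g.:
//   java -Dsun.java2d.d3d=false -Dsun.java2d.opengl=true -jar app.jar
public class PipelineFlagsDemo {
    public static void main(String[] args) {
        System.setProperty("sun.java2d.d3d", "false");   // turn off the Direct3D pipeline (Windows)
        System.setProperty("sun.java2d.opengl", "true"); // opt in to the OpenGL pipeline instead

        javax.swing.SwingUtilities.invokeLater(() -> {
            // A trivial window, just to exercise the chosen pipeline.
            javax.swing.JFrame frame = new javax.swing.JFrame("Java2D pipeline demo");
            frame.setSize(300, 200);
            frame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}
```

Setting `-Dsun.java2d.opengl=True` (capital T) additionally prints a confirmation when the OpenGL pipeline is enabled, which is a quick way to verify what the UI is actually using.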
| RE: Is there GPU rendering implemented yet? Considering an eGPU | 17 Relevance | 4 years ago | Matt Thompson | General |
On the off chance, are there any rough dates for GPU support? I.e. Q2 2022? I'm just trying to work out if I should upgrade my CPU now or wait for GPU support 🙂
| RE: Question on GPU processing | 17 Relevance | 6 years ago | Anonymous 174 | General |
Well, I have no idea, to be honest. 🙂 I'm not sure whether the GPU is efficient enough to make up for the fact that yours is a lot less powerful than more modern ones. That will be something to just test, I think. I'm assuming you will still be able to choose not to use a GPU once it's supported (I have no actual view of the feature roadmap).
| Question on GPU processing | 17 Relevance | 6 years ago | Lammertus | General |
Hi all, Since I am using an old laptop with NVIDIA GT520M graphics, it only has 48 CUDA cores, which is close to nothing compared with today's graphics cards. Once APP uses GPU processing, would I notice a speed improvement with this small number of CUDA cores? Just out of interest! Thanks for any insight (and hopefully some expected timeframe for the future release with GPU processing implemented). Stay well, Mert
| Is there GPU rendering implemented yet? Considering an eGPU | 17 Relevance | 6 years ago | Jason Kurth | General |
I saw an announcement about this over a year ago, but I'm not sure if it was ever implemented. I have an i7 Mac mini and could add an external GPU, but it doesn't make sense if the GPU is not supported yet.
| RE: Use GPU for heavy graphical calculations | 17 Relevance | 6 years ago | Sebastien85 | RFCs - Request for changes |
Hello, For laptop owners, GPU support will change a lot. I have a MacBook Pro 15" (8 cores) with an eGPU (AMD Vega 56); the difference is really huge for video processing (DaVinci Resolve), for example. I await this functionality in APP with great interest 😊
| RE: Use GPU for heavy graphical calculations | 17 Relevance | 6 years ago | Matteo Monico | RFCs - Request for changes |
@mabula-admin Is APP now using the GPU? I'm considering buying a new laptop, as my Surface Pro 2017 with 4 GB RAM is very slow with APP and integration takes me almost 3 h... so would a laptop with a good GPU make a big difference?
| Memory question + GPU acceleration | 17 Relevance | 6 years ago | Yves van den Broek | General |
I'm building a new PC for a remote observatory that will also do the stacking. As we will be using the new 60 MP QHY, I will use an 8-core i7 (is there any benefit to going i9?). A question on memory: 32 GB seems like a minimum, but I want to be sure it will not run out. Is there any benefit to over-dimensioning; will APP use, for example, 64 GB? Second, we have been hearing about GPU acceleration for a year now. Which GPU would benefit the most, AMD or NVIDIA? It all depends on the libraries used... so any hint would be much appreciated. /Yves
| RE: GPU Support | 17 Relevance | 8 years ago | Mabula-Admin | General |
Hi Eric, I have already done some testing in the past couple of weeks on how to enable GPU processing. The main part now is to rewrite the code, module by module, so it can be processed directly from Java to OpenCL. This means that GPU support will work for any video chip/processor that supports OpenCL (that's practically all of them). All modules that can benefit will then be rewritten step by step 😉 Mabula
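The "module by module" rewrite described here is essentially a pluggable-backend pattern: each processing module sits behind one interface, with a CPU implementation as the reference and an OpenCL-backed one swapped in as that module is ported. A hypothetical pure-Java sketch of that structure (all names invented for illustration; APP's real architecture is not public):

```java
// Hypothetical sketch of a per-module backend switch.
public class BackendDemo {
    interface StackingBackend {
        void integrate(float[][] frames, float[] result);
    }

    // Plain-Java mean stack as the reference implementation.
    static class CpuBackend implements StackingBackend {
        public void integrate(float[][] frames, float[] result) {
            for (int i = 0; i < result.length; i++) {
                float sum = 0f;
                for (float[] frame : frames) sum += frame[i];
                result[i] = sum / frames.length;
            }
        }
    }

    static StackingBackend pickBackend(boolean openClAvailable) {
        // Once this module is ported, the OpenCL branch would return an
        // OpenCL-backed implementation (e.g. via Aparapi); until then,
        // fall back to the CPU reference.
        return new CpuBackend();
    }

    public static void main(String[] args) {
        float[][] frames = { {1f, 2f}, {3f, 4f} };
        float[] result = new float[2];
        pickBackend(false).integrate(frames, result);
        System.out.println(result[0] + ", " + result[1]); // 2.0, 3.0
    }
}
```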
| RE: M1 Native Support | 14 Relevance | 4 years ago | Anonymous 174 | Apple MacOS |
That is a very good question to which I don't have an answer; it's very new technology at the moment, so software is still catching up. I can imagine that if it simply means faster GPU processing, then yes, in the future it would benefit APP greatly. But GPU support is not yet available. The issue with the Mac system is that you can't upgrade it internally, so you'd have to plan around something that is still a bit unknown. If you have the funds, I would choose more GPU cores, and not only because of APP.
| Question on GPU usage | 12 Relevance | 5 months ago | Lammertus | General |
Hi Mabula, I know this has been asked many times, and likely for a long time now. I would like to ask if there are still plans to "upgrade" APP to make use of GPUs? (a huge speed increase is likely) With kind regards, Lammertus de Vries
| GPU Support | 17 Relevance | 8 years ago | Eric | General |
Mabula, I know you're hard at work getting the next release ready, but I wanted to bring up the question of whether you've considered using the power of modern GPUs to accelerate processing, and perhaps to allow the inclusion of even more advanced features to optimize our data. I am not a programmer or developer, but I have been using various GPU-optimized software packages for a long time now - SETI@home, Photoshop, and others that benefited massively from the power of the GPU. As you saw in your testing, the new version is now CPU limited instead of storage lim ...
| RE: New Mac Studio M2 - benchmarking vs my old Mac Pro 2013 12-core D700 (and beta 26 vs 28 side note) | 14 Relevance | 2 years ago | Paul Muller | Apple MacOS |
... is probably the most surprising. I am not going to waste too much time on it, but it does make me scratch my head! Intel 12-core vs M2 Max - personally, I am not super surprised; I have long suspected that raw CPU performance hasn't increased THAT much over the last 10 years. Certainly there are more cores, dedicated silicon for handling SIMD tasks and encoding/decoding, GPUs, etc., but given that clock speed and per-core IPC seem to be the upper boundary on modern CPUs, the idea that an older 12-core (24-thread) vs a modern 10-P-core + 2-E-core CPU ...
| RE: APP performance increases with increasing RAM and CPUs | 14 Relevance | 4 years ago | jochen scharmann | General |
... during Analyze Stars and Integration. Also, the increased RAM, as well as the great memory bandwidth and the unified memory architecture, really help a lot. I used to stack a few hundred 24 Mpix shots as well and usually had the process run overnight (8-10 hrs) with the i7. It now completes within 2-3 hours (much slower if I use LNC, which is still single-core only, hence it only scales by about 1.7x single-core performance) 🙁 Some calculations became possible only with the M1 Max (e.g. higher-degree LNC). As APP is CPU-only, you might save money using the ...