I truly enjoy many aspects of Luminar 4. But when can we expect Luminar to properly leverage modern CPU technology?
This is the root cause of many of the responsiveness and reliability issues. I offer that as a data-informed theory; I have CPU-Z and HWMonitor logs if anyone is interested.
I REALLY want to make the jump from Adobe to Luminar for lots of reasons, but the lack of a snappy preview, semi-frequent crashes, slow switching between images, and glacially slow RAW-to-JPEG export force me back to Lightroom for large batch processing. Ask a friend to do some basic editing while you watch the clock and your favorite hardware monitoring tool over their shoulder, and it's plain to see that Luminar's workloads are not spread across modern hardware. Luminar 4 seems to be missing fundamental principles of modern software development in the post-2016 multi-core era. Meanwhile, Adobe CC has figured out not just how to use multiple threads effectively, but also how to offload work to the GPU, and it shows in execution.
Consider an enthusiast processing files from one shoot or maybe a long weekend... ~1000 images on a computer with 8 cores @ 3.5 GHz and other components that are relatively modern and 'enthusiast-grade.' The amount of time it takes to scroll through the library, reject, grade, and screen the throw-away images is unacceptable. The 'load time' when switching between images in editing is like nails on a chalkboard when you're trying to stay focused on a task. JPEG export time is a joke. Stability is an issue. Any one of these symptoms would be manageable. The combined result keeps me going back to Adobe for large batch processing, which seriously complicates workflow and file management.
Adobe has done an excellent job of breaking work into lighter tasks spread over multiple threads, which improves both reliability and performance, especially when paired with the processors that have come out in the last 6-9 months.
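To illustrate what I mean by spreading work over threads: here's a rough Python sketch (not either app's real code; `encode` is just a hypothetical CPU-bound stand-in for a single RAW-to-JPEG conversion) showing how fanning a 50-image batch out over worker processes puts all the cores to work instead of one:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def encode(image_id):
    # Stand-in for one RAW -> JPEG conversion: a deterministic CPU-bound loop.
    total = 0
    for i in range(200_000):
        total += (i * image_id) % 97
    return total

def run_serial(jobs):
    # One core grinds through the whole batch, the other cores sit idle.
    return [encode(j) for j in jobs]

def run_parallel(jobs, workers=8):
    # The same batch fanned out over a pool of worker processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode, jobs))

if __name__ == "__main__":
    jobs = list(range(50))  # "50 unedited RAW files"
    t0 = time.perf_counter()
    serial = run_serial(jobs)
    t1 = time.perf_counter()
    parallel = run_parallel(jobs)
    t2 = time.perf_counter()
    assert serial == parallel  # same output either way; only the wall time differs
    print(f"serial: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s")
```

On an 8-core machine the parallel run finishes several times faster with identical output, and that gap is roughly what I'm seeing between Lightroom and Luminar exports.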
Grab 50 unedited RAW files, open them in Lightroom, and export to 50% quality JPEG. Do the exact same thing in Luminar. Compare the times. Compare the CPU loads and temperatures. It's a reasonable apples-to-apples comparison of input, output, and workload management.
I can work around or accept the things I hate about Adobe. I can't make Luminar faster and more reliable. Please, let's talk timelines for software evolution, multi-threading, and some major changes to Luminar's architecture.
(Team Skylum, please do not mention AI sky replacement anywhere in your response)