
⚠️ Please note that this topic or post has been archived. The information contained here may no longer be accurate or up-to-date. ⚠️

Export Speed / overall performance - Apple M Processors

Comments

28 comments

  • Timotheus Theisen

    Also experiencing this issue and would love to utilize the incredible power of the M machines!

    1
  • Raymond Harrison

    I agree that they can continue to improve resource utilization. However, I’ve seen no massive real-world differences in export time with my own use cases across raw files from 24mp to 150mp. What timing differences are you seeing? Mostly I find them on par. Also, just FYI, there are other posts in the feature request section for this that have existing votes so maybe consider adding your vote there.

    1
  • Timotheus Theisen

Hi Raymond, the problem (to me) is that C1 currently utilizes only a few percent of an M-series chip's computing power, which is a shame.

    2
  • Stefan Grossjohann

I tried to find other feature requests, but searching for "speed" or "apple" didn't show me anything. For work I often have to export 1,000 images. There was hardly any difference between the M1 and the M3 Max. There are various tests online comparing Lightroom and Capture One.

    0
  • Raymond Harrison

    One Feature Request I'd put in is for them to have a proper feature request system and to dump this one (it's truly awful). Here's one request that only has a vote and was more generic: https://support.captureone.com/hc/en-us/community/posts/15225674094877-Performance-GPU-and-CPU-core-usage

I don't think anyone disagrees that they could make better use of the cores. Even Capture One. It's not necessarily true that flooding the cores makes it faster, of course. But as far as YouTube performance comparisons (or others) go, I find that for my own use cases and my own testing (which for me is all that matters), the performance was on par, really regardless of the use of cores (and in at least one case C1 was making better use, but it has been a few months since running the tests so I don't recall specifics). The only metric I cared about (and care about now) is the time it takes to do a task. Would I love to see C1 make more efficient use of system resources? I'm all in :-).

    0
  • Stefan Grossjohann

Overall, your search results are much better than mine. I logged that topic a year ago and nothing happened.

I've shared my link with all my colleagues in the business, so I've started the discussion here… maybe I will run an actual test with Lightroom to get more facts.

    0
  • Raymond Harrison

    If you can, that’s a good way to go and you may indeed find that Lr is faster for your specific use cases. But you can only find that out by trying :-)

    0
  • Stefan Grossjohann

    TEST Export 1248 RAW Files 100% JPG / 50MB each file

    Lightroom 
    CPU 90%
    GPU 90%

6 min 44 sec. (100% of the total, meaning 1% = 4.04 sec)

    Capture One 
    CPU 13%
    GPU 70%

    23 min 47 sec. (353%)

This means Capture One is 253% slower than Lightroom, i.e. C1 wastes a lot of performance.
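For anyone who wants to double-check the arithmetic, here is a small Python sketch (timings taken from the test above; the helper name is my own):

```python
# Compare two batch-export timings and express the slower run as a
# percentage slowdown relative to the faster one.

def to_seconds(minutes: int, seconds: int) -> int:
    """Convert an 'X min Y sec' timing to total seconds."""
    return minutes * 60 + seconds

lr_time = to_seconds(6, 44)    # Lightroom: 404 s
c1_time = to_seconds(23, 47)   # Capture One: 1427 s

ratio = c1_time / lr_time            # ~3.53, i.e. 353% of Lr's time
slowdown_pct = (ratio - 1) * 100     # ~253% slower

print(f"C1 took {ratio:.0%} of Lr's time ({slowdown_pct:.0f}% slower)")
```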

    0
  • Eric Valk

Stefan, to complete the data points, could you confirm which Mac you used for these tests (M3 Max, I see, so it must be the 16-inch MacBook Pro?).
Were all the raw files on the main drive?
Are these all Canon CR3 files? Which camera body?

    1
  • Raymond Harrison

Great test! That's why it's important to do the tests ourselves. Totally different for me: for around 800-ish raws from Fuji, Phase, Nikon and Leica, exporting full-size TIFFs and JPEGs concurrently, I got roughly the same timings for both programs, both for pre-edit exports and for multi-layer (C1) / rough analogs in Lr.

    1
  • Stefan Grossjohann

Apple MacBook Pro 16-inch
    M3 Max
    36GB RAM

    Canon R5 CR3 RAW files located on the main drive.

    0
  • Raymond Harrison

I totally bollocksed my test description above. My memory is obviously crap :-). This is what I posted over in Paul Reiffer's FB group... pictures are over there if interested. Just search on my name and you'll find it.

    ------------

    Performance, Core Utilization and why your own platform matters.
     
    There has been a fair amount of discussion lately here, in Walter's forums, and the Capture One feature request forums too, around core utilization. That is, is Capture One optimized to take advantage of all of the cores available to it? The short answer is "probably not" but the other side of the coin is "what does performance look like on your machine for common tasks given how you set things up"? And what about, "how does it compare to Lr"?
     
    Short answer is C1 on my machine does pretty well regardless of how the cores are utilized.
On my machine, with Lr set to use GPU for export (from custom) and Capture One set for 4K preview generation (3840, the recommended setting) with preview generation optimized for speed (new in 16.3), I ran the following experiment on a maxed-out M1 Max (2021): 10-core CPU (8 performance, 2 efficiency cores), 32-core GPU, Neural Engine, 400GB/s memory bandwidth, 64GB integrated memory, 4TB storage.
    I used 609 images across Fuji (24MP), Nikon (45MP), Phase (100 & 150MP) and I performed an import+preview generation test and two export tests: (1) 2048 long edge JPEGs and (2) 2048 long edge JPEGs + 16 bit uncompressed TIFFs.
Caveat: Adobe ACR/Lr does not support the Phase IQ3100 achromatic digital back, so the Lr totals were for 484 images, 125 fewer than Capture One. I normalized to images/second.
General observations: Lr flooded the CPU cores; C1 and Lr used the GPU more or less similarly, though the Mac GPU and CPU history windows don't show specific numbers and I'm assuming a [0,100] scale on both.
For Lr, 484 images, the import/preview generation time was 4 minutes, 50 seconds (290 seconds), or one image per 0.60 seconds.
For C1, 609 images, the import/preview generation time was 5 minutes, 59 seconds (359 seconds), or one image per 0.59 seconds.
Verdict: roughly the same. Lr really didn't utilize the GPU but did flood the CPU cores; C1 used very little CPU but leaned heavily on the GPU.
For export scenario (1) (2048 long edge, JPEG = 80% quality, no scaling, same ISO profile, etc.): Lr 7 minutes, 12 seconds (432 seconds, ~0.89 seconds/image); C1 5 minutes, 37 seconds (337 seconds, ~0.55 seconds/image).
    EDIT: No edits applied in either scenario. I'll look at ways to test that later (possibly :-)).
    For export scenario (2) (in addition to export scenario (1) added full size TIF 16 bit, etc):
For Lr export of 484 files (×2) for JPEG+TIFF, the total time was 17 minutes, 52 seconds (1072 seconds), or ~0.90 images/second. Caveat for Lr: I had to export sequentially, because either it was pilot error on my part (totally possible) or the long-standing Lr bug of multi-export not working on mixed file types was coming into play.
For C1, 609 images (×2), the total time was 27 minutes, 26 seconds (1646 seconds), at ~0.74 images/second.
    Verdict: On my machine, in this scenario, C1 did a bit better. Perhaps different on your machine.
     

    -------
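Since the two runs above cover different image counts, the comparison only works after normalizing to throughput. A minimal sketch of that normalization (numbers from the post above; the function name is my own):

```python
# Normalize export timings to throughput (images per second) so that
# runs with different image counts can be compared.

def throughput(images: int, total_seconds: float) -> float:
    """Images processed per second."""
    return images / total_seconds

# Scenario (2): JPEG + TIFF, so each source image is exported twice.
lr = throughput(484 * 2, 17 * 60 + 52)   # ~0.90 images/s over 1072 s
c1 = throughput(609 * 2, 27 * 60 + 26)   # ~0.74 images/s over 1646 s

print(f"Lr: {lr:.2f} images/s, C1: {c1:.2f} images/s")
```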

     

    0
  • Stefan Grossjohann

Thank you for your test results. Which version of Lr did you use? The version I used yesterday had no import function anymore (or I didn't find it); it was possible to browse through the folders and export directly, without creating previews. So maybe Adobe has worked on performance since you did your test?

    0
  • BeO
    Top Commenter

    Hi Raymond,

    For export scenario (2) ... Verdict: On my machine, in this scenario, C1 did a bit better. 

    LR exports 0.9 images/second, C1 only 0.74, so shouldn't your verdict be the other way around?

    1
  • Stefan Grossjohann

We should first know which Lightroom version, because I think his test results are much older than mine... otherwise it doesn't make sense at all to compare tests with old versions...

    0
  • BeO
    Top Commenter

    You are right, the versions matter.

Btw., what also matters (potentially) is the raw file format.

I remember that long ago, before the takeover of mirrorless cameras, Canon raw files had a different processing pipeline (in the C1 version of that time) than other cameras' raw files, and not every step in the pipeline was executed on the GPU, and maybe not even in parallel, for Canon files. I think they were slower, but don't quote me on that.

Maybe such differences in processing still persist with newer Canon file formats and C1 versions.

    0
  • Stefan Grossjohann

Maybe you are right. The Canon R5 was released in mid-2020, so almost four years ago. If that made any difference, it would be just as embarrassing as the Apple M support.

Today I also tried auto adjustments for all files, and C1 used 25% CPU and close to 0% GPU. So it seems to me to be a general performance problem.

    0
  • Raymond Harrison

    @BeO Yes! LrC would have edged out C1 by about 5 minutes total had it been able to run the full set. It would have (presumably) taken about 22 or 23 minutes vs ~28 for C1. Funny story, while I have a degree in mathematics, I'm actually horrible with real numbers :-). I obviously (and embarrassingly) need to revisit the results. Damaged ego aside, my particular takeaway was that any performance differences weren't massive one way or the other despite larger differences in GPU/CPU cores. The other takeaway was that yes, C1 can and should make better use of the resources. 

    Stefan Grossjohann It was whatever the current LrC was in Q4 last year.

    0
  • Stefan Grossjohann

    I used Lightroom CC and not classic. Maybe that’s the difference?

    0
  • Stefan Grossjohann

Now I did the same export with Lightroom Classic (13.1), and it was surprisingly slow.

It took 23 min 40 sec., which is roughly the same as C1, but for the same result it taxed my computer more and seemed to block it more: CPU 60%, GPU 65%.

Just to be sure, I did the same export again with Lightroom CC: 6 min 34 sec., CPU 90%, GPU 90%.

And yes, it makes a difference whether you export with or without adjustments. These export times are without adjustments.

Just for completeness: I also tried Adobe Camera Raw, which seemed to behave like Lightroom Classic, and Canon Photo Professional, which was the worst.

So it's really difficult to compare Adobe and Capture One.

Question: why doesn't Capture One use both CPU AND GPU for export?

     

    0
  • Raymond Harrison

    Really interesting the difference between your findings on the two applications, thank you!

    1
  • BeO
    Top Commenter

    Your last screenshot, Stefan, looks to me as if both the CPU and GPU are being used.

Anyway, I doubt that the C1 processing pipeline has embedded "do nothing" instructions, so, at any time, some hardware component will be doing something, maybe copying data from one register to another, or whatever. The same is certainly true for LR or any other batch-processing software.
It just happens that we don't have a monitoring application which shows us this "something" in a nice graphic, so we focus on those parts of the process that we can observe, i.e. the CPU and GPU utilization diagrams. There is also the question of how these diagrams measure what they show (or what they actually show, and don't show, respectively).

There is always one step in any processing chain which is the bottleneck (the weak link): the next step waits for the output of this bottleneck step and sits somewhat idle while waiting. In the end, the overall time is what counts, and the task for developers seeking to improve performance is to identify and rectify such bottlenecks. The problem is, this depends very much on the exact resources and their (performance) abilities (and other parameters, such as Fuji X, Canon or Nikon files, megapixels, etc.), and this may vary between different hardware (M) models.
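The bottleneck reasoning above is essentially Amdahl's law: the serial, non-parallelizable portion of a pipeline caps the overall speedup no matter how many cores are thrown at the rest. A minimal illustration (the serial fraction here is an invented number, purely for the sake of the example):

```python
# Amdahl's law: overall speedup is limited by the serial fraction of
# the pipeline, regardless of how many cores run the parallel part.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If, say, 20% of an export pipeline were serial (decode, I/O, ...):
for cores in (1, 4, 16, 1_000_000):
    print(cores, round(amdahl_speedup(0.2, cores), 2))
# Even with effectively unlimited cores, speedup approaches 1/0.2 = 5x.
```

This is why "flooding the cores" does not automatically translate into proportionally faster exports.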

There is certainly also the aspect of how well C1's processing is tailored to parallel execution on that specific macOS hardware; for those operations which can theoretically run in parallel, there might be room for improvement.

And, in comparison to other software, e.g. LR, there is the aspect of the implemented algorithms. In the end, I am looking at an image and I want it to look like a C1-processed image, unless performance is more relevant to me than how the image looks (which it isn't).

It would be good, though, if C1 (the company) published specific benchmarks, especially for the rather limited number of Apple hardware variations (or some of them), answering questions like "would process x (e.g. export, import, or slider movements) benefit from more GPU cores, performance cores, main memory or whatever, under which parameters (e.g. camera files), and by how much?".

    Only my personal view, everyone else's mileage may vary.

    0
  • Stefan Grossjohann

When you are shooting 3,000 files a day and your art director wants them for his layout the same day as the production, you start investing in the most powerful hardware to manage these requirements. And when you feel no difference from your old hardware, you start to think about what's going wrong. Capture One wants to be super professional for the business, so as a paying customer, I want to get that professional support. I know nobody who works professionally with Windows and Capture One. Apple M has existed for three years, so there was plenty of time to work on it… otherwise they should change their advertising to a consumer/hobby level. No excuses…

    0
  • BeO
    Top Commenter

For what it's worth: I fully agree (with your last post, and about the overall performance, though not necessarily each and every detail), and my post did not contain any excuse.

    0
  • Stefan Grossjohann

    Sorry the "excuses" was more in general. Nothing personally. For a better comparison, I now tested the “Raw Power” software and Apple Photos.

    RAW Power: 14 min 39 sec.

    Apple Photos: 14 min 01 sec.

It's incredible that Lightroom CC reaches this performance.

    0
  • BeO
    Top Commenter

Having approx. 250k customers, the vast majority running Mac machines, I really think they should put more effort into benchmarking, publishing results and specific recommendations, and investigating and improving performance on a couple of in-house Apple test machines. (I wouldn't expect the same for Windows machines, as there are too many vendors and variations, though I am a Windows user myself.)

    1
  • Stefan Grossjohann

You are totally right. Hopefully they will keep an eye on performance.

    0
