PERFORMANCE ON M2 TERRIBLE !!!! – FIX PLEASE!
THE PERFORMANCE OF CAPTURE ONE ON A BRAND-NEW MAC M2 MAX IS APPALLING!!!! I HAVE A MAXED-OUT MACBOOK PRO AND THE PROGRAM RUNS LIKE IT'S ON WINDOWS 95!!!
YOU GUYS NEED TO DO SOMETHING ABOUT IT. IT'S UNBELIEVABLE HOW BAD IT IS ON A 6-GRAND, 96GB RAM MACHINE.
THANKS
-
craig stodola Brian Jordan We're all working toward identifying the issue and getting it fixed. Let's take a step back for a second.
I still can't see that you've submitted a support request, Craig. So I've made one for you from one of your comments.
Hopefully the support team will get what they need and they can continue to work with R&D to see where the issue lies. Cheers!
0 -
Can confirm we're actively looking into what you provided Andrew Nemeth – thank you.
0 -
In the spirit of contributing to the community, would it be possible for C1 to provide a test set and script we could run? I'm thinking along the lines of, e.g., 500 RAW and JPEG images of various sizes and a written script with several tasks (imports, exports, but possibly also moving sliders, copying adjustments to all pictures, merging a panorama, etc.). This could even be scripted in AppleScript so there is little need for manual work by the tester, and it would provide the developers with valuable data from a wide range of configurations.
I am including tasks other than just imports and exports here because it seems to me that individual manipulations, if they are buggy on some SoC machines, are more relevant for many users than raw export speeds (even if, for some of us, the latter is an equally important bottleneck).
I'd be happy to run those tests with screen recordings on my two machines (a 2015 27" 5K iMac with 40 GB RAM and a 2022 M2 MacBook Air with 24 GB RAM) – a rough sketch of the idea is below.
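To make the idea concrete, here's a rough sketch (mine, not anything official from C1) of what the timing wrapper could look like in Python. The AppleScript body is a placeholder – the real test package would fill in the actual Capture One scripting-dictionary commands:

    import csv, platform, subprocess, time

    # Placeholder: the real script would drive Capture One's scripting
    # dictionary (imports, exports, slider moves, etc.) -- the commands
    # here are hypothetical, to be filled in by whoever builds the package.
    APPLESCRIPT = '''
    tell application "Capture One"
        -- hypothetical: run the agreed-upon test task here
    end tell
    '''

    def run_once() -> float:
        # Run one scripted task via osascript; return wall-clock seconds.
        start = time.perf_counter()
        subprocess.run(["osascript", "-e", APPLESCRIPT], check=True)
        return time.perf_counter() - start

    def main() -> None:
        timings = [(i, run_once()) for i in range(3)]  # three repeats
        with open("c1_bench.csv", "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["machine", platform.machine(), platform.mac_ver()[0]])
            w.writerow(["run", "seconds"])
            w.writerows(timings)

    if __name__ == "__main__":
        main()

Every tester would send back the CSV plus the screen recording, so the dev team gets uniform numbers instead of anecdotes.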
Just my 2c
2 -
Erik V Sounds interesting, thanks for the suggestion. I'll raise it with our release manager.
1 -
We have raised this with the R&D team using the information provided from support requests in this thread. Let us know if you are experiencing similar behavior and thanks for the help on this one!
0 -
We've managed to reproduce this behavior in-house. It has been assigned to an R&D team to work on.
0 -
Erik V - are we suggesting Art Is Right is falsifying his data during a 1000-image 45MP export, especially when compared to Lightroom? Are we also suggesting my screen capture is falsified, indicating poor core usage during export? Art's performance numbers for Capture One Pro align with my 24MP experience – though, to be fair, my performance is a bit lower than Art's, as I have exposure, WB, noise reduction, and contrast adjustments (which may or may not slow down the export).
Art's performance for LR appears to align with Adobe user experiences. There is room for significant C1 export improvement here – for the software developers to utilize multiple cores, the GPU, and RAM efficiently when performing these tasks. Adobe's LR CC performance already shows the path.
I submitted a screen capture video of my slider delay. You may not think the delay is a big deal if you're used to it being jumpy. But when you've used C1 on a Windows 10 Pro workstation, where the sliders are silky smooth with no delay and don't jump at all... it's very evident that performance on Apple hardware is not optimized.
0 -
craig stodola Hi Craig, I am not implying anything with regard to your or Art Is Right's facts – as should be clear from my previous posts. I understand your frustration – and that of others who suffer abysmal performance in some parts of the app. I myself am postponing a decision on buying an M2 – or even M3 – desktop because of the lack of clear communication – see below.
What I did suggest is that C1 prep a scripted package that allows us to perform controlled, uniform tests with a set of files (a few hundred would be enough, in different formats) and a procedure to set up the environment. The resulting screen recordings would be sent back together with all relevant system info. Art Is Right is performing quite a number of tests, but from a dev perspective you need many more – e.g., individual image manipulations involving the sliders, as your data shows. Such an approach would give C1 test data that is not only standardized but also prepped by them, generating irrefutable intel from dozens of configurations.
I have been involved in large development projects for two decades. From that experience, I know developers – and project managers and quality control people – need to be fed unquestionable, detailed, and structured data.
Preparing such a package would require somewhere between 10 and 20 man-days (including research, scripting, copywriting, project management, etc.). Ideally, there would also be a user poll to collect information on the most prominent problems, and even a few Zoom interviews with power users, including Art Is Right. This is a limited effort that would result in a gold mine of data, leading to better prioritization, faster bug fixing, and thus higher customer satisfaction. As I've said in earlier posts in this thread, non-communication and uncertainty are worse than realistic expectation management. A clear and concise dashboard in the support section might do wonders, and it's not that difficult to set up.
3 -
Erik has a good point. The power of any analysis depends on the number of trials run. If a large cohort of users runs the test package, the vast majority should get the results expected by Capture One, assuming the beta testing was robust enough to cover the elements of the test package. Exceptions should fall into two categories: obvious problems affecting all users, or problems affecting a few. In the case of the few, a standardized test should reveal a common thread behind a problem (e.g., hardware and software interactions). Perhaps beta testing should include this sort of systematic testing.
1 -
What's the "expected result" by Capture One. Right now, the expectation is slow GPU only acceleration. Everyone exports 500 files to full res JPEGs that takes 8-12 minutes, then what? That's still 2x slower than 1000 files for Adobe Lightroom.
Showing a standardized test vs. a competitor shows more of what's wrong with the Capture One rather than regurgitating the same expectations we've expected since 2019. Little to no multicore CPU acceleration.
We've already shown the sliders are jumpy. Like I said. If you're used to jumpy sliders, and you don't know what a smooth slider response feels like (Mac vs. PC), how do you quantify, much less qualify your experience?
Testing C1 against itself with little to no other expectations is pretty fruitless. It's literally "spinning the same wheels", and expecting different results.-3 -
Regarding "expectations," I was referring to Capture One telling us candidly what to expect of its product and what they expect to impact performance.
There are two issues. One is general limitations affecting all or nearly all users on similarly configured hardware. These are due to inherent limitations of the software, like Adobe exporting JPEGs 4 times faster than Capture One. You certainly don't need to run tests on a large number of computers to see the difference in speed; Art Is Right's report is plenty robust for this, and other sources show similar results. It's physics.
The other issue is the individual problems that a limited number of users have, as reflected in posts on this site. These problems are often hard to diagnose and affect relatively few users. For these, standardized testing would be more likely to show whether there is a common thread.
0 -
The proposed approach for standardized testing is equally applicable to LR (run by C1 itself, on a battery of machines, obviously) as to C1 (with the help of some willing community members). Of course, the AppleScript would need to be adapted for the UI differences. As I mentioned in a previous post, this approach is not because current user reporting is wrong or unusable, but to avoid internal discussions – I have seen this happen in dev teams – and to speed up the resolution of issues. Having such a methodology would also benefit the development team internally for beta-testing purposes, as well as the user base.
What I am curious about is the support for Intel-based machines. I also mentioned this in one of my previous posts. Maybe C1 could communicate how long they will support Intel in new major releases (previously dubbed 21, 22, 23, etc., though they no longer adhere to that naming convention).
Another point is whether continuing support for Intel-based Macs limits the performance of the Apple SoC version for architectural and legacy-code reasons.
Even if this is not the case, I would imagine that developing and maintaining code for two hardware platforms requires substantial additional overhead that could otherwise be used to optimize the Apple SoC branch and speed up bug resolution. Since the Neural Engine does not exist on the Intel platform, making use of it – or not – is another big differentiator that would require two quite different code bases. With the increasing use of machine learning, I would guess it is very inefficient not to use the Apple SoC capabilities on that front...
0 -
They already maintain code for three different hardware platforms with different operating systems: Intel-based Windows PCs, Intel-based Macs, and Apple Silicon Macs. If the Mac continues to be the computer of the majority of Capture One users, might Windows PC users also have cause for concern? Isn't it likely that when the cost of maintaining the code for users of a certain computer offsets the profit margin for doing so, development will stop for those computers?
0 -
Jerry C You have a valid point, but Apple is pushing people to abandon Intel. Note that I'm still using a Late 2015 quad-core i7 5K iMac with 40 GB RAM – and I'm quite happy with it. I'm just saying that at some point it might not be a bad idea to let go of Intel-based Macs for future major updates, so C1 is free of legacy code AND can better leverage the Apple SoC features that Intel lacks. Maintaining just one Apple platform is, after a certain point, not going to hurt revenue (from Intel Macs), as most users will have moved on to Apple SoC. But it might have a large positive impact if the Apple SoC version blows LR out of the water because the hardware can be fully utilized, thereby gaining – and keeping – market share.
0 -
To add to my previous point, I'm holding off on switching to a subscription model and buying a powerful M2 or M3 machine because of the reported bugs. I'm not going to spend 3-4k if I hardly see a difference from my 8-year-old Mac or if, as reported here, performance might be worse or buggy in some respects. It's frustrating to be stuck in this place where I don't want to go to LR but cannot fully benefit from C1...
0 -
Eric Vandeveld wrote:
Not going to spend 3-4k if I hardly see the difference with my 8-year-old Mac
That's what I'm finding. I had a 2018 Intel Mac mini (hexa-core i7) linked via TB3 to an eGPU with an AMD 6700 (8GB) graphics card, and 64GB of RAM.
I now have a 2023 Mac Studio M2 Max: 12 CPU cores & 38 GPU cores, 64GB of RAM.
I cannot see any C1 Pro performance improvement on the new machine. C1 barely touches the 12 cores, and most things I do run at the same speed as on the 2018 Mini =/
Other programs… there is a huge performance boost, e.g. PTGui, which I use for stitching 100-200 MP panoramas, runs at DOUBLE the speed.
And yes, I downloaded the recent C1 Pro v16.3.2.32 released yesterday.
0 -
Strange, that wasn't me posting that. It was someone named Erik V, yet it's attributed to me? I've never posted here!
PS. C1 works great on my M2. :D
1 -
Sorry about that. I put Eric V in for the quote and it suggested your name.
0 -
Ah, that makes sense. I was a bit concerned that I was hacked for a minute. Thanks for letting me know. :) Good luck with getting your issue fixed.
1 -
Maybe I should have said it this way: if Capture One can support so many different operating systems on so many devices, maybe they can code and chew gum at the same time. Improving the code for Mac Silicon should not depend on eliminating the Intel version. The coding issues were there before Mac developed its own chips and will persist until Capture One finds it more profitable to adjust the code than to leave it the same. "Nothing happens until the pain of remaining the same outweighs the pain of change" (Arthur Burt).
As to my experience with Mac Silicon, I have a 24-core CPU / 60-core GPU Mac Studio M2 Ultra with 128GB RAM. I bought it to compensate for Capture One's coding gaps, and it does, to a large extent. If I scrub the exposure slider back and forth, there is activity on all cores. With exporting, there is substantial activity on all cores, but exporting could still be improved a lot, and the momentary blurriness when moving sliders at 100% magnification could be eliminated.
0 -
and the momentary blurriness when moving sliders at 100% magnification could be eliminated.
So you suffer from this even with a high-spec M Mac? I thought it might only affect Windows users...
0 -
I have an Intel MacBook Pro 2019 running Ventura and Capture One 16.3.2. 16.3.1 ran well. 16.3.2 seems to cause high CPU usage sporadically.
0 -
BeO, So you suffer from this even with a high-spec M Mac? I thought it might only affect Windows users...
Yes, since version 20, although it diminished with version 16 and is hardly noticeable on the M2 Mac Studio Ultra. It is irritating to see, but it has little practical effect on usage.
Do the new features require so much more computational overhead that it exposes the consequences of a lack of optimization? Is "hardware acceleration" an oxymoron? You have to wonder when the most common advice for some problems is to turn off hardware acceleration. To me, this needs attention before any more features are added, but features drive sales.
0 -
As a data point (not to deprecate anyone else's experience), I observe no jerkiness on my system.
I am running Ventura 13.6.1 on a Mac Studio with an M1 Max (10 CPU cores and 32 GPU cores), 64GB of RAM, with catalogs (referenced) and images on an external SSD (USB connection).
Dual-monitor, dual-C1-window configuration; the viewing monitor is a BenQ SW2700 @ 2560x1440.
Using Capture One 16.3.1, I chose the largest image I have, which is a 151MP IIQ image from a Phase One IQ4 back (downloaded).
I also tried my other image files (TIFFs, RW2, ORF) with no issue.
Using iStat Menus, I observe that when I rapidly move the Exposure slider back and forth from min to max, there is a burst of GPU activity up to about 40% and a burst of CPU activity to maybe 20%. No jerkiness.
Nothing untoward here, I'm happy with the performance.
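If anyone wants to log comparable numbers during a slider scrub, here's a rough sampler sketch in Python (it assumes the third-party psutil package and only covers CPU – per-process GPU usage isn't exposed this way, so keep iStat Menus open alongside it):

    import time
    import psutil  # third-party: pip install psutil

    DURATION_S = 15   # scrub the slider during this window
    INTERVAL_S = 0.5

    samples = []
    end = time.monotonic() + DURATION_S
    while time.monotonic() < end:
        # Average utilization across all cores since the previous call.
        samples.append(psutil.cpu_percent(interval=INTERVAL_S))

    print(f"samples:  {len(samples)}")
    print(f"avg CPU:  {sum(samples) / len(samples):.1f}%")
    print(f"peak CPU: {max(samples):.1f}%")

Running the same window on different machines would make the "bursty vs. pegged" comparisons in this thread a lot more concrete.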
0 -
When I said the jerkiness was hardly noticeable on my system, which is also a dual-monitor setup, I really meant "hardly." I do not always see it with 45MP CR3 RAW images, and I see none at all with JPGs and TIFs. I too get a big increase in GPU activity, and CPU activity on all cores, when scrubbing a slider back and forth. I see no difference whether hardware acceleration is on or set to auto. It is noticeable when browsing from one image to another (other than the next one in a sequence, because that is preloaded) as a very brief but noticeable delay (under a quarter second) before the image displays at full resolution. None of this is something I would pay extra to make completely go away.
I think the big message is that jerkiness or transient blurriness is system dependent and probably also depends on the other demands being put on the CPU/GPU. That said, functioning hardware acceleration should make a difference, especially on older systems that depend on it.
Also, when hardware acceleration is set to auto, what about it is automatic? Does that mean it is used when needed and not otherwise? How would this be determined? If so, would the hardware acceleration test fail when it is automatically off, or is this just a bug in version 16.3?
0 -
One thing that I've noticed with regard to performance issues on Macs and 4K screens is simply the size of the preview window. For example, size down your Capture One application window until the working area of your main image is roughly 6 inches by 4 inches (the idea here is something much smaller than you would actually want to work in). I've noticed that the smaller the working area, the smoother and more responsive the sliders are. I even captured a screen recording for support to show this issue.
Also, under your display settings, if you reduce the scaling of your screen to the minimum – where everything looks super small – this also helps. To me this suggests it's something to do with scaling of the preview image and interface.
Adjusting the image preview size under Preferences does very little to fix this issue. To me, the performance lag seems to be an issue with scaling. I have submitted a couple of tickets over the last year or so, since this also happens on my 2019 Mac Pro with dual Vega graphics cards. So far it seems to have gotten only slightly better.
Has anyone else noticed this? Give it a try and see if you can reproduce it – a quick way to script the window resize is below.
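For repeatable tests, something like this snaps the window to a small size (a sketch; it assumes the process is named "Capture One" – check Activity Monitor – and macOS will ask for Accessibility permission the first time):

    import subprocess

    # Resize Capture One's front window via System Events (Accessibility).
    script = (
        'tell application "System Events" to tell process "Capture One" '
        'to set size of front window to {900, 650}'
    )
    subprocess.run(["osascript", "-e", script], check=True)

0 -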
I can confirm that a large part of the issue – if not the biggest part – appears when using my Apple 6K Pro Display XDR.
I am now on a new 14" MacBook Pro M3 Max, and I have lagging sliders when connected to the 6K display but no lag on the built-in 14" display. So for me the issue is primarily tied to screen resolution.
I hope this helps with a fix.
0 -
I just submitted another support request for this ongoing issue.
I would encourage others to do so if they haven't already. The only reason I don't move away from Capture One is that it has a few unique features that other programs don't. If they would just improve the performance of the interface, they would have a very solid program.
0 -
I also face issues when running C1 on my Pro Display XDR. An interesting observation: if you change Preferences → General → Hardware Acceleration (Display) to Never, the framerate of the slider refresh becomes much faster.
0 -
Hello everyone, I would now like to take part in the discussion. I also just created a support ticket. For fashion and ecom shoots we produce 1000-2000 images every day, and we have been doing this for years – they all have to be exported. I'm currently working with the M3 Max, and I was hugely disappointed that the performance wasn't noticeably different from the M1 Pro. It's sad that after 3 chip generations the hardware is still not being used properly. As a professional user, I expect more from high-priced software. And it's a bad joke that we even have to discuss how long it takes to export 1000 images.
I haven't read every single post, so I hope this is not a double post: when I turn off GPU support, it starts using the CPU instead. So it seems there is no combined CPU-and-GPU usage.
Spec:
Sonoma 14.3, Apple M3 Max, 36GB RAM, Capture One 16.3.4.5
I did a test export of 1248 RAW files to 100% JPG (~50MB per file):
Lightroom:
CPU ~90%
GPU ~90%
Time: 6 min 44 sec = 404 s (the 100% baseline, so 1% = 4.04 s)
Capture One:
CPU ~13%
GPU ~70%
Time: 23 min 47 sec = 1427 s (353% of the baseline)
That means Capture One is 253% slower than Lightroom, which means C1 wastes a lot of performance (maybe in all functions...) – see the quick check below.
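For anyone double-checking the arithmetic, a quick sketch:

    # Quick check of the percentages above.
    lr_s = 6 * 60 + 44   # Lightroom: 404 s (baseline = 100%)
    c1_s = 23 * 60 + 47  # Capture One: 1427 s

    print(f"1% of baseline: {lr_s / 100:.2f} s")  # 4.04 s
    print(f"C1 vs. LR: {c1_s / lr_s:.0%}")        # ~353%, i.e. ~253% slower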
1