Exporting benchmarks v2
There is a thread attempting to collect CO export benchmark statistics on different machines,
so that everyone can use this information to build or buy a good computer for the CO workflow.
I suggest improving this benchmark with a fixed set of RAW files, so that the collected data will be more accurate: it will depend mostly on the computer specification (CPU and GPU first), not on the RAW file type (Canon, Nikon, Sony and all other RAW types affect export speed).
So here is the benchmark algorithm:
-
Download the following zip with RAW files and unpack it to a folder:
50 photos taken with a Canon 5D Mark III, about 1.3 GB in total. These photos were taken by me, so no property rights are violated.
-
Start CO and ensure that GPU acceleration is enabled: menu Edit - Preferences - General tab - Hardware acceleration - Processing set to Auto (if it was previously set to "Never", you need to restart CO for the change to take effect).
- Open the RAW files downloaded in step 1 and wait until they are fully imported (previews are built by CO).
-
Set up the following export parameters:
Format: JPEG
Quality: 100%
ICC Profile: sRGB IEC61966-2.1
Resolution: 300 px/in
Scale: Fixed 100%
Open with: None
⚠️ It is very important to set up the export with these parameters! Otherwise the benchmark results won't be reliable! -
Make the first export (GPU accelerated), noting the duration of the process.
At least one export run is needed, but it would be great if you can repeat it 2 or 3 times, because results may vary depending on disk caching, background processes and so on. Take the shortest duration - that is the first benchmark result.
-
Disable GPU acceleration: menu Edit - Preferences - General tab - Hardware acceleration - Processing set to Never. Restart CO.
-
Make the second export (no GPU acceleration) in the same way as in step 5.
That will be the second benchmark result.
-
Post in this thread using the following format:
- Computer type (PC/Mac) and model (if any) and OS version
- CPU+GPU - benchmark time 1
- CPU only - benchmark time 2
- CPU model
- GPU model
- CO version
You can also run a second benchmark for the TIFF export format - it would be a good addition to the JPEG benchmark. The current version of CO (11.0) is limited in performance when exporting to JPEG due to internal algorithms, so TIFF export will show better results in some circumstances while consuming more hardware power. Set up these TIFF export parameters:
. . .
Here is a Windows utility I wrote that makes it easy to calculate benchmark results: it analyzes all files in the selected folder and automatically calculates the time span between the first file being opened for writing and the last file being modified.
. . .
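For readers not on Windows, the same measurement can be sketched in a few lines of Python. This is an illustrative stand-in for the utility above, not its actual source; the function name and folder handling are my own. It scans a folder of exported files and reports the span between the earliest and latest modification times.

```python
import sys
from pathlib import Path

def export_span_seconds(folder):
    """Return the span, in seconds, between the oldest and newest
    modification times of the files in `folder`."""
    mtimes = [p.stat().st_mtime for p in Path(folder).iterdir() if p.is_file()]
    if len(mtimes) < 2:
        raise ValueError("need at least two exported files to measure a span")
    return max(mtimes) - min(mtimes)

if __name__ == "__main__" and len(sys.argv) > 1:
    print(f"{export_span_seconds(sys.argv[1]):.1f}s")
```

Note one caveat: the span starts when the first file appears on disk, so it slightly undercounts by missing the processing time of the first image - presumably why the original utility uses the first file's open-for-writing time instead.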
Suggestions for benchmark improvements are very welcome!
Hope my post doesn't look too rigorous.
The only goal of this thread is to help each other with selecting hardware components for best CO workflow experience.
-
garrison wrote:
What do you think about adding 'format type' column - tiff or jpeg (see my post above)?
Done! (I hope I didn't mess it up - off to sleep now!)
-
Maybe I should have sprung for a 1080 Ti over my 1080, but it's interesting to see the gains in the TIFF times.
gnwooding wrote:
In light of what everyone else has posted on TIFF results being more accurate for testing GPU's I decided to run a couple of additional tests.
I left my i7 5820k OC at 4.4GHz and I used a 12GB RAM disk to do the tests on (to ensure no disk bottleneck).
Single GTX 1080ti : 24s
2 x GTX 1080ti : 16.5s
What I observed is that when doing TIFF files at least my CPU usage is 100% across all cores so I am definitely CPU limited.
With a single GTX1080ti GPU usage peaked at 75%.
When using both cards GPU usage only peaked at about 45% on each card and it was less constant. I also notice that the cards only bother running at 1.5GHz since they only see a light load (in games and other GPU computational applications the GPU core runs at 2GHz on my cards).
So clearly even when exporting TIFF files you need a very powerful CPU to take advantage of a powerful GPU.
-
gnwooding wrote:
So clearly even when exporting TIFF files you need a very powerful CPU to take advantage of a powerful GPU.
Well, that depends. In this case, with a lot of fairly small files, the CPU is indeed very busy feeding the GPU(s). If processing, say, 100 MP IQ3 files, you would not see the same load on the CPU, as fewer files are processed per second. The bottleneck moves around depending on megapixels, you could say.
-
i9-7980xe overclocked to 4,2ghz
amd vega fe
c1 11.0.1
win10 1703
JPEG
45sec GPU
82sec CPU
TIFF
19sec GPU
64sec CPU
...it was interesting to see that there is very small difference compared to i9-7900x.
Storage is NVME Optane 900p and MegaRaid 9460-16i with SATA SSD and RAID10 SATA. Results were same with Optane 900p and SATA SSD.
update: if i overclock mesh from 24 to 30, tiff and jpeg cpu time drops to 77sec and 56sec.
-
NNN636355020530937144 wrote:
i9-7980xe overclocked to 4,2ghz
amd vega fe
c1 11.0.1
win10 1703
JPEG
45sec GPU
82sec CPU
TIFF
19sec GPU
64sec CPU
...it was interesting to see that there is very small difference compared to i9-7900x.
Storage is NVME Optane 900p and MegaRaid 9460-16i with SATA SSD and RAID10 SATA. Results were same with Optane 900p and SATA SSD.
update: if i overclock mesh from 24 to 30, tiff and jpeg cpu time drops to 77sec and 56sec.
NVMe and SATA SSD are the same? I was thinking of getting a 1TB Samsung 960 Pro for my hot files because of the higher performance; I'm using a 960GB SSD right now.
I'm testing PrimoCache right now for my HDDs that I use now and then, with an SSD as a cache drive.
I was also using a RAM disk for read and write, but performance only goes up by 1%.
Will post my system today, with a P5000 Quadro GPU, about as fast as a 1080 (ECC enabled).
-
Tom, did you consider adding an identical SSD in RAID0?
You'll get twice the speed and storage space.
(I'd backup the important files to another location in case of a failure, which I never had but just in case)
-
WPNL wrote:
Tom, did you consider adding an identical SSD in RAID0?
You'll get twice the speed and storage space.
(I'd backup the important files to another location in case of a failure, which I never had but just in case)
For me, SSD RAIDs are outdated; a Samsung NVMe M.2 SSD in a PCI-E slot gives me up to 3500 MB/s read and 2100 MB/s write.
Two SanDisk 960GB drives will give me 1000 MB/s read and write.
And NVMe SSDs are made for parallel workloads; SATA SSDs are not.
-
____
Win 10, 64Bit, 1709
Capture One 11.0.1
CPU: 7820X / GPU: Quadro P5000
CPU+GPU: 43 Sec. (Jpeg) (GPU ECC on)
CPU: 78 sec. (1:18 Min)(Jpeg)
CPU+GPU: 24 Sec. (Tiff 8 bit) (GPU ECC on)
CPU: 60 Sec. (Tiff 8 bit)
CPU: 63 sec. (Tiff 16 bit)
Time measured with Adobe Bridge: input time of the last image minus input time of the first image.
SSD Sandisk Extreme Pro 960GB (Input & Output)
Mainboard MSI Gaming Pro Carbon
64GB Ram, Gskill 2666 (15.15.15.35-Latency)
Samsung 960 Pro 512GB, OS PCI-E NVMe SSD.
I7 7820X, 4.6/4.3/4.3/4.3/4.3/4.3/4.3/4.6 (Ghz Cores Boost)
(Max 150 W power draw allowed (short and long duration), 2.5 GHz mesh)
Max temp during bench: 72°C (Core 7, hottest) / 55°C (Core 0, coldest)
Cooler: Prolimatech Genesis (Liquid Ultra, as Thermal Compound)
GPU max 54°C, max load 82%, did hit the performance limit.
Measured with HWinfo.
Switched from Vega FE to P5000 last week; the Vega was the buggiest GPU I've ever had. I also had problems with my two FirePro W8100s - after 4 years of trying Radeon I will never go back to it.
The P5000 is rock stable, and I can't even hear it.
-
NNN636355020530937144 wrote:
i9-7980xe overclocked to 4,2ghz
amd vega fe
c1 11.0.1
win10 1703
JPEG
45sec GPU
82sec CPU
TIFF
19sec GPU
64sec CPU
...it was interesting to see that there is very small difference compared to i9-7900x.
Storage is NVME Optane 900p and MegaRaid 9460-16i with SATA SSD and RAID10 SATA. Results were same with Optane 900p and SATA SSD.
update: if i overclock mesh from 24 to 30, tiff and jpeg cpu time drops to 77sec and 56sec.
Definitely seeing a line of diminishing returns here. Wonder if it's at 10 or 12 cores. The kid spending on a 16 core Threadripper for C1P11 might be a little disappointed. Still interested in seeing what happens with his build.
-
7820x
8GB Sapphire Radeon RX 580 Nitro+
Tiff CPU+GPU = 25.8s
-
Windows PC (Win7)
C1 version - 11.0
CPU - Intel Core i7 4770 (4-core, 8-threads, 3.40 GHz)
GPU - NVidia GTX 660
JPEG
CPU+GPU - 1:24 (84s)
CPU only - 2:33 (153s)
The GPU speed up factor is about 1.82x.
TIFF
CPU+GPU - 1:05 (65s)
CPU only - 1:56 (116s)
The GPU speed up factor is about 1.78x.
-
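The speed-up factors quoted in these posts are simply the CPU-only time divided by the CPU+GPU time; a tiny helper makes the arithmetic explicit. The numbers below are the GTX 660 results from the post above.

```python
def gpu_speedup(cpu_only_s: float, cpu_gpu_s: float) -> float:
    """Speed-up factor gained by enabling GPU acceleration."""
    return cpu_only_s / cpu_gpu_s

# GTX 660 results from the post above
print(round(gpu_speedup(153, 84), 2))  # JPEG: 1.82
print(round(gpu_speedup(116, 65), 2))  # TIFF: 1.78
```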
CraigJohn wrote:
Definitely seeing a line of diminishing returns here. Wonder if it's at 10 or 12 cores. The kid spending on a 16 core Threadripper for C1P11 might be a little disappointed. Still interested in seeing what happens with his build.
It seems that currently the i9-7900X with a 1080 Ti is the best combination for C1. It's interesting that even with all cores at 100%, the difference between the 7900X and the 7980XE is still so small (in CPU-only benchmarks). Maybe with CPU+GPU tests, multiple GPUs on the 7980XE will give better results...
-
Tom-D wrote:
Switched from Vega FE to P5000 last week; the Vega was the buggiest GPU I've ever had. I also had problems with my two FirePro W8100s - after 4 years of trying Radeon I will never go back to it.
The P5000 is rock stable, and I can't even hear it.
I had exactly the opposite experience, moving from Quadro (the last 10-15 years on Quadro) to Vega FE (I also had a lot of problems with past FirePro cards). What kind of problems did you have?
-
Tom-D wrote:
For me, SSD RAIDs are outdated; a Samsung NVMe M.2 SSD in a PCI-E slot gives me up to 3500 MB/s read and 2100 MB/s write.
Two SanDisk 960GB drives will give me 1000 MB/s read and write.
And NVMe SSDs are made for parallel workloads; SATA SSDs are not.
I see, thanks for the explanation!
-
Did anybody try what the effect is when disabling the integrated graphics on their Core-iX ... ?
I went from 50 seconds (ON) to about 40.2 seconds (OFF).
I had it turned off and figured letting the IG help might be faster, but the opposite seems to be the case.
Tested several times; it was no one-time event.
Edit: Of course I mean only the CPUs with integrated graphics.
-
StephanR wrote:
Here the results with my extra program:
TIF uncompressed 8 Bit -> open with to jpg CPU+GPU - 27s
TIF uncompressed 8 Bit -> open with to jpg CPU only - 88s
As you can see, the speed with GPU is only 1s slower than the TIF conversion alone.
Even the CPU-only test (all cores running around 100%) is faster than the JPG conversion in CO.
And as you can see, my AMD 280X is very old in comparison to an NVidia 1080.
Curious: if you run your program, what is the file size (KB etc.) vs. running the same image at 100% through C1?
If it's a smaller file, I'm curious what % on the scale out of C1 makes them the same size, and then what the times are for that %.
Hope that makes sense.
-
NNN636355020530937144 wrote:
Tom-D wrote:
Switched from Vega FE to P5000 last week; the Vega was the buggiest GPU I've ever had. I also had problems with my two FirePro W8100s - after 4 years of trying Radeon I will never go back to it.
The P5000 is rock stable, and I can't even hear it.
I had exactly the opposite experience, moving from Quadro (the last 10-15 years on Quadro) to Vega FE (I also had a lot of problems with past FirePro cards). What kind of problems did you have?
That's funny.
I'm working in 10-bit (Photoshop). After turning 10-bit on, a color in Bridge (the loading bar), HWInfo (selected data) and Photoshop (a menu) turned orange instead of blue.
After installing Adrenalin driver 17.12.1(2) I could not install other drivers without this failure.
Even my FirePros were affected.
And my second monitor flickers with the Vega FE; after switching to the Quadro and using the Vega FE at home, the flickering was also present on my gaming PC.
It went to RMA and now I have my money back.
Now with the P5000 everything is working fine, and I can reinstall drivers without turning 10-bit off. With Radeon this often caused the color failure, but I could fix it by reinstalling the driver, turning 10-bit off and then back on after the install.
-
Tom-D wrote:
That's funny.
I'm working in 10-bit (Photoshop). After turning 10-bit on, a color in Bridge (the loading bar), HWInfo (selected data) and Photoshop (a menu) turned orange instead of blue.
After installing Adrenalin driver 17.12.1(2) I could not install other drivers without this failure.
Even my FirePros were affected.
And my second monitor flickers with the Vega FE; after switching to the Quadro and using the Vega FE at home, the flickering was also present on my gaming PC.
Just to clarify: did you have this problem with the Enterprise drivers as well?
I tested the same thing on my workstation and did not have any problems with the Enterprise drivers. I have not tried the Adrenalin drivers.
I've had only one hiccup in my 2-3 months of use: once, after an update, Photoshop was unable to use GPU acceleration and disabled it during startup. I restarted Photoshop, re-enabled GPU acceleration, and then it worked fine.
I have not used dual monitors, as C1 slowed down noticeably when I tested that briefly (now running a single 4K screen).
-
WPNL wrote:
Did anybody try what the effect is when disabling the integrated graphics on their Core i7/i9 ... ?
I went from 50 (ON) seconds to 40(,2ish) (OFF) seconds.
I had it turned of and figured letting the IG help it might be faster but the opposite seems to be the case.
Tested several times and it was no one-time-event.
A somewhat grounded theory: the bottleneck in your system is the CPU, and throttling it with its iGPU enabled is what's pushing your numbers up.
-
NNN636355020530937144 wrote:
Tom-D wrote:
That's funny.
I'm working in 10-bit (Photoshop). After turning 10-bit on, a color in Bridge (the loading bar), HWInfo (selected data) and Photoshop (a menu) turned orange instead of blue.
After installing Adrenalin driver 17.12.1(2) I could not install other drivers without this failure.
Even my FirePros were affected.
And my second monitor flickers with the Vega FE; after switching to the Quadro and using the Vega FE at home, the flickering was also present on my gaming PC.
Just to clarify: did you have this problem with the Enterprise drivers as well?
I tested the same thing on my workstation and did not have any problems with the Enterprise drivers. I have not tried the Adrenalin drivers.
I've had only one hiccup in my 2-3 months of use: once, after an update, Photoshop was unable to use GPU acceleration and disabled it during startup. I restarted Photoshop, re-enabled GPU acceleration, and then it worked fine.
I have not used dual monitors, as C1 slowed down noticeably when I tested that briefly (now running a single 4K screen).
I didn't use the gaming drivers; they don't have a Deep Color option for OpenGL (the gaming drivers do offer Deep Color for DirectX).
Never, never install the Adrenalin Pro driver - before it, all was fine.
Our second workstation, same as mine but with a FirePro W8100, also deactivates the GPU in Photoshop now and then, and Lightroom also has problems now and then (freezes, hangs).
We will change to a P4000 on our second workstation.
-
WPNL wrote:
Did anybody try what the effect is when disabling the integrated graphics on their Core i7/i9 ... ?
I went from 50 (ON) seconds to 40(,2ish) (OFF) seconds.
I had it turned of and figured letting the IG help it might be faster but the opposite seems to be the case.
Tested several times and it was no one-time-event.
I don't believe the Skylake-X CPUs have integrated graphics.
-
Sorry to sound like a goofball here ... but how are people collecting these very precise times? Are you just using a simple stop watch or is there something in C1 that reports this?
-
6BQ5 wrote:
Sorry to sound like a goofball here ... but how are people collecting these very precise times? Are you just using a simple stop watch or is there something in C1 that reports this?
I just use my phone timer - click together, and when I see it stop, click again. That's why I joked about finger lag, but in reality it should not be off by more than a second, and if you do it 3x you get a good average; I just round to the nearest second, which is good enough.
Way back in the day I did a bunch of LR tests comparing how usable the sliders were; I did video capture and used the timecode, which is a solid way of doing things.
On, say, 100 files I can always look at the create time on the first and last file and do the math - again, close enough.
At least for me, I look at this knowing there will be a second or so of error, and it's a good ballpark idea of how the hardware is performing.
-
6BQ5 wrote:
Sorry to sound like a goofball here ... but how are people collecting these very precise times? Are you just using a simple stop watch or is there something in C1 that reports this?
I use Adobe Bridge: I collect the time each file was created and subtract the first from the last.
Today I tested an NVMe SSD (read 3000 MB/s, write 2400 MB/s) for the benchmark; there wasn't a single benefit from running the benchmark or using the software - I got exactly the same times for JPEG and TIFF, so save your money.
Drive: Corsair MP500 120GB.
-
WOW! I'm wildly shocked an NVMe wouldn't offer writing speed benefit.
You have the 7820x as well, don't you?
-
CraigJohn wrote:
WOW! I'm wildly shocked an NVMe wouldn't offer writing speed benefit.
You have the 7820x as well, don't you?
Yes I have - two cores at 4.5 GHz, six cores at 4.3 GHz.
Limited to 150 W; 140 W is stock.
-
CraigJohn wrote:
WPNL wrote:
Did anybody try what the effect is when disabling the integrated graphics on their Core i7/i9 ... ?
I went from 50 (ON) seconds to 40(,2ish) (OFF) seconds.
I had it turned of and figured letting the IG help it might be faster but the opposite seems to be the case.
Tested several times and it was no one-time-event.
I don't believe the Skylake X CPUs have integrated graphics.
Excuse me! I've rephrased my post to prevent further confusion.
-
Chad Dahlquist wrote:
6BQ5 wrote:
Sorry to sound like a goofball here ... but how are people collecting these very precise times? Are you just using a simple stop watch or is there something in C1 that reports this?
I just use my phone timer - click together, and when I see it stop, click again. That's why I joked about finger lag, but in reality it should not be off by more than a second, and if you do it 3x you get a good average; I just round to the nearest second, which is good enough.
Way back in the day I did a bunch of LR tests comparing how usable the sliders were; I did video capture and used the timecode, which is a solid way of doing things.
On, say, 100 files I can always look at the create time on the first and last file and do the math - again, close enough.
At least for me, I look at this knowing there will be a second or so of error, and it's a good ballpark idea of how the hardware is performing.
OK, here goes!
I have a mid-2011 iMac with a 21.5" screen and 20 GB of RAM.
I downloaded the benchmark images, imported them into C1 v11, and processed them almost according to instructions at the beginning of the thread. The instructions said to use the "sRGB IEC61966-2.1" ICC profile. This profile is available in my (very long) pull-down menu. I used what I always use, "sRGB Color Space Profile".
Exporting the 50 images from my managed catalog to my desktop took 3 minutes and 50 seconds. Maybe 49 seconds when counting finger lag.
C1 does not seem to support the GPU inside my computer so this sounds like 100% CPU.
-
6BQ5 wrote:
OK, here goes!
I have a mid-2011 iMac with a 21.5" screen and 20 GB of RAM.
I downloaded the benchmark images, imported them into C1 v11, and processed them almost according to instructions at the beginning of the thread. The instructions said to use the "sRGB IEC61966-2.1" ICC profile. This profile is available in my (very long) pull-down menu. I used what I always use, "sRGB Color Space Profile".
Exporting the 50 images from my managed catalog to my desktop took 3 minutes and 50 seconds. Maybe 49 seconds when counting finger lag.
C1 does not seem to support the GPU inside my computer so this sounds like 100% CPU.
Not necessarily. My 2009 Mac Pro with the Video Card took 2 minutes and 19 seconds. It took over 7 minutes with CPU only...
I'd say your integrated GPU was working...
-
CraigJohn wrote:
6BQ5 wrote:
OK, here goes!
I have a mid-2011 iMac with a 21.5" screen and 20 GB of RAM.
I downloaded the benchmark images, imported them into C1 v11, and processed them almost according to instructions at the beginning of the thread. The instructions said to use the "sRGB IEC61966-2.1" ICC profile. This profile is available in my (very long) pull-down menu. I used what I always use, "sRGB Color Space Profile".
Exporting the 50 images from my managed catalog to my desktop took 3 minutes and 50 seconds. Maybe 49 seconds when counting finger lag.
C1 does not seem to support the GPU inside my computer so this sounds like 100% CPU.
Not necessarily. My 2009 Mac Pro with the Video Card took 2 minutes and 19 seconds. It took over 7 minutes with CPU only...
I'd say your integrated GPU was working...
I have a message in C1 under the Hardware Acceleration pull-down menus for Display and Processing that says, "Hardware acceleration doesn't work". That line is also a link to Phase One's tech support website explaining which GPUs are and are not supported.
Just for kicks I changed the setting from Auto to Never, and I got the same time: 3 minutes 50 seconds.