Exporting benchmarks v2
There is a thread that attempts to collect CO export benchmark statistics on different machines, so everyone can use this information to build or buy a good computer for the CO workflow.
I suggest improving this benchmark with a fixed set of RAW files, so the collected data will be more accurate: it will depend mostly on the computer specification (CPU and GPU first of all) rather than on the RAW file type (Canon, Nikon, Sony and all other RAW types affect export speed).
So here is the benchmark algorithm:
1. Download the following zip with RAW files and unpack it to some folder.
50 photos taken with a Canon 5D Mk III, about 1.3 GB in total. These photos were taken by myself, so no property rights are violated 😊
2. Start CO and make sure GPU acceleration is enabled: menu Edit - Preferences - General tab - Hardware acceleration - Processing set to Auto (if it was set to "Never" before, you need to restart CO for the change to take effect).
3. Open the RAW files downloaded in step 1 and wait until they are fully imported (previews are built by CO).
4. Set up the following export parameters:
Format: JPEG
Quality: 100%
ICC Profile: sRGB IEC61966-2.1
Resolution: 300 px/in
Scale: Fixed 100%
Open with: None
❗️ It is very important to export with exactly these parameters! Otherwise the bench results won't be comparable!
5. Run the first export (GPU accelerated) and note how long it takes.
One export run is the minimum, but it would be great if you can run it 2 or 3 times, because the results may vary with disk caching, background processes and so on. Take the shortest duration - that is the first bench result.
6. Disable GPU acceleration: menu Edit - Preferences - General tab - Hardware acceleration - Processing set to Never. Restart CO.
7. Run the second export (no GPU acceleration) in the same way as in step 5.
That is the second bench result.
8. Post your results in this thread in the following format:
- Computer type (PC/Mac), model (if any) and OS version
- CPU+GPU - benchmark time 1
- CPU only - benchmark time 2
- CPU model
- GPU model
- CO version
You can also run a second benchmark with the TIFF export format - it is a good addition to the JPEG benchmark above. The current version of CO (11.0) is limited in performance when exporting to JPEG due to its internal algorithms, so a TIFF export will show better results in some circumstances and put more load on the hardware. These TIFF export parameters need to be set:
. . .
Here is a Windows utility I wrote that makes it easy to calculate the benchmark result: it scans all files in a selected folder and automatically computes the time span between the moment the first file is opened for writing and the last file modification.
. . .
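If you are on a Mac or simply don't want to run the utility, below is a rough Python sketch of the same idea - it is not the utility itself, just the described calculation. The folder path and the *.jpg pattern are only placeholders; point it at your own export folder.
------
# Rough sketch of the same calculation, assuming all exported files land in one
# folder. Caveat: os.path.getctime() is creation time on Windows but
# metadata-change time on Linux, so treat the result as approximate there.
import glob
import os

def export_duration(folder, pattern="*.jpg"):
    files = glob.glob(os.path.join(folder, pattern))
    if not files:
        raise SystemExit("No exported files found in " + folder)
    first_created = min(os.path.getctime(f) for f in files)   # first file opened for writing
    last_modified = max(os.path.getmtime(f) for f in files)   # last file finished
    return last_modified - first_created

if __name__ == "__main__":
    # example path - replace with your own export folder
    print("Export took %.1f seconds" % export_duration(r"C:\CO_bench\output"))
------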
Suggestions for improving the benchmark are very welcome!
I hope my post doesn't come across as too strict 😊
The only goal of this thread is to help each other choose hardware components for the best CO workflow experience.
-
6BQ5 wrote:
CraigJohn wrote:
6BQ5 wrote:
OK, here goes!
I have a mid-2011 iMac with a 21.5" screen and 20 GB of RAM.
I downloaded the benchmark images, imported them into C1 v11, and processed them almost according to the instructions at the beginning of the thread. The instructions said to use the "sRGB IEC61966-2.1" ICC profile; that profile is available in my (very long) pull-down menu, but I used what I always use, "sRGB Color Space Profile".
Exporting the 50 images from my managed catalog to my desktop took 3 minutes and 50 seconds. Maybe 49 seconds when counting finger lag.
C1 does not seem to support the GPU inside my computer so this sounds like 100% CPU.
Not necessarily. My 2009 Mac Pro with the Video Card took 2 minutes and 19 seconds. It took over 7 minutes with CPU only...
I'd say your integrated GPU was working...
I have a message in C1 under the Hardware Acceleration pull-down menus for Display and Processing that says, "Hardware acceleration doesn't work". That line is also a link that takes me to Phase One's tech support website explaining which GPUs are and are not supported.
Just for kicks I changed the setting from Auto to Never and I got the same time, 3 minutes 50 seconds.
Make sure you have the latest GPU drivers installed. And then delete the files in /Users/Shared/Capture one/ImageCore/ and relaunch CO. It will take a while for CO to recompile the kernel. Wait until it is done and try converting raws again.
I have one file in that directory. It's 208 bytes and it's called ICOCL_all.xml. Its contents are:
------
<?xml version="1.0" encoding="utf-8"?>
<query>
<DeviceQueryResultCode>73</DeviceQueryResultCode>
<DeviceQueryResult>Unsupported OpenCL Device</DeviceQueryResult>
<FoundDevices>1</FoundDevices>
</query>
------
There weren't any other fancy looking files in there.
I re-ran the export just now and got a slightly faster time. 3 minutes and 45 seconds.
I'm not sure if a Mac has the ability to manually update drivers. Isn't that all built into OS X?
After deleting that file, did you select 'auto' in the settings?
WPNL wrote:
After deleting that file, did you select 'auto' in the settings?
Yes, I did.
6BQ5 wrote:
I have one file in that directory. It's 208 bytes and it's called ICOCL_all.xml. Its contents are:
------
<?xml version="1.0" encoding="utf-8"?>
<query>
<DeviceQueryResultCode>73</DeviceQueryResultCode>
<DeviceQueryResult>Unsupported OpenCL Device</DeviceQueryResult>
<FoundDevices>1</FoundDevices>
</query>
------
There weren't any other fancy looking files in there.
I re-ran the export just now and got a slightly faster time. 3 minutes and 45 seconds.
I'm not sure if a Mac has the ability to manually update drivers. Isn't that all built into OS X?
According to that XML file, your GPU is indeed not supported by C1.
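For anyone who wants to check the same thing on their own machine without reading the file by eye, here is a minimal sketch (not an official Phase One tool) that reads that ImageCore query file. The path is the macOS location mentioned above and will differ on Windows installs.
------
# Minimal sketch: read Capture One's ImageCore OpenCL query file and report
# what it found. Path is the macOS location mentioned above; Windows installs
# keep the ImageCore data elsewhere.
import xml.etree.ElementTree as ET

path = "/Users/Shared/Capture one/ImageCore/ICOCL_all.xml"
root = ET.parse(path).getroot()

print("Devices found:", root.findtext("FoundDevices"))
print("Query result: ", root.findtext("DeviceQueryResult"),
      "(code", root.findtext("DeviceQueryResultCode") + ")")
------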
garrison wrote:
6BQ5 wrote:
I have one file in that directory. It's 208 bytes and it's called ICOCL_all.xml. Its contents are:
------
<?xml version="1.0" encoding="utf-8"?>
<query>
<DeviceQueryResultCode>73</DeviceQueryResultCode>
<DeviceQueryResult>Unsupported OpenCL Device</DeviceQueryResult>
<FoundDevices>1</FoundDevices>
</query>
------
There weren't any other fancy looking files in there.
I re-ran the export just now and got a slightly faster time. 3 minutes and 45 seconds.
I'm not sure if a Mac has the ability to manually update drivers. Isn't that all built into OS X?
According to that XML file, your GPU is indeed not supported by C1.
😭 😭
NNN636355020530937144 wrote:
i9-7980XE overclocked to 4.2 GHz
AMD Vega FE
C1 11.0.1
Win10 1703
JPEG
45sec GPU
82sec CPU
TIFF
19sec GPU
64sec CPU
...it was interesting to see that there is very little difference compared to the i9-7900X.
Storage is an NVMe Optane 900P and a MegaRAID 9460-16i with SATA SSD and RAID10 SATA. Results were the same with the Optane 900P and the SATA SSD.
Update: if I overclock the mesh from 24 to 30, the JPEG and TIFF CPU times drop to 77 s and 56 s.
...went to the shop and bought another Vega FE. Couldn't test properly, as my 1 kW PSU could no longer keep up with the setup.
The 7980XE @ 4.2 GHz with dual Vega FE was able to render the pictures basically instantly, without any lag on a 4K screen. TIFF export time went down to 13 s. 10-bit works; I couldn't get CrossFire to work, but C1 used both GPUs anyway. I'll report back when I get a new PSU for further tests...
Chad Dahlquist wrote:
StephanR wrote:
Here are the results with my extra program:
TIF uncompressed 8 Bit -> open with to jpg CPU+GPU - 27s
TIF uncompressed 8 Bit -> open with to jpg CPU only - 88s
As you can see, the speed with GPU is only 1 s slower than the TIF conversion alone.
Even the CPU-only test is faster (all cores running at around 100%) than the JPG conversion in CO1.
And as you can see, my AMD 280X is very old compared to an NVidia 1080.
curious, if you run your program, what the file size in KB etc. is vs running the same image at 100% through C1?
if it's smaller, I'm kinda curious what % on the scale out of C1 makes them the same size, and then what the times are for that %
hope that makes sense what I am asking 😊
For example the first file, 0064.cr2:
CO1 JPG exports at 100% a file of 13 036 092 bytes.
My extra program also uses quality level 100 and the file is 11 116 665 bytes.
So the size is a bit smaller; it must be somewhere between 99% and 100% in CO1, as CO1 at 99% gives a file of 10 296 000 bytes.
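If anyone wants to see how steep that last quality step is with an ordinary JPEG encoder, here is a small sketch using Pillow. It is a different encoder than C1's, so the absolute sizes won't match the numbers above, but the jump between quality 99 and 100 shows up the same way; the input filename is just an example.
------
# Re-encode one image at several JPEG quality levels and print the file sizes.
# Pillow's encoder is not the same as Capture One's, so absolute numbers will
# differ from those quoted above; the point is the size jump from 99 to 100.
import io
from PIL import Image

img = Image.open("0064.jpg")  # example: one of the benchmark exports

for q in (90, 95, 99, 100):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=q)
    print("quality %3d -> %10d bytes" % (q, buf.tell()))
------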
interesting that the file size is quite a large drop-off for such a small % 😊 not doubting you, it just could sound like it
I should play with some files and compare LR and C1 % for fun, and put some of the difference filters on them in PS when I get time - I love this kind of stuff 😊
I know I tend to work in just TIFF files, even more so now that Windows can't do thumbs of PSB or PSD files natively 😊
thanks 😊
the whole timing thing is a fun puzzle for sure, seeing different hardware combos etc. - quite fun to read 😊
StephanR wrote:
Chad Dahlquist wrote:
StephanR wrote:
Here are the results with my extra program:
TIF uncompressed 8 Bit -> open with to jpg CPU+GPU - 27s
TIF uncompressed 8 Bit -> open with to jpg CPU only - 88s
As you can see, the speed with GPU is only 1 s slower than the TIF conversion alone.
Even the CPU-only test is faster (all cores running at around 100%) than the JPG conversion in CO1.
And as you can see, my AMD 280X is very old compared to an NVidia 1080.
curious, if you run your program, what the file size in KB etc. is vs running the same image at 100% through C1?
if it's smaller, I'm kinda curious what % on the scale out of C1 makes them the same size, and then what the times are for that %
hope that makes sense what I am asking 😊
For example the first file, 0064.cr2:
CO1 JPG exports at 100% a file of 13 036 092 bytes.
My extra program also uses quality level 100 and the file is 11 116 665 bytes.
So the size is a bit smaller; it must be somewhere between 99% and 100% in CO1, as CO1 at 99% gives a file of 10 296 000 bytes.
As far as I know, Nvidia cards tend to produce smaller file sizes than Radeon. I did a test in 2014, GTX Titan vs R9 290X, and it turned out that the files (TIFF in this case) from the Titan were smaller.
Also, on a pixel level the Nvidia images have less contrast, while the Radeon ones appear sharper - you can only see this at 200% (and higher). Lately I did another test, Vega FE vs P5000, and on the P5000 there was less noise and the chromatic aberration was removed, but I don't know whether I had changed the files of that wedding (different denoising, removal of chromatic aberration, adding noise for a more lifelike look) (1200 images), which I was testing the cards with, before I got the P5000.
I know that for screen draw, or showing what is on the screen, ATI used to be better than Nvidia, but that was years ago?
take an image, export it with your GPU off and then with it on, and compare - there is for sure a difference
for me the non-GPU one has more contrast - not sure if that might lead to a sense of sharpness? - but the file size is identical
I would think the file size would be the same but the rendering might change a bit?
at 100% on most images I could not tell; on some of my animal ones I could see it in the fur, and I'm not sure which one I liked better hahahahaah, but for sure there is more contrast without the GPU on
throwing both into PS set to difference, I have to apply a pretty radical curve to see where it is, and it's almost all edges, all leaning toward making light pixels lighter, with the couple of images I tested it on 😊
so the thing is, does Nvidia make less contrast? with my GPU off I get more, so it might be 😊
the question now to me is whether AMD and no GPU look more alike in contrast? that might be fun to play with more and find out
I only have an older R9 380, I think it is, in my Mac Pro 😊
also I wonder, if I add a touch of contrast to the Nvidia output, how it ends up looking in comparison - and for the nicer, smoother look, which I often prefer, would I ever run with the GPU off for a different look, or would a + setting on clarity or contrast give me almost the same difference 😊 hmmmmmmmmmmmmmmm
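The Photoshop difference-layer check described above can also be done in a few lines; below is a rough sketch, assuming you have the same frame exported once with the GPU on and once with it off. The filenames are placeholders for your own exports.
------
# Rough equivalent of the Photoshop "difference" layer check: compare two
# exports of the same frame (GPU on vs GPU off) pixel by pixel. Filenames are
# placeholders; both images must be the same size.
from PIL import Image, ImageChops

a = Image.open("export_gpu.jpg").convert("RGB")
b = Image.open("export_cpu.jpg").convert("RGB")

diff = ImageChops.difference(a, b)
print("max per-channel difference:", [hi for (lo, hi) in diff.getextrema()])
diff.save("difference.png")  # mostly black = the two exports are nearly identical
------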
Chad Dahlquist wrote:
I know that for screen draw, or showing what is on the screen, ATI used to be better than Nvidia, but that was years ago?
take an image, export it with your GPU off and then with it on, and compare - there is for sure a difference
for me the non-GPU one has more contrast - not sure if that might lead to a sense of sharpness? - but the file size is identical
I would think the file size would be the same but the rendering might change a bit?
at 100% on most images I could not tell; on some of my animal ones I could see it in the fur, and I'm not sure which one I liked better hahahahaah, but for sure there is more contrast without the GPU on
throwing both into PS set to difference, I have to apply a pretty radical curve to see where it is, and it's almost all edges, all leaning toward making light pixels lighter, with the couple of images I tested it on 😊
so the thing is, does Nvidia make less contrast? with my GPU off I get more, so it might be 😊
the question now to me is whether AMD and no GPU look more alike in contrast? that might be fun to play with more and find out
I only have an older R9 380, I think it is, in my Mac Pro 😊
also I wonder, if I add a touch of contrast to the Nvidia output, how it ends up looking in comparison - and for the nicer, smoother look, which I often prefer, would I ever run with the GPU off for a different look, or would a + setting on clarity or contrast give me almost the same difference 😊 hmmmmmmmmmmmmmmm
I said "on a Pixel level", you cant see this "higher contrast" without zooming in to 100% and above.0 -
MSI GS60 2PC Ghost-231 US (laptop)
Win10 b1709
C1 version - 11.0
CPU - Intel Core i7 4710HQ (4-core, 8-threads, 2.50 GHz)
GPUs - NVidia GTX860M + Intel HD Graphics 4600
JPEG
CPU+GPUs - 1:38 (98s)
CPU only - 2:52 (172s)
The GPUs speed-up factor is about 1.76x.
TIFF
CPU+GPUs - 1:16 (76s)
CPU only - 2:26 (146s)
The GPUs speed-up factor is about 1.55x.
Tom-D wrote:
I said "on a Pixel level", you cant see this "higher contrast" without zooming in to 100% and above.
yeah, I heard ya 😊
still interesting though, and I'd be curious how the AMD compares to Nvidia and to no GPU :)
I would think that if you need to zoom in beyond 100% just to notice that minor a contrast difference, 99.99% of clients won't notice.
Windows PC (Win10 b1709)
C1 version - 11.0
CPU - Intel Xeon E5-2630L (6-core, 12-threads, 2.00 GHz)
GPU - NVidia GT710
JPEG
CPU+GPU - 3:20 (200s)
CPU only - 3:22 (202s)
The GPU speed up factor is about 1.01x.
TIFF
CPU+GPU - 2:43 (163s)
CPU only - 2:50 (170s)
The GPU speed up factor is about 1.04x.
PC, Windows 10 x64
CPU+GPU ~ 57 sec
CPU only ~ 126 sec
CPU AMD Ryzen 7 1700
GPU AMD R9 380x
CO version 11.0.1
PC-Windows 10 64 Pro
C1 version - 11.1.0.1
CPU - Intel i7 7700K OC 5 GHz
GPU - NVidia GeForce 980 Ti
JPEG
CPU-GPU - 0:44.7
CPU – 1:47.9
TIFF
CPU-GPU – 0:37.3
CPU – 1:26.5
Could anyone with a Vega/Fiji GPU run the benchmark with 3 added layers with some enhancements?
For the last wedding I processed, I also measured the time at export and got 1.25 s per image, so I think maybe the Quadro P5000 is a bit slower when it has to compute more data. I'd like to know how the Radeons handle this.
PC, Windows 7 64-bit
C1 version 11.1.1
JPEG
CPU+GPU: 84 sec
CPU : 148 sec
TIFF 16 bit uncompressed
CPU+GPU: 65 sec
CPU : 106 sec
Hardware:
CPU - Intel Core i7-5820K (6 cores, 12 threads, 3.30 GHz)
GPU - NVidia GeForce GTX960 - OpenCL benchMark : 0.235608 - driver version 398.36
Remarks
i7-5820K - 3.30 GHz
6 cores, 12 threads
32 GB RAM
Disks:
- Win7 and C1: NVMe Samsung SSD 950 PRO
- Session, raw files and output folder: Samsung SSD 850 EVO
UPDATE: After updating the driver from 364.51 to 398.36 (different day, computer restart in between, etc.), the JPGs are 10 seconds faster (84 instead of 93), TIFFs only 2 s faster. The OpenCL benchmark from the imgcore.log file, however, shows a slightly worse value (0.235608 instead of 0.231168).
Wow, very helpful, and feeling like the noob I am!!
Windows PC (WIN 10 Pro 64bit)
CPU+GPU : 70 sec
CPU Only : 245 sec
CPU - AMD FX8300
GPU - AMD Radeon HD7870
C1 version 11.2
Windows PC (WIN 10 Pro 64bit)
CPU+GPU : 65 sec
CPU Only : 245 sec
CPU - AMD FX8300
GPU - AMD Radeon HD7870 + HD5850
C1 version 11.2
Windows PC (WIN 10)
JPG: CPU+GPU 2:13
TIFF: CPU+GPU 1:59
JPG: CPU Only 3:28
CPU - intel 4790k
GPU - AMD Radeon R9 280X
C1 version 11.0
Windows PC (WIN 10)
JPG: CPU+GPU 50s
TIFF: CPU+GPU 33s
CPU - intel 4790k
GPU - Nvidia RTX 2070
C1 version 11.0
Windows PC (10)
C1 version - 11.0
CPU - i7-7820X @ 4.9 GHz, air cooled with a Noctua NH-D15; RAM 64 GB @ 3466 MHz; mesh 3000 MHz.
GPU - Quadro P2000 x 2
JPEG
CPU+GPU - 45s
CPU only - 60s
TIFF
CPU+GPU - 12s
CPU only - 40s
As others have noted, JPEG processing doesn't leverage the GPU as much as TIFF.
Got lucky with the i7-7820X - it runs cool; the average CPU package temperature during benchmarking was 65C. Prior to de-lidding and liquid metal it was 80C, which is fine for processing raw files, but it was getting into the mid 90s rendering video.
1st setup:
Mac (MBP 15" 2018), 10.14.
Intel 2.2 GHz Core i7 (8750H, 6 Cores), 32 GB RAM
Intel UHD Graphics 630 + Radeon Pro 555X (4096 MB) + eGPU Asus Strix (with Radeon RX Vega 56)
CO v11
JPEG
CPU+GPU - 0:45 (45s)
CPU only - 1:42 (102s)
TIFF
CPU+GPU - 0:19 (19s)
CPU only - 1:22 (82s)
__________________________________________
2nd setup:
Mac (MBP 15" 2018), 10.14.
Intel 2.2 GHz Core i7 (8750H, 6 Cores), 32 GB RAM
Intel UHD Graphics 630 + Radeon Pro 555X (4096 MB)
CO v11
JPEG
CPU+GPU - 1:02 (62s)
CPU only - 1:42 (102s) - the same
TIFF
CPU+GPU - 0:40 (40s)
CPU only - 1:22 (82s)
__________________________________________
I cannot switch off the Intel UHD Graphics 630, so my CPU-only benchmarks include this graphics solution.
Whitesnake, just want to say thanks for taking the time and posting.
I've really been wanting to get a new MacBook and was so curious about the speeds 😊
my built PC rips of course, but I want a new MacBook 😊 I am an OS X guy way more than a PC guy and really wanted something for on location
especially with the eGPU!
I'd love to see what the 8700K and the new 8-core i7 CPUs could do.
Would also like to know what the new Mac Mini with an eGPU can do. Heard the poor thermal paste is really holding the performance of the Mac Mini back.
But I am tempted by the new MacBook Pro now... Just need more money. 😊
V12 with R9 Fury
GPU: 0:34
TIFF: 0:24
Edit:
upgraded to 12 Engine (forgot to do that)
GPU: 0:31
Chad Dahlquist wrote:
Whitesnake, just want to say thanks for taking the time and posting.
I've really been wanting to get a new MacBook and was so curious about the speeds 😊
my built PC rips of course, but I want a new MacBook 😊 I am an OS X guy way more than a PC guy and really wanted something for on location
especially with the eGPU!
You're welcome.
I think the slowest 6-core CPU is the best choice, since it runs cooler, the whole system is much quieter and there is less thermal throttling. My eGPU setup was chosen for the best performance, but it should also be quiet. I connected the case fans to the Vega 56, so they only run when the Vega fans are spinning. A Vega 64 would be faster, but it consumes much more energy and isn't as quiet. I don't think more CPU cores make the difference; it's all about RAM and the GPU, since during rendering the CPU isn't used at 100%.