⚠️ Please note that this topic or post has been archived. The information contained here may no longer be accurate or up-to-date. ⚠️

12-image CPU vs. GPU exporting benchmarks.

Comments

94 comments

  • Permanently deleted user
    CraigJohn wrote:
    This is nuts.

    The 5820K with an old GTX 680 performs at a fairly similar level to the overclocked Ryzen 1700 with a 1080 Ti, which kinda performs at the same clip as the new Intel 7820X + Vega 64 Frontier. I note this, as the Cinebench R15 CPU scores vary quite a bit amongst them.
    ....
    This is just fascinating to me. 😊


    Capture One still queues files like the average dog fetches a ball. The OpenCL engine is not miraculous. GPUs are unicorns. Drivers are written like the bible. Add on top a plethora of other platform variables.

    CPUs are scaling at about 30% a year in overall performance. GPUs maybe 20%. Capture One perhaps 10%.

    You can count specs and benchmarks all day long, but if the hardware and software are not tailored to one another, the cows will come home before you get those seconds to scale linearly with your money.
    0
  • craig stodola
    gusferlizi wrote:
    You can count specs and benchmarks all day long, but if the hardware and software are not tailored to one another, the cows will come home before you get those seconds to scale linearly with your money.



    This is understandable, I get it - I'm just noting the differences... which is why I wanted to put this benchmarking thread together. 😉
    0
  • Robert Whetton
    gusferlizi wrote:
    CraigJohn wrote:

    CPUs are scaling at about 30% a year in overall performance. GPUs maybe 20%. Capture One perhaps 10%.
    Processing time nearly halved when I upgraded to V10 for my 7D2 files...
    0
  • Permanently deleted user
    Bobtographer wrote:
    gusferlizi wrote:

    CPUs are scaling at about 30% a year in overall performance. GPUs maybe 20%. Capture One perhaps 10%.
    Processing time nearly halved when I upgraded to V10 for my 7D2 files...


    I was stretching a bit, for sarcasm's sake.

    CraigJohn wrote:
    gusferlizi wrote:
    You can count specs and benchmarks all day long, but if the hardware and software are not tailored to one another, the cows will come home before you get those seconds to scale linearly with your money.

    This is understandable, I get it - I'm just noting the differences... which is why I wanted to put this benchmarking thread together. 😉


    Sure...
    0
  • MadManAce
    gnwooding wrote:
    So I have made the following changes to my PC:
    I have upgraded my GTX 680 to an Asus Strix 1080 Ti
    I have now overclocked my i7 5820K to 4080MHz (it could go higher, but I just did a quick auto overclock in the BIOS)
    I have upgraded my Windows to 10
    I have updated Capture One to 10.2

    For interest's sake, my Samsung MZVLW256HEHP-000H1 NVMe drive has a sequential read of 3467MB/s and write of 1222MB/s using CrystalDiskMark with a 1GiB file size.

    I now get the following results:

    5D mk III, 12 images
    CPU-jpeg - 21 sec
    GPU-jpeg - 12 sec
    GPU-tiff - 5 sec

    A7R II, 12 images
    CPU-jpeg - 42.5 sec
    GPU-jpeg - 21 sec

    P1 XF 100MP, 12 images
    CPU-jpeg - 92 sec
    GPU-jpeg - 48 sec

    All 36 images at once
    CPU-jpeg - 154 sec
    GPU-jpeg - 78 sec
    GPU-tiff - 29 sec

    When using GPU acceleration, CPU usage is about 30% on average I would guess, and GPU usage is very spiky, between 0 and 75%.
    I'm not sure where the bottleneck is - the rather low GPU and CPU usage kind of points somewhere else. Looking at Resource Monitor, there does not appear to be excessive activity on the disks (reading the raw files from one SSD and writing the JPEGs to another seems to make no difference for me).

    I know these results are similar to MadManAce's, but I ran the tests at least 3 times and got the same results each time.

    Here are screenshots of the usage

    https://www.dropbox.com/s/0kbzzk5bg5ko8 ... e.png?dl=0
    https://www.dropbox.com/s/4wd881h3rx0mr ... e.png?dl=0




    Looks like I need to push my overclock 😊

    Just kidding, I am happy where I am at; my CPU temps are nice and cool with a minimal Vcore push over stock. But given that we are the only ones who have used the same images so far, it may be that we are near the ceiling. We will not know until someone with an Intel 7900X or AMD 1950X (and a similar GPU) runs a test with the same images.

    From what I am seeing, when using a modern high-core-count CPU and a powerful GPU together, neither exceeds 30% usage for an extended period. Unless PhaseOne can make improvements to tap into that remaining unused power in the GPU (if possible), I don't think we will see any major leaps in speed. But still, it's a big difference compared to Adobe's 0% GPU usage on eternally long exports 😂
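
    To put the posted timings on a common footing, here is a minimal Python sketch that derives per-image times and CPU-to-GPU speedups; the numbers are copied from the results quoted above, and the script itself is just illustrative arithmetic:

    # Per-image times and CPU -> GPU speedups, numbers from the quoted post.
    runs = {
        "5D mk III (12 images)": {"count": 12, "cpu_s": 21.0,  "gpu_s": 12.0},
        "A7R II (12 images)":    {"count": 12, "cpu_s": 42.5,  "gpu_s": 21.0},
        "XF 100MP (12 images)":  {"count": 12, "cpu_s": 92.0,  "gpu_s": 48.0},
        "All 36 at once":        {"count": 36, "cpu_s": 154.0, "gpu_s": 78.0},
    }

    for name, r in runs.items():
        speedup = r["cpu_s"] / r["gpu_s"]
        print(f"{name}: {speedup:.2f}x GPU speedup, "
              f"{r['gpu_s'] / r['count']:.2f} s/image on GPU")

    Every batch lands close to a 2x speedup, which fits the observation above that neither the CPU nor the GPU is anywhere near saturated.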
    0
  • Chad Dahlquist
    OK,
    got my PC together and still have to figure things out, but did a quick test.
    I downloaded the ISO 100 files from DPReview for the 5D MkIII and the Sony.


    Intel 7820X CPU
    EVGA GTX 1080 FTW GPU
    32GB RAM
    1TB Samsung 960 EVO

    This is the quick test with just the GPU; I did not bother without it 😊
    I ran it 5x and then took the average, but pretty much every run was the same time.

    5D MkIII
    11.8 sec
    9.8 @ 90%


    D810
    OC to 4400MHz and GPU at 120%
    15.3 seconds @ 100%
    13.3 @ 90%
    with no OC I am at 16.5 sec @ 100%

    TIFF is about 6 seconds; since I work with a D810 and TIFF, this is great for me 😉

    Sony A7R II
    17.5 sec

    With the OC the thermals get a bit higher, and of course fan noise, so I'm not going to bother 😊
    At 4.6 I was getting freezes and one BSOD, so 4.4 at 1.2V was stable and tested stable, but again too much noise for me for about a 10% gain.
    0
  • gnwooding
    MadManAce wrote:

    Looks like I need to push my overclock 😊

    Just kidding, I am happy where I am at; my CPU temps are nice and cool with a minimal Vcore push over stock. But given that we are the only ones who have used the same images so far, it may be that we are near the ceiling. We will not know until someone with an Intel 7900X or AMD 1950X (and a similar GPU) runs a test with the same images.

    From what I am seeing, when using a modern high-core-count CPU and a powerful GPU together, neither exceeds 30% usage for an extended period. Unless PhaseOne can make improvements to tap into that remaining unused power in the GPU (if possible), I don't think we will see any major leaps in speed. But still, it's a big difference compared to Adobe's 0% GPU usage on eternally long exports 😂


    It is indeed very interesting - I did some additional testing after pushing my OC to 4.5GHz (an extra 400MHz) and I did not get any reduction in time, so I think I definitely have another bottleneck (4GHz on the CPU seems to be where I stop seeing a return).

    Or maybe you need a much faster CPU to see a difference, since Chad Dahlquist seems to get better results (with the A7R II) with a much better CPU but only a 1080. I would be interested to know the GPU and CPU usage of his system.
    0
  • Christian Gruner
    MadManAce wrote:
    gnwooding wrote:
    So I have made the following changes to my PC:
    I have upgraded my GTX 680 to an Asus Strix 1080 Ti
    I have now overclocked my i7 5820K to 4080MHz (it could go higher, but I just did a quick auto overclock in the BIOS)
    I have upgraded my Windows to 10
    I have updated Capture One to 10.2

    For interest's sake, my Samsung MZVLW256HEHP-000H1 NVMe drive has a sequential read of 3467MB/s and write of 1222MB/s using CrystalDiskMark with a 1GiB file size.

    I now get the following results:

    5D mk III, 12 images
    CPU-jpeg - 21 sec
    GPU-jpeg - 12 sec
    GPU-tiff - 5 sec

    A7R II, 12 images
    CPU-jpeg - 42.5 sec
    GPU-jpeg - 21 sec

    P1 XF 100MP, 12 images
    CPU-jpeg - 92 sec
    GPU-jpeg - 48 sec

    All 36 images at once
    CPU-jpeg - 154 sec
    GPU-jpeg - 78 sec
    GPU-tiff - 29 sec

    When using GPU acceleration, CPU usage is about 30% on average I would guess, and GPU usage is very spiky, between 0 and 75%.
    I'm not sure where the bottleneck is - the rather low GPU and CPU usage kind of points somewhere else. Looking at Resource Monitor, there does not appear to be excessive activity on the disks (reading the raw files from one SSD and writing the JPEGs to another seems to make no difference for me).

    I know these results are similar to MadManAce's, but I ran the tests at least 3 times and got the same results each time.

    Here are screenshots of the usage

    https://www.dropbox.com/s/0kbzzk5bg5ko8 ... e.png?dl=0
    https://www.dropbox.com/s/4wd881h3rx0mr ... e.png?dl=0




    Looks like I need to push my overclock 😊

    Just kidding, I am happy where I am at; my CPU temps are nice and cool with a minimal Vcore push over stock. But given that we are the only ones who have used the same images so far, it may be that we are near the ceiling. We will not know until someone with an Intel 7900X or AMD 1950X (and a similar GPU) runs a test with the same images.

    From what I am seeing, when using a modern high-core-count CPU and a powerful GPU together, neither exceeds 30% usage for an extended period. Unless PhaseOne can make improvements to tap into that remaining unused power in the GPU (if possible), I don't think we will see any major leaps in speed. But still, it's a big difference compared to Adobe's 0% GPU usage on eternally long exports 😂


    What GPU utility are you using that shows the load?
    Usually, if the GPU and CPU are not fully utilized, it is an indication that data cannot be read from or written to the disk fast enough.
    0
  • Chad Dahlquist
    Christian Gruner wrote:

    What GPU utility are you using that shows the load?
    Usually, if the GPU and CPU are not fully utilized, it is an indication that data cannot be read from or written to the disk fast enough.



    Hmmm, I feel a RAM disk test should be done 😊 hahahahahah
    0
  • MadManAce
    Christian Gruner wrote:

    What GPU utility are you using that shows the load?
    Usually, if the GPU and CPU are not fully utilized, it is an indication that data cannot be read from or written to the disk fast enough.


    I used MSI Afterburner to see GPU utilization. I could have used HWiNFO64 to get an actual average usage while the test runs instead of an estimate; what I did was eyeball the graph, and I noticed it never went above the 30s.
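
    For anyone who wants an actual average instead of eyeballing a graph, here is a minimal Python sketch of the idea; it assumes a single NVIDIA GPU with nvidia-smi on the PATH and the third-party psutil package, so treat it as a sketch rather than a finished tool:

    import subprocess
    import time

    import psutil  # third-party: pip install psutil

    cpu_samples, gpu_samples = [], []
    end = time.time() + 60  # sample for 60 s while the export runs

    while time.time() < end:
        # cpu_percent(interval=1) blocks for 1 s and returns the average load.
        cpu_samples.append(psutil.cpu_percent(interval=1))
        # Ask the NVIDIA driver for the current GPU load (single GPU assumed).
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        )
        gpu_samples.append(float(out.stdout.splitlines()[0]))

    print(f"average CPU: {sum(cpu_samples) / len(cpu_samples):.1f}%")
    print(f"average GPU: {sum(gpu_samples) / len(gpu_samples):.1f}%")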


    I ran the files from the desktop on my old C: drive. Not the fastest kid on the block anymore, but still pretty fast.

    Sequential Read (Q= 32,T= 1) : 1507.970 MB/s
    Sequential Write (Q= 32,T= 1) : 545.607 MB/s
    Random Read 4KiB (Q= 32,T= 1) : 411.751 MB/s [100525.1 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 292.100 MB/s [ 71313.5 IOPS]
    Sequential Read (T= 1) : 1035.156 MB/s
    Sequential Write (T= 1) : 582.177 MB/s
    Random Read 4KiB (Q= 1,T= 1) : 40.622 MB/s [ 9917.5 IOPS]
    Random Write 4KiB (Q= 1,T= 1) : 158.397 MB/s [ 38671.1 IOPS]

    Test : 1024 MiB [C: 30.7% (114.1/372.0 GiB)] (x5) [Interval=5 sec]
    Date : 2017/09/23 13:52:57
    OS : Windows 10 [10.0 Build 15063] (x64)
    Intel NVMe 750 PCIe 400



    Here is where I normally store photos I am working on:

    Sequential Read (Q= 32,T= 1) : 2509.987 MB/s
    Sequential Write (Q= 32,T= 1) : 1122.158 MB/s
    Random Read 4KiB (Q= 32,T= 1) : 409.950 MB/s [100085.4 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 326.662 MB/s [ 79751.5 IOPS]
    Sequential Read (T= 1) : 1869.721 MB/s
    Sequential Write (T= 1) : 1146.953 MB/s
    Random Read 4KiB (Q= 1,T= 1) : 45.047 MB/s [ 10997.8 IOPS]
    Random Write 4KiB (Q= 1,T= 1) : 131.780 MB/s [ 32172.9 IOPS]

    Test : 1024 MiB [E: 55.7% (265.6/476.4 GiB)] (x5) [Interval=5 sec]
    Date : 2017/09/23 14:06:02
    OS : Windows 10 [10.0 Build 15063] (x64)
    Plextor PX-512M8PeG (Working Photos Disk)


    For comparison's sake, here is my scratch Sata3 SSD disk:

    Sequential Read (Q= 32,T= 1) : 547.057 MB/s
    Sequential Write (Q= 32,T= 1) : 304.155 MB/s
    Random Read 4KiB (Q= 32,T= 1) : 306.652 MB/s [ 74866.2 IOPS]
    Random Write 4KiB (Q= 32,T= 1) : 126.451 MB/s [ 30871.8 IOPS]
    Sequential Read (T= 1) : 527.675 MB/s
    Sequential Write (T= 1) : 304.492 MB/s
    Random Read 4KiB (Q= 1,T= 1) : 24.864 MB/s [ 6070.3 IOPS]
    Random Write 4KiB (Q= 1,T= 1) : 99.440 MB/s [ 24277.3 IOPS]

    Test : 1024 MiB [S: 8.7% (19.2/221.8 GiB)] (x5) [Interval=5 sec]
    Date : 2017/09/23 13:59:07
    OS : Windows 10 [10.0 Build 15063] (x64)
    Samsung SSD 830 (Scratch Disk)
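
    As a rough cross-check of numbers like these without CrystalDiskMark, here is a minimal Python sequential-throughput sketch; the file path and 1 GiB size are assumptions, and the OS file cache can inflate the read figure, so it is cruder than a real benchmark:

    import os
    import time

    PATH = "testfile.bin"       # placeholder: put this on the drive under test
    SIZE = 1024 ** 3            # 1 GiB, matching the CrystalDiskMark runs above
    CHUNK = 8 * 1024 * 1024     # 8 MiB per write

    buf = os.urandom(CHUNK)

    # Sequential write
    t0 = time.perf_counter()
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(SIZE // CHUNK):
            f.write(buf)
        os.fsync(f.fileno())    # make sure the data actually hit the disk
    write_mbs = SIZE / (time.perf_counter() - t0) / 1e6

    # Sequential read (the OS file cache may inflate this number)
    t0 = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(CHUNK):
            pass
    read_mbs = SIZE / (time.perf_counter() - t0) / 1e6

    os.remove(PATH)
    print(f"sequential write: {write_mbs:.0f} MB/s")
    print(f"sequential read:  {read_mbs:.0f} MB/s")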
    0
  • gnwooding
    I am using NVIDIA Inspector and ASUS GPU Tweak to measure GPU usage, and Task Manager to assess CPU usage.

    In order to try to alleviate any bottleneck caused by the M.2 NVMe drive, I created a 12GB RAM disk and performed the tests again. The results are exactly the same as with the NVMe drive, even though the RAM disk has a read speed of 7GB/s and a write speed of 11GB/s. So despite the massive increase in "disk" speed, my results remain unchanged.

    I am therefore beginning to suspect that, for whatever reason, in my case I am not limited by a hardware component (the CPU runs at about 30% and the GPU is mostly idle, only spiking up to 75%, while using the RAM disk). I am not sure whether the bottleneck is then Windows or Capture One.

    The results are included below for interest.

    Using the 12GB RAM disk to store the raw files and save the JPEGs to - it makes no difference whether I store the raws on the RAM disk and the JPEGs on the NVMe, or the other way round.

    5D mk III, 12 images
    GPU-jpeg - 12 sec


    A7R II, 12 images
    GPU-jpeg - 21 sec

    P1 XF 100MP, 12 images
    GPU-jpeg - 48 sec

    Edit:
    Using CrystalDiskMark, the RAM disk has the following speeds:
    Sequential Read (Q= 32,T= 1) : 6952 MB/s
    Sequential Write (Q= 32,T= 1) : 11213 MB/s
    Random Read 4KiB (Q= 32,T= 1) : 1418 MB/s
    Random Write 4KiB (Q= 32,T= 1) : 1291 MB/s
    Sequential Read (T= 1) : 6572 MB/s
    Sequential Write (T= 1) : 9990 MB/s
    Random Read 4KiB (Q= 1,T= 1) : 1008 MB/s
    Random Write 4KiB (Q= 1,T= 1) : 945 MB/s

    and my NVMe produces the following numbers:
    Sequential Read (Q= 32,T= 1) : 3450 MB/s
    Sequential Write (Q= 32,T= 1) : 1436 MB/s
    Random Read 4KiB (Q= 32,T= 1) : 398.7 MB/s
    Random Write 4KiB (Q= 32,T= 1) : 282.2 MB/s
    Sequential Read (T= 1) : 1922 MB/s
    Sequential Write (T= 1) : 1430 MB/s
    Random Read 4KiB (Q= 1,T= 1) : 45.05 MB/s
    Random Write 4KiB (Q= 1,T= 1) : 148.6 MB/s
    0
  • Jean-Baptiste Labelle
    Sorry to hijack the thread, but I am very interested in the posts mentioning the GPU.

    I had an SP3 Core i7 and have now upgraded to an SP4 Core i7. I love the device as it is versatile and allows me to edit with COP10 while commuting, and the pen is great for that (especially for masking).
    But besides photo editing, the rest of the stuff I am doing should not benefit from a powerful GPU. Even for photo editing with COP, though, I would like to know how important the GPU is.

    I saw this thread and ran a trial. Indeed, using the GPU for JPEG rendering drops CPU utilization from 100% to about a 30% average, I would say.
    There is also a time gain, but it is not dramatic:

    SP4 Core i7-6650U, 8GB RAM
    12 JPEGs, 85% quality (landscape, portrait...), sRGB, A7R (36MP):
      with GPU = 2 min 33 s

      CPU only = 3 min

    That is, going from 180 s down to 153 s: at best a 17% gain, so nothing huge. On the other hand, exporting a lot of pictures might increase the difference, as the CPU starts to throttle on a Surface because of its size constraints.

    But beside that, does COP really benefit from a powerful GPU? Does it help when making masks or manipulating images, or is it mostly the CPU?
    0
  • Christian Gruner
    NNN635487406657266768 wrote:
    Sorry to hijack the thread, but I am very interested in the posts mentioning the GPU.

    I had an SP3 Core i7 and have now upgraded to an SP4 Core i7. I love the device as it is versatile and allows me to edit with COP10 while commuting, and the pen is great for that (especially for masking).
    But besides photo editing, the rest of the stuff I am doing should not benefit from a powerful GPU. Even for photo editing with COP, though, I would like to know how important the GPU is.

    I saw this thread and ran a trial. Indeed, using the GPU for JPEG rendering drops CPU utilization from 100% to about a 30% average, I would say.
    There is also a time gain, but it is not dramatic:

    SP4 Core i7-6650U, 8GB RAM
    12 JPEGs, 85% quality (landscape, portrait...), sRGB, A7R (36MP):
      with GPU = 2 min 33 s

      CPU only = 3 min

    That is, going from 180 s down to 153 s: at best a 17% gain, so nothing huge. On the other hand, exporting a lot of pictures might increase the difference, as the CPU starts to throttle on a Surface because of its size constraints.

    But beside that, does COP really benefit from a powerful GPU? Does it help when making masks or manipulating images, or is it mostly the CPU?


    The short answer is a big yes.
    The slightly longer answer is that it also depends on the power ratio between disk, CPU and GPU.
    From the numbers you have posted, it would seem that your CPU and GPU are about equal in processing power. However, the GPU path does move the bulk of the processing to the GPU, leaving your CPU free to do other desktop tasks.
    Not all tasks within CO are suitable for GPU computing, but many are. Processing, adjusting image settings and so on are good examples.
    0
  • Permanently deleted user
    4GHz 8-core AMD FX-8120, 16GB of RAM, GTX 1060 3GB, 250GB NVMe

    12 X-T2 files on Capture One 10.2

    CPU-JPEG: 63s
    GPU-JPEG: 26s
    GPU-8bit TIFF: 15s
    0
  • gnwooding
    NNN635487406657266768 wrote:
    Sorry to hijack the thread, but I am very interested in the posts mentioning the GPU.

    I had an SP3 Core i7 and have now upgraded to an SP4 Core i7. I love the device as it is versatile and allows me to edit with COP10 while commuting, and the pen is great for that (especially for masking).
    But besides photo editing, the rest of the stuff I am doing should not benefit from a powerful GPU. Even for photo editing with COP, though, I would like to know how important the GPU is.

    I saw this thread and ran a trial. Indeed, using the GPU for JPEG rendering drops CPU utilization from 100% to about a 30% average, I would say.
    There is also a time gain, but it is not dramatic:

    SP4 Core i7-6650U, 8GB RAM
    12 JPEGs, 85% quality (landscape, portrait...), sRGB, A7R (36MP):
      with GPU = 2 min 33 s

      CPU only = 3 min

    That is, going from 180 s down to 153 s: at best a 17% gain, so nothing huge. On the other hand, exporting a lot of pictures might increase the difference, as the CPU starts to throttle on a Surface because of its size constraints.

    But beside that, does COP really benefit from a powerful GPU? Does it help when making masks or manipulating images, or is it mostly the CPU?

    On my system there is zero lag that I am able to detect when changing sliders or painting masks (after overclocking my CPU to 4.5GHz and swapping my GTX 680 for a 1080 Ti I definitely noticed a difference, although it wasn't massive).

    To give you an indication of the performance difference between the SP4 and a desktop with a 5820K (overclocked to 4.5GHz), 32GB RAM and a GTX 1080 Ti:

    I performed the test using the same parameters as you.

    12 JPEG, 85% quality, A7R (36Mp)

    CPU - 31s
    GPU - 13.5s

    I have also seen a couple of people doing tests using the Fuji X-T2 so I decided to add some of my own.

    12 JPEG, 100% quality, Fuji X-T2

    CPU - 21s
    GPU - 13s

    12 JPEG, 80% quality, Fuji X-T2

    CPU - 18s
    GPU - 9s

    Reducing output quality to 80% clearly makes it much faster.
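
    The quality effect is easy to reproduce outside Capture One. Here is a minimal Python sketch using the third-party Pillow library (the input file name is a placeholder); at higher quality settings more DCT coefficients survive quantization, so the encoder has more data to entropy-code and writes a larger file:

    import time

    from PIL import Image  # third-party: pip install Pillow

    img = Image.open("test.tif").convert("RGB")  # placeholder source image
    img.load()  # decode up front so only the encode step is timed

    for quality in (100, 80):
        t0 = time.perf_counter()
        img.save(f"out_q{quality}.jpg", "JPEG", quality=quality)
        print(f"quality {quality}: {time.perf_counter() - t0:.2f} s")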
    0
  • craig stodola
    What's the 5820k like at the base 3.3GHz frequency, when exporting those same 12 X-T2 files at 100%?
    0
  • gnwooding
    CraigJohn wrote:
    What's the 5820k like at the base 3.3GHz frequency, when exporting those same 12 X-T2 files at 100%?


    Sorry it took me so long to reply. Resetting my CPU clocks to default 3.3GHz yielded the following results with the 12 X-T2 files at 100%.

    CPU - 28s
    GPU - 17s

    So running the CPU at a lower clock clearly bottlenecks the GPU as well. What is very interesting is that the GPU performance decreases by roughly the same amount as the CPU-only performance. CPU usage goes from 30% when overclocked to about 50% at stock clocks. I am wondering whether the "low" CPU usage is a function of hyper-threading not providing a benefit in this scenario.

    I have included some additional tests with hyper-threading disabled.

    Stock clock speeds
    CPU - 25s
    GPU - 15.8s (CPU usage now goes up to about 75%)

    Overclocked to 4.5GHz
    CPU - 25s
    GPU - 14s (CPU usage now goes up to about 75%)

    These results are very strange: disabling hyper-threading seems to increase performance at stock speeds, but then the CPU-only time doesn't improve when overclocking with HT disabled. The GPU time still sees an improvement, though.

    When overclocked, HT seems to make a significant improvement to the CPU-only time and a minor improvement to the GPU time.
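
    One way to probe the hyper-threading hypothesis without rebooting into the BIOS is to watch per-core load during an export. A minimal sketch, again assuming the third-party psutil package:

    import psutil  # third-party: pip install psutil

    print(f"{psutil.cpu_count(logical=False)} physical cores, "
          f"{psutil.cpu_count(logical=True)} logical cores")

    # Sample per-core utilization for 5 x 1 s while the export runs. If HT
    # brings no benefit, roughly half the logical cores should sit near idle.
    for _ in range(5):
        loads = psutil.cpu_percent(interval=1, percpu=True)
        print(" ".join(f"{load:3.0f}" for load in loads))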
    0
  • craig stodola
    Many thanks - interesting numbers with overclocking vs. stock, and with hyper-threading disabled.

    Was there much user lag when using sliders and brushes with the GTX 680 compared to the 1080 Ti?
    0
  • gnwooding
    I would say that with the GTX 680 there was a bit of lag when drawing masks and making adjustments; it wasn't bad, but it was enough to make me consider upgrading.
    0
  • Denis Mortell
    Interesting thread.

    I saved 12 JPEGs (size 100 @ 300ppi) and the same 12 images as TIFFs (175MB/16-bit/300ppi).

    Camera: Canon 5D Mark IV.
    Capture One 10 Pro

    System:
    Windows 10 Pro x64
    Intel Core i7-6800K @ 3.4GHz
    64GB 3000MHz RAM
    NVIDIA GeForce GTX 1070

    NOTE: C1 is on a fast SSD, but the RAW files are on one 4TB 7200rpm drive and are being saved to another 4TB 7200rpm drive.

    12 x JPEGs: 1.9 secs
    12 x TIFFs: 0.875 secs

    I've no idea why the 175MB TIFFs save in less than half the time of the JPEGs, but they do.

    D.
    0
  • Christian Gruner
    Dinarius wrote:
    Interesting thread.

    I saved 12 JPEGs (size 100 @ 300ppi) and the same 12 images as TIFFs (175MB/16-bit/300ppi).

    Camera: Canon 5D Mark IV.
    Capture One 10 Pro

    System:
    Windows 10 Pro x64
    Intel Core i7-6800K @ 3.4GHz
    64GB 3000MHz RAM
    NVIDIA GeForce GTX 1070

    NOTE: C1 is on a fast SSD, but the RAW files are on one 4TB 7200rpm drive and are being saved to another 4TB 7200rpm drive.

    12 x JPEGs: 1.9 secs
    12 x TIFFs: 0.875 secs

    I've no idea why the 175MB TIFFs save in less than half the time of the JPEGs, but they do.

    D.


    I take it your times are per file?
    Also, processing to and from a rotational drive is very likely your bottleneck. Try doing the same test, but with the files on your SSD.
    The JPEGs take longer than the TIFFs because of the JPEG compression step, whereas the TIFFs are uncompressed.
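
    The compression explanation is easy to verify with the same kind of Pillow sketch as earlier in the thread (third-party library, placeholder file name); an uncompressed TIFF is close to a straight memory-to-disk copy, so on a fast disk it can finish before a JPEG even though the file is far larger:

    import time

    from PIL import Image  # third-party: pip install Pillow

    img = Image.open("test.tif").convert("RGB")  # placeholder source image
    img.load()

    t0 = time.perf_counter()
    img.save("out.jpg", "JPEG", quality=100)
    print(f"JPEG (compressed):   {time.perf_counter() - t0:.2f} s")

    t0 = time.perf_counter()
    img.save("out_flat.tif", "TIFF")  # Pillow writes uncompressed TIFF by default
    print(f"TIFF (uncompressed): {time.perf_counter() - t0:.2f} s")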
    0
  • Denis Mortell
    Yes, times are per file.

    Thanks for the compression clarification.

    I don't store anything except system software on the SSD.

    Those times are fast enough for me. 😎

    D.
    0
  • NN635427896088650500UL
    I currently have an R9 290X 4GB and want to upgrade... is the WX 5100 a good choice?

    David532 wrote:
    System
    Windows 10 Pro v1703
    i7-3820
    32GB 1600MHz RAM
    Samsung SSD 850 EVO 1TB
    AMD Radeon PRO WX 5100 Graphics card

    12x X-Pro1 RAF to full-size JPEG
    with GPU acceleration: 20 sec (1.7 secs per image)
    without GPU acceleration: 43 sec (3.6 secs per image)

    That WX 5100 was a good investment 😎
    0
  • Robert Whetton
    NN635427896088650500UL wrote:
    I currently have an R9 290X 4GB and want to upgrade... is the WX 5100 a good choice?

    What's your benchmark score?
    0
  • Christian Gruner
    NN635427896088650500UL wrote:
    I currently have an R9 290X 4GB and want to upgrade... is the WX 5100 a good choice?

    David532 wrote:
    System
    Windows 10 Pro v1703
    i7-3820
    32GB 1600MHz RAM
    Samsung SSD 850 EVO 1TB
    AMD Radeon PRO WX 5100 Graphics card

    12x X-Pro1 RAF to full-size JPEG
    with GPU acceleration: 20 sec (1.7 secs per image)
    without GPU acceleration: 43 sec (3.6 secs per image)

    That WX 5100 was a good investment 😎


    Performance per money spent is quite bad for workstation cards in Capture One. Instead, go for gaming cards: for the amount you would spend on a WX 5100 you could get, e.g., 2 x R9 Nano and likely get 2-3x the speed for the same money.
    0
  • gnwooding
    I think you would actually see a significant (probably around 15-20%) reduction in performance if you went from an R9 290x to a WX 5100.

    I believe you can get a good idea of the relative performance of the different cards from these:
    https://browser.geekbench.com/opencl-benchmarks
    https://compubench.com/result.jsp?benchmark=compu15d
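
    Alongside the browser benchmarks, it can help to see what OpenCL itself reports on a given machine. Here is a minimal sketch with the third-party pyopencl package, listing the devices an OpenCL application such as Capture One could in principle use:

    import pyopencl as cl  # third-party: pip install pyopencl

    for platform in cl.get_platforms():
        print(f"Platform: {platform.name}")
        for device in platform.get_devices():
            print(f"  {device.name}")
            print(f"    compute units: {device.max_compute_units}")
            print(f"    max clock:     {device.max_clock_frequency} MHz")
            print(f"    global memory: {device.global_mem_size // 2**20} MiB")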
    0
  • Christopher Hauser
    Currently running two R9 390s in my system. Is there a newer option which would give a good performance boost?
    0
  • Christian Gruner
    ChristopherHauser wrote:
    Currently running two R9 390s in my system. Is there a newer option which would give a good performance boost?


    The market evolves; check this link (be sure that "Choose a test" is set to "Video composition", which is what most closely resembles what CO is doing):

    https://compubench.com/result.jsp?bench ... e&base=gpu
    0
  • Permanently deleted user
    Can anybody test a 5K iMac with an external GPU? Since High Sierra now supports these...
    0
  • craig stodola
    A new 2017 5K iMac with an external GPU would be interesting.


    Here's someone's performance with a Mac Mini and external GTX 1070.

    Postby atenolol » Thu Aug 24, 2017 11:12 pm

    Hardware:
    Macmini 2012
    2.3GHz quad core i7
    16GB RAM
    GTX 1070 8GB @10GBit thunderbolt

    Software:
    macOS 10.12.6
    CO: 10.1.2

    Camera: Nikon D3X
    Photos: 12 NEF in 24MP (Nikon Lossless compression 20-29MB each)

    Exported to 100% JPEG.
    No other processing.

    GPU: 17 sec
    CPU: 47 sec


    Here's the Mac board CPU/GPU benchmark thread - I wish more people would contribute over there. 😄

    viewtopic.php?f=68&t=26527&hilit


    As a note, it's a shame Apple killed the quad-core Mac Mini. I was hoping Apple would release/announce a beastly little 8700K Mac Mini with an option for an 8GB AMD RX 570 or 580, even if that Mac Mini were double or triple height.
    0
