
⚠️ Please note that this topic or post has been archived. The information contained here may no longer be accurate or up-to-date. ⚠️

12-image CPU vs. GPU exporting benchmarks.

Comments

94 comments

  • Thomas D.
    CraigJohn wrote:
    Thanks for chiming in with this. Is it possible to run this test without overclocking? Would love to see what your rig can do with stock set-up. I doubt I'll overclock, not for critical client work.

    Also - had a few other questions

    • How is the overall feel when working with Capture One Pro? Are the exposure/WB sliders smooth, and are thumbnail draws/library scrolling quick? Mine is painfully slow, especially when loading a new library and trying to scroll through the images.

    • How well does it work with Photoshop? How fast can you open 25 hi-res images? My computer seems to take forever to open 25 hi-res 24MP Fuji X-T2 files. ...Is arbitrarily rotating a layer on a high-res file smooth?

    many many thanks.


    Hi, it's not overclocked, it's stock.

    Capture One was and is smooth (old and new system).

    Starting Capture One with a session of 1296 images (from a 1DX + 1DX II) takes 44 sec until all images are loaded.

    Starting Photoshop with 25 TIFF files (80 to 100 MB each, from the 1DX + 1DX II) takes 29 sec. Rotating was and is smooth. Note: turn off the ruler - it can cause stuttering.

    I always recommend pro GPUs for working; they are much smoother because they drop no frames.

    Kind regards
    Thomas
  • Christian Gruner
    Christian Gruner wrote:
    Here are a few on AMD Ryzen 7 1800x, with 2 x AMD R9 Nano (benchmark at 0.55), processing to and from a M.2 disk:

    D810, 12 images
    CPU-jpeg: 45 sec
    GPU-jpeg: 19 sec
    GPU-tiff: 6 sec

    5Dmk II/III, 12 images
    CPU-jpeg: 34 sec
    GPU-jpeg: 13 sec
    GPU-tiff: 5 sec

    X-Pro2, 12 images:
    CPU-jpeg: 31 sec
    GPU-jpeg: 15 sec
    GPU-tiff: 9 sec

    Bonus info: the machine does 99 5D3 raws in 39 seconds when processing to 8-bit TIFF - quite nice performance!


    So, I just got an i9-7900x setup, and here are the results of the same runs as I did on the Ryzen. Note that this test uses the same 2 x R9 Nano and the same M.2 disk, so the only change is the more expensive i9 processor.

    i9-7900X

    D810, 12 images
    CPU-jpeg: 20 sec
    GPU-jpeg: 5.5 sec
    GPU-tiff: 5.3 sec

    5Dmk II/III, 12 images
    CPU-jpeg: 17 sec
    GPU-jpeg: 3.6 sec
    GPU-tiff: 3.5 sec

    X-Pro2, 12 images:
    CPU-jpeg: 17 sec
    GPU-jpeg: 6.5 sec
    GPU-tiff: 6.5 sec

    The 99 5D3 raws now take 27 seconds instead.
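
    Christian's two tables invite a direct comparison. A small sketch of the CPU-to-GPU speedup per camera (times copied from the tables above; the variable names are mine):

    ```python
    # Export times in seconds (CPU-jpeg, GPU-jpeg), copied from the two tables above.
    ryzen_1800x = {"D810": (45, 19), "5D II/III": (34, 13), "X-Pro2": (31, 15)}
    i9_7900x = {"D810": (20, 5.5), "5D II/III": (17, 3.6), "X-Pro2": (17, 6.5)}

    def speedup(cpu_s, gpu_s):
        """How many times faster the GPU export finishes than the CPU export."""
        return cpu_s / gpu_s

    for rig, times in (("Ryzen 1800X", ryzen_1800x), ("i9-7900X", i9_7900x)):
        for camera, (cpu_s, gpu_s) in times.items():
            print(f"{rig} / {camera}: {speedup(cpu_s, gpu_s):.2f}x")
    ```

    On these numbers the i9 run gains roughly 2.6-4.7x from the GPU, while the Ryzen run gains roughly 2.1-2.6x, which fits the later discussion about the CPU feeding the GPU.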
  • Robert Whetton
    Can we get a standard set of images to use? I'm willing to go take some stock shots with my 7D2.

    Landscape, portrait, cars, etc.?
  • Robert Whetton
    Christian Gruner wrote:

    The 99 5D3 raws now take 27 seconds instead.

    Impressive! (But it's a £900 CPU.) BTW, did you have the 1800X OC'd or at stock speeds?
  • craig stodola
    The curious thing about the 7900x: coming from a much older computer, you wouldn't really need to run GPU acceleration to see a massive benefit. You'd simply need a card to drive your display - so really, a $130 video card would do. 😂

    7900x + X-Pro2, 12 images:
    CPU-jpeg: 17 sec

    vs.

    2009 Mac 2.66GHz QuadCore Xeon W3520, Sapphire Radeon HD 7950

    Fuji X-T2 - same sensor as the X-Pro2
    CPU = 70 sec
    GPU = 44 sec

    The 7900x 17 sec score is nearly as fast as the 1800x GPU score (15 sec) for the same X-Pro2 files.
  • Permanently deleted user
    Damn... πŸ˜„
  • Chad Dahlquist
    for me the thing that matters most is any lag with adjustments and/or multiple layers and the brush?

    would love to hear if Christian noticed any difference between the Ryzen 1800 and the Intel 7900 as far as the interface goes -

    lag on the brushes, or thumbnails showing up, etc.


    also, the JPEG vs TIFF times on the 7900 are really close now, and I kinda wonder if that is about the limit of speed 😊 very interesting
  • Thomas D.
    gusferlizi wrote:
    Damn... πŸ˜„


    Oh yeah!

    My system is still unstable; maybe I have to reinstall Windows on the weekend.

    Having a batch of images for benchmarking would be great - I can also contribute.
  • craig stodola
    Tom-D wrote:
    gusferlizi wrote:
    Damn... πŸ˜„


    Oh yeah!

    My system is still unstable; maybe I have to reinstall Windows on the weekend.

    Having a batch of images for benchmarking would be great - I can also contribute.



    I can send you X-T2 files and D750 files.

    ...Still incredibly anxious to see your 7820x numbers. πŸ˜‚
  • craig stodola
    I just saw this video on YouTube, and now I'm wondering how Capture One Pro would react with the 7900x and a $130 GeForce 1050 - or any little inexpensive video card. And yes, I know - get a professional video card. 😄

    I'm just wildly curious now about this performance thing after seeing this video.

    https://www.youtube.com/watch?v=_6XYaFqq2mg
  • Thomas D.
    Christian Gruner wrote:
    [...]




    So: reinstalled Windows, got a BSOD while using CO 10.1.2, went back to 10.0.2 - no luck with 10.1.2.

    Some numbers:

    Images read from and saved to a SanDisk Extreme Pro 960 SSD.

    All images are from the DPReview noise/image-quality test, and the same images were used for every run.

    ISO 100/200, 100%, no sharpening, sRGB, 300 px/in.

    10.0.2
    X-T2, 12 images:
    CPU-jpeg: 31 sec
    CPU-tiff: 24 sec

    10.1.2
    X-T2, 12 images:
    GPU-jpeg: 13 sec
    GPU-tiff: 6 sec

    10.0.2
    D750, 12 images:
    CPU-jpeg: 22 sec
    CPU-tiff: 17 sec
    GPU-jpeg: 12 sec
    GPU-tiff: 5 sec

    10.0.2
    D810, 12 images:
    CPU-jpeg: 33 sec
    CPU-tiff: 25 sec
    GPU-jpeg: 17 sec
    GPU-tiff: 6 sec

    10.1.2
    D810, 12 images:
    CPU-jpeg: 32 sec


    10.0.2
    IQ3 100MP, 12 images:
    CPU-jpeg: 91 sec
    CPU-tiff: 72 sec
    GPU-jpeg: 45 sec
    GPU-tiff: 17 sec

    Import and creating cache files for 214 images (2560px, 1DX): 110 sec (1:50 min).
  • Thomas D.
    I've updated my last post with numbers for v10.1.2; it seems that my system is now stable. 😄
  • Chad Dahlquist
    Tom-D wrote:
    I've updated my last post with numbers for v10.1.2; it seems that my system is now stable. 😄



    cool 😊

    I'm curious about the interface: editing images, adjusting sliders, using the brush in layers, scrolling - all the things we do. Is there any place it still lags a touch? And how does moving from image to image feel - any lag, or other thoughts, even compared to your old system? 😊

    thanks 😊
  • Chad Dahlquist
    this has been a huge help 😊 my brain dump helps me decide 😊 hahahahahahaha

    a common camera across the new chips is the D810
    so going with that
    and yes, they are not the same images, but like CraigJohn mentioned, we all have different images all day long, so it's OK for the general idea 😊

    so my take so far since I am also on the hunt and switching from mac

    the dual-card thing? I think the Ryzen chipset is not doing a good job of handling dual cards;
    the Intel is doing a much better job
    Ryzen is new and needs time to mature, and it is not the Threadripper true workstation-style chip, so the chipset might reflect that?
    the X299 is more of a workstation-style chipset
    again, just rough ideas

    the Frontier cards are likewise new and need time to mature and get better - should be about 9.5 sec when they do

    modern GPUs for sure are quicker


    Christian 1800x dual mid level GPU x2
    CPU 45
    GPU 19

    GPU = 2.37x

    Christian 7900x dual mid level GPU x2
    CPU 20
    GPU 5.5

    GPU = 3.63x

    stephan ryzen 1700x old cpu
    CPU 42
    GPU 21
    GPU = 2x

    MadManAce ryzen 1700 (assume it's OC?) fastest current GPU
    CPU 41
    GPU 11.75

    GPU = 3.5x

    Tom-D 7820x new top level AMD GPU new card drivers need to mature
    CPU 33
    GPU 12
    GPU = 2.75x
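
    For what it's worth, the speedup factors in the list above check out arithmetically. A quick sketch (times and quoted factors copied from the list; the labels are mine):

    ```python
    # (label, CPU-jpeg sec, GPU-jpeg sec, factor quoted in the post)
    runs = [
        ("Christian 1800x", 45, 19, 2.37),
        ("Christian 7900x", 20, 5.5, 3.63),
        ("stephan 1700x", 42, 21, 2.0),
        ("MadManAce 1700", 41, 11.75, 3.5),
        ("Tom-D 7820x", 33, 12, 2.75),
    ]
    for label, cpu_s, gpu_s, quoted in runs:
        factor = cpu_s / gpu_s
        assert abs(factor - quoted) < 0.05, label  # each quoted factor matches
        print(f"{label}: GPU = {factor:.2f}x")
    ```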
  • Thomas D.
    Chad Dahlquist wrote:
    [...]

    Christian 1800x dual mid level GPU x2
    CPU 45
    GPU 19

    GPU = 2.37x

    Christian 7900x dual mid level GPU x2
    CPU 20
    GPU 5.5

    GPU = 3.63x

    stephan ryzen 1700x old cpu
    CPU 42
    GPU 21
    GPU = 2x

    MadManAce ryzen 1700 (assume it's OC?) fastest current GPU
    CPU 41
    GPU 11.75

    GPU = 3.5x

    Tom-D 7820x new top level AMD GPU new card drivers need to mature
    CPU 33
    GPU 12
    GPU = 2.75x


    We should use 100% output quality.

    MadManAce's output quality was 90%.

    At the beginning of the thread there is also a post with 80% output quality.
  • Thomas D.
    Regarding the feel during use and speed in general:

    I'm working right now on a birthday report and everything feels like realtime - that's a big upgrade from my OC'ed 3930K!
  • Christian Gruner
    Chad Dahlquist wrote:

    Christian 1800x dual mid level GPU x2
    CPU 45
    GPU 19

    GPU = 2.37x

    Christian 7900x dual mid level GPU x2
    CPU 20
    GPU 5.5


    The R9 Nano is far from being a mid-level card. It is one of their fastest cards currently, despite its small size.

    Chad Dahlquist wrote:

    I think the Ryzen chipset is not doing a good job of handling dual cards

    Not likely - it has more to do with the raw power of the CPU. With very fast GPUs, it can't deliver raw files to the GPU quickly enough.
    This is still the case with the i9-7900x, although it does indeed deliver them a lot quicker, which can be seen in the final processing time.
  • Chad Dahlquist
    fair enough 😊 did not say it was a bad card 😊 sorry if it came out poorly

    maybe I should say an older card then 😊 it is a two-year-old design?
    about 2/3 the power of today's top cards, thereabouts 😊



    still going to say something is off with the dual cards on Ryzen - it should have done better

    the 1080 Ti on the Ryzen did not bottleneck; it got about the same efficiency, or ratio, as the 7900x
    so it shows the Ryzen CPU can feed it just fine

    the 7820x is a faster CPU than the Ryzen, yet it had slower numbers than the Ryzen with the 1080 Ti?
    maybe because its GPU is so new and needs driver work, but it was slower

    we all know there is a fine relationship between hardware and $$

    would be nice to know how far up the GPU ladder the sweet spot is - that was more my thinking; again, not trying to argue or bash, etc.
    more interesting to see what they do 😊 this is fun, curious stuff 😊
    for that exact reason I bought the 1080, not the Ti - I wanted to stick to a budget and put the extra money into a larger NVMe instead. and sure, we all weigh where our money is best spent and whether we even need the latest 😊
    hoping now the 1080 Ti would have done me better 😊 but who knows

    I have a 7820x and an Nvidia 1080 coming in this week, so it will be fun to see what it does, and then to put my old R9 380 from my current Mac in it just to compare, since that is an older, lower-end card


    would be fun to see something like gaming-style benchmarks for each GPU and see how each one does
  • Robert Whetton
    AMD kicks Nvidia square in the nuts when it comes to computational power
  • Thomas D.
    Bobtographer wrote:
    AMD kicks Nvidia square in the nuts when it comes to computational power


    Yeah, and the 1080 Ti was tested with 90% quality while the Frontier Vega 10 was tested with 100%; I don't know how much less the GPU then has to compute.

    As far as I know, an R9 380 should be as fast as a 1080.
  • Chad Dahlquist
    Tom-D wrote:
    Regarding the feel during use and speed in general:

    I'm working right now on a birthday report and everything feels like realtime - that's a big upgrade from my OC'ed 3930K!

    that is great to hear 😊 can't wait to get my 7820x built

    wish I had the extra funds to build mine with a 1080 Ti, but the 1080 was the balance
    again, testing my R9 380 to compare will be fun also 😊

    question on your memory: did you go 8x8 sticks or 4x16? purely curious 😊

    I picked up
    G.Skill Trident Z RGB 32GB (4 x 8GB) DDR4-3200 memory, and figure I will fill it out when I get more funds

    this is what I ordered
    https://pcpartpicker.com/list/xrvpTH

    note my main monitor is a NEC PA 27
    I wanted the ultrawide as I do gaming on console (pretty hardcore Destiny guy) and wanted a widescreen for the second monitor, as my old HP 3065 finally kicked it 😊 so it is more for fun, a secondary bit of play and some windows to be on, not to work on 😊 hahahahaha
  • Thomas D.
    Hi,

    I'd go with 4x16.

    Your 3200 MHz RAM will be faster in CPU-bound situations.
  • Permanently deleted user
    Tom-D wrote:
    Bobtographer wrote:
    AMD kicks Nvidia square in the nuts when it comes to computational power


    Yeah, and the 1080 Ti was tested with 90% quality while the Frontier Vega 10 was tested with 100%; I don't know how much less the GPU then has to compute.

    As far as I know, an R9 380 should be as fast as a 1080.


    I'm not fully sound and scientific on the subject, but I'd think more compression means more processing.

    90% quality has more detail to discard and more averaging to do, and therefore would demand more computation than 100% quality.

    I don't know - maybe the JPEG algorithm is the opposite.
  • Chad Dahlquist
    Tom-D wrote:
    Hi,

    I'd go with 4x16.

    Your 3200 MHz RAM will be faster in CPU-bound situations.



    most of my work should fit in 32 for now 😊 but yeah, 4x16 would be nice - but again, my budget 😊 ahhh, budgets, I hate those things 😊

    sadly our aircon went down today 😊 YIKES - as I told my buddy Craig, way more than a 1080 Ti to fix 😊

    compared to my Mac Pro 5,1 (2010 quad 3.2) I am going to be grinning though 😊 and I figure in a year I will get a nice Bday or Xmas gift for the puter
  • thoroughly.exposed
    Included CPU and GPU utilisation

    Sony A7RII Exporting 12 RAW files
    CPU 45s (CPU 100% GPU 0%)
    GPU 24s (CPU 34% GPU 80%)

    Odd that the GPU is only at 80%. The images were being saved to the M.2 drive from a 5400rpm WD Green drive though, so I tested again, M.2 to M.2:

    GPU 14s (CPU 45% GPU 85%)

    Looks like I need a faster storage drive before upgrading my GPU

    Windows 10 Home 64-bit
    Intel Core i7 6700K @ 4.00GHz
    16.0GB Dual-Channel DDR4 @ 1069MHz (15-15-15-36)
    ASUS Z170-A (LGA1151)
    4095MB NVIDIA GeForce GTX 1070
    238GB NVMe SAMSUNG MZVPV256 M.2
  • MadManAce
    I think this test is good for getting a general idea - for example, if combo #1 takes 10 sec and combo #2 takes 110 sec, then it's safe to say combo #1 is likely a considerable upgrade. However, unless we all have the same images and export using the same settings, we are just guessing here. Maybe use a few cameras as a base - Fuji compressed raw, Canon 5D III or IV, Nikon D810, and Sony A7R - from the studio samples posted on http://www.imaging-resource.com/.
  • gnwooding
    I used the ISO 100 raw files from

    https://www.dpreview.com/reviews/image- ... 7640987305

    I felt using the 5D III, A7R II and 100MP Phase gives a nice spread of megapixels to test with.

    Specs:
    Intel i7 5820k at stock speeds
    Nvidia GTX 680
    32GB RAM Quad Channel
    m.2 NVME drive

    Done with 100% JPEG (I found this to be slower than 90%)

    5D mk III, 12 images
    CPU-jpeg - 25 sec
    GPU-jpeg - 15 sec

    A7R II, 12 images
    CPU-jpeg - 47 sec
    GPU-jpeg - 27 sec

    P1 XF 100MP, 12 images
    CPU-jpeg - 117 sec
    GPU-jpeg - 70 sec

    I am getting a 1080ti later this week so I will do additional tests then.
  • MadManAce
    gnwooding wrote:
    I used the ISO100 raw files from

    https://www.dpreview.com/reviews/image- ... 7640987305

    I downloaded the photos referenced by gnwooding (ISO 100, and made duplicates to get 12 copies of each).

    Specs:
    AMD Ryzen 7 1700 at 3.8GHz
    Nvidia GTX 1080 Ti
    32GB RAM Dual Channel
    NVME drive (1st Generation Intel 750 PCIe)

    100% JPEG (with CaptureOne 10.1)

    5D mk III, 12 images
    CPU-jpeg - 21 sec
    GPU-jpeg - 12 sec

    A7R II, 12 images
    CPU-jpeg - 42 sec
    GPU-jpeg - 21 sec

    P1 XF 100MP, 12 images
    CPU-jpeg - 92 sec
    GPU-jpeg - 48 sec


    All 36 images at once with CaptureOne 10.1
    CPU-jpeg - 154 sec (CPU all cores 84-100%, Avg 98%)
    GPU-jpeg - 80 sec (CPU all cores 20-55%, Avg 35%)


    Updated to 10.2 today and ran one test:

    All 36 images at once CaptureOne 10.2
    GPU-jpeg - 78 sec (CPU all cores 18-54%, Avg 32%)
  • craig stodola
    This is nuts.

    The 5820K with an old GTX 680 performs at a fairly similar level to the overclocked Ryzen 1700 with a 1080 Ti, which in turn performs at about the same clip as the new Intel 7820x + Vega 64 Frontier. I note this because the Cinebench R15 CPU scores vary quite a bit among them.

    Here are the Cinebench R15 scores I've found for all of these CPUs, rounded to the nearest 5 based on all the scores I've seen. I rounded because there are variances in scores with the same CPU on every individual run.

    5820K = 1060 MC, 140 SC

    1700 = 1400 MC, 140 SC
    1700 @ 3.9GHz = 1725 MC, 160 SC

    7820X = 1875 MC, 190 SC

    7800X = 1345 MC, 185 SC

    Because of the CineBench r15 scores, these numbers are of interest to me:

    Ryzen 1700 (3.8GHz OC) + 1080 ti ($290 + $750 = $1040 combo)

    5Dmk3 (21MP sensor)
    CPU = 21s
    GPU = 12s


    Intel i7 5820K, 32GB ram, GTX 680 ($390 + $230 = $620 combo)

    5Dmk3 (21MP sensor)
    CPU = 25s
    GPU = 15s


    Intel 7820x, Vega Frontier, NVMe SSD: ($575 + $1,000 = $1,575 combo)

    D750 (24MP sensor)
    CPU = 22s
    GPU = 12s


    I want to say that 5820K has to be overclocked...

    I'd love to see Tom D's new 7820x rig run the same 12-image test with those same dpreview.com 5DmkIII RAW images. I'm sure there are some unique processing differences between the 21MP 5DmkIII files and the 24MP D750 files. Hell, the 24MP crop-sensor RAF files of the Fuji cameras are nearly twice as large as the Nikon D750 NEF files.

    Now the question I'm begging to ask: would the 7800x with a GTX 1060 perform at a similar clip to the 7820x with the Frontier?

    I'm also interested to see how the $225 Ryzen 1600X would perform.

    Thanks for all the numbers, guys. This is just fascinating to me. 😊
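
    Craig's combos can also be put on a rough price-per-throughput footing. A sketch using the prices and 12-image GPU times quoted above (the cameras differ slightly between runs, so this is only indicative):

    ```python
    # (combo, total price in USD, GPU-jpeg seconds for 12 images), from the post above
    combos = [
        ("Ryzen 1700 OC + 1080 Ti", 1040, 12),
        ("5820K + GTX 680", 620, 15),
        ("7820X + Vega Frontier", 1575, 12),
    ]
    for name, price, secs in combos:
        rate = 12 / secs  # images exported per second
        print(f"{name}: {rate:.2f} img/s, ${price / rate:.0f} per img/s of throughput")
    ```

    By this crude metric the 5820K/GTX 680 combo is the cheapest per unit of export speed, which seems to be Craig's point.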
  • gnwooding
    So I have made the following changes to my PC:
    I upgraded my GTX 680 to an Asus Strix 1080 Ti
    I overclocked my i7 5820k to 4080 MHz (it could go higher, but I just did a quick auto-overclock in the BIOS)
    I upgraded my Windows to 10
    I updated Capture One to 10.2

    For interest's sake, my Samsung MZVLW256HEHP-000H1 NVMe drive has a sequential read of 3467 MB/s and a write of 1222 MB/s in CrystalDiskMark with a 1 GiB file size.

    I now get the following results:

    5D mk III, 12 images
    CPU-jpeg - 21 sec
    GPU-jpeg - 12 sec
    GPU-tiff - 5 sec

    A7R II, 12 images
    CPU-jpeg - 42.5 sec
    GPU-jpeg - 21 sec

    P1 XF 100MP, 12 images
    CPU-jpeg - 92 sec
    GPU-jpeg - 48 sec

    All 36 images at once
    CPU-jpeg - 154 sec
    GPU-jpeg - 78 sec
    GPU-tiff - 29 sec

    When using GPU acceleration, CPU usage is about 30% on average, I would guess; GPU usage is very spiky, between 0 and 75%.
    Not sure where the bottleneck is - the rather low GPU and CPU usage kind of points somewhere else. Looking at Resource Monitor, there does not appear to be excessive activity on the disks (reading the raw files from one SSD and writing the JPEGs to another seems to make no difference for me).

    I know these results are very similar to MadManAce's, but I ran them at least 3 times and got the same results each time.

    Here are screenshots of the usage

    https://www.dropbox.com/s/0kbzzk5bg5ko8 ... e.png?dl=0
    https://www.dropbox.com/s/4wd881h3rx0mr ... e.png?dl=0
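
    The bottleneck question above can be sanity-checked with back-of-the-envelope arithmetic. Assuming roughly 100 MB per IQ3 100MP raw file (my assumption, not a figure from the thread), the GPU run reads nowhere near the drive's quoted sequential speed:

    ```python
    files = 12        # P1 XF 100MP images in the run above
    file_mb = 100     # rough assumed raw size per file (assumption, not from the thread)
    export_s = 48     # GPU-jpeg time reported above
    needed = files * file_mb / export_s  # MB/s the export actually consumes
    available = 3467                     # sequential read quoted above, in MB/s
    print(f"~{needed:.0f} MB/s needed vs {available} MB/s available")
    ```

    At roughly 25 MB/s of input the NVMe drive is essentially idling, so the spiky, well-below-100% CPU and GPU usage does indeed point at decode or scheduling rather than storage.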
