
Workstation recommendation please


112 comments

  • Martin Peterdamm
    It would be really nice to have a dedicated C1 benchmark for all of us. Especially with OpenCL, a small investment in a GPU can make a huge difference.
    In my experience with several machines/configurations, you gain the most from the latest gaming GPUs. A 3-year-old 4-core i5 with a GTX 970 is nearly as fast as the fastest Mac Pro.
  • BeO
    Thanks Robbenflosse, good hint. It brings me back to what I was afraid of: making a buying decision based on GPU power and memory, with the risk of having to switch OpenCL off and falling back on a potentially lousy CPU.

    I already had my current mobile workstation before I acquired a C1 8 license. This time I want to know how C1 performs before buying new hardware. This makes it very apparent (to me) that there is not much help from P1 in this respect, or I just haven't found it yet.

    Cheers
    BeO
  • Alain Decamps
    Hi

    I have an older Intel i5 2500K + AMD 7880 GPU, and while doing local adjustments (i.e. using the brush) the CPU is at 100% and the GPU at less than 50%. The CPU is clearly using all 4 cores.

    I'm thinking of upgrading, and it would be either a consumer Intel i7 6700K or an enthusiast Intel i7 5620K (both have enough PCIe 3.0 lanes to add a fast PCIe 3.0 x4 SSD). The 6700K has far higher single-thread performance (useful for almost all software) and the 5620K has 6 cores, but is not as fast per core. Just for C1 I suppose the 5620K is a bit faster.

    For workstations (Xeons) C1 will probably use all cores.
  • Fernando Javier Giménez Cepero
    Hi.
    This is my experience.
    I have an i7 920 @ 2.67 GHz stock and 18 GB of RAM, with two Asus R9 280X cards without CrossFire in the software options (it gave me instability with Capture One 8 and 9.0).
    In the log I have 0.081185 and 0.081334 for the two GPUs. One is at 1000 MHz clock and 1500 MHz memory, the other at 1050 MHz clock and 1600 MHz.
    Exporting from an SSD 850 EVO to an 840 EVO, with the catalog on the 850, I get CPU activity between 80-100% (average 85) and the GPUs between 20 and 65% (average 30).
    Exporting 111 14-bit lossless compressed NEFs from a D800 (approx. 45 MB each) to JPEG at 300 ppi, resized to 30 cm on the widest side, takes 92 seconds.
    I am planning to temporarily disable hyperthreading in the BIOS, as Alain suggested, to perform the same batch export and see if it changes the benchmark in the log. My RAM is DDR3 1333 MHz running at 1066 MHz (3 sticks of 2 GB, 3 of 4 GB).
    It would be good to build a database of workstations based on the benchmark.
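    As a quick sanity check, the export numbers above can be turned into per-image and throughput figures. A minimal sketch (the 111 images / 92 seconds / ~45 MB figures come from the post; the helper itself is just illustrative):

```python
# Rough throughput figures for the export run described above:
# 111 D800 NEFs (~45 MB each) exported in 92 seconds.

def export_stats(num_images: int, seconds: float, mb_per_image: float = 45.0):
    """Return (seconds per image, images per minute, MB/s read)."""
    per_image = seconds / num_images
    per_minute = 60.0 / per_image
    mb_per_sec = num_images * mb_per_image / seconds
    return per_image, per_minute, mb_per_sec

per_image, per_minute, mb_per_sec = export_stats(111, 92)
print(f"{per_image:.2f} s/image, {per_minute:.1f} images/min, ~{mb_per_sec:.0f} MB/s read")
```

    That works out to roughly 0.83 s per image, and the source SSD only needs to sustain about 54 MB/s, so the drives are unlikely to be the bottleneck here.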
  • Robert Whetton
    Intel 2500K OC'd to 4.3 GHz, 32 GB RAM, with an AMD R9 390 8GB GPU

    With CL on, 100 7D2 RAWs, average ISO 6400 = 128 seconds
    With CL off, 100 7D RAWs, average ISO 6400 = 250 seconds

    Coming off a single SanDisk 240GB SSD, going onto dual Samsung 240GB EVO 840s in RAID 0

    It would be good to get a library of RAW files we could pull from, to test the same files on different computers.

    But it looks like the deciding factor these days is how fast your GPU is!
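    For what it's worth, those two timings imply roughly a 2x gain from OpenCL. A trivial check (numbers taken from the timings above):

```python
# Speedup implied by the timings above: 100 RAWs in 128 s with
# OpenCL on vs 250 s with it off.

def speedup(t_off: float, t_on: float) -> float:
    """Ratio of processing time without OpenCL to with OpenCL."""
    return t_off / t_on

print(f"OpenCL speedup: {speedup(250.0, 128.0):.2f}x")
```

    About 1.95x, i.e. the GPU roughly halves the batch time on this machine.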
  • Fernando Javier Giménez Cepero
    Hi!
    Thanks very much!

    Could you post the benchmark that appears in C:\Users\(user)\AppData\CaptureOne\Logs\...? I think the file is "ImgCore". You have to show hidden folders to access the AppData folder...
  • Robert Whetton
    C:\Users\***\AppData\Local\CaptureOne\Logs\ImgCore.log

    OpenCL : found platform AMD Accelerated Parallel Processing, OpenCL Version : OpenCL 2.0 AMD-APP (1912.5)
    OpenCL Device : Hawaii
    OpenCL Driver Version : 1912.5 (VM)
    OpenCL Compute Units : 40
    OpenCL : Loading kernels
    OpenCL : Loading kernels finished
    OpenCL : Benchmarking
    OpenCL : Initialization completed
    OpenCL benchMark : 0.068000
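    Since several people are comparing these scores, here is a minimal sketch that pulls the benchmark value out of an ImgCore.log, assuming the "OpenCL benchMark : 0.068000" line format shown in the excerpts in this thread (the exact log path varies per install):

```python
# Extract OpenCL benchmark scores from a Capture One ImgCore.log.
# The line format is assumed from the log excerpts posted in this
# thread; lower scores are faster.
import re

BENCH_RE = re.compile(r"OpenCL benchMark\s*:\s*([0-9.]+)")

def opencl_benchmarks(log_text: str) -> list:
    """Return every benchmark score found in the log text."""
    return [float(score) for score in BENCH_RE.findall(log_text)]

sample = "OpenCL : Initialization completed\nOpenCL benchMark : 0.068000\n"
print(opencl_benchmarks(sample))  # [0.068]
```

    A log can contain several of these lines (one per startup), so the function returns a list rather than a single value.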
  • Fernando Javier Giménez Cepero
    Thanks!! 😄

    Thumbs up!
  • Alain Decamps
    Hi

    The log is below. But I've noticed that my GPU is not used "much" while making a selection with a brush.

    Logging is now active.
    CPU: GenuineIntel [Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz]
    CPU features: MMX, SSE, SSE2, SSE3, SSSE3, SSE41, SSE42, AVX, CX8, RDTSCP, POPCNT
    OpenCL initialization...
    First chance exception (thread 712): 0x000006C6 - RPC error (see WinError.h)
    OpenCL : found platform AMD Accelerated Parallel Processing, OpenCL Version : OpenCL 2.0 AMD-APP (1912.5)
    OpenCL Device : Pitcairn
    OpenCL Driver Version : 1912.5 (VM)
    OpenCL Compute Units : 20
    OpenCL : Loading kernels
    OpenCL : Loading kernels finished
    OpenCL : Benchmarking
    OpenCL : Initialization completed
    OpenCL benchMark : 0.145777
  • BeO
    Hi

    I narrowed down my selection to a workstation as described here:

    viewtopic.php?f=61&t=21628&p=103179&sid=cdb92909a7d25c52af47edb284c26b3d#p103179

    Any thoughts?

    Thanks and cheers
    BeO
  • Alain Decamps
    [quote="BeO"]I narrowed down my selection to a workstation as described here: viewtopic.php?f=61&t=21628&p=103179&sid=cdb92909a7d25c52af47edb284c26b3d#p103179. Any thoughts?[/quote]

    Hi

    The Xeon is very "close" to the Core i7-5930K; it does have the option of using ECC RAM.

    When all cores are used I expect it to be 20-25% or more faster than the i7 6700K. But for single-core loads I expect the i7 6700K to be 20-25% faster.

    Without tests, not an easy choice.
  • BeO
    Thanks for replying.

    I don't know whether or not Intel Core i7 features like SSE4.2, which are not listed for the Xeons, are being used. Otherwise they look quite close; ECC RAM might be a little slower.
    You're right, without tests/benchmarks it's not easy to say.

    I assume you think both processors are quite acceptable...

    cheers
    BeO
  • Alain Decamps
    [quote="BeO"]I assume you think both processors are quite acceptable...[/quote]

    Yes. To get even better performance you'll need to go for specific workstation gear (dual processors or very expensive Xeons).

    For me it's the uncertainty about the extra performance gain from hyperthreading while doing local adjustments that makes things complicated. It could be anywhere from almost zero up to about 30%.

    That makes the difference, for me, between a 50-70% speed increase and a 100-120% speed increase. I doubt I'll go through the trouble for just 50%.
  • Christian Gruner
    Actually, Xeons are not really great performers, and they are way too expensive per performance unit in CO. In addition, a multi-CPU setup requires multiple cooling solutions, making them even more expensive.

    Instead, look into the i7s running on the LGA2011-v3 socket. More RAM capability, more cache, and more importantly up to 8 physical cores on the chip. They are indeed a bit more expensive, but worth it, given the higher number of cores.
  • Alain Decamps
    [quote="Christian Gruner"]Actually, Xeons are not really great performers... Instead, look into the i7's running on LGA2011-v3 socket.[/quote]

    I agree that the top Xeons are expensive and not by default good for CO. A dual-processor setup will cost quite some money, but the cost is relative for someone using a top Phase One camera.

    BTW, the Intel Xeon E5-1650 v3 (3.50 GHz, 15 MB, 2133, 6 cores) is very much like the i7-5930K, a six-core LGA2011-v3 i7.
  • BeO
    [quote="Christian Gruner"]Actually, Xeons are not really great performers... Instead, look into the i7's running on LGA2011-v3 socket.[/quote]

    Hi Christian,

    What is your opinion that i7s are better than Xeons based on?
    As Alain mentioned, "the Intel Xeon E5-1650v3 3.50GHz 15MB 2133 6Core CPU is very much like the i7-5930K a six core 2011-3 i7."

    Thanks,
    BeO
  • Christian Gruner
    http://ark.intel.com/compare/82931,88195,82765

    Lack of SSE instructions is one obvious reason. And since they cost almost the same, I would grab the 6-core i7 any day, as the market looks right now.
  • BeO
    Thanks Christian,

    At a similar price tag I would prefer a Xeon over an i7 e.g. for ECC memory, and a workstation over a desktop computer.

    However, you raised an important point: SSE instructions, which seem to be used by OpenCL. And I assume OpenCL code would be executed by the gfx card as well as the CPU, and disabling hardware acceleration in the preferences only applies to the gfx card?

    I've done some internet research recently, and at first sight it seemed to me that AVX is the successor of SSE 4.2, so I concluded that even if Intel does not list SSE for the Xeon processors (for whatever reason, maybe a slightly different documentation notation for Xeons, I thought), AVX would be backwards compatible with SSE.
    e.g.
    https://en.wikipedia.org/wiki/Advanced_ ... Extensions

    However, looking at the Intel instruction set, it seems that not all SSE instructions are implemented in AVX, or they might have a different name?
    https://software.intel.com/sites/landin ... SE4_2,AVX2
    (e.g. _mm_cmpestra)

    But I am not sure how to read it properly; it might be that every checkbox (e.g. SSE3) only shows the instructions added on top of its predecessor (e.g. SSE2).

    There is another page saying that Xeon (server) processors like the E5-1650 v3 (Haswell) support SSE.
    https://en.wikipedia.org/wiki/Haswell_% ... processors

    Hence, can you please answer the following questions (or should I open a support call? I would share the answer then):
    - Does C1 use the CPU's SSE instructions, and which version?
    - Does AVX instruction set contain all SSE instructions?
    - Does a Xeon processor (Haswell) which lists AVX2 but not explicitly SSE render and process images slower than a comparable i7 processor due to unsupported SSE instructions?
    - Or due to any other difference?
    - Or does a Xeon have a similar performance?

    - Is OpenCL used for the CPU if hardware acceleration in the preferences is set to "Auto"?
    - What is used if set to "Never"?

    Thank you Christian
    and cheers
    BeO

    Edit: btw, SSE instructions can be migrated to AVX instructions, which might perform better; I don't know if this applies to C1 though...
    https://software.intel.com/en-us/blogs/ ... ywords%3A*
  • Robert Whetton
    So the new Crimson drivers for Radeon cards see a marked improvement 😊
    2016-01-13 15:27:49.489> CPU: GenuineIntel [Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz]
    2016-01-13 15:27:49.490> CPU features: MMX, SSE, SSE2, SSE3, SSSE3, SSE41, SSE42, AVX, CX8, RDTSCP, POPCNT
    2016-01-13 15:27:50.001> OpenCL initialization...
    2016-01-13 15:27:50.093> OpenCL : found platform AMD Accelerated Parallel Processing, OpenCL Version : OpenCL 2.0 AMD-APP (1912.5)
    2016-01-13 15:27:50.118> OpenCL Device : Hawaii
    2016-01-13 15:27:50.118> OpenCL Driver Version : 1912.5 (VM)
    2016-01-13 15:27:50.118> OpenCL Compute Units : 40
    2016-01-13 15:27:50.441> OpenCL : Loading kernels
    2016-01-13 15:27:50.875> OpenCL : Loading kernels finished
    2016-01-13 15:27:50.875> OpenCL : Benchmarking
    2016-01-13 15:27:50.980> OpenCL : Initialization completed
    2016-01-13 15:27:50.980> OpenCL benchMark : 0.066074
  • Alain Decamps
    [quote="BeO"]At a similar price tag I would prefer a Xeon over an i7, e.g. for ECC memory... Does a Xeon processor (Haswell) which lists AVX2 but not explicitly SSE render and process images slower than a comparable i7 processor due to unsupported SSE instructions?[/quote]


    According to
    https://en.wikipedia.org/wiki/List_of_I ... erformance

    SSE is supported. I wouldn't be surprised if they use the same dies as the i7 Haswell-EP CPUs.

    As far as I know GPUs don't use ECC memory, so for CO work it probably wouldn't make a difference. I wouldn't pay extra for ECC in a CO box (databases or NAS storage are another story).
  • BeO
    Christian,

    what is your evidence when comparing similar processors (Xeon vs. Intel Core)? Any concrete test results? Any information from development? Could you please comment on my questions above?

    Thanks in advance
    BeO
  • Christian Gruner
    [quote="Alain"]- Does C1 use the CPU's SSE instructions, and which version?[/quote]

    Yes, in the latest version available to the software on a given configuration.

    [quote="Alain"]- Does the AVX instruction set contain all SSE instructions?[/quote]

    It contains equivalents; more here: https://software.intel.com/sites/defaul ... Bfinal.pdf

    [quote="BeO"]- Does a Xeon processor (Haswell) which lists AVX2 but not explicitly SSE render and process images slower than a comparable i7 processor due to unsupported SSE instructions?[/quote]

    No, as AVX2 is preferred (see the Intel paper above), but this will likely not be the main reason for any performance difference.

    [quote="BeO"]- Is OpenCL used for the CPU if hardware acceleration in the preferences is set to "Auto"?[/quote]

    The right answer is "it depends", hence calling it Auto and not "Enabled". The graphics adapter has to have 1 GB of RAM, and preferably more than 2 GB, and the GPU itself has to be fast enough (benchmark over ~2.0, if my memory serves me right).

    [quote="BeO"]- What is used if set to "Never"?[/quote]

    The CPU, for everything.


    Also note that CPUs aren't optimal for image processing; image processing is very much GPU domain. But as mentioned in another thread, everything has to balance so there is no bottleneck somewhere along the pipeline.

    Also see this benchmark page: https://compubench.com/result.jsp?bench ... ase=device

    Try enabling/disabling dGPU vs CPU. The difference is massive.

    All this said, it does seem like the Xeons are getting faster per buck spent compared to the i7 series, so it might not be long before we see similar performance. We might even be there with some of the newer-series Xeons, if the rest of the system plays along nicely.
  • Robert Whetton
    [quote="Christian Gruner"]...and the GPU itself has to be fast enough (benchmark over ~2.0, if my memory serves my right)[/quote]

    Under 2.0?
  • BeO
    Thanks Christian.

    So, does that mean that (because both Xeons and i7s have AVX registers) either both are affected by the performance penalties outlined in the Intel paper, or both are not (e.g. due to the recommended optimizations)?

    In essence, if there are any performance differences between comparable Xeon and i7 processors, it is not due to Intel not listing the SSE4.2 instruction set on their Xeon datasheets, correct?

    I also read you as saying that in the past performance differences were observed (Xeons slower), but you cannot say this for sure for current processor generations?

    Performance per buck spent is a relative figure, not necessarily good for comparing absolute performance (and not very practical, as I usually buy a complete system; relevant for the bucks spent are also different memory, maybe different motherboards, different vendors, different rebates etc.).

    Somebody from P1 mentioned in the forum (I think it was Lionel) that there are also CPU benchmarks available; do you consider logging them in the log files like you do for the graphics card in ImgCore.log?

    Once you officially publish benchmark figures for CPUs or systems (which I hope you'll do), please consider publishing absolute benchmark figures, not (or not only) benchmark figures per dollar spent.


    Hardware acceleration settings
    My question about the hardware acceleration settings in the preferences was not related to the graphics card (I know it controls the use of the GPU, if the GPU is supported/capable enough).
    I want to know whether the two settings also control how images are rendered/processed by the CPU, e.g. if set to Never, is OpenCL code execution on the CPU disabled and are images rendered by some different code? That's a bit unclear, as the preference is not labelled "graphics card acceleration" but "hardware acceleration", and OpenCL code can be executed by both the GPU and the CPU.

    In other words, if I disable hardware acceleration, e.g. because of weird effects in image rendering due to a bad OpenCL implementation in the graphics card or its driver, does this setting also disable hardware acceleration on the CPU...

    And yes, I know the graphics card is faster than the CPU, but I also know that in case of weird effects graphics card acceleration needs to be switched off if the problem cannot be solved otherwise, and this happens to C1 users here in the forum; in that case the CPU bears all the load. Thus, CPU performance matters.

    Thanks and have a nice weekend
    BeO
  • Christian Gruner
    [quote="BeO"]Somebody from P1 mentioned in the forum... there are also CPU benchmarks available, do you consider logging them in the log files...? ... if set to Never, is OpenCL code execution on the CPU disabled and images are rendered by some different code?[/quote]



    While I understand and share your interest in all the nitty-gritty details, I think you are better off looking at external benchmarks like https://compubench.com/result.jsp?bench ... ase=device (video composition benchmarks).

    Of course there is an error margin on these numbers, but considering that the rest of the system also affects performance, these errors occur in the real world too. In short, theory is one thing, but in practice you will be a bit off to either side.

    You also said you had problems with OpenCL? What graphics card are you running, and with how much RAM?

    Regarding publishing benchmarks from Capture One, we are still considering this, so currently I cannot comment on when or even if this will happen. Especially given that the benchmarks in the link above seem to be fairly spot on.

    Regarding different rendering on CPU vs GPU: yes, they are different, as two different programming languages are used. However, we run extensive checks to ensure they remain identical in output.
  • BeO
    Thanks Christian.

    [quote="Christian Gruner"]You also said you had problems with OpenCL? What graphics card are you running and with how much ram?[/quote]

    NVIDIA Quadro FX 880M, 1 GB, C1 OpenCL benchmark around 3.0.
    I don't have problems, it is just not used, and I know it is probably both too slow (3.0) and underequipped with VRAM.


    [quote="Christian Gruner"]Regarding different rendering on CPU vs GPU, then yes they are different, as 2 different programming languages are used. However, we run extensive checks to ensure they remain identical in output.[/quote]

    It is still not clear to me from your answer. Could you please be more precise about when this other language is used, and what effect the preferences setting has on code execution on the CPU?

    - If the gfx card is supported (VRAM and benchmark) and the setting is AUTO, then the GPUs are used by some OpenCL code, right?
    - If hardware acceleration is set to NEVER, then the CPU is used instead of the GPU, right? Which code, OpenCL or another language?
    - Is OpenCL code executed on the CPU at all? If not, why not? OpenCL can be executed on a CPU, and quite efficiently I think...

    Graphics card
    If external benchmarks represent C1 performance sufficiently, what's the point of having an internal benchmark?

    What should I take as a decision criterion, e.g. Mac vs. PC, or Xeon vs. Intel Core, if performance matters and you are saying that Xeons are not great C1 performers?

    Imho, it really is time for Phase One to work on and publish decent hardware/OS recommendations, especially regarding performance,
    and to publish an overview of gfx card benchmark figures, assuming the C1-internal benchmark really is as representative as I thought.
    If it is not, C1 should switch OFF the automatic disabling of the gfx card based on that internal benchmark...

    Thanks,
    BeO
  • SFA
    BeO,

    You are asking Phase to come up with some numbers for an infinitely variable environment over which they have little control given that all of the physical components and most of the code that will be used in the "system" will not be under their control.

    Now you may well be a reasonable person who would use the information a supplier might be able to provide in a sensible and logical way, with the full understanding that some things may change over time as other parts of the system's environment are updated: new drivers, new OS updates, the effects of changes to other applications which may not always be predictable, "enhancements" of development tools that may interfere with existing application code, and so on.

    However there will be many people who will read the numbers, reject the ideas of any nuance of interpretation and hit the internet with opinions that are misguided.

    It is in the nature of the internet that such opinions, or at least the ones that are repeated and referenced most often tend to be negative, often unreasonably so.

    Taking some sort of processor and defining a standard test to compare its performance with other processors designed to undertake similar tasks is relatively simple as a concept. Making that same test cover hardware systems that also influence the results, and multiple processors working together to deliver the final result from highly variable inputs, is a different matter.

    I guess, given enough resources, it could be done and kept up to date week by week as new products become available.

    Double up the resources to be able to support OS X and Windows (and, probably, 2 or 3 versions of each of them). Double again if ever considering Linux support. All to get some numbers that are meaningful to buyers for a short lifecycle during hardware product development. And that assumes the numbers are indeed meaningful at all for practical photo creation.

    The simpler approach is to make an assumption that product development for Photo editors (as with games) will seek to make use of as much computing resource as is made available and so the more resource you have the better the performance is likely to be. Of course there may always be some weak link in an otherwise powerful system but the general principle is "buy the best you can afford".

    If camera RAW file sizes keep growing (and we keep buying the cameras) then this development race will be perpetual for the foreseeable future so future proofing in so far as we can predict what our needs will be is an important part of the decision making process.

    Personally I see no point in getting hung up on OpenCL.

    My mobile workstation has a low-end Quadro GPU. Not much processing power, but it does seem to have 2 GB of memory. Until I installed V9 it was always rejected. The only use I ever saw was an occasional blip on the monitor display as it maybe processed something from Office 365 or somewhere.

    Now it is used by Capture One, according to the monitoring processes, although I'm not sure to what effect, and I am not entirely convinced that turning OpenCL on is any faster than using the CPU on this particular machine. As yet I have not tried a direct comparison. There may be some benefits for output processing, subject to the edit activities applied to different images in the output batch. However, I am not working with any of the cameras that produce such large files these days, so I imagine I am less likely to notice a difference.

    In summary, the question of benchmarking has so many variables, including the content you intend to process during the years you expect to use a new machine, that any question about recommendations is like asking "How long is a piece of string?" That is a question I would not wish to answer with a definitive statement.

    Just my (simplistic) opinion of course.



    Grant
  • BeO
    Hi Grant,

    [quote="SFA"]You are asking Phase to come up with some numbers for an infinitely variable environment over which they have little control...[/quote]

    Yes. I am sure P1 has their test systems, and these test systems are probably not otherworldly but somewhere near what other users might have, especially the Macs, as I think there are not so many configuration options for Macs.

    I think it is reasonable to develop some meaningful benchmarks and publish them alongside a description of the benchmark and of the specific test systems used (of course not every possible environment),
    including a disclaimer that such test results are only an indicator, with no guarantees...


    This kind of benchmark publication is not unusual in the IT industry.


    [quote="SFA"]...but the general principle is "buy the best you can afford".[/quote]


    That's my point. What is the best (or among the best)? What are C1's specific needs? If there is an indication that a Xeon system performs at only 50% of an i7 system, then I want to know that (as an example).

    Similarly, I might be satisfied with a medium performer (to save some money), but which system is a medium performer?

    I really don't want to rely on a gfx card, as the official recommendation is to switch the gfx card (OpenCL) OFF in case of any problems. I am hopeful that my gfx card (in my future system) will work, but I want some indication that the other 2000+ bucks (my future system) is not a "loser" in C1 respect.

    And yes, even in the scenario you mentioned with doubled headcount: if I had to pay double for a better-performing C1 and knew a few system benchmarks which I could use to choose or tailor my system to my needs, I could probably save a lot of money on the system, so a higher license price would be a win-win situation...

    (e.g. pay 200 bucks instead of 100 for the v9 upgrade, save 1000 bucks on a new system)

    Cheers
    BeO
  • NNN635397352773288520
    I'm also on the lookout for a new computer and I found this: https://www.mm-vision.dk/Vision-W530-Workstation. It has the option to use a Samsung 950 Pro SSD, a very fast SSD, thanks to the new motherboard from Asus.
    I'm also getting an Nvidia Quadro video card. It's a bit overkill, but it's my future, because I study photography and I want something that's made for working...
  • Christian Gruner
    [quote="NNN635397352773288520"]Im also getting a nvidia quadro videocard. Its abit overkill...[/quote]

    Quadros are usually a waste of money when it comes to CO, as performance per buck is miserable on these cards (in CO terms).
    Buy an AMD gamer card instead. Basically, go for as many stream processors and the highest memory bandwidth available within your budget. It is that easy to choose 😉
