
[Windows 10 2004] C1 introduces new bugs in windows management



  • FirstName LastName

    Dear all, I have good news! I can finally use the NVIDIA GPU to its full potential with CaptureOne.

    First, make sure that in the NVIDIA Control Panel CaptureOne is set to use the NVIDIA card and not the integrated GPU. But as you know, even with this setting CaptureOne does not use the GPU.

    But just now, while checking some settings in Windows (Start > Settings > System > Display), I noticed a Graphics Settings link at the bottom of the Display page (System > Display > Graphics Settings). The window it opens shows the Hardware-accelerated GPU Scheduling option. I ran several tests, and for my hardware setup it is better to have GPU scheduling OFF.

    In the same window, at the bottom of the page, you can set other parameters, such as choosing which applications a particular graphics card should handle. In my case I chose CaptureOne and selected the high-performance option (the discrete NVIDIA card).

    Here is what I did:

    1. Make sure that in the NVIDIA Control Panel CaptureOne is set to use the NVIDIA card.
    2. Go to Start > Settings > System > Display > Graphics Settings, add CaptureOne as an application managed by the graphics card, and set it to high performance. You can also try turning on Hardware-accelerated GPU Scheduling.
    3. Delete all the files in C:\Users\xxxxxx\AppData\Local\CaptureOne\ImageCore\ so that CaptureOne rebuilds its kernels.
    4. Reboot your PC.
    5. Run CaptureOne and watch the Performance tab of the Windows Task Manager. During the kernel rebuild you will see CPU and GPU load; in my case this took a couple of minutes.
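Step 3 of the recipe above (emptying the ImageCore cache so the kernels get rebuilt) can be sketched in a few lines of Python. This is only an illustration, not an official tool: the function name is mine, and it is demonstrated on a throwaway temp directory standing in for the real ImageCore folder.

```python
import os
import shutil
import tempfile

def clear_image_core_cache(cache_dir):
    """Delete everything inside cache_dir (keeping the folder itself),
    so the application rebuilds its kernel cache on next launch."""
    removed = []
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isdir(path):
            shutil.rmtree(path)   # remove nested folders recursively
        else:
            os.remove(path)       # remove plain files like ICOCL.bin
        removed.append(name)
    return removed

# Demo on a throwaway directory standing in for
# C:\Users\<you>\AppData\Local\CaptureOne\ImageCore\
demo = tempfile.mkdtemp()
open(os.path.join(demo, "ICOCL.bin"), "w").close()
open(os.path.join(demo, "ICOCL1.bin"), "w").close()
print(sorted(clear_image_core_cache(demo)))  # ['ICOCL.bin', 'ICOCL1.bin']
print(os.listdir(demo))                      # []
```

Close CaptureOne before clearing the real folder, and reboot afterwards as in step 4.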

    This is the first time I have seen NVIDIA GPU load in the Task Manager during a kernel rebuild (see the picture below).


    Now, if I edit a raw file, apply some filters, zoom to 100% and quickly pan the picture around the screen, I can see the hardware working. This is the first time I have seen the NVIDIA GPU working with CaptureOne (see the picture).


    If I check the ImgCore.log file in C:\Users\xxxxxx\AppData\Local\CaptureOne\Logs, NVIDIA is now device 0 and passes all the checks without the error reported previously: (ERROR) bin file failed parse [C:\Users\xxxxxx\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin] (verificationCode=2)


    Please try these settings... For me, point #2 solved the problem, since it is the only new setting I enabled compared with my previous tests.

    Hope this helps and I hope to read your feedback soon.

  • SFA

    My system reads the embedded Intel HD 4000 processor as device 0 and the Nvidia GPU as device 1.

    For some time the HD4000 (which will never be used anyway due to its lack of performance, but which is still checked because it has OpenCL capability) has been very slow during assessment and then fails to build a kernel, for reasons I think are related to Windows (Win7 in my case). This also prevents the Nvidia card assessment from completing, as the two run in parallel.

    However closing and re-opening C1 will re-run the check.

    At that point C1 will discover that a previous run for the HD4000 failed so it will skip that, run the Nvidia assessment and all will be well.

    Note that the performance assessment rating you mentioned previously is better when the number is lower. From memory, anything over 3 is discounted. My ancient mobile Nvidia card usually rates around 2.2, so it just makes the cut.
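SFA's selection rule (lower benchmark score is better, anything over roughly 3 is discounted) can be sketched like this. The 3.0 cutoff and the sample scores come from the description above; the function name and the exact comparison are my assumptions, not Capture One's actual implementation.

```python
# Hypothetical sketch of the assessment rule described above:
# a lower benchmark score is better, and anything over ~3.0 is discounted.
CUTOFF = 3.0

def usable_devices(scores, cutoff=CUTOFF):
    """Return the device names whose benchmark score makes the cut,
    best (lowest) score first."""
    passing = {name: s for name, s in scores.items() if s <= cutoff}
    return sorted(passing, key=passing.get)

# Sample scores loosely matching SFA's description: the slow HD 4000
# is discounted, the mobile Nvidia card at 2.2 just makes the cut.
scores = {"Intel HD 4000": 4.1, "NVIDIA mobile": 2.2}
print(usable_devices(scores))  # ['NVIDIA mobile']
```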

    In the past few days, for reasons unknown, the assessment process has been failing, and at one point it hung completely. However, a couple of reboots during my investigation of the problem (basically seeing what happened if the HD4000 was disabled) seem to have fixed the immediate problem, at least for today. I will keep checking how things progress over the next few days.

    I hope this information is useful in some way.

  • Dave R

    Hi FirstName LastName

    Thanks for that guidance. I had never spotted those advanced graphics controls, I suppose because until I bought my current laptop I always had a desktop as my main machine. I should have delved further into all the battery-saving settings, as I keep the machine permanently plugged into the mains and don’t need them.

    Anyway, your suggestions worked, and Capture One is now using both the Intel and the Nvidia graphics fairly intensively.


  • FirstName LastName

    I’m happy to hear that my recipe solved this issue. This setting (Hardware-accelerated GPU Scheduling) was new to me as well, but it was worth a try. I hope other C1 users can try it and solve their problems too.

  • FirstName LastName

    Has anybody out there tried my recipe?

  • Ewout Vochteloo

    Turning on the Windows GPU scheduling resulted in a 28% faster export of TIFFs. I noticed how little C1 uses the GPU (NVIDIA GeForce RTX 3080), so was looking for ways to improve this. After turning scheduling on and rebooting, I immediately saw a lot more GPU activity when exporting. Deleting the ImageCore files and rebooting resulted in poor performance on the first export after that, but the second was fast again.

    GeForce Experience does not find C1 when scanning for apps; it only finds Affinity Photo. So no optimization options for me there.

  • SFA

    My NVIDIA control software allows me to specify programs that should always use the GPU.

    That may be more reliable in some situations than Windows, or even than some of the "smart" "AI" system-management tools that seem to be getting popular in Windows 10 these days.

    The laptop I acquired about 6 months ago seems to have no problem assessing its internal GPUs (the Intel integrated graphics and the NVIDIA hardware) and making use of both. It seems to spread the GPU-directed load quite equally between them in terms of utilization percentages. How that compares in terms of work done, I don't know.

    My older system had, at some points over a period of about 2 years, problems creating kernels for the Intel GPU (identified as Device 0) and would fail before the NVIDIA card's build (Device 1) had completed, meaning no GPU was active. On the next assessment the Intel device would usually be ignored as "known bad", and the NVIDIA build would complete and become active. That was on Windows 7, where it was not as easy to see what was and was not working as it is on Win 10.

    That experience seems somewhat similar to your observation. It might be confirmed by the contents of the log file(s)?

  • FirstName LastName

    Now I'm experiencing the problem again: the GPU (NVIDIA 1660 Ti) is not being used at all.

    This is the error message in the log file: bin file failed parse [C:\Users\xxx\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin]

    I've installed the latest NVIDIA driver, but nothing changed. I tried the recipe I suggested a few months ago, but this time it failed.


  • Charles O'Hara

    @ SFA & FirstName LastName

    In my experience, that setting in the Nvidia control panel is useless. Even when it is set to "force" Capture One to use OpenCL through a GTX card, Capture One still decides on its own to prefer the onboard GPU and sends most of its rendering pipeline there (which is very sluggish, of course).

    The only way I have found to make CO properly use my discrete GPU is to disable the onboard GPU in the Windows Device Manager.

  • Charles O'Hara

    Update: I tried updating to Nvidia 471.11 and OpenCL doesn't compile at all, so that's what you might be experiencing FirstName.

    I reverted to 466.77 and now GPU acceleration is working again. Not sure if this is an Nvidia problem or a Capture One problem.

  • SFA


    Bear in mind that other applications are likely using the devices for screen redraws and such, and they will probably NOT be directed to the Nvidia card.

    Also, I think you are looking at a percentage of available capacity, not an absolute measurement of power.

    My system seems to direct the Nvidia card to several applications, but the graphs are usually very similar when running a large batch of output files. While the current Intel CPU offers a LOT more power than my previous system (on which C1 never used the Intel GPU at all), the system overall, across all running applications, seems to try to balance the percentage use across its "facilities", while presumably monitoring CPU throughput as well and only going to the GPU when there may be some benefit in doing so.

    All of that may be further influenced by power management settings, the type of processing being undertaken and how much of it might benefit from being offloaded to a dedicated GPU, the internal communication speeds between devices, memory specifications and, perhaps, whatever "smart" "AI"-type applications the system may be running to, as claimed, improve your processing experience.

    For instance, my recent Dell purchase includes an "Optimiser" suite that, among other things, allows me to specify up to 5 applications for it to monitor and seek to optimise based on the way I use them.

    I only have C1 set up for monitoring at the moment, and the utility suggests it has improved CPU usage by 3%. That might just be launch times, of course. It has never been slow enough that I would notice 3%.


    BTW, I also saw relatively low levels of Nvidia card usage compared to the Intel internal GPU at first. But I suspect that was the system balancing the load without taking performance into account, so long as the Intel was not at its limits. In theory, running everything in the processor, when it can handle the data volume, is likely to be at least as fast as routing through an external card in all but the most extreme cases (extreme = games at very high quality, Bitcoin mining, high-definition video and, possibly, a large batch of complicated giant RAW photo edits, provided all the peripherals and their communication channels can feed the data quickly enough).

    When I forced Nvidia as the preferred card for C1, the percentage loads, as presented by the Windows monitor graphs, seemed to improve in terms of the usage balance between the two options, but I suspect the Intel in this machine can handle quite a lot of "regular traffic" without getting hot enough to slow general CPU functionality. I was surprised by that. (It's a laptop; I know very quickly when it gets hot, and it does get hot, especially with the Nvidia active.)


    The compile problem could also be a Windows problem or, maybe, a system-manufacturer update requirement in some cases. For example, I recently saw a Dell-suggested update fix something that had not been working very consistently.

    In a similar vein, yesterday I tried the audio socket on this newish machine for the first time since I bought it 6 months ago. The speakers are fine, but the headphones just gave white noise and perhaps a hint of the sounds they ought to be producing. Nothing seemed to fix it: no settings changes made any difference, troubleshooting said all was well, and disabling and re-enabling made no difference. I was about to open a support case with Dell, tried a few last things and a reboot, and suddenly the device was reactivated and working quite well. I then remembered there had been an audio controller software update via the Dell driver updates program a few weeks ago, and that I had clicked a driver roll-back button, although nothing seemed to happen after that. Maybe there was simply no acknowledgment message, but nevertheless the restart must have achieved something.

    (It may be a coincidence that the Dell system optimisation utility is telling me there are 2 drivers to update (there were none yesterday) but not what they are. It seems to install them anyway, so I will check the audio socket again later this evening.)

  • Charles O'Hara

    @ SFA

    FYI, I do not run any "smart", "AI", "optimizer" or "balancer" program, just bare, official drivers on Windows 10. I even regularly use Display Driver Uninstaller (DDU) to make sure I have mostly clean display driver installs (since Nvidia notoriously corrupts its own drivers over time across several updates).

    Capture One prior to some version (20, IIRC) used to allocate resources properly to the main GPU, but that is no longer the case. I have no problem whatsoever with other GPU-accelerated software (Photoshop, several Photoshop plugins, Adobe Bridge, etc., and the occasional 3D game, all of which use the most powerful GPU by default). Only Capture One fails to give priority to my Nvidia card over my puny onboard GPU, which I have to disable so CO is not confused by it.

    My power management is run at 100% performance, all of the time, with no throttling, to ensure my CPU overclock stays stable (and it has been since 2016).

    The Device Manager graph I showed was just an example; YMMV, of course. But with the onboard GPU disabled, the % usage of my GeForce goes from 10-15% to 60-80% (my usual test is to rapidly move the exposure slider around with OpenCL preview turned on). Refreshing a 4K preview is usually power-hungry enough, and all graphs aside, just disabling the onboard GPU gives a real performance boost: the preview goes from really sluggish, low-fps updates to real-time 60 Hz. This is not a placebo effect, for sure.

    All in all, Capture One sucks at managing several different GPUs, and even their own documentation acknowledges it:

    "How many GPUs can I use with Capture One?

    Capture One supports up to 4 GPUs, but make sure that all the GPUs you use are produced by the same manufacturer ( i.e. AMD or Nvidia)."

    What does Hardware Acceleration do and how do I use it in Capture One? – Capture One

  • FirstName LastName

    Thanks, guys, for your support. I’m writing here and in another thread, and I would like to thank SFA for his contribution.
    Just yesterday someone told me that Microsoft released KB5003690 because the latest update was causing issues: "Updates an issue in a small subset of users that have lower than expected performance in games after installing KB5000842 or later."

    I don’t know if I’m brave enough to install this update manually :-)

  • FirstName LastName

    This afternoon I tried again to rebuild the kernel with the internet disabled (so no further updates), and this time I can see activity on the NVIDIA GPU during the kernel build.

    And at the end of the process I can see a peak on both GPUs.

    Now if I quickly pan the image, I can finally see load on the CPU and both GPUs:

    But if I process the image and save it as a JPEG, it is the integrated GPU (Intel UHD 630) that shows a huge load, not the NVIDIA card.

    This is the log file:

    2021-06-26 20:24:44.446> Logging is now active.
    2021-06-26 20:24:44.446> CPU: GenuineIntel [Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz]
    2021-06-26 20:24:44.446> CPU features: MMX, SSE, SSE2, SSE3, SSSE3, SSE41, SSE42, AVX, AVX2, FMA, F16C, MOVBE, CX8, RDRAND, RDSEED
    2021-06-26 20:24:44.446> CPU features: ADX, RDTSCP, POPCNT, BMI1, BMI2, LZCNT
    2021-06-26 20:24:44.646> First chance exception (thread 3584): 0xE06D7363 - C++ exception
    2021-06-26 20:24:44.646> First chance exception (thread 7796): 0xE06D7363 - C++ exception
    2021-06-26 20:24:44.825> First chance exception (thread 9036): 0xE06D7363 - C++ exception
    2021-06-26 20:24:44.847> First chance exception (thread 3504): 0xE06D7363 - C++ exception
    2021-06-26 20:24:44.875> First chance exception (thread 4476): 0xE06D7363 - C++ exception
    2021-06-26 20:24:45.189> First chance exception (thread 11792): 0xE06D7363 - C++ exception
    2021-06-26 20:24:45.227> First chance exception (thread 6284): 0xE06D7363 - C++ exception
    2021-06-26 20:24:45.421> OpenCL initialization...
    2021-06-26 20:24:45.522> Found 2 OpenCL platforms
    2021-06-26 20:24:45.522> Found 1 OpenCL devices on platform 0 (NVIDIA CUDA)
    2021-06-26 20:24:45.522> Found 1 OpenCL devices on platform 1 (Intel(R) OpenCL HD Graphics)
    2021-06-26 20:24:45.522> OpenCL Device 0 : NVIDIA GeForce GTX 1660 Ti
    2021-06-26 20:24:45.522> OpenCL Driver Version : 471.11
    2021-06-26 20:24:45.522> OpenCL Compute Units : 24
    2021-06-26 20:24:45.522> OpenCL Nvidia Compute Capability : 7.5
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_HOST_UNIFIED_MEMORY : 0
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_CACHE_SIZE : 786432
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE : 128
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_CACHE_TYPE : 2
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_SIZE : 6144 mb
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_MAX_MEM_ALLOC_SIZE : 1536 mb
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_LOCAL_MEM_SIZE : 49152
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE : 65536
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_VENDOR_ID : 4318
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_VENDOR : NVIDIA Corporation




    2021-06-26 20:24:45.522> OpenCL Device 1 : Intel(R) UHD Graphics 630
    2021-06-26 20:24:45.522> OpenCL Driver Version :
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_HOST_UNIFIED_MEMORY : 1
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_CACHE_SIZE : 524288
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_CACHELINE_SIZE : 64
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_GLOBAL_MEM_SIZE : 6493 mb
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_MAX_MEM_ALLOC_SIZE : 3246 mb
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_LOCAL_MEM_SIZE : 65536
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_MAX_CONSTANT_BUFFER_SIZE : -890533888
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_VENDOR_ID : 32902
    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_VENDOR : Intel(R) Corporation




    2021-06-26 20:24:45.522> OpenCL CL_DEVICE_MAX_WORK_GROUP_SIZE : 256
    2021-06-26 20:24:45.643> OpenCL : Loading [C:\Users\Archimede\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin] started
    2021-06-26 20:24:45.643> OpenCL : Loading [C:\Users\Archimede\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin] OK
    2021-06-26 20:24:45.763> OpenCL : Loading [C:\Users\Archimede\AppData\Local\CaptureOne\ImageCore\\ICOCL1.bin] started
    2021-06-26 20:24:45.824> OpenCL : Loading [C:\Users\Archimede\AppData\Local\CaptureOne\ImageCore\\ICOCL1.bin] OK
    2021-06-26 20:24:45.824> (ERROR) bin file failed parse [C:\Users\Archimede\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin] (verificationCode=7)
    2021-06-26 20:24:45.824> OpenCL : Loading kernels (dev 1 : Intel(R) UHD Graphics 630)
    2021-06-26 20:24:46.258> OpenCL : Loading kernels finished
    2021-06-26 20:24:46.258> OpenCL : Benchmarking

    2021-06-26 20:24:46.258> Started worker: TileExecuter 0 [unknown] (master: 1de0, worker: 2d1c)
    2021-06-26 20:24:46.503> Shutting down: TileExecuter 0 [unknown] (master: 1de0, worker: 2d1c)
    2021-06-26 20:24:46.503> Ending worker: TileExecuter 0 [unknown] (master: 1de0, worker: 2d1c)
    2021-06-26 20:24:46.503> OpenCL : Initialization completed
    2021-06-26 20:24:46.503> OpenCL benchMark : 1.014725

    So, what I see here is:

    - several C++ exceptions;
    - still the error message (ERROR) bin file failed parse [C:\Users\XXX\AppData\Local\CaptureOne\ImageCore\\ICOCL.bin] (verificationCode=7);
    - CL_DEVICE_HOST_UNIFIED_MEMORY reported as 0 for the NVIDIA device (why is this set to zero?);
    - and a benchmark that probably refers only to the Intel GPU.
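Pulling those failure indicators out of ImgCore.log by hand is tedious, so here is a small sketch that scans the log for the (ERROR) entries, first-chance C++ exceptions, and any verificationCode. The function name is mine and the sample lines just mimic the log above; this is an illustration, not a Capture One tool.

```python
import re

def scan_imgcore_log(lines):
    """Collect the lines that indicate trouble: (ERROR) entries,
    first-chance C++ exceptions, and any verificationCode values."""
    report = {"errors": [], "exceptions": 0, "verification_codes": []}
    for line in lines:
        if "First chance exception" in line:
            report["exceptions"] += 1
        if "(ERROR)" in line:
            report["errors"].append(line.strip())
            m = re.search(r"verificationCode=(\d+)", line)
            if m:
                report["verification_codes"].append(int(m.group(1)))
    return report

# Sample lines mimicking the log excerpt above
sample = [
    "2021-06-26 20:24:44.646> First chance exception (thread 3584): 0xE06D7363 - C++ exception",
    "2021-06-26 20:24:45.824> (ERROR) bin file failed parse [ICOCL.bin] (verificationCode=7)",
]
r = scan_imgcore_log(sample)
print(r["exceptions"], r["verification_codes"])  # 1 [7]
```

Pointing it at the real file would just be `scan_imgcore_log(open(path))`, with path set to the ImgCore.log location in the Logs folder.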

