Effect of Metadata on Filter Tool and Browser Speed
For a long time I have been troubled by the slow speed of the COP browser and Filters tool in large catalogs. After considerable debugging I have discovered the underlying cause. This has been reviewed with support and a bug report has been filed. I am sharing the information with the community because there are some possible workarounds.
The problem starts with the Filter tool. The Filters tool provides unique functionality which doesn't exist in Aperture or Lightroom (and perhaps in no other DAM software): it makes a list of all the unique values of each metadata field, and provides radio buttons which allow the user to immediately select all variants with that value. For large catalogs I have found that COP stalls if a metadata field is populated with a unique string for each variant, and COP is fast if there is no data of this type in any metadata field. I observe this is related to the activity of the Filters tool.
I tested this on a catalog with 15000 images. Populating a metadata field (like Source, Provider, Special Instructions, Job Identifier, Title) with data for which there are only a few variations (less than 500) - like the name of the photographer, date of original import, or original folder - causes no degradation of speed. COP is fast. But populating even one field with a string which is unique for every variant (like the path including file name, a unique title, creation date-time, or folder + image counter) creates a massive slowdown: 23 seconds per click or keystroke on my fairly powerful new iMac with an SSD drive, with only one such field.
I observe that the slowdown occurs whenever the Filters tool is visible (active) and does not occur if the Filters tool is not visible.
I observe that the slowdown occurs in the All Images folder, and also in any User Collection that contains most or all images.
I observe that if I make a custom tab and move the Filters tool there, the slowdown does not occur while other tabs are active - a workaround!
I note there is a slightly hard-to-find feature on the Filters tool (left-click on the three dots, then left-click on show/hide filters) which allows the user to specify which metadata fields are shown by the Filters tool. I observe that this selection does not affect the slowdown behavior described here. It also shows that the Filters tool has access to a very large number of metadata fields.
What I believe is happening is that the Filters tool (when active) updates every metadata field (not just the ones shown) every time the user makes a keystroke or mouse click.
Updating the Filters tool data is a very lengthy calculation for a large number of variants (n), each with a unique string. It means that roughly 0.5*n^2 string comparisons must be done - so for a catalog of 15000 images that is 112,500,000 string comparisons. Compare that with 15000 values drawn from only 500 unique values: only 3,750,000 string comparisons. This may be the root cause of the issue. I note that the Filters tool also sorts the values and radio buttons presented to the user, and this may add more comparisons.
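To make the arithmetic explicit (my back-of-envelope estimate, assuming a naive pairwise comparison - I don't know Phase One's actual implementation):

$$
\underbrace{\frac{n(n-1)}{2}}_{\text{all } n \text{ values unique}} \approx \frac{15000^2}{2} = 1.125\times10^{8}
\qquad\text{vs.}\qquad
\underbrace{\frac{n\,u}{2}}_{u = 500 \text{ unique values}} = \frac{15000\times500}{2} = 3.75\times10^{6}
$$

Even a smarter n*log(n) sort-based update (about 2x10^5 comparisons for n = 15000) would still have to touch every value on every keystroke, which fits the merge/sort stacks described below.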
I observe that if a sample is taken of the COP process during one of these hangs, the output shows a VERY large stack of merge/sort operations. I believe that may be the Filters tool update which I have just described.
In my opinion, the Filters tool output is not very useful in a large set of images for a metadata field which is populated with all unique values - the user simply gets a very long list of radio buttons, each of which selects one image (over 15 screens long for my 15000 image catalog). The user cannot find the right button easily. In this case, the selection is more easily accomplished with a simple search, which is faster too.
I believe a lot of improvement would occur if Phase One made the Filters tool stop performing updates for metadata fields which are hidden in the Filters tool. Such a change requires no change to the GUI and no change to the update algorithm. It also puts speed and content under the user's control: if the Filters tool makes a long, useless list of radio buttons for a metadata field, hide it and the speed should improve. Unless you want that list - but then you must pay for it in waiting time. (I have made this suggestion in my support request.)
In the meantime, another workaround is to check your metadata fields for any containing unique data (it might even have been put there by your camera or by other software) and delete it if not really needed.
Eric,
I think you may have found the key.
Something only vaguely related that I saw in a log file a couple of days ago made me wonder if that (re-reading folder populations and, similarly if one is repopulating, recalculating the numbers for filters) could be a problem. However, I am using Windows and Sessions and not experiencing the dramas reported for Catalogs and, it seems, mainly Macs.
I wonder if these fairly large catalogs are mainly imported files? These are perhaps more likely to have a history of IPTC data that varies for most records. In theory that should not happen, as far as I understand, since the IPTC codes are intended to be controlled content and table driven. In practice ... who knows what might be in there?
Can you imagine an index based on full Geocodes!
Many many years ago I recall a similar sort of problem (although possibly the inverse of this - lack of information rather than too much) when undertaking a data conversion exercise for a new client.
Our system worked with more information than their old system required, so although most of the data could be assimilated and converted quite well, some things were reliant on unique identifiers which then had to be "distributed" around some location records for operational purposes.
The test files for the routine worked well with the sample data set. We estimated an 8 hour process to convert, write tapes and so on. So they copied the files on a Friday night, couriered the tapes to us, we ran the process on Saturday expecting to verify the tapes with the output files on Sunday and send them off to be loaded so the new system was up and running on Monday first thing.
The conversion ran for 4 days.
It transpired that the 10% or so of records that did not have accurate locations readily available, and that were supposed to be 99% identifiable by unique serial number ... did not have serial numbers. (Or something similar to that - it was a very long time ago now and I forget the complete details.)
The conversion routine had a great process for getting the required distribution, checking for duplicates and so on ... provided the majority of the records had the promised serial numbers. For the few records thought to have missing serial numbers or "duplicates" (like "No Serial Number", where people could not be bothered to find and enter one), the process was designed to cycle through the records and connect each to a location so as to spread them out as equally as possible. It would have worked perfectly within the time planned for a few tens or even a few hundred records. For several thousand it meant that the process was constantly re-assessing every record - much the same, in terms of processing effort, as a key field index finding there are no common keys, so every entry has to be listed (and checked again at the next record).
We worked with the client on their data quality, agreed a revised process which introduced fake but unique keys that could be used in advance, rather than after the fact of attempted conversion as they had previously insisted, and re-ran the exercise a month or so later, well within the cut-over plan timing for the weekend.
One of the more memorable projects I have been involved with. The conversion exercise was by no means the end of the issues they had not realised they had with the data available from their previous records. We had some interesting weeks getting that particular project up and running fully.
The problem you described in your post would appear to be the inverse of the one we had - more or less. But the result will, in effect, be very similar.
Grant
Hi Grant
[quote="SFA"]Many many years ago I recall a similar sort of problem (although possibly the inverse of this - lack of information rather than too much) when undertaking a data conversion exercise for a new client.[/quote]
Nasty problem you describe. Those kinds of things cause gray hair for project managers if the technical staff can't find a quick remedy. And sometimes there just is no quick remedy.
As you say, you have to have the relevant test conditions, configurations and data, otherwise you will receive surprises.
[quote="SFA"]I wonder if these fairly large catalogs are mainly imported files? These are perhaps more likely to have a history of IPTC data that varies for most records.[/quote]
Not only that, but large catalogs represent many years of work for a person. For example, my catalog contains not only my own images, but also images from other contributors - we all went on the same wilderness trips, and we all contributed to a common collection of images.
[quote="SFA"]In theory that should not happen, as far as I understand, since the IPTC codes are intended to be controlled content and table driven. In practice ... who knows what might be in there?
Grant[/quote]
A few of the IPTC fields have a strictly enforced format, but many are free form - particularly the "Title" field, which can have up to 64 characters. There are of course no rules for a title, so it is likely to be different for almost every image.
[quote="SFA"]However, I am using Windows and Sessions and not experiencing the dramas reported for Catalogs and, it seems, mainly Macs.[/quote]
Although Phase One likely tries very hard to keep the higher level code the same, the lower level code, drivers and services will differ. And as a consequence some interfaces will differ.
Thanks for the comments.
Thanks Eric, very interesting observation. I also see this effect on my system. Unfortunately it comes on top of my other issues with big catalogs, i.e. I can make the system even slower (completely unresponsive) when using filters extensively. So there seem to be several issues: filters and smart albums, the All Images folder in general, RAM not released when closing catalogs, metadata handling, ...
Best
Frank
[quote="FL"]Thanks Eric, very interesting observation. I also see this effect on my system. Unfortunately it comes on top of my other issues with big catalogs, i.e. I can make the system even slower (completely unresponsive) when using filters extensively. So there seem to be several issues: filters and smart albums, the All Images folder in general, RAM not released when closing catalogs, metadata handling, ...
Best
Frank[/quote]
Hi Frank
How many images do you have in your big catalog?
My original (Aperture) catalog has 250,000+ images, and I have split it into several catalogs of about 25,000 each, as V8 was not able to handle this. These worked fine in V8 and V9 and show issues with V10. I also see similar issues, though not as slow, with smaller catalogs (4000 images each). While the bigger ones are referenced, with the images on a NAS, I have generated two copies of one of the smaller ones - a referenced and a managed one - so that I can move them around and test them on different disks and computers. The behaviour is always the same, and disk speed does not affect the performance fundamentally.
Best
Frank
Hi Frank
As they say in my team, when you are troubleshooting "you have to peel the onion one layer at a time - what you find in the next layer can be a surprise."
Your experience with drive speed matches mine. There's very little disk I/O; the process is not I/O bound, it's CPU bound. Excessive resource usage by the filter could be responsible for all the symptoms you mention.
I think there is no benefit to using a filter to configure a smart folder; you can achieve the same image selection using additional search terms. However, a filter carries a lot of computational overhead, especially with the current implementation.
In your shoes, I would reconfigure smart folders and presets to not use the Filter, and remove the Filters tool from the Library tab.
Eric,
thanks for the hint. I have removed all smart folders and the filter tool, but the difference in performance is marginal, i.e. I still spend a lot of time looking at the nice spinning wheel, whereas V9 behaves just fine in the same configuration. Let's hope that the people at Phase One do their homework and will soon release a solution.
Frank
This may be a good time to open a new support case.
I am working with a support guy now who is head and shoulders above the ones previously helping me.
The following information I sent is much more extensive and will be much more useful to the investigators than what I have sent before. I recommend sending the same:
- The file from the COP Filepackager
- Screen shots of the various tool tabs of the inspector, e.g. Library, Metadata
- A QuickTime movie of the screen showing the problem, with:
--- the Activity window showing
--- ClockMe installed to show an on-screen HUD with a clock, and also to show user keystrokes
--- Lagente PinPoint installed, configured to clearly identify the mouse position (cross hairs) with the click indicator on
- The zipped catalog database (.cocatalogdb) file, not the entire catalog (I got it from the backup folder; zipped, it is quite small)
- A sample of the Capture One process while it is non-responsive (zipping may help here too)
- A screen shot of Activity Monitor while COP is non-responsive, showing memory usage and CPU usage
- Screen shots of the Filter tool configured to show each of the metadata fields with data (one by one for the big ones); the Filters tool then shows the list of all metadata values for that field. I sent the start and the end for the long lists.
--- I removed the Filters tool from the Library tab and put it on a custom tab: more room for each tool, more control over which one is active.
I made a little AppleScript to monitor the status of the Capture One 10 process and display on screen when it is hung and for how long. The end of the hang and the duration are also logged in the AppleScript log.
Since this utility checks the process status as reported by the OS with a Unix command, it should not interfere with COP performance - I have checked that and it seems OK.
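For reference, the Unix command the script uses is "ps -j <pid>". On macOS the second line of its output looks roughly like the sample below (user name and numbers invented for illustration); word 7 is the STAT column, and a long run of "R" (running) there is what the script treats as a hang:

USER  PID   PPID  PGID  SESS  JOBC  STAT  TT  TIME     COMMAND
eric  4242     1  4242     0     1  R     ??  12:34.56 Capture One 10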
Use Script Editor to open a new, empty AppleScript document, and copy and paste all of this code into the window. Press Compile. It should compile without errors.
Then start Capture One Pro 10, then start the script. It will run continuously, reporting and measuring hangs. If Capture One 10 is not running, the script will crash. I could fix that, but I must go to bed.
It can be easily adapted to other versions of COP.
-- AppleScript to detect and time Capture One Pro 10 hangs, for the purpose of screen capture and feedback to Phase One.
-- Eric Valk, January 2017, Version 0.2
-- Start the script running after starting Capture One Pro 10.
-- Hangs will be displayed as notifications (make sure you have notifications enabled); the total duration is displayed as an alert and also logged in the AppleScript log.
-- Times are only roughly accurate, as the AppleScript delay timer is accurate to about 1/60 of a second.

set FinalAlertTime to 2.5 -- time before the alert times out and disappears
set theApp to "Capture One 10"
set wait_time to 1.0 / 2.0 -- the time in seconds between checks
set wait_div to 1 / wait_time -- the divisor needed to scale the output
set dispTime to 2 as real -- the time between notifications
set dispCount to (dispTime / wait_time) as integer -- the interval count between notifications

if false then -- set to true to get some debugging info
    log wait_time
    log wait_div
    log dispTime
    log (dispTime / wait_time)
    log dispCount
end if

set nextDispCtr to 0 -- the count for the next display notification
set hang_Ctr to 0 as real -- counts the number of intervals COP is hung
set isHung to false -- state this time
set wasHung to false -- state last time

if true then -- set to false to skip the actual measurements
    tell application "System Events"
        set PID to unix id of process theApp as Unicode text
        repeat while true
            set state to word 7 of paragraph 2 of (do shell script "ps -j " & quoted form of PID & " ")
            if state = "R" then
                set isHung to true
                set hang_Ctr to hang_Ctr + 1
                if hang_Ctr ≥ nextDispCtr then
                    set nextDispCtr to hang_Ctr + dispCount
                    set disphungtime to ((((100 * (hang_Ctr / wait_div)) as integer) as real) / 100.0) as text -- truncate to two digits
                    display notification "COP is hung for " & disphungtime & " seconds"
                end if
                if not wasHung then set nextDispCtr to nextDispCtr - 1 -- try to align the display on integer seconds
            else
                if wasHung then
                    set disphungtime to ((((100 * (hang_Ctr / wait_div)) as integer) as real) / 100.0) as text -- truncate to two digits
                    display alert "COP was hung for " & disphungtime & " seconds" giving up after FinalAlertTime
                    log "COP was hung for " & (hang_Ctr / wait_div) & " seconds " & (get current date)
                end if
                set isHung to false
                set hang_Ctr to 0
                set nextDispCtr to 0 -- so that the first display occurs as soon as the hang is detected
            end if
            delay wait_time
            set wasHung to isHung
        end repeat
    end tell
end if
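For anyone adapting this: other COP versions are a one-line change, and a simple guard should stop the crash when Capture One is not running. A minimal, untested sketch (to be placed ahead of the measurement section, reusing theApp):

set theApp to "Capture One 11" -- adapt to your COP version
tell application "System Events"
    if not (exists process theApp) then
        display alert "Capture One is not running - start it first, then run this script."
        error number -128 -- cancel the script cleanly
    end if
end tell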
Eric,
That workaround helped out quite a bit.
I don't use filters a lot either, mostly because it sucks the life out of C1. So I've learned to use other methods.
One thing I do use filters for is trying new techniques on older files. If I wanna try a newer noise reduction system, I can quickly find the higher ISO files and get to work. Same goes with C1's new Diffraction Correction. I can pick out the narrower f-stops to see how the files are affected.
I really hope they get this issue resolved and I appreciate you taking the initiative on getting it resolved.
Thanks, Jimmy
Thanks Eric. Will test this tomorrow on a freshly converted catalog as catalogs seem to get slower and slower the longer I use them.
Best
Frank
Eric,
this is what I get when opening one of my normal (25,000 image) catalogs:
(*COP was hung for 12,5 seconds Thursday, 12 January 2017 at 22:02:48*)
(*COP was hung for 2,5 seconds Thursday, 12 January 2017 at 22:02:53*)
(*COP was hung for 3,5 seconds Thursday, 12 January 2017 at 22:02:59*)
(*COP was hung for 13,5 seconds Thursday, 12 January 2017 at 22:03:16*)
(*COP was hung for 1,5 seconds Thursday, 12 January 2017 at 22:03:20*)
(*COP was hung for 5,5 seconds Thursday, 12 January 2017 at 22:03:29*)
(*COP was hung for 84,0 seconds Thursday, 12 January 2017 at 22:05:01*)
(*COP was hung for 5,0 seconds Thursday, 12 January 2017 at 22:05:08*)
Now I can start browsing through the catalog and get these times (not editing, just browsing).
(*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:12*)
(*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:17*)
(*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:22*)
(*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:25*)
(*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:28*)
(*COP was hung for 1,5 seconds Thursday, 12 January 2017 at 22:05:32*)
(*COP was hung for 3,0 seconds Thursday, 12 January 2017 at 22:05:40*)
Not sure how to interpret these values / how they compare to yours.
Frank
Hi Frank
How to interpret..
Log entries like this: (*COP was hung for 0,5 seconds Thursday, 12 January 2017 at 22:05:28*)
mean there was a very short hang you might not even notice. Half a second is the shortest period the script can determine, so it might have been even shorter than that. There are probably other short hangs the script missed. But identifying short hangs is not the purpose of the tool.
I see that you had one really long hang of almost a minute and a half, a couple in the 10-15 second range, and the rest are under 10 seconds. Would you say that matches your user experience?
If I have the metadata I started with, and the Filters tool is open, I get many, many hangs in the 20-30 second range. Literally every time I mouse click or press a key in the Filters tool, or in the search window, I get a 25 second hang. Typing a 10 letter search term in the search window takes 5 minutes. I have one version of a catalog where it's even worse (I did something with the metadata that COP really doesn't like), and each mouse click or key press carries an 84 second penalty.
Eric,
I find it interesting that 84 seconds is a regular hangup number for you and also appears in Frank's output.
If the cause was variable one might expect a more random distribution of numbers.
Is this the sort of process that, once something has kicked it off, will stall things for a fixed period of time whether it needs to or not?
Grant
The long hang happens every time I start this catalog. It is not always this particular number (that might be due to the tool, as I then see several larger hangs that would add up to something in the range of 80 to 90 s), but it always happens at the moment when images appear in the browser tab. What is interesting is that
- CPU load is <15% for the complete time
- Memory usage is reasonable and below 25% of the available
- GPU usage is basically none
- GPU memory is high (1.75 out of 2) but this is usual for C1P
- Disk and Network usage is basically none
Though C1P completely stalls for about 1 1/2 minutes the computer is completely fine to use for everything else.
Note: there was a severe bug in V9-V9.1 with GPU memory not being released properly on NVIDIA cards, and the system consistently crashing at some point in time. To me it looks like there is some "limit" on GPU memory usage "implemented" that prevents these crashes. However, as GPU memory is not released, it may well be that everything gets incredibly slow and this limits all processes. But I can't get any better performance when disabling the GPU, which was possible with the old bug.
Best
Frank
Hi Frank
Here is my hang data. I have updated the script a little to discard hangs under 2 seconds, and to improve the accuracy; the original was reading about 10% low.
(*COP was hung for 4 seconds Friday, January 13, 2017 at 1:32:10 AM*)
(*COP was hung for 54 seconds Friday, January 13, 2017 at 1:33:11 AM*) - selected a filter
(*COP was hung for 106 seconds Friday, January 13, 2017 at 1:35:06 AM*) - selected another filter
(*COP was hung for 27 seconds Friday, January 13, 2017 at 1:35:50 AM*) - opened the advanced search window
(*COP was hung for 29 seconds Friday, January 13, 2017 at 1:36:49 AM*) - setting up a search
(*COP was hung for 29 seconds Friday, January 13, 2017 at 1:37:54 AM*) - setting up a search
(*COP was hung for 28 seconds Friday, January 13, 2017 at 1:38:39 AM*) - setting up a search
(*COP was hung for 27 seconds Friday, January 13, 2017 at 1:41:01 AM*) - setting up a search
(*COP was hung for 28 seconds Friday, January 13, 2017 at 1:42:54 AM*) - setting up a search
Here is catalog opening. About 60 seconds altogether
(*COP was hung for 27 seconds Friday, January 13, 2017 at 2:08:20 AM*)
(*COP was hung for 28 seconds Friday, January 13, 2017 at 2:08:51 AM*)
If I get rid of that one metadata field which is the original file name, then the delays associated with the filter tool go away.
Most of the activity indications I get are the same as yours, except that the hangs associated with the Filters tool use up about 1 core (out of 8) - 100%.
Loading the catalog uses 2 cores, or 200%. But I'm less worried about catalog loading than the key click delays while using Capture One.
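For anyone who wants the same change: discarding short hangs is a small edit to the reporting branch of the script I posted above. A sketch (my untested wording, reusing the script's variable names) that replaces the "if wasHung then" block:

if wasHung and (hang_Ctr / wait_div) ≥ 2.0 then -- report only hangs of 2 seconds or more
    set disphungtime to ((((100 * (hang_Ctr / wait_div)) as integer) as real) / 100.0) as text
    display alert "COP was hung for " & disphungtime & " seconds" giving up after FinalAlertTime
    log "COP was hung for " & disphungtime & " seconds " & (get current date)
end if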
Eric,
thanks, very consistent with my observations. I completely removed filters, smart albums, etc. some time ago, as V8 and V9 had big bugs when using Smart Albums. So I do not see your delays, but it is still slower than V9 when browsing and editing. Instead of filters and albums I have split my images into catalogs accordingly, and therefore need to be able to switch between them quickly, i.e. loading and startup time and memory release are the bigger issues for me.
What bothers me most is that all of this has been documented and communicated to Phase One. They have investigated my full log files and acknowledged the issue, proposed several changes that did not really help, and then went quiet. Having bugs in a new release is one thing; not communicating properly makes it much worse.
Frank
So, if I understand the advice offered so far (ignoring the highly technical gobbledygook which goes above my head but is meaningful to others who are more expert than myself in this respect), the following setting should reduce the browser speed problem ...
Top Menu > View > Remove Tool From Library Tab... > Filters
Would a CO Quit and then re-Launch be necessary?
I shall try it out shortly but trust that this action will not remove any previously set Star Ratings or Color Tags from any image files. Such cleansing would be absolutely disastrous!
Robin,
I think, from what I have read, this problem only appears in catalogues and then is mainly notable when quite a lot of files are catalogued in that one catalogue.
If you are using sessions I doubt that you would have a problem.
Grant
[quote="RedRobin"]So, if I understand the advice offered so far (ignoring the highly technical gobbledygook which goes above my head but is meaningful to others who are more expert than myself in this respect), the following setting should reduce the browser speed problem ...
Top Menu > View > Remove Tool From Library Tab... > Filters
Would a CO Quit and then re-Launch be necessary?
I shall try it out shortly but trust that this action will not remove any previously set Star Ratings or Color Tags from any image files. Such cleansing would be absolutely disastrous![/quote]
As SFA says, this is a strong effect on catalogs with more than about 8000 images. I think there is no effect on sessions, or on catalogs with fewer than 2000 images.
After removing Filters my experience is that the effect is immediate, no reboot required.
[quote="Eric Nepean"]As SFA says, this is a strong effect on catalogs with more than about 8000 images. I think there is no effect on sessions, or on catalogs with fewer than 2000 images.
After removing Filters my experience is that the effect is immediate, no reboot required.[/quote]
....Also now my experience since removing 'Filters' from view in the Browser. Thanks 😊
I only have Sessions, no Catalogs, and some of those Session folders have far more than 2,000 images as I usually dedicate each Session folder to a subject (Birds for example) for the year.
I removed the filter tool, and Capture One version 10.0.1 became unresponsive for 30 min. Activity Monitor says it was reading and writing data. I have learned to go do something else when this happens, since C1 appears to be doing something and force quitting sometimes corrupts the catalog.
Excellent suggestions.
COP version 10.0.1 behaves quite fast with my 52000 image catalog after moving the Filter tool from the Library tab to a custom tab. Unfortunately, removing the Filter tool from the Library tab took 30 minutes of watching the spinning beach ball, so you might need patience and a look at the Activity Monitor to see that activity is actually happening. Thereafter, advanced searches were near instantaneous as each search criterion was entered. I don't know why CO has to scan the catalog with each little addition to the search criteria (unlike with a Smart Album), but since it was instantaneous, why not.
Also, before doing this, I could not edit the key word library without hanging CO infinitely. Now it works normally.
Jerry C
I spoke too soon.
Although deleting the filter from the Library tab does as I described, when I reopened my catalog today, I had a huge problem. Every time I tried to click on an image in the All Images collection, Capture One became unresponsive. I actually waited 2 hrs while the OS X Activity Monitor showed Capture One doing a lot of reading and writing (GBs worth). Force quitting corrupts the database beyond repair. I have lots of backups, so it is not catastrophic.
So, if your version 10 is not crashing, you should see the speedups I described.
I am back to version 9.3, which is very stable. I did find that moving the filters tool out of the Library tab does speed up the search and the data entry in version 9.3.
If you're engaged in removing or moving the filters tool because it may be slowing down filter/browser performance in a larger catalog, it's faster to do it if you first select a small user collection rather than the "All Images" collection.
The slowdown due to the Filters tool occurs when it is working on a large number of images, e.g. >>5000. But the Filters tool works on all the images in the selected collection (group, album, project). So selecting a group showing no images, or a project or album with a few hundred images, temporarily removes the delay.
Neil is correct, which is how I did it. My problem with version 10 relates to slow performance navigating to All Images after relocating the Filter tool. Selecting an image in All Images results in waiting 15 min on my Mac Pro for C1 to respond. This time gets shorter as I navigate back and forth to All Images a couple of times, but this is more annoying than starting a Model T on a cold day. On my MacBook Pro, I gave up and force quit after 3 hrs, and was rewarded with an unfixably corrupted database.
Jerry C