Send small jpeg to Amazon Rekognition
Hi,
I am looking for a way to grab a small JPEG of any image that I can then process in Python and send to Amazon Rekognition for face detection and scene recognition, so I can save the results as keywords against the file.
I have all the other parts working, but I can't figure out how to use AppleScript to get a JPEG file. I'm guessing the best way is to route the image through a recipe that processes the file and pass the result to Python, which then sends it to Amazon. Or does CO already store some low quality versions of the image that I can make use of? I found that I only need JPEGs about 400 px wide at 50% quality for Amazon to easily identify faces and scenes - that also makes it very quick to upload and receive the data back.
That file would then need to be deleted after the processing is finished, before moving on to the next one.
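For reference, the Rekognition side can look roughly like this in Python; a minimal sketch assuming boto3 is installed and AWS credentials are already configured (function and variable names are just illustrative):

import boto3

rekognition = boto3.client("rekognition")

def analyse_jpeg(path):
    """Send a small JPEG to Rekognition and return scene keywords and face details."""
    with open(path, "rb") as f:
        image_bytes = f.read()

    # Scene / object recognition
    labels = rekognition.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=20,
        MinConfidence=80,
    )

    # Face detection
    faces = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["DEFAULT"],
    )

    keywords = [label["Name"] for label in labels["Labels"]]
    return keywords, faces["FaceDetails"]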
Any help would be much appreciated.
Thanks
Jason
Hi Rollbahn,
To do this via scripting alone, you would have to process the file first.
We do have a publishing SDK which (depending on skill level) you may or may not find to be a more useful alternative for this kind of work. Right now, mapping the resulting metadata back to assets in CO would still require the scripting interface.
https://www.captureone.com/en/partnerships/developer
Example plugins
https://www.prodibi.com/Capture-One-Plugin
Thanks Jim_DK - I was aware of the plugin SDK and would love to use it, but unfortunately it's way above my pay grade to figure out.
I can muddle my way through doing it in something like Python, as there is an endless amount of help online.
I'll keep banging away and see if I can figure it out, I guess.
Thanks
Jason
You might be able to use the thumbnails that are created - the .cot files are about 400-450 pixels. I'd be interested in helping on this; it sounds interesting.
rapdigital+gmail
Thanks - that's exactly what I thought: when you maximise the grid view, surely the largest thumbnail is stored somewhere rather than generated on the fly.
I just can't see how to dismantle the .cot file.
My basic idea is to highlight the images I want to send to Rekognition; the script then uploads the thumbnail, pulls out the labels and faces, matches the faces against a collection to find their names (or asks for a name if they are new), and then presents that data in a window where you can tick/untick the keywords and faces before moving to the next image.
I can do all of that pretty easily outside of CO, so now it's just a matter of figuring out the rest inside CO. I'm surprised no one has done this type of thing in a plugin, as Capture One really needs these kinds of add-ons to attract LR users over.
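For the face-matching piece outside CO, a minimal sketch using a Rekognition face collection - assuming the collection has already been created and indexed with each person's name stored as the ExternalImageId (collection name and threshold here are just placeholders):

import boto3

rekognition = boto3.client("rekognition")

COLLECTION_ID = "my-faces"  # hypothetical collection, created and indexed beforehand

def match_face(path):
    """Return the best-matching name from the collection, or None if the face is unknown."""
    with open(path, "rb") as f:
        image_bytes = f.read()

    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=90,
        MaxFaces=1,
    )

    matches = response.get("FaceMatches", [])
    if not matches:
        return None  # new face - prompt for a name, then index it into the collection
    # ExternalImageId holds the person's name if it was set when the face was indexed
    return matches[0]["Face"].get("ExternalImageId")

Note that search_faces_by_image matches against the largest face in the image, so multi-person shots would need the face crops from detect_faces sent individually.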
To do this inside CO I would proceed as follows:
Assumptions & Strategy
1: The user has selected one variant in CO and then runs the script. Initially run it from Script Editor; once debugged, run it from CO. Once debugged, it can be extended to handle a list of variants.
2: You will create an OSX/macOS folder somewhere for the resulting image files. Later this can be created/checked with AppleScript.
3: You will initially create a Process Recipe manually for this purpose; later this can be automated.
Steps:
1. Using OSX, create a folder on the desktop with some unique name for this purpose.
2. Using CO, create a Process Recipe with some unique name (the_recipe_name) that will create a file of the size you desire in the OSX folder you have just made. Check with a few different files that it works correctly.
3. Write an AppleScript along these lines:
- Using Capture One (tell application "Capture One")
- Get a reference to the selected variant (set theVariant to first variant whose selected is true)
- Process the variant (tell current document to process theVariant recipe the_recipe_name)
- (end tell)
- Do the other stuff to send the JPEG(s) to Amazon, get back the face information and extract the name (see the Python sketch after this list)
- Tell CO to add a new keyword with the name that was extracted (tell application "Capture One" to tell theVariant to make new keyword with properties {name:theName})
- Empty the OSX folder.
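For the "other stuff" step, a minimal Python sketch of the glue between the processed JPEG and the keyword write-back; the folder path is illustrative, analyse_jpeg/match_face are the hypothetical helpers sketched earlier in the thread, and the osascript line reuses the AppleScript from step 3:

import os
import glob
import subprocess

# Hypothetical module wrapping the Rekognition calls sketched earlier in the thread
from rekognition_helpers import analyse_jpeg, match_face

EXPORT_FOLDER = os.path.expanduser("~/Desktop/rekognition_export")  # the folder the recipe writes to

def process_exported_jpegs():
    """Send each processed JPEG to Rekognition, write keywords back to CO, then clean up."""
    for path in glob.glob(os.path.join(EXPORT_FOLDER, "*.jpg")):
        keywords, _faces = analyse_jpeg(path)  # detect_labels / detect_faces
        name = match_face(path)                # search_faces_by_image against a collection
        if name:
            keywords.append(name)

        for keyword in keywords:
            # Reuses the AppleScript line from step 3 to add the keyword to the selected variant
            script = ('tell application "Capture One" to tell (first variant whose selected is true) '
                      'to make new keyword with properties {name:"%s"}' % keyword)
            subprocess.run(["osascript", "-e", script], check=True)

        os.remove(path)  # empty the OSX folder before moving on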
Once you get these bare bones working, consider the following additions:
- Create the OSX folder if it is not already present
- If the OSX folder already has files in it, delete them
- Choose an OSX folder located somewhere other than the Desktop
- Create a parent keyword, e.g. "FaceName", or start the keyword name with a prefix like "Name:" so that you can check whether a name keyword is already present and replace an existing name with a new one
- Extract the properties of the recipe that you now have working, so that it can be created or reset by AppleScript
- Get the script operational when run from CO's script menu (it has to be an app, and it has to have access permissions)