Content Credentials

Comments

27 comments

  • Mathieu B
    Product Manager

    Hi Ronald Tan, thank you for your suggestion!

    Content Credentials from Adobe seem to cover two aspects: the attribution of the content to its creator, and transparency on Generative AI. 

    Capture One offers a way to embed Copyright information into your files' metadata, which might cover the first part - but if you feel this is not enough, we would love to learn more about your point of view on this.

    As for the second part, Capture One does not currently use Generative AI to create content on your behalf, and that won't change in the new update (16.3), so this doesn't seem to apply.

    Happy to hear more thoughts on this!

    1
  • Marcin Mrzygłocki
    Top Commenter

    Mathieu-B

    so this doesn't seem to apply

    It DOES apply: an explicit signature saying "no parts of the image have been modified with AI" is going to matter more and more in the coming years, more than its reversal or the lack of a signature. One can imagine reporting agencies requiring signed photos as soon as they get wind of such a feature being available, for fear of being accused of spreading fake news.

    0
  • Ronald Tan

    Mathieu-B,

    Yes to both. The ability to include contact information, as well as other information, has been part of IPTC metadata going back years.

    I am referring specifically to the addition of a way to certify, with authenticity, the absence of AI-generated work. Does this make sense?

    What about the people who don't use Adobe Photoshop? People who only need, and are subscribed to or hold a perpetual license for, C1PRO? They won't have the ability to add the "Content Credentials."

    At the moment, my files in Photoshop (no AI generative fill or AI-anything) have my Instagram, LinkedIn, and Behance profile handles attached. These are stored as part of the file. This is "Content Credentials."

    I am asking for similar "Content Credentials" support to be available in C1PRO, applied when a file within C1PRO is exported via the recipes in the program.

    Does this make more sense?

    0
  • Mathieu B
    Product Manager

    Thanks Ronald Tan, I think it does make sense, but let me reformulate to be sure: you'd like us to make use of Content Credentials to certify that images edited in Capture One do *NOT* contain any generative AI.
    Which means, instead of declaring the presence of generative AI, we would declare its absence.

    2
  • Ronald Tan

    Mathieu-B,

    Yes, that is correct. I think we can agree that AI is going to stay and remain with us. I believe that careful groundwork, in the context of software architecture, should start being laid for implementing AI-powered tools in C1PRO.

    When someone or some entity wants to verify and authenticate the exported file(s) at, say, https://contentcredentials.org/verify, the result would show the lack (absence) of AI-generated work.

    1
  • Brian Jordan
    Moderator

    Mathieu, I also hope Capture One explores this. Photography has always been a trusted record, but that foundation is quickly disappearing. If Content Credentials is the way to assure viewers of some degree of reality, how can we not be on board?

    1
  • Ronald Tan

    Also....

    The reason I got this idea: with any image that has Content Credentials embedded, anyone can look up the information at the Content Credentials website, and it looks like the following. Please see the screenshot.

    Notice that on the right side, the owner (in this case: me) is shown, along with clickable links to my various connected profiles on the internet. Not shown (due to cropping): it also lists which app and device were used, that it was issued by "Adobe Inc.", and the issue date of October 26, 2023 at 11:53 AM PDT.

    Here is the possible workflow improvement in the context of C1PRO.

    I had to open my file in Photoshop, enable Content Credentials, save, and then export with the option to include/embed the Content Credentials in the exported file from Photoshop.

    Enter C1PRO. The purpose of my post is that, somehow (in the near future), all of this could be done within C1PRO's exporting options:

    1. Give us the ability to link (connect) to our various social media profiles.

    2. Once we use the Output Recipes within C1PRO, these Content Credentials are embedded and included in ALL files exported from C1PRO, such that anyone can upload these file(s) to the website and see the information (a rough sketch follows below).
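    For illustration only, here is a rough sketch in Python of what such export-time credentials might look like. The dict is shaped loosely after the manifest-definition JSON used by the open-source c2patool; the field names, the action vocabulary, and the embed_credentials helper are assumptions for illustration, not a real Capture One or C2PA API.

    import json

    # Hypothetical sketch: the data C1PRO might assemble at export time.
    # Shaped loosely after c2patool's manifest-definition JSON; names and
    # values here are illustrative assumptions, not a real API.
    manifest_definition = {
        "claim_generator": "CaptureOnePro/16.3",  # hypothetical entry
        "assertions": [
            {
                # Schema.org CreativeWork assertion carrying the author's
                # identity and linked social profiles.
                "label": "stds.schema-org.CreativeWork",
                "data": {
                    "@context": "https://schema.org",
                    "@type": "CreativeWork",
                    "author": [{
                        "@type": "Person",
                        "name": "Jane Photographer",  # placeholder name
                        "identifier": [
                            "https://www.instagram.com/example",
                            "https://www.linkedin.com/in/example",
                            "https://www.behance.net/example",
                        ],
                    }],
                },
            },
            {
                # An actions assertion recording only conventional edits;
                # the exact action vocabulary is an assumption here.
                "label": "c2pa.actions",
                "data": {"actions": [{"action": "c2pa.color_adjustments"}]},
            },
        ],
    }

    def embed_credentials(image_path: str, manifest: dict) -> None:
        """Hypothetical helper: sign the manifest and embed it into the
        exported file. A real implementation would use a C2PA SDK and a
        signing certificate; this sketch only prints the payload."""
        print(f"would embed into {image_path}:")
        print(json.dumps(manifest, indent=2))

    embed_credentials("export/IMG_0001.jpg", manifest_definition)

    Verifying such a file at https://contentcredentials.org/verify would then surface the author links, much like the screenshot described above.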

    1
  • Raymond Harrison

    I totally support this and would love to see it happen.

    0
  • david knoble

    Just a side note. Content credentials are normally stored in the DNG file at its creation, in the camera. The ID number is linked to a photographer and can be verified. So part of the question is: can C1 read that data in the DNG file?

    The second part is that editing software can write the edits and changes into the file, to show what changes have been made. Using AI masking is not editing with AI; it is a mask. However, replacing a sky using AI, or adding content that is not already there, would alter the image and would also show up. So the second question is: can C1 write the changes to the content credentials?

    There are specific protocols for using these credentials, but they are still new. Leica just released the M11-P, which can bake content credentials into every DNG. It is becoming more and more important.
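    As a concrete starting point on the "can C1 read that data" question, here is a minimal read-only sketch in Python. It leans on one documented detail of the C2PA specification: in JPEG files the manifest store travels in APP11 (0xFFEB) marker segments as a JUMBF container labeled "c2pa". Scanning for that label is only a crude presence check (DNG/TIFF files store the manifest differently, and this does nothing to verify signatures); real parsing and verification would need a C2PA SDK such as the open-source c2pa-rs.

    import struct
    import sys

    def has_c2pa_manifest(path: str) -> bool:
        """Crude check: does this JPEG carry any APP11 segment that
        mentions the C2PA JUMBF label? Not a signature verification."""
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":        # SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:            # lost marker sync; give up
                break
            marker = data[i + 1]
            if marker == 0xDA:             # SOS: image data follows
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            payload = data[i + 4:i + 2 + length]
            if marker == 0xEB and b"c2pa" in payload:
                return True                # APP11 segment with C2PA label
            i += 2 + length                # advance to next segment
        return False

    if __name__ == "__main__":
        print(has_c2pa_manifest(sys.argv[1]))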

    1
  • Brian Jordan
    Moderator

    In short, it's complicated.  Hence the request for Capture One to explore. :)

    1
  • Adam Isler

    I, too, have recently learned about this initiative. It embeds a read-only, cryptographically signed history of the content into the image metadata. This is good not just for ensuring an image is not AI, but for providing confidence and transparency into the image’s history. Leica is offering it on its new camera, and there are rumours of Sony and Nikon following suit. It seems like it would be well worth adding to C1 the ability to pick it up from supported cameras, along with an option to add it at import time for all files.

    3
  • david knoble

    I added the same request. The idea is multi-fold, and I have the Leica M11-P, the first Leica camera to contain a chip that creates the content authentication directly in the DNG. The idea is simple.

    Embedded in the DNG is a blockchain-style record that lists the camera serial number, the author (which is variable, typed into the camera menu), and some other information, which, as I understand it, includes a view of the original image out of camera.

    As the editing and exporting process continues, the edits are stored in the chain.

    At any time, the image can be viewed through a content authentication viewer (free) and the information decoded and listed.

    It does not prevent the use of AI, but is intended to be a truthful list of changes, such that if AI were used, it would be evident. If not, that would also be evident. If content were removed (i.e., people erased, clouds erased, etc.), that would be listed in the steps taken to edit the image, and it would also be evident when comparing the out-of-camera image embedded in the chain to the resulting image submitted. A toy sketch of the chaining idea follows below.
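    To make the "blockchain style" idea concrete, here is a toy Python illustration of a hash-chained edit log. This is not the real C2PA data format (which uses signed JUMBF manifests rather than a chain of plain hashes); it only sketches why tampering with a recorded edit history is evident: each record commits to the hash of the previous one, so rewriting any step breaks every later link.

    import hashlib
    import json

    def record_edit(chain: list, description: str) -> None:
        """Append an edit record that commits to the previous record."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"edit": description, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def chain_is_intact(chain: list) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev_hash = "0" * 64
        for entry in chain:
            body = {"edit": entry["edit"], "prev": entry["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

    edits = []
    record_edit(edits, "captured in camera (original pixels hashed)")
    record_edit(edits, "exposure +0.3 EV")
    record_edit(edits, "exported as JPEG")
    assert chain_is_intact(edits)

    edits[1]["edit"] = "sky replaced"      # tamper with the history...
    assert not chain_is_intact(edits)      # ...and the chain breaks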

    Clearly, journalists and news agencies want this first, but photographic contests and grant awards would also like to see it on submissions, as would commercial entities receiving images.

    This is authenticated knowledge, not prevention.

    But it is also a must, going forward, for any mainstream photo editing software. So, back to the request: please put this into the development timeline. Without this ability in C1, some will be forced to use Adobe products for images that require that authenticity and will use C1 only for those that do not. Granted, it will start out small, but I believe this will be adopted sooner rather than later.

    Just my thoughts....

    4
  • Marcin Mrzygłocki
    Top Commenter

    Raising attention a bit: currently 7 people follow this thread, yet it has only 1 vote in total so far, despite a quite positive reaction. Can you all check whether you have put your vote in? Maybe the original post needs an update to reflect the ongoing discussion?

    1
  • Ronald Tan

    I was the original author who started this. I sometimes "unsubscribe" from posts when I no longer wish to be notified via email.

    I believe the number you have indicated isn't a "vote" per se. I unfollowed my post and it now reads "6."

    I am hoping that these comments, and this discussion of why Content Credentials needs to be included in future C1PRO development, will be pinned or taken up for serious internal discussion on how best to implement it going forward.

    2
  • BeO
    Top Commenter

    A very interesting topic.

    As pointed out already, Content Credentials and Generative AI transparency are not the same.

    But Content Credentials can support Generative AI transparency (though only to a certain extent).

    Content Credentials

    It is fostered by Adobe and seems to be a specific (read: proprietary) implementation to achieve the goals of the CAI (Content Authenticity Initiative) and the C2PA (Coalition for Content Provenance and Authenticity), Adobe being one member amongst many others.

    https://contentauthenticity.org/how-it-works
    https://contentauthenticity.org/our-members
    https://c2pa.org/

    Content Credentials is in beta in PS and also in LR.

    https://helpx.adobe.com/lightroom-cc/using/content-credentials-lightroom.html


    I think the initiative is valuable for content creators and consumers, and I think it would be a good idea for C1 to support it, should this become more mainstream and established technology.


    Generative AI transparency 

    There are no rules in effect yet, but there is a law in preparation in the EU which deals with the risks and opportunities of AI; one part concerns the transparency of content generated by AI.

    The EU regulation is under negotiation with the member states now; nothing is finalized yet, but it will have a global impact, imo.

    I am quite sure that replacing a sky, and the corresponding change in the illumination of the scene, by an AI module will not have to be made transparent, because it does not bear a noteworthy risk.

    From the current proposal of the regulation:

    Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.

    Amendment 486
    Proposal for a regulation
    Article 52 – paragraph 3 – subparagraph 1

    from: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

    Especially regarding the disclosure ("Disclosure ... is clearly visible for the recipient of that content."), I have some doubts that a new metadata structure like Content Credentials is sufficient, because it is not clearly visible if you open a JPG file in a dumb application, let's say MS Paint.

    Probably a watermark is appropriate!?! But this is supported by C1 already.

    But then, I don't know the "generally acknowledged state of the art and relevant harmonised standards and specifications".

    No worries: for those who feel concerned, sky replacement will explicitly be protected from the need for watermarks or the like :-) :

     Paragraph 3 shall not apply where the use of an AI system that generates or manipulates text, audio or visual content is authorized by law or if it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties. Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, transparency obligations set out in paragraph 3 are limited to disclosing of the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant. It shall also not prevent law enforcement authorities from using AI systems intended to detect deep fakes and prevent, investigate and prosecute criminal offences linked with their use


    Anyway, there is currently no generative AI in C1.


    1
  • david knoble

    @BeO, I agree with some of what you said. However, regarding sky replacement and risk, that is exactly the point of content authenticity: to be transparent about what changes were made from the original capture, and about what generated the original capture. Replacing a sky is not acceptable for some photographic applications, for example journalism and some photography awards. Yet one can easily argue that selling photography as art does not have the same constraint. So it is not about whether sky replacement is right or wrong; it is about proving whether or not the sky was replaced, given the use of the photograph.

    At the same time, changing luminosity may not matter regardless of the use of the photograph, but knowing it was done, and comparing to the original, is simply authentication that the sky is darker in the representation but still the same sky (or whatever luminosity was created). Artistically representing the same scene is different from artistically replacing things in the same scene. Until recently this discussion was less important, because it was too difficult to change an image that drastically without it being noticeable.

    We have always had some level of content authenticity with film. The negative (or slide) was compared to the final image output, and by looking at all the negatives from the roll, one could typically tell if the image was altered, and in what fashion.

    Personally, I don’t believe that a mark on an image is necessary if the underlying data is there. Trust is in the photographer, and if proof is necessary, it is in the image. So, news editors and award judges can confirm if needed.

    The key is to adopt the technology in the software, and at the pace at which the ability to change photographs is progressing, the need to adopt the technology will increase just as fast. If Capture One does nothing more than read and add to the tokens, exporting them with the export functions, then everything else takes care of itself.

    I think that is the ask here, but correct me if I am wrong.

    2
  • BeO
    Top Commenter

    Hi david.

    If Capture One does nothing more than read and add to the tokens, exporting them with the export functions, then everything else takes care of itself.

    Absolutely.

    I think I agree with almost everything you've written.

    Personally, I don’t believe that a mark on an image is necessary if the underlying data is there. Trust is in the photographer, and if proof is necessary, it is in the image. So, news editors and award judges can confirm if needed.

    My firm belief is that for deep fakes *) which are published or distributed, it must be very clear that they are deep fakes, and "very clear" to me does not mean "buried in the metadata".

    *) "Deep fake": an image which pretends that a subject or subjects did something they actually didn't, or that events happened which did not happen, or did not happen in the way pretended. UNLESS it is art and it is clear to everyone that it is.

    If they are not published (yet), but sent to news editors or judges, and not distributed to parties where you cannot be sure they will stay private or be labeled by the recipients if used, then metadata might be sufficient. Might.

    But think about the internet: what about the girl on toktiktak who finds an image of herself, posted by someone else, in which she is doing something she actually didn't do? Or a political blog with deep-fake images? These are only detectable if you download the image and look into the metadata. No.

    I don't care about sky replacement, or removing litter, or even putting someone small into a big landscape, or someone you asked for permission, if the image is (clearly) art; but I do care if it is news or of public interest and can be used to mislead or harm people on important matters.

    IMO, it is the responsibility of the image creator to decide and label whether it is a deep fake*); he is accountable, and he needs to have this choice. It cannot be the software vendor. Suppose you replace John drinking a beer in a group of people with John drinking a beer in the same group, merging two images captured a second apart, so that all the people, including John, now have a smile on their face (which they didn't in the separate images), and John is not a politically exposed person (which would be far more sensitive). Is this a deep fake? It would be if you replaced John with Jim, or (f/m)ake John dump his beer on someone.

    The software can (probably) not decide whether an image should get a "deep fake" label. It can record the technical steps in the metadata, though.

    But deep fakes*) should scream what they are.

    0
  • Dmitrii Pchelintsev

    I guess, since they managed to get Adobe on board to support it, it will most likely become the standard. So the earlier C1 starts to support it, the better for everyone.


    1
  • Larry Boothby

    Add me to the supporters of this technology. With both Leica and Nikon on board, you are going to see more and more cameras with this tech built in. It just makes sense for C1Pro to incorporate it; plus, it is an open standard. There really isn't any reason why you shouldn't.

    0
  • PeterGunnar

    I second this (and the other) comment. The sooner this technology is handled by C1, the better.

    1
  • Richard Huggins

    A bit late to this, maybe. I'm in a camera club that is toying with making content credentials compulsory for competition entries, and it may extend to more general competitions. Most photographers here (Australia) use Adobe, which has it in beta at the moment, so there is little resistance. We need something.

    1
  • Frieder Zimmermann

    Me too; I think C1 should join the Content Authenticity Initiative in the near future.


    2
  • Eric Valk

    I support this proposal (and have voted for it and followed it).
    The complementary question is: shall we have a mechanism to grant or deny permission to use our images for AI training?

    0
  • Raymond Harrison

    I definitely support this post and would appreciate this capability.

    1
  • Eric Valk

    I have already made such a proposal, but without much support yet.

    https://support.captureone.com/hc/en-us/community/posts/16207906940957-IPTC-Field-for-Data-Mining

    0
  • Josh Hawkins

    I'd also support this proposal. And I think it will become a make-or-break issue with surprising speed. I expect this will be a must-have throughout our workflow within the next few years.

    0
  • David Knoble

    I may have missed it, but I would love to hear from the C1 folks whether they are in fact looking at this for the software. I will keep posting to keep this thread alive. I have found the verify site very easy to use, and while the technology will take a while to get into cameras, it will eventually get into phones and other devices (I believe). With today’s international political climate and the advance of AI-generated images, this technology will likely need to adapt over time, but the framework makes sense. Let’s get it into the software and improve as we go.

    0
