Frequently asked questions

Are Content Credentials alone sufficient to prevent the spread of misinformation?

The most effective approach to content provenance combines several technologies and practices: Content Credentials' secure metadata, imperceptible watermarks, and content fingerprinting. Using these three technologies in concert makes content provenance more robust than relying on any one of them alone.

This "three-pronged" approach includes using:

  • Secure metadata (Content Credentials): Verifiable information about how content was made that cannot be altered without leaving evidence of tampering. This metadata indicates the provenance of a digital media asset and how it was created. The CAI open-source SDK enables applications to create this metadata, securely attach it to assets, and display it to end users.

  • Watermarking: Hidden information that is imperceptible to humans but can be decoded with a specialized watermark detector. State-of-the-art watermarks can withstand alterations such as cropping, rotation, and the addition of noise to video and audio. Importantly, a watermark can survive rebroadcasting such as screenshots, pictures of pictures, and re-recording of media, all of which can remove secure metadata.

  • Fingerprinting: A unique code computed from pixels, frames, or audio waveforms that can be matched against other instances of the same content, even after some alteration. The fingerprint can be stored separately from the content, recomputed on the fly, and matched against a database of Content Credentials and associated stored fingerprints. This technique doesn't embed any information in the media itself, so it is immune to information removal: there is no information to remove.

Combining these three approaches provides a unified solution that is robust and secure enough to ensure reliable provenance information.
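To make the fingerprinting idea concrete, here is a minimal toy sketch of a perceptual "average hash": each pixel contributes one bit depending on whether it is brighter than the image's mean. This is purely illustrative, not the algorithm any C2PA implementation actually uses, and the pixel values are made up.

```python
def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

original = [10, 200, 30, 180, 90, 150, 60, 170]
# A mild alteration (e.g., re-compression noise) barely moves the pixels:
altered = [12, 198, 33, 179, 88, 152, 58, 171]

h1, h2 = average_hash(original), average_hash(altered)
print(hamming_distance(h1, h2))  # 0 — the fingerprints still match
```

Real perceptual hashes work on downsampled images or audio spectra, but the principle is the same: small alterations produce a nearby fingerprint, so the content can still be matched.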

For more information, see the blog post from April 8, 2024, Durable Content Credentials.

Are Content Credentials a blockchain system?

While Content Credentials are compatible with blockchain, they do not require or use blockchain directly.

Are Content Credentials about digital rights management?

No; Content Credentials do not enforce permissions for access to content. In many cases, the name displayed on the Verify website is the name of the exporter of the content, not the rights owner.

The “Produced by” section in Verify refers to the name of the exporter. If the image was created with an Adobe product such as Photoshop with Content Credentials (Beta) enabled, the “Produced by” section shows the name of the Adobe ID associated with the user who exported the image.

Do Content Credentials indicate if an image is fake or altered?

Content Credentials don't indicate whether an image is fake. They provide information about the origin of an image and how it was edited. For example, if an AI tool supports Content Credentials, they indicate that an image was generated with AI. If an image was taken with a C2PA-enabled camera, the Content Credentials show that, along with any subsequent edits made with C2PA-enabled software tools.

Note: Content Credentials provide a positive signal about the origin and history of an image, but they don't provide a negative signal about its authenticity.

An image with Content Credentials is like a box of cereal with a nutrition label that tells you what's in it; an image without Content Credentials is like a box of cereal with no nutrition information, so you don't know what's in it or where it came from. Because Content Credentials are cryptographically signed, you can trust the information they provide and detect if they've been tampered with.
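The tamper-evidence property can be sketched with a few lines of Python. Real Content Credentials use X.509 certificates and public-key signatures rather than the shared-secret HMAC below, but the principle is the same: any change to the signed data invalidates the signature.

```python
import hashlib
import hmac
import json

SECRET = b"signer-private-key"  # stand-in for a real signing key

def sign(manifest: dict) -> bytes:
    """Sign a canonical serialization of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify(manifest: dict, signature: bytes) -> bool:
    """Check that the manifest has not changed since it was signed."""
    return hmac.compare_digest(sign(manifest), signature)

manifest = {"producer": "Alice", "tool": "Photoshop"}
sig = sign(manifest)
print(verify(manifest, sig))       # True — untouched manifest verifies
manifest["producer"] = "Mallory"   # tampering with the metadata...
print(verify(manifest, sig))       # False — ...is detected
```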

What happens if someone takes a photo or screenshot of an existing image?

Content Credentials don't prevent anyone from taking a screenshot or photo of an image, but they indicate when a file does not have historical data. A screenshot of an image wouldn't include C2PA metadata from the original image.

Conversely, if you use a C2PA-enabled camera to take a photo of an existing image, the camera will sign the photo, but since it has no way of knowing what it is photographing, the Content Credentials won't reflect the subject's history. For example, if the original image was generated with AI, the camera won't flag it as such. In general, the camera records the device and when and where the photo was taken in metadata, but it cannot analyze the content of the image.

The C2PA specification allows for recovery of metadata stripped from an asset through a lookup process using either a watermarked ID or a perceptual content-aware hash, also referred to as a fingerprint.
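The fingerprint-based recovery flow can be sketched as follows. This is a hypothetical illustration, not part of the C2PA specification: the fingerprint scheme, threshold, and registry contents are all made up. Given an asset whose Content Credentials were stripped (for example, by a screenshot), a service computes its fingerprint and finds the closest match in a database mapping stored fingerprints to manifests.

```python
def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Registry of previously seen (fingerprint, manifest) pairs.
registry = [
    ([0, 1, 0, 1, 0, 1, 0, 1], {"producer": "Alice", "tool": "C2PA camera"}),
    ([1, 1, 1, 0, 0, 0, 1, 1], {"producer": "Bob", "tool": "AI generator"}),
]

def recover_manifest(fingerprint, max_distance=2):
    """Return the manifest whose stored fingerprint is nearest to the
    query, if it is within the matching threshold; otherwise None."""
    best = min(registry, key=lambda entry: hamming(entry[0], fingerprint))
    return best[1] if hamming(best[0], fingerprint) <= max_distance else None

# A screenshot slightly perturbs the fingerprint but still matches:
print(recover_manifest([0, 1, 0, 1, 0, 1, 1, 1]))  # Alice's manifest
# An unrelated fingerprint matches nothing:
print(recover_manifest([1, 0, 1, 0, 1, 0, 1, 0]))  # None
```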

What information is embedded in Content Credentials?

What information is embedded in Content Credentials is up to each implementor. The manifest that defines the Content Credentials can include various assertions about the image, such as its ingredients, the date and time, the location, and the device that created it.

How can I prove time and place an image was created without revealing my identity?

Content Credentials can specify identity using the Schema.org CreativeWork assertion, but doing so is entirely optional.

For example, using Photoshop you can add Content Credentials that indicate what edits were made without saying who did it. You would know Adobe signed the Content Credentials and that's it. Regardless of the "who", the cryptographically-signed manifest ensures you know the date and time. A camera could also include Exif metadata with location information.
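The contrast can be sketched with two simplified manifests, written here as Python dictionaries. The structure is illustrative, not the actual C2PA manifest format, and the field names and values are assumptions; the point is that the identity assertion can simply be omitted while the edit history remains.

```python
# Manifest that names the author via a Schema.org CreativeWork assertion:
manifest_with_identity = {
    "claim_generator": "Photoshop",
    "assertions": [
        {"label": "stds.schema-org.CreativeWork",
         "data": {"@type": "CreativeWork",
                  "author": [{"@type": "Person", "name": "Alice"}]}},
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.edited"}]}},
    ],
}

# Manifest that records the same edits but omits identity entirely:
manifest_anonymous = {
    "claim_generator": "Photoshop",
    "assertions": [
        # No CreativeWork assertion: edits are recorded, identity is not.
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.edited"}]}},
    ],
}
```

In both cases the signer (for example, Adobe) still signs the manifest, so the date, time, and edit actions remain verifiable regardless of whether an author is named.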

How do you prevent faking GPS location metadata?

Whether the location data in Exif metadata is accurate depends on the implementor. People judge the data based on the various "trust signals" in the manifest, such as who signed it and when.