AB 3211 - Requires that Every Generative AI System Place Watermarks in AI-Generated Content - California Key Vote

Stage Details


Title: Requires that Every Generative AI System Place Watermarks in AI-Generated Content

Vote Smart's Synopsis:

Vote to pass a bill that requires, starting February 1, 2025, that every generative AI system, as defined under the law, place watermarks in AI-generated content in California.

Highlights:

  • Requires a generative artificial intelligence (AI) provider to place an imperceptible and maximally indelible watermark into content produced or significantly altered by a generative AI system that the provider makes available (Sec. 22949, Ch. 41).

  • Requires a generative AI provider to report a vulnerability or failure related to malicious inclusion or removal of provenance information or watermarks to the Department of Technology and to notify other generative AI providers as specified within 96 hours of discovering a material vulnerability or failure (Sec. 22949, Ch. 41).

  • Requires a conversational AI system to clearly and prominently disclose to users that the system produces synthetic content (Sec. 22949, Ch. 41).

  • Requires a large online platform to use labels to disclose the provenance data of content distributed on the platform, or to label the content as having unknown provenance if the platform is unable to detect provenance data (Sec. 22949, Ch. 41).

  • Requires a large online platform to require a user to disclose whether content is synthetic content if the content does not contain provenance data or the platform cannot interpret or detect provenance data (Sec. 22949, Ch. 41).

  • Requires newly manufactured recording devices sold or distributed in California to offer users an option to place a watermark into content produced by the device (Sec. 22949, Ch. 41).

  • Requires a recording device manufacturer to offer users of recording devices purchased before July 1, 2026 a software update that, if technically feasible, enables the user to place a watermark in content produced by the device and to decode the provenance data (Sec. 22949, Ch. 41).

  • Requires generative AI providers and large online platforms to produce, beginning July 1, 2026, a Risk Assessment and Mitigation Report assessing the risks and dangers posed by synthetic content distributed on the online platform or hosted on their generative AI hosting platforms, and requires the report to be audited by qualified, independent auditors who must validate or invalidate its claims (Sec. 22949, Ch. 41).

  • Authorizes the department to assess administrative penalties for violations of the bill, including an administrative penalty of $500,000 for intentional or negligent violations (Sec. 22949, Ch. 41).

  • Establishes an effective date of July 1, 2026 (Sec. 22949, Ch. 41).
