Letter to Jake Sullivan, National Security Advisor, and Alondra Nelson, Acting Director of the Office of Science and Technology Policy - Eshoo Urges NSA & OSTP to Address Unsafe AI Practices

Letter

Date: Sept. 20, 2022
Location: Washington, DC

Dear Advisor Sullivan and Director Nelson,

I'm writing to express grave concerns about the recent unsafe release of the Stable Diffusion model by Stability AI. I strongly urge you to address this and similar unsafe releases using any authorities and methods within your power, including export controls, and request that you brief my office on any additional authorities the executive branch may need to address this issue.

On August 22, 2022, Stability AI released its open-source, text-to-image generation model called Stable Diffusion. Unlike OpenAI's DALL-E 2, Stable Diffusion is available for anyone to use without any hard restrictions. Predictably, Stable Diffusion was misused immediately after the model was released. Stability AI knew or should have known that Stable Diffusion would be misused and took no discernible steps to protect against these misuses before release. In one instance, Stability AI even provided further directions for how to misuse the model.

Following the open-source release of Stable Diffusion, photos of violently beaten Asian women generated by Stable Diffusion were posted in online chat rooms. Reports also indicate that several 4chan threads have been dedicated to Stable Diffusion-generated pornography, some of it depicting real people. In a message posted to users of the Stable Diffusion Discord, Stability AI Founder and CEO Emad Mostaque said, "If you want to make NSFW [Not Suitable for Work] or offensive things make it on your own GPUs when the model is released." Mr. Mostaque then went on to tell users which GPUs were compatible with the model for generating such content, content he knew or should have known would likely include illegal material.

Unfortunately, the full extent to which illegal or otherwise dangerous images have been generated using Stable Diffusion is unknowable due to its open-source nature, but it is probable that pornographic images depicting real people under the age of 18 have been generated on individual users' computers, creating a market for Stable Diffusion-generated illegal depictions of minors, as well as other illegal content. While Stability AI's licensing terms prohibit illegal content, the open-source release of the model enables egregious dual-use applications. Stable Diffusion also includes a tool that attempts to detect and block offensive or undesirable images, but that tool can be easily circumvented by modifying the open-source code. This means Stable Diffusion can be -- and reportedly has been -- used to create images that DALL-E 2 currently blocks, including propaganda, violent imagery, pornography, images that potentially violate corporate copyright, and images used for disinformation and misinformation campaigns.

Reporting suggests that Stability AI released the unsafe model for funding purposes: it is now in talks to raise capital, has cemented partnerships with "governments and leading institutions," and reportedly released the model shortly after it was trained. I am an advocate for democratizing access to AI, and I believe we should not allow those who openly release unsafe models onto the internet to benefit from their carelessness. Democratizing access to AI may help alleviate incentives to release or deploy unsafe models, and I've been leading this charge through my leadership in the AI Caucus, as well as through my legislation to develop a detailed roadmap for how the U.S. can build, deploy, govern, and sustain a national research cloud and associated research resources in order to make AI systems safer and more ethical by democratizing access to AI resources and testing.

While I commend Stability AI for its overall objective of democratizing access to AI, dual-use tools that can lead to real-world harms, such as the generation of child pornography, misinformation, and disinformation, should be governed appropriately. Just as nuclear information and materials can be used both to generate energy and to commit horrible atrocities, AI models pose dual-use risks in a digital environment. We currently use export controls to control the release of various types of dual-use technical data, and I urge you to investigate the possibility of using such powers to control the release of unsafe dual-use AI models as well. In an increasingly digital world, we should increase our vigilance against digital harms to both individuals and society.

For all the reasons I've stated, I strongly urge you to address the release of unsafe AI models similar in kind to Stable Diffusion using any authorities and methods within your power, including export controls, and to brief my office on any additional authorities the executive branch may need to address this issue.

Most gratefully,

Anna G. Eshoo
Member of Congress