
#### Copyright —

Stable Diffusion copyright lawsuits could be a legal earthquake for AI
======================================================================

Experts say generative AI is in uncharted legal waters.
-------------------------------------------------------

[Timothy B. Lee](https://arstechnica.com/author/timlee/) - Apr 3, 2023 11:45 am UTC

![Image generated by Stable Diffusion with the prompt “Mickey Mouse in front of a McDonald's sign.”](https://cdn.arstechnica.net/wp-content/uploads/2023/03/4c6c344c-44a9-40af-b291-7ad21554f4de_512x512.jpg)

Image generated by Stable Diffusion with the prompt “Mickey Mouse in front of a McDonald's sign.”

Timothy B. Lee / Stable Diffusion



The AI software Stable Diffusion has a remarkable ability to turn text
into images. When I asked the software to draw “Mickey Mouse in front of
a McDonald's sign,” for example, it generated the picture you see above.
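For readers who want to try the same prompt, here is a minimal sketch using the open source Hugging Face diffusers library, one common way to run Stable Diffusion locally. The model version below is an assumption; the article doesn't say which release produced the image above.

```python
# A minimal text-to-image sketch with the Hugging Face diffusers library.
# The model ID (Stable Diffusion v1.5) is an assumption, not a detail
# from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("Mickey Mouse in front of a McDonald's sign").images[0]
image.save("mickey_mcdonalds.png")
```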

Stable Diffusion can do this because it was trained on hundreds of
millions of example images harvested from across the web. Some of these
images were in the public domain or had been published under permissive
licenses such as Creative Commons. Many others were not—and the world’s
artists and photographers aren’t happy about it.

In January, three visual artists [filed a class-action copyright
lawsuit](https://arstechnica.com/information-technology/2023/01/artists-file-class-action-lawsuit-against-ai-image-generator-companies/)
against Stability AI, the startup that created Stable Diffusion. In
February, the image-licensing giant Getty [filed a
lawsuit](https://arstechnica.com/tech-policy/2023/02/getty-sues-stability-ai-for-copying-12m-photos-and-imitating-famous-watermark/) of
its own.

“Stability AI has copied more than 12 million photographs from Getty
Images’ collection, along with the associated captions and metadata,
without permission from or compensation to Getty Images,” Getty wrote in
its lawsuit.

Legal experts tell me that these are uncharted legal waters.

“I'm more unsettled than I've ever been about whether training is fair
use in cases where AIs are producing outputs that could compete with the
input they were trained on,” Cornell legal scholar James Grimmelmann
told me.

Generative AI is such a new technology that the courts have never ruled
on its copyright implications. There are some strong arguments that
copyright’s fair use doctrine allows Stability AI to use the images. But
there are also strong arguments on the other side. There’s a real
possibility that the courts could decide that Stability AI violated
copyright law on a massive scale.


That would be a legal earthquake for this still-nascent industry.
Building cutting-edge generative AI would require getting licenses from
thousands—perhaps even millions—of copyright holders. The process would
likely be so slow and expensive that only a handful of large companies
could afford to do it. Even then, the resulting models likely wouldn’t
be as good. And smaller companies might be locked out of the industry
altogether.

A “complex collage tool?”
-------------------------

The plaintiffs in the class-action lawsuit describe Stable Diffusion as
a “complex collage tool” that contains “compressed copies” of its
training images. If this were true, the case would be a slam dunk for
the plaintiffs.

But experts say it’s not true. [Eric
Wallace](https://twitter.com/eric_wallace_), a computer scientist at the
University of California, Berkeley, told me in a phone interview that
the lawsuit had “technical inaccuracies” and was “stretching the truth a
lot.” Wallace pointed out that Stable Diffusion is only a few gigabytes
in size—far too small to contain compressed copies of all or even very
many of its training images.
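A back-of-envelope calculation makes Wallace's point concrete. The figures below are approximations (Stable Diffusion v1 was trained on a LAION subset of roughly two billion images, and its weights checkpoint is around 4 GB), but the conclusion holds regardless of the exact numbers:

```python
# Rough arithmetic behind the "too small to contain copies" argument.
# Both figures are approximations, not exact values from the article.
model_bytes = 4e9        # ~4 GB weights checkpoint
training_images = 2e9    # ~2 billion training images (LAION subset)

print(model_bytes / training_images)  # ~2.0 bytes per training image

# Even an aggressively compressed JPEG thumbnail takes kilobytes, so the
# model cannot be storing compressed copies of its training images.
```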

In reality, Stable Diffusion works by first converting a user’s prompt
into a latent representation: a list of numbers summarizing the contents
of the image. Just as you can identify a point on the Earth’s surface
based on its latitude and longitude, Stable Diffusion characterizes
images based on their “coordinates” in the “picture space.” It then
converts this latent representation into an image.
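In code, those stages look roughly like the sketch below, which uses component names from the Hugging Face diffusers implementation of Stable Diffusion. It is a simplification for illustration; the iterative denoising loop that actually refines the latents is omitted.

```python
# Sketch of the stages described above, using component names from the
# Hugging Face diffusers implementation. Illustrative only: the denoising
# loop that turns random latents into meaningful ones is omitted.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# 1. The text prompt becomes a list of numbers (an embedding).
tokens = pipe.tokenizer("Mickey Mouse in front of a McDonald's sign",
                        padding="max_length", return_tensors="pt")
text_embedding = pipe.text_encoder(tokens.input_ids)[0]

# 2. The image lives in a compact latent space: for a 512x512 picture,
#    just a 4x64x64 grid of numbers -- the image's "coordinates."
latents = torch.randn(1, 4, 64, 64)  # denoising would refine these

# 3. A decoder converts the latent coordinates back into pixels.
image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
```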




Timothy B. Lee is a senior reporter covering tech policy and the future of
transportation. He lives in Washington, DC.

**Twitter** [@binarybits](https://www.twitter.com/binarybits)


Promoted Comments
-----------------



**[shelbystripes](https://arstechnica.com/civis/members/shelbystripes.382395/)**

> The training has a good chance of being found not to be a copyright
> issue or declared fair-use, however the generation of copyright
> infringing materials is likely to be found a breach of copyright,
> however the infringement would be done by the end-users not the owners
> of the AI, so they should in theory get the same protection Sony got
> in the Betamax ruling.

The Betamax ruling is actually illustrative of why these AI
tools—specifically ones trained in a way that lets them reproduce
identifiable Mickey Mouse and McDonald’s IP—might ***be*** contributory
infringement. It’s a good contrast ruling.

What do I mean?

The Betamax case found that Sony wasn’t liable for *contributory*
infringement, which is a real thing in its own right: liability for
knowingly inducing or facilitating copyright infringement by someone
else. Sony was accused of inducing infringement by making and selling a
device *specifically intended* for making copies of (recording)
copyrighted audiovisual material (broadcast TV), with knowledge of this
infringing use.

The SCOTUS ruling in the Betamax case didn’t eliminate or diminish
contributory infringement. Instead, it found that the alleged *direct
infringement* Sony was supposedly inducing wasn’t infringement at
all. The activity Sony was “inducing” was just an individual person
recording broadcast TV content—*which they were permitted and even
encouraged to watch, for free*—so they could enjoy it for free later.
This is called “time-shifting.”

And the Betamax ruling said time-shifting by VCR owners was fair use.

So the core of what let Sony off the hook was that the use Sony was
trying to “induce” was a significant non-infringing use. And it was
non-infringing *because* it was a mere time-shift of content *that the
public was permitted and encouraged to watch for free*.

The closest ***valid*** analog to this I can think of is Google image
search. You put in what you’re searching for, and it shows you
thumbnails of images similar to what you’re looking for, with a link to
the site/page where each one is located. It’s helping you find images
that people want you to view directly on their own websites anyway. And
making small thumbnails demonstrates that the intent is to direct people
to the copyright holder’s site to enjoy the content. So making
thumbnails of Getty Images should be fair use, if it’s just helping
people find the page on Getty Images where that image is displayed.
That’s similar to Betamax, theoretically.

But—and here’s the difference—Getty Images has images on its website
***for the purpose of selling you access rights to the image***. Those
images are heavily watermarked and limited in resolution, and shown to
people to give them an idea of what they can license, and the ability to
buy a license. They are ***not*** meant to be viewable for free just to
enjoy the full original image, let alone to make copies from, or
integrate those copies into art you’re making.

But that’s what these AI tools *do*. They enable people to create
(relatively) high-resolution artwork that substantially incorporates and
reproduces material from Getty Images or other copyright owners. And
they remove any watermarks or attribution in the process. And they can
produce copies that are damn close derivatives of copyrighted works.

Unlike Betamax VCRs, this is doing far more than making reproductions of
something people were encouraged to watch and enjoy for free. Unlike
Google image search, this is not just helping people find images they
can go access and enjoy in the manner the original copyright holder
intended.

This is knowingly consuming copyrighted material with the knowledge that
it could be used to create derivatives of copyrighted works. And that is
its primary offering—if they’re offering something trained on
copyrighted works, they’re literally offering to help you make
derivatives of those works. And while they put a lot of effort into
making this AI model able to do that, it sounds like some of these AI
creators aren’t putting much care or effort into teaching it how *not*
to create blatantly infringing derivatives.

That sounds like it could easily be contributory infringement to me.

[April 3, 2023 at 2:50
pm](https://arstechnica.com/civis/posts/41759151/)
