AI and Copyright: Expanding copyright hurts everyone (eff.org)
39 points by mooreds 49 days ago | 72 comments


How do they get to the conclusion that AI uses are protected under the fair use doctrine and anything otherwise would be an "expansion" of copyright? Fairly telling IMO


AI training and the thing search engines do to make a search index are essentially the same thing. Hasn't the latter generally been regarded as fair use, or else how do search engines exist?
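To make the analogy concrete, here is a toy sketch (not any real engine's or lab's code): building a search index and preparing an LLM training corpus both start the same way, by ingesting the full text of every document.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each word to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {1: "the cat sat", 2: "the dog ran"}
index = build_inverted_index(docs)
print(sorted(index["the"]))  # both documents contain "the"
```

In both cases the whole work is read and transformed into something that is not the work itself, which is the core of the fair use argument being made.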


There was a relatively tiny but otherwise identical uproar over Google even before they added infoboxes that reduced the number of people who clicked through.


There was also the lawsuit against Google over the Google Scholar project, which is not only very similar to how AI models ingest copyrighted material, but went further than AI by actually reproducing, word for word (intentionally so), snippets of those works. Google Scholar is also fair use.


> There was a relatively tiny but otherwise identical uproar over Google even before they added infoboxes that reduced the number of people who clicked through.

But is that because it isn't fair use or because of the virulent rabies epidemic among media company lawyers?


This was normal people, as much as bloggers on the pre-social media early web could be considered normal.


Normal people that aren't media companies were objecting to search engines indexing websites? That seems more likely to have been media companies using the fact that they're media companies to get people riled up over a thing the company is grumpy about.


I don't think regular people pay attention to copyright decisions (they don't even pay attention to the cases that make it to the Supreme Court), but there are plenty of lawyers who don't work for media companies who disagree with the findings. I also think your characterization is ridiculous and pejorative.


They disagree with search engines being fair use?

The general problem is that both the structure of copyright and the legacy media business model were predicated on copying being a capital-intensive process. If a printing press is expensive then reproduction is a good place to collect royalties, because you could go after that expensive piece of equipment if they don't pay. And if a printing press is expensive then a publisher who has one is offering a scarce service in a market with a high barrier to entry.

The internet made copying free and that pretty well devastated the publishing industry, more as a result of the second one than the first. If your product isn't scarce -- if your news reporting is in competition with every blog and social media post -- you're not getting the same margins you used to. But there's no plausible way the incumbents are going to convince people that reporters with a website instead of a printing press need to be excluded from the market so they can have less competition, and that by itself and nothing more means their traditional business model is gone. They're competing for readers and advertisers against Substack and Reddit and the cat's not going back in the bag.

Meanwhile copyright infringement got way easier and that's much more plausible to frame as a problem, so the companies want to sic their lawyers on it, except that the bag is here on the ground and the cat is still over there getting a million hits. There is no obviously good way to solve it (but plenty of bad ways to not solve it) and solving it still wouldn't put things back the way they were anyway.

So their lawyers are constantly under pressure to do something but none of their options are good or effective which means they're constantly demanding things that are oppressive or asinine or, like the anti-circumvention clause in the DMCA, own-goals that tech megacorps use against content creators to monopolize distribution channels. Which is why it's an epidemic. If you can see the target the pressure is on to pull the trigger even when all you have is a footgun.


>They disagree with search engines being fair use?

No, with LLMs being fair use. I'm not going to respond to the rest of your post, which is a paranoid and pejorative screed based on the fallacy that copyright is predicated on copying being hard or capital-intensive when that was never the case. Copying was always easy. It's the creative part that is hard, which is why copying was made illegal.


> No, with LLMs being fair use.

In which case you're responding to the wrong thread.

> Copying was always easy.

Compare the price of a physically printed book which is in the public domain to the median one that isn't. The prices are only a little lower because the printing and distribution costs are significant.

Now compare the price of ebooks in the public domain with ebooks still under copyright. The latter isn't 40% more or 75% more, it's a billion percent more. Infinitely more. Copying went from being a double-digit percentage of the price to being zero.


The most important part of fair use is whether it harms the market for the original work. Search helps bring more eyes to the original work; LLMs don't.


The fair use test (in US copyright law) is a 4 part test under which impact on the market for the original work is one of 4 parts. Notably, just because a use has massively detrimental harms to a work's market does not in and of itself constitute a copyright violation. And it couldn't be any other way. Imagine if you could be sued for copyright infringement for using a work to criticize that work or the author of that work if the author could prove that your criticism hurt their sales. Imagine if you could be sued for copyright infringement because you wrote a better song or book on the same themes as a previous creator after seeing their work and deciding you could do it better.

Perhaps famously, emulators very clearly and objectively impact the market for game consoles and computers, and yet they are also considered fair use under US copyright law.

No one part of the 4 part test is more important than the others. And so far in the US, training and using an LLM has been ruled by the courts to be fair use so long as the materials used in the training were obtained legally.


> And so far in the US, training and using an LLM has been ruled by the courts to be fair use so long as the materials used in the training were obtained legally.

Just like OpenAI is rightfully upset if their LLM output is used to train a competitor’s model and might seek to restrict it contractually, publishers too may soon have EULAs just for reading their books.


OpenAI's hypocrisy on this matter is precisely why hackers should be taking this as the best opportunity we've had in decades to scale back the massive expansions that Disney et al have managed to place on copyright. But instead of taking advantage of the fact that for once someone with funding and money can go toe to toe with the big publishers and that in doing so they will be hoist on their own petard, a lot of hackers appear to be circling the wagons and suddenly finding that they think this whole "IP" thing is good actually and maybe we should make copyright even stronger.

Surely making copyright even stronger (and even expanding it to cover style as some have argued in response to the Ghibli style stuff) will have no unintended consequences going forward into a future where more and more technology is locked down by major manufacturers with a strong incentive to use and abuse IP law to prevent competition and open alternatives... right?


GenAI art is like counterfeit goods. If left unchecked it will mostly destroy the market for the original.


That's certainly an argument often made about counterfeit goods, and it can certainly be true in cases (and counterfeiting has other problems, namely confusing the origin of a specific good when that matters to the consumer), but it's also not a universal truth either. Were it a universal truth, that would imply generally that open source can't work because anyone can make and distribute copies of the open material, but also it implies that Windows and macOS should not exist because of all the innumerable Linux clones.

Also instructive would be the IBM BIOS clone, it is perhaps true that the "IBM Compatibles" killed the market that existed for IBM machines at that moment in time, but it's also true that it opened whole new markets, both to the clone makers and the ancillary businesses, but also arguably IBM themselves.

3d printing and Arduino are probably other examples where "counterfeits" might have shrunk the market for the originals (Prusa is notably reducing how open their designs are, and Arduino themselves are not the healthiest, modulo being owned by Qualcomm now), but the market for Arduino projects and ancillary supplies, and certainly the market for 3d printers, is massively healthy, and arguably both are healthier than if Arduino or Prusa (or really RepRap) were the single and sole providers of their products.

And I think art has an even stronger bulwark in that a lot of the value of a given "art" comes not from the art itself, but from the artist. It's very possible many famous artists' works were actually made by their apprentices, but until someone proves that, the art will continue to have value as an original work of the artist. But art is also a dime a dozen (or less). The internet is full of free or dirt cheap art, and today you can go on Fiverr or Mechanical Turk and commission any number of artworks for probably less than your day's wages. But no one is buying tickets to your Fiverr concert. No one buys $1k per plate dinners at DeviantArt gallery showings. But they will pay many thousands of dollars for a piece of artwork that might destroy itself because the person who produced that artwork is named Banksy.


I don't think they are rightfully upset at all. Yeah, no kidding. Everyone becomes a pro rent seeker when it suits them. Which is the exact reason we must rein it in.


I misspoke. I should have written “understandably upset”.


1. Character of the use. Commercial. Unfavorable.

2. Nature of the work. Imaginative or creative. Unfavorable.

3. Quantity of use. All of it. Unfavorable.

4. Impact on original market. Direct competition. Royalty avoidance. Unfavorable.

Just because the courts have not done their job properly does not mean something illegal is not happening.


All of these apply to emulators.

* The use is commercial (a number of emulators are paid access, and the emulator case that carved out the biggest fair use space for them was Connectix Virtual Game Station, a very explicitly commercial product)

* The nature of the work is imaginative and creative. No one can argue games and game consoles aren't imaginative and creative works.

* Quantity of use. A perfect emulator must replicate 100% of the functionality of the system being emulated, often times including bios functionality.

* Impact on market. Emulators are very clearly in direct competition with the products they emulate. This was one of Sony's big arguments against VGS. But also just look around at the officially licensed mini-retro consoles like the ones put out by Nintendo, Sony and Atari. Those retro consoles are very clearly competing with emulators in the retro space, and their sales were unquestionably affected by the existence of those emulators. Royalty avoidance is also in play here, since no emulator that I know of pays licensing fees to Nintendo or Sony.

So are emulators a violation of copyright? If not, what is the substantial difference here? An emulator can duplicate a copyrighted work exactly, and in fact is explicitly intended to do so (yes, you can claim it's about the homebrew scene, and you can look at any tutorial on setting up these systems on YouTube to see that's clearly not what people want to do with them). Most of the AI systems are specifically programmed to not output copyrighted works exactly. Imagine a world where emulators had hash codes for all the known retail roms and refused to play them. That's what AI systems try to do.
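The hypothetical "emulator that refuses known retail ROMs" could be sketched roughly like this (entirely illustrative; the hash set and function names are made up, and no real emulator works this way):

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known commercial ROM
# dumps. (The one entry here is just the digest of the bytes b"test",
# standing in for a real retail dump.)
KNOWN_RETAIL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def allowed_to_load(rom_bytes: bytes) -> bool:
    """Refuse any ROM whose digest matches a known retail dump."""
    digest = hashlib.sha256(rom_bytes).hexdigest()
    return digest not in KNOWN_RETAIL_HASHES

print(allowed_to_load(b"homebrew game"))  # True: not on the blocklist
```

AI output filters work on the same blocklist principle, except they have to match fuzzily against near-verbatim reproductions rather than exact file hashes, which is why they are imperfect.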

Just because you have enumerated the 4 points and given 1 word pithy arguments for something illegal happening does not mean that it is. Judge Alsup laid out a pretty clear line of reasoning for why he reached the decision he did, with a number of supporting examples [1]. It's only 32 pages, and a relatively easy read. He's also the same judge that presided over the Oracle v. Google cases that found Google's use of the java APIs to be fair use despite that also meeting all 4 of your descriptions. Given that, you'll forgive me if I find his reasoning a bit more persuasive than your 52 word assertion that something illegal is happening.

[1]: https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/...


>If not, what is the substantial difference here?

Well they are completely different systems functioning in completely different ways and only looking at one of the four factors isn't doing any favors.


I believe we’re in violent agreement here, because my point was that all 4 aspects are equally important and they need to be evaluated as a whole. And further that the current legal rulings on these systems delve into each of those parts with much more nuance and care than the provided 56 word surface level examination of the issues


It seems like you're responding to a question about training by talking about inference. If you train an LLM because you want to use it to do sentiment analysis to flag social media posts for human review, or Facebook trains one and publishes it and others use it for something like that, how is that doing anything to the market for the original work? For that matter, if you trained an LLM and then ran out of money without ever using it for anything, how would that? It should be pretty obvious that the training isn't the part that's doing anything there.

And then for inference, wouldn't it depend on what you're actually using it for? If you're doing sentiment analysis, that's very different than if you're creating an unlicensed Harry Potter sequel that you expect to run in theaters and sell tickets. But conversely, just because it can produce a character from Harry Potter doesn't mean that couldn't be fair use either. What if it's being used for criticism or parody or any of the other typical instances of fair use?

The trouble is there's no automated way to make a fair use determination, and it really depends on what the user is doing with it, but the media companies are looking for some hook to go after the AI companies who are providing a general purpose tool instead of the subset of their "can't get blood from a stone" customers who are using that tool for some infringing purpose.


re ".....AI training and the thing search engines do to make a search index are essentially the same thing. ...."

Well, AI training has annoyed LOTS of people. Overloaded websites. Done things just because they can, e.g. Facebook sucking up the content of lots of pirated books.

Since this AI race started our small website is constantly overrun by bots, and it is not usable by humans because of the load. NEVER HAD this problem before AI, when access was just search engine indexing.


This is largely because search engines are a concentrated market and AI training is getting done by everybody with a GPU.

If Google, Bing, Baidu and Yandex each come by and index your website, they each want to visit every page, but there aren't that many such companies. Also, they've been running their indexes for years so most of the pages are already in them and then a refresh is usually 304 Not Modified instead of them downloading the content again.
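The economics of that refresh cycle can be sketched in a few lines (a simulation, not any crawler's actual code; the function and variable names are invented): the crawler keeps the validator (an ETag) from its last visit and skips the download when the server reports the same one, which is what an HTTP 304 response accomplishes in practice.

```python
def recrawl(cache, url, server_etag, fetch):
    """Return the page body, downloading only if the ETag changed."""
    cached = cache.get(url)
    if cached and cached[0] == server_etag:
        return cached[1], "304 Not Modified"  # reuse cached copy, no transfer
    body = fetch(url)                         # full download (200 OK)
    cache[url] = (server_etag, body)
    return body, "200 OK"

cache = {}
fetch = lambda url: b"page content"
print(recrawl(cache, "/a", "v1", fetch)[1])  # 200 OK (first visit)
print(recrawl(cache, "/a", "v1", fetch)[1])  # 304 Not Modified
```

An established search engine is mostly in the cheap 304 branch; a brand-new AI crawler with an empty cache takes the full-download branch for every page.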

But now there are suddenly a thousand AI companies and every one of them wants a full copy of your site going back to the beginning of time while starting off with zero of them already cached.

Ironically copyright is actually making this worse, because otherwise someone could put "index of the whole web as of some date in 2023" out there as a torrent and then publish diffs against it each month and they could all go download it from each other instead of each trying to get it directly from you. Which would also make it easier to start a new search engine.


Weird, AI companies insist that AI models are not just indexes but instead something the model has "learned".

So, again, to answer my question, it's certainly not a settled matter of law that AI models and/or their "training" is actually akin to a search engine such that it amounts to a fair use. So how is it that the EFF is reporting it like a fact?


Google doesn't offer copies of existing websites for its own gain (except lately they do that as well).


Basically, it’s an open question that courts have yet to decide. But the idea is that it’s fair use until courts decide otherwise (or laws decide otherwise, but that doesn’t seem likely). That’s my understanding, but I could be wrong. I expect we’ll see more and more cases about this, which is exactly why the EFF wants to take a position now.

They do link to a (very long) article by a law professor arguing that data mining is fair use. If you want to get into the weeds there, knock yourself out.

https://lawreview.law.ucdavis.edu/sites/g/files/dgvnsk15026/...


> Basically, it’s an open question that courts have yet to decide.

While it hasn't either been ruled on or turned away at the Supreme Court yet, a number of federal trial courts have found training AI models from legally-acquired materials to be fair use (even while finding, in some of those and other cases, that pirating to get copies to then use in training is not and using models as a tool to produce verbatim and modified similar-medium copies of works from the training material is also not.)

I’m not aware of any US case going the other way, so, while the cases may not strictly be precedential (I think they are all trial court decisions so far), they are something of a consistent indicator.


> even while finding, in some of those and other cases, that pirating to get copies to then use in training is not

I still don't get this one. It seems like they've made a ruling with a dependency on itself without noticing.

Suppose you and some of your friends each have some books, all legally acquired. You get them together and scan them and do your training. This is the thing they're saying is fair use, right? You're getting together for the common enterprise of training this AI model on your lawfully acquired books.

Now suppose one of your friends is in Texas and you're in California, so you do it over the internet. Making a fair use copy is not piracy, right? So you're not making a "pirated copy", you're making a fair use copy.

They recognize that one being fair use has something to do with the other one being, but then ignore the symmetry. It's like they hear the words "file sharing" and refuse to allow it to be associated with something lawful.


In Judge Alsup's case, it largely hinged on whether you had a right to the initial copy in the first place. If I read (and recall) his ruling correctly, the initial pirated copy (that is, downloading from a source that didn't have the right to distribute it in the first place) made all subsequent intermediary copies necessary to the training process also not fair use.

So in the case of you and your friends, it isn't the physical location that makes a difference, but whether you obtained the original copy legally, and the subsequent copies were necessary parts of the training process. This is also one of those places where we see the necessity for a legal concept of corporate "personhood". AnthonyMouseAI Inc. is the entity that needs to acquire and own the original copy in order for you and your friend to be jointly working on the process and sending copies back and forth. If your friend stops being an employee of AnthonyMouseAI Inc, they can't keep those copies and you can't send them any more.

Can you and your buddies do this without forming a legal corporation or partnership? Sure. Will that be a complicating factor if a publisher sued you? Probably.


> In Judge Alsup's case, it largely hinged on whether you had a right to the initial copy in the first place. If I read (and recall) his ruling correctly, the initial pirated copy (that is, downloading from a source that didn't have the right to distribute it in the first place) made all subsequent intermediary copies necessary to the training process also not fair use.

But that's the issue, isn't it? Someone has bought a legitimate copy and put it on Napster with the intention of providing copies to people when it's fair use.

If people then download it for non-fair use purposes while they all wink at each other, it doesn't become fair use just because you said it was.

But if the person(s) downloading it actually do make fair use of it, isn't that different? How is it unlike borrowing a book from the library and quoting an excerpt from it in your criticism?

> Can you and your buddies do this without forming a legal corporation or partnership?

Partnerships aren't corporations and are generally what gets formed by default if nobody files any paperwork. There are often reasons that's not what you want, but that's not the point here.

It's also sort of like, so if you filed official paperwork for a limited-liability partnership that lets anyone in then you could do it? If that was actually the thing that matters then they would just do that, right? Which also implies that it shouldn't be, because vacuous requirements are unproductive.


> But that's the issue, isn't it? Someone has bought a legitimate copy and put it on Napster with the intention of providing copies to people when it's fair use.

To the best of my understanding of the law, making whole copies of a work for other people to then make fair use of is generally not considered fair use. There are some narrow cases where it has been; the two that come to mind are a case (whose name escapes me) that found reproducing an entire editorial for commentary and discussion on a discussion forum was fair use and a handful of cases regarding whether a print shop is liable for copyright infringements if a customer orders a copy made of a work.

Likewise the person downloading it can't be making fair use of the work unless they already have a copy of the work they have legally obtained. This was a key point of the Anthropic case, the use of legally purchased or acquired books was fair use, the use of the copies obtained only via piracy were not. I don't recall if Alsup addressed it specifically in his ruling, and to the best of my knowledge otherwise no rulings have ever been made on the legality of "I own a legal copy of this work, and I am acquiring a copy made from someone else's legal copy for fair use purposes" (this is often the theory behind why downloading ROMs if you own the game already could be legal).

> How is it unlike borrowing a book from the library and quoting an excerpt from it in your criticism?

Libraries have been something publishers have hated for a long long while. They routinely try to extort massive amounts of money out of them (see separate pricing for "library" copies of media, maximum lending count limits on "library" ebooks etc). One of the reasons is that a library as you point out sort of puts a big dent in the "fair use derives from ownership / first sale doctrine" idea. Again, to the best of my knowledge, cases involving using library copies just tend to get their own set of rules, and are also probably just fewer and farther between because it's unlikely you can convince a library to let you just scan all of their books regardless of the legality of you doing so.

As for the partnership thing, my point was mostly that any legal entity that can jointly own property would have to be the owner of a work for people within that entity to distribute copies between each other, and that once a person leaves the legal entity they could no longer keep or obtain copies of works owned by the entity.


> To the best of my understanding of the law, making whole copies of a work for other people to then make fair use of is generally not considered fair use.

Who is making the copy? You have your copy that you bought. Bits in a wire are transient so that shouldn't be a copy any more than opening a book and exposing it to a reading lamp. The copy is made at the destination by them on their own storage medium. It still seems hard to distinguish from borrowing a book and making fair use of that.

> Likewise the person downloading it can't be making fair use of the work unless they already have a copy of the work they have legally obtained. This was a key point of the Anthropic case, the use of legally purchased or acquired books was fair use, the use of the copies obtained only via piracy were not.

Which is the thing I'm struggling to understand the logic of. It's like calling it piracy by stipulation from the outset and then using that to decide if it's piracy or not. That's just assuming the conclusion. If the person uploading it has a purchased copy, and the person downloading it in order to make a copy is making fair use, where is the piracy happening? A fair use copy of a lawfully purchased work is being made. It seems like they're just trying to find a way to call it piracy because it involves file sharing without even considering what's actually happening.

> Libraries have been something publishers have hated for a long long while.

Well yeah, but they're not supposed to be the ones who decide what the law is.

> it's unlikely you can convince a library to let you just scan all of their books regardless of the legality of you doing so.

Isn't scanning books from libraries the thing Google Books did? And the legality is the issue.

> As for the partnership thing, my point was mostly that any legal entity that can jointly own property would have to be the owner of a work for people within that entity to distribute copies between each other, and that once a person leaves the legal entity they could no longer keep or obtain copies of works owned by the entity.

Which still doesn't seem like it quite works.

Suppose you want to train an AI model, so you buy a bunch of books, scan them, train the model and publish the model on the internet. Presumably someone who gets a copy of the model doesn't need a copy of the books? But then why would you need a copy of the books? Couldn't you discard the scans, sell the books and keep the model?

At which point, how are we distinguishing this from borrowing the books, training the model and discarding the scans?

We can also consider this from a policy perspective. If Google had to buy a single copy of the book and wasn't allowed to sell it, would that really matter to the publisher or Google? Nope. The publishers are just looking for a hook to sue them. But it would matter to someone smaller who was trying to do it on a budget because it would make the smaller or non-profit effort non-viable, which doesn't get the publishers anything either way. So what should we do here? Probably not the thing that gives a big advantage to massive incumbents over their smaller/decentralized/non-profit competitors without providing the publishers with anything of significance.


Reminder that in the Alsup case, the shadow library was not used for training.


> Basically, it’s an open question that courts have yet to decide.

This is often repeated, but not true. Multiple US and UK courts have repeatedly ruled that it is fair use.

Anthropic won. Meta won. Just yesterday, Stability won against Getty.

At this point, it's pretty obvious that it's legal, considering that media companies have lost every single lawsuit so far.

https://fixvx.com/technollama/status/1985653634390995178


> But the idea is that it’s fair use until courts decide otherwise

That's certainly an "idea"


I think your question was supposed to be rhetorical, but I think it's safe to assume that the answer is that they're lawyers. They've read the law, and read through a large number of cases to see how judges have interpreted it over the past century or so.


I'm a lawyer. Many lawyers disagree. It's certainly not a settled matter of law that AI model training is a fair use.


They probably get to that conclusion because the courts have ruled that AI uses are protected under fair use, and so yes, changing that would be an expansion of copyright.


Not the EFF I once knew. Are they now pro-bigtech?


> Not the EFF I once knew. Are they now pro-bigtech?

There's nothing pro-bigtech in this proposal. Big tech can afford the license fees and lawsuits... and corner the market. The smaller providers will be locked out if an extended version of the already super-stretched copyright law becomes the norm.


They’ve always been anti-expansive-copyright, which has historically aligned with much (but not all) of big tech, and against big content/media.

A lot of the people that were anti-expansive-copyright only because it was anti-big-media have shifted to being pro-expansive-copyright because it is perceived as being anti-big-tech (and specifically anti-AI).


They have always been anti-bigcontent. Maybe you are the one who has changed


EFF is bought and paid for. Not once does this piece mention that "AI" and humans are different and that a digital combine harvester mowing down and ingesting the commons does not need the same rights as a human.

It is not fair use when the entire output is made of chopped up quotes from all of humanity. It is not fair use when only a couple of oligarchs have the money and grifting ability to build the required data centers.

This is another in the long list of institutions that have been subverted. ACLU and OSI are other examples.


What definition of "sufficiently transformative" doesn't include "a book about wizards" by some process being used to make "a machine that spits out text"? A magazine publisher has a more legitimate claim against the person making a ransom letter: at least the fonts are copied verbatim.

There are legitimate arguments to be made about whether or not AI training should be allowed, but it should take the form of new legislation, not wild reinterpretations of copyright law. Copyright law is already overreaching; just imagine how godawful companies could be if they're given more power to screw you for ever having interacted with their "creative works".


We did have that. In some EU countries, during the cassette tape and Sony Walkman era, private individuals were allowed to make around 5 copies for friends from a legitimate source.

Companies were not allowed to make 5 trillion copies.


I am pretty sure companies keep one copy of each item.


> It is not fair use when only a couple of oligarchs have the money and grifting ability to build the required data centers.

Seems like a good argument to not lock down the ability to create and use AI models only to those with vast sums of money able to pay extortionist prices from copyright holders. And let's be clear, copyright holders will happily extort the hell out of things if they can, for an example of this we can look to the number of shows and movies that have had to be re-edited in the modern era because there are no streaming rights to the music they used.


> EFF is bought and paid for.

by whom?


"Here's What to Do Instead" misleading title, no alternatives suggested. Just hand-wavey GenAI agitprop.


These are their alternatives:

What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution. What’s more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI.


None of their alternatives will work or solve the problems that creatives face or the problem that people cannot think for themselves any longer (as seen by the downvoting in this submission).


>the problem that people cannot think for themselves any longer (as seen by the downvoting in this submission).

Quite an interesting take to assume that everyone who disagrees with you cannot think for themselves.


I read that as the problem that people are relying on LLMs to do things for them rather than actually learning the thing themselves. Which is real but it's unclear what it has to do with copyright.


Let’s play out what the successful story looks like, for example in music generation:

* thousands of composers and musicians “contribute” their works for free to OpenAI and their ilk.

* models are trained and can produce movie scores, personalized sound tracks, etc

* the market for composers dwindles to a small sliver. Few if any choose to pursue it.

* OpenAI et al have a de facto monopoly on music creation.

* Soon the art of composition is lost. There are a few who can make a living as composers by selling their persona rather than their music. They actually just use OpenAI to write it for them.

Is that the future we want? It seems inevitable.


Keep in mind that professional composers are a tiny minority of people who make great music. And very few professional musicians actually live off royalties. I'm sure that music will do just fine without draconian copyright laws, as it has done for literally thousands of years.


Just because something has been the same for thousands of years doesn't mean it will remain so. That's a logical fallacy.

I think it’s undeniable that the financial incentive will change dramatically as genAI tools become substitute goods (even if inferior) to creative output.


http://Suno.ai is already there.


Koda, the Danish music copyright organisation, just sued Suno.ai [1], calling it the "biggest music theft in history".

Apparently Suno can almost completely reproduce some "big" songs by Danish bands, e.g. D-A-D and Aqua.

Edit: and from the article it seems they are doing what they can to make it a political/legislative issue.

[1]: https://koda.dk/om-koda/nyheder/koda-sagsoeger-ai-tjenesten-...


This could establish precedent for LLMs being infringing on authors.


Might be a skill issue but my prompt was not followed.


I’m not sure why so many seem angry about my scenario.

Is it because my prose sounds biased against GenAI companies? Or I jumped to a bleak conclusion without adequate supporting arguments?


One of the few times I vehemently disagree with the EFF.

The problem is this article seems to make absolutely no effort to differentiate legitimate uses of GenAI (things like scientific and medical research) from the completely illegitimate uses of GenAI (things like stealing the work of every single artist, living and dead, for the sole purpose of making a profit)

One of those is fair use. The other is clearly not.


What happens when a researcher makes a generative art model and publicly releases the weights? Anyone can download the weights and use it to turn a quick profit.

Should the original research use be considered legitimate fair use? Does the legitimacy get 'poisoned' along the way when a third party uses the same model for profit?

Is there any difference between a mom-and-pop restaurant who uses the model to make a design for their menu versus a multi-billion dollar corp that's planning on laying off all their in house graphic designers? If so, where in between those two extremes should the line be drawn?


I'm not a copyright attorney in any country, so the answer (assuming you're asking me personally) is "I don't know and it probably depends heavily on the specific facts of the case."

If you're asking for my personal opinion, I can weigh in on my personal take for some fair use factors.

- Research into generative art models (the kind which is done by e.g. OpenAI, Stable Diffusion) is only possible due to funding. That funding mainly comes from VC firms who are looking to get ROI by replacing artists with AI[0], and then debt financing from major banks on top of that. This drives both the market effect factor and the purpose/character of use factor, and not in their favor. If the research has limited market impact and is not done for the express purpose of replacing artists, then I think it would likely be fair use (an example could be background removal/replacement).

- I don't know if there are any legal implications of a large vs. small corporation using a product of copyright infringement to produce profit. Maybe it violates some other law, maybe it doesn't. All I know is that the end product of a GenAI model is not copyrightable, which to my understanding means their profit potential is limited as literally anyone else can use it for free.

[0]: https://harlem.capital/generative-ai-the-vc-landscape/


At what point do you cross the line from "legitimate use of a work" to illegitimate use?

If I take my legally purchased epub of a book and pipe it through `wc` and release the outputs, is that a violation of copyright? What about 10 books? 100? How many books would I have to pipe through `wc` before the outputs become a violation of copyright?

What if I take those same books and generate a spreadsheet of all the words and how frequently they're used? Again, same question, where is the line between "fine" and "copyright violation"?

What if I take that spreadsheet, load it into a website and make a javascript program that weights every word by count and then generates random text strings based on those weights? Is that not essentially an LLM in all but usefulness? Is that a violation of copyright now that I'm generating new content based on statistical information about copyright content? If I let such a program run long enough and run on enough machines, I'm sure those programs would generate strings of text from the works that went into the models. Is that what makes this a copyright violation?

If that's not a violation, how many other statistical transformations and weighting models would I have to add to my javascript program before it's a violation of copyright? I don't think it's reasonable to say any part of this is "clearly not" fair use, no matter how many books I pump into that original set of statistics. And at least so far, the US courts agree with that.


I think your analogy is a massive stretch. `wc` is neither generative nor capable of having market effect.

Your second construction is generative, but likely worse than a Markov chain model, which also did not have any market effect.

We're talking about the models that have convinced every VC it can make a trillion dollars from replacing millions of creative jobs.


It's not a stretch because I'm not claiming they're the same thing; I'm incrementally walking the tech stack to try and find where we would want to draw the line. If something has to be generative in order to be a violation, that (for all but the most insane definitions of generative) clears `wc`, but what about publishing the DVD or BluRay encryption keys? Most of the "hacker" communities pretty clearly believe that isn't a violation of copyright. But is it a violation of copyright to distribute that key along with software that can use it to make a copy of a DVD? If not, why? Is it because the user has to combine the key with the software and specifically direct that software to make a copy, where the copy is a violation of copyright but the software and key combination is not?

If the combination of the decryption key and the software that can use that key to make a copy of a DVD is not a violation of copyright, does that imply that distributing a model and, separately, a piece of software that can use that model is also not a copyright violation? If it is a violation, what makes it different from the key + copy software combo?

If we decide that being generative is a necessary component, is the line just wherever the generative model becomes useful? That seems arbitrary and unnecessarily restrictive. Google Scholar is an instructive example here: a search database that scanned many thousands of copyrighted materials, digitized them, and then made that material searchable to anyone, even (intentionally) displaying verbatim copies (or images) of parts of the work in question. This is unquestionably useful for people, and also very clearly produces portions of copyrighted works. Should the court cases be revisited and Google Scholar shut down for being useful?

If market effect is the key thing, how do we square that with the fact that a number of unquestionably market-impacting things are also considered fair use? Emulators are the classic example here, and certainly modern retro gaming OSes like Recalbox or Retropie have measurable impacts on the market for things like nostalgia-bait mini SNES and Atari consoles. And yet the emulators and their OSes remain fair use. Or again, let's go back to the combination of the DVD encryption keys and something like HandBrake. Everyone knows exactly what sort of copyright infringement most people do with those things, and there are whole businesses dedicated to making a profit off of people doing just that (just try to tell anyone with a straight face that Plex servers are only being used to connect to legitimate streaming services and stream people's digitized home movies).

My point is that AI models touch on all of these areas that we have previously carved out as fair use, and AI models are useful tools that don't (despite claims to the contrary) clearly fall afoul of copyright law. So any argument that they do needs to think about where we draw the lines and what factors make up that decision. So far the courts have found training an AI model with legally obtained materials and distributing that model to be fair use, and they've explained how they got to that conclusion. An argument to the contrary needs to draw a different line and explain why the line belongs there.


If your argument is that all of these things somehow combine to make the specific case I mentioned in my original comment legal (which was "stealing the work of every single artist, living and dead, for the sole purpose of making a profit", and I'll add replacing artists in the process to that), then I'm not seeing it.

You also seem to be talking about AI training more generally and not the specific case I singled out, which is important because this isn't a case of simply training a model on content obtained with consent: the material OpenAI and Stable Diffusion gathered was very explicitly without consent, and may have been obtained through outright piracy! (This came out in a case against Meta somewhat recently, but the exact origins of other companies' datasets remain a mystery.)

Now I explained in another comment why I think current copyright laws should be able to clearly rule this specific case as copyright infringement, but I'm not arrogant enough to think I know better than copyright attorneys. If they say it falls under fair use, I'm going to trust them. I'm also going to say that the law needs to be updated because of it, and that brings us full circle to why I disagree with the article in the first place.


My argument is better framed as this:

> "stealing the work of every single artist, living and dead, for the sole purpose of making a profit"

is begging the question. The word "stealing" inherently presumes a crime has been committed when that is the very thing up for debate. The note that this is "for the sole purpose of making a profit" asks the reader to infer that one cannot make a profit and also engage in fair use, and yet that is clearly not true. And that puts aside entirely that you're not referencing a "specific case" here, but generalizing to all forms of AI training.

We can start by examining the "stealing" phrasing, noting that up until fairly recently it was an axiom in a lot of "hacker" places that "copying is not theft". It feels somewhat contradictory to me to find that now that it's our work that's being copied, suddenly a lot of hackers are very keen on calling it theft.

We can note that a lot of our careers rest on the idea that copying is not prima facie a crime or theft. Certainly IBM wanted it to be so, and yet their BIOS was cloned and many of our careers were kickstarted (or at least kicked into high gear) by the consequences of that. Or consider that, despite what Apple would really like to be true, you can't stop people from looking at your really cool idea and making something similar. They couldn't win that battle against Microsoft, and they couldn't win it against Google or Samsung either.

We can talk about the fact that there is a lot of disagreement that the dead should have any IP rights at all. Notably again many "hacker" spaces were quite vocal about how the various "lifetime+" IP laws were actually detrimental to creativity and society. Until recently it was the Walt Disney corporation in their pursuit of making a profit that was one of the largest advocates for copyrights to be extended long after the creator was dead and gone.

We could also talk (again) about how many forms of fair use are pursued for "the sole purpose of making a profit". See the previous references to the IBM BIOS and the copying of Apple's GUIs. But there is also parody music like Weird Al, or Campbell v. Acuff-Rose Music. We can look also to the previously mentioned Connectix Virtual Game Station and other emulators or to the Oracle V. Google lawsuits. We can look at how TiVo's entire business model was to enable millions of people to create copies of copyrighted material every day, automatically. Or Kelly v. Arriba Soft (which is basically why you can do an image search today) and Suntrust Bank v. Houghton Mifflin (also relevant for the discussion re "the dead").

And most related to AI would be Authors Guild v. Google: the ingestion of thousands of copyrighted materials in the pursuit of profit to create the Google Scholar application. An application which, I note again, intentionally produces partial but 100% faithful copies of the original works for the user, something AI systems actively try to avoid.

Which is why I started by asking you to walk the tech stack. If you think the law needs to be updated or you disagree with the current rulings on this matter, I can certainly understand that. So where do you propose drawing the line? What are the specific items that you feel distinguish training an AI from the various other fair use cases that have come before? Why is Google Scholar scanning in thousands of copyrighted materials with the express purpose of both making those materials searchable and displaying copies of that material ok, but an AI scanning in books with the express purpose of creating new material not ok? Why are Android, Windows, Gnome, KDE, and other GUIs so very clearly copying the style (if not the works) of Apple ok, but Stable Diffusion producing new artworks copying styles not?



