Hacker News

Nvidia contacted Anna's Archive to access books

(torrentfreak.com)
skilled 1 day ago
> In response, NVIDIA defended its actions as fair use, noting that books are nothing more than statistical correlations to its AI models.

Does this even make sense? Are the copyright laws so bad that a statement like this would actually be in NVIDIA’s favor?

ThrowawayR2 1 day ago
Yes, it's been discussed many times before. All the corporations training LLMs have to have done a legal analysis and concluded that it's defensible. Even one of the white papers commissioned by the FSF ( "Copyright Implications of the Use of Code Repositories to Train a Machine Learning Model" at https://www.fsf.org/licensing/copilot/copyright-implications... ), concluded that using copyrighted data to train AI was plausibly legally defensible and outlined the potential argument. You will notice that the FSF has not rushed out to file copyright infringement suits even though they probably have more reason to oppose LLMs trained on FOSS code than anyone else in the world.
jkaplowitz 1 day ago
> Even one of the white papers commissioned by the FSF

Quoting the text which the FSF put at the top of that page:

"This paper is published as part of our call for community whitepapers on Copilot. The papers contain opinions with which the FSF may or may not agree, and any views expressed by the authors do not necessarily represent the Free Software Foundation. They were selected because we thought they advanced the discussion of important questions, and did so clearly."

So, they asked the community to share thoughts on this topic, and they're publishing interesting viewpoints that clearly advance the discussion, whether or not they end up agreeing with them. I do acknowledge that they paid $500 for each paper they published, which gives some validity to your use of the verb "commissioned", but that's a separate question from whether the FSF agrees with the conclusions. They certainly didn't choose a specific author or set of authors to write a paper on a specific topic before the paper was written, which a commission usually involves. And even then, a commissioning organization doesn't necessarily agree with a paper's conclusions, unless the commission isn't considered complete until the paper is revised to match the desired conclusion.

> You will notice that the FSF has not rushed out to file copyright infringement suits even though they probably have more reason to oppose LLMs trained on FOSS code than anyone else in the world.

This would be consistent with them agreeing with this paper's conclusion, sure. But that's not the only possibility it's consistent with.

It could alternatively be because they discovered or reasonably should have discovered the copyright infringement less than three years ago, therefore still have time remaining in their statute of limitations, and are taking their time to make sure they file the best possible legal complaint in the most favorable available venue.

Or it could simply be because they don't think they can afford the legal and PR fight that would likely result.

ThrowawayR2 1 day ago
Since I very specifically wrote "commissioned by the FSF" instead of "represents the opinion of the FSF" to avoid misrepresenting the paper, you're arguing against something I have not said.
grayhatter 10 hours ago
> Even one of the white papers commissioned by the FSF [...] concluded that using copyrighted data to train AI was plausibly legally defensible [...] notice that the FSF has not rushed out to file copyright infringement suits even though they probably have more reason to oppose LLMs trained on FOSS code than anyone else in the world.

I agree with jkaplowitz, but for a different reason: your description still feels a bit misleading to me. The FSF-commissioned paper makes the argument that Microsoft's use of code FROM GITHUB, FOR COPILOT is likely non-infringing because of the additional GitHub ToS. This feels like critical context to provide, given that in the very next statement you widened it to LLMs generally, and the FSF likely cares about code that isn't on GitHub as well.

All of that said, I'm not sure it matters, because I don't find the argument from that whitepaper very compelling: it rests critically on the additional grants in the ToS. IIRC (going only from memory), the ToS requires that you grant GitHub a license as needed to provide the service. GitHub can provide the services the user reasonably understood GitHub to provide without violating the additional clauses specified in the existing FOSS license covering the code. That was a while ago, though, and I'd say it's very murky now, because everyone knows Microsoft provides Copilot, so "obviously" they need it.

Unfortunately, and importantly, when dealing with copyright, the paper also covers the transformative fair use arguments in depth. And I do find those arguments very compelling. The paper (and likely others) makes the argument that the code output from an LLM is likely transformative, and thus can't be infringing (or is unlikely to be). I think in many cases the output is clearly transformative in nature.

I've also seen code generated by Claude (likely others as well?) copy large sections from existing works, where it's clearly "copy/paste", which clearly can't be fair use, nor transformative. The output copies the soul of the work. Given that I have no idea what dataset it's copying this code from, it's scary enough to make me unwilling to take the chance on any of it.

reorder9695 20 hours ago
So it's legal to train an "intelligence" on everything for free based on fair use, but it's not legal to train another intelligence (my brain) on it?
grayhatter 10 hours ago
No, it's also not illegal to train your brain. If you break into a store, and read all the books, you'll get arrested for breaking and entering. Not for reading the books. My (superficial) take on the argument is that they're hoping by saying "it's not illegal to read" no one will notice, and no one will ask how they got into the book store to begin with.
thisislife2 3 hours ago
So why is it illegal to download a pirated copy of a book from the internet to "train" my brain? There's no breaking and entering there, right?
grayhatter 1 hour ago
The answer is in the name of the law: copyright, the right to produce a copy. The original, ethical intent behind the law was to encourage people to create things. Someone could invest time and money into creating some art that had value, and then they were given the exclusive right to monetize it for some amount of time. You could create something, and I'm not allowed to copy what you created and sell it without your permission, which prevents me from doing no work yet capturing all the money you could reasonably make off your work.

Want to create a song? You're the only person allowed to duplicate it, or to authorize people to duplicate it. You're the only person allowed to control the supply of your effort. Eventually, the public good and interest were supposed to take over, because in the end, you're right, it's just information. It was supposed to enter "the public domain", where anyone could freely use it. But then Disney got involved, and now it's a toxified weapon used mostly by unethical lawyers against curiosity.

theragra 1 hour ago
Because you are making a copy? Moreover, in some jurisdictions only uploading is illegal. Downloading is fine.
Arnt 5 hours ago
You're close to an important point.

Our current laws are written to make it legal for you to copy the Quran via your brain — some people learn it by rote and can stand up and speak the entire work from one end to the other. This is intended to be legal. Fair use of the Quran.

I went to a concert recently where someone copied every word and (as far as I could hear) every note from a copyrighted work by Bruce Springsteen. Singing and playing. This too is intended to be fair use.

You can learn how to play and sing Springsteen songs verbatim, and you can use his records to learn to sound like him when you sing, and that's intended to be legal.

Since the law doesn't say "but you cannot write a program to do these things, or run such a program once written", why would it be illegal to do the same thing using some code?

The people who want the law to differentiate have a difficult challenge in front of them. As I see it, they need to differentiate what humans do to learn from what machines do, and that implies really knowing what humans do. And then they need to draw boundaries, making various kinds of computer-assisted human learning either legal or illegal.

Some of them say things like "when an AI draws Calvin and Hobbes in the style of Breughel, it obviously has copied paintings by Breughel" but a court will ask why that's obvious. Is it really obvious that the way it does that drawing necessarily involves copying, when you as a human can do the same thing without copying?

tremon 1 hour ago
> I went to a concert recently where someone copied every word and (as far as I could hear) every note from a copyrighted work by Bruce Springsteen. Singing and playing. This too is intended to be fair use.

Only the learning part is fair use. Playing an artist's songs in public does not violate the copyright of the original performing artist, but it does violate the songwriters' copyright, and you do need a license to play covers in public.

They're called Performing Rights: https://en.wikipedia.org/wiki/Performing_rights

Arnt just now
It can also violate other laws and rules that are not relevant to copyright. Perhaps I should have digressed into listing that? I chose not to.
general1465 1 day ago
Did you pirate this movie? No I did not, it is fair use because this movie is nothing more than a statistical correlation to my dopamine production.
earthnail 1 day ago
The movie played on my screen but I may or may not have seen the results of the pixels flashing. As such, we can only state with certainty that the movie triggered the TV's LEDs relative to its statistical light properties.
gruez 21 hours ago
>Did you pirate this movie? No I did not, [...]

You're probably being sarcastic but that's actually how the law works. You'll note that when people get sued for "pirating" movies, it's almost always because they were caught seeding a torrent, not for the act of watching an illegal copy. Movie studios don't go after visitors of illegal streaming sites, for instance.

aucisson_masque 18 hours ago
> Movie studios don't go after visitors of illegal streaming sites, for instance.

They absolutely do. In France we have Hadopi, which tracks torrent leechers. Hadopi was heavily pushed by the movie and music industry.

gruez 16 hours ago
>They absolutely do. In France we have Hadopi, which tracks torrent leechers

You're still uploading even if you don't let it finish and go to "seeding".

bmitc 10 hours ago
It's how the law works for those at the top of the oligarchy.
thaumasiotes 1 day ago
Note that what copyright law prohibits is the action of producing a copy for someone else, not the action of obtaining a copy for yourself.
codedokode 5 hours ago
If I am not mistaken, the law prohibits producing any unauthorized copies. So if you download a pirated book to a computer, you produce an illegal copy [1]. If I am not missing anything, ML companies are galaxy-scale infringers.

> 106. Exclusive rights in copyrighted works

> Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:

> (1) to reproduce the copyrighted work in copies or phonorecords;

> 501. Infringement of copyright

> (a) Anyone who violates any of the exclusive rights of the copyright owner as provided by sections 106 through 122 or of the author as provided in section 106A(a), or who imports copies or phonorecords into the United States in violation of section 602, is an infringer of the copyright or right of the author, as the case may be.

[1] https://www.copyright.gov/title17/92chap5.html

bulbar 10 hours ago
Training an LLM, however, is a lossy compression algorithm that provides a copy of a variant of the data to the user later on.
JKCalhoun 1 day ago
I saw the movie, but I don't remember it now.
machomaster 20 hours ago
I saw the movie, but I did not watch it.
ErroneousBosh 21 hours ago
Did you pirate this movie?

No, I acquired a block of high-entropy random numbers as a standard reference sample.

Ferret7446 1 day ago
Indeed, the "copy" of the movie in your brain is not illegal. It would be rather troublesome and dystopian if it were.
visarga 1 day ago
The problem is when you use your "copy" as inspiration and actually create and publish something. It is very hard to be certain you are safe: besides literal expression, close paraphrasing is also infringing, as is using world-building elements or any original abstraction (the AFC test). You can only know after a lawsuit.

It is impossible to tell how much AI any creator used secretly, so now all works are under suspicion. If copyright maximalists successfully copyright style (vibes), then creativity will be threatened. If they don't succeed, then copyright protection will be meaningless. A catch-22.

HWR_14 19 hours ago
> close paraphrasing is also infringing, as is using world-building elements or any original abstraction (the AFC test)

World building elements? Do you have more details on that, because that feels wrong to me.

Unless you mean the specific names of things in the world like "Hobbits".

SoftTalker 1 day ago
Not yet, anyway.
NitpickLawyer 1 day ago
> Does this even make sense? Are the copyright laws so bad that a statement like this would actually be in NVIDIA’s favor?

It makes some sense, yeah. There's also precedent: Google scanning massive amounts of books, but not reproducing them. Most of our current copyright laws deal with reproductions. That's a no-no. It gets murky on the rest. Nvidia's argument here is that they're not reproducing the works, they're not providing the works to other people, they're "scanning the books and computing some statistics over the entire set". Kinda similar to Google. Kinda not.

I don't see how they get around "procuring them" from 3rd party dubious sources, but oh well. The only certain thing is that our current laws didn't cover this, and it's probably too late now.
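
As a rough, hedged sketch (my own toy illustration, not anything NVIDIA has described), "computing some statistics over the entire set" can in its simplest form mean something like counting n-gram frequencies across a corpus:

    # Toy sketch only: real LLM training is vastly more complex than this.
    # Counting bigram frequencies is the simplest kind of "statistics over
    # a set of books".
    from collections import Counter

    def bigram_counts(text: str) -> Counter:
        words = text.lower().split()
        return Counter(zip(words, words[1:]))

    corpus = ["It was the best of times, it was the worst of times."]
    stats = Counter()
    for book in corpus:
        stats += bigram_counts(book)

    print(stats.most_common(2))  # [(('it', 'was'), 2), (('was', 'the'), 2)]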

bulbar 10 hours ago
If they don't reproduce the data in any form, how could the LLM be of any use?

The whole/main intention of an LLM is to reproduce knowledge.

olejorgenb 1 day ago
> I don't see how they get around "procuring them" from 3rd party dubious sources

Yeah, isn't this what Anthropic was found guilty of?

masfuerte 1 day ago
Scanning books is literally reproducing them. Copying books from Anna's Archive is also literally reproducing them. The idea that it is only copyright infringement if you engage in further reproduction is just wrong.

As a consumer you are unlikely to be targeted for such "end-user" infringement, but that doesn't mean it's not infringement.

NitpickLawyer 23 hours ago
https://cases.justia.com/federal/appellate-courts/ca2/13-482...

This is the conclusion of the saga between the Authors Guild and Google. It goes through a lot of factors, but in the end the conclusion is this:

> In sum, we conclude that: (1) Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use. (2) Google’s provision of digitized copies to the libraries that supplied the books, on the understanding that the libraries will use the copies in a manner consistent with the copyright law, also does not constitute infringement. Nor, on this record, is Google a contributory infringer.

amanaplanacanal 1 day ago
It seems like they pretty much don't care unless you distribute the copy. There is certainly precedent for it, going back to the Betamax case in the 1980s.
Ferret7446 1 day ago
Private reproductions are allowed (e.g. backups). Distributing them non-privately is not.
masfuerte 1 day ago
Backups are permitted (and not for all media) only when you legally acquired the source. Scanning a physical book is not a permitted backup, and neither is downloading a book from Anna's Archive.
fc417fc802 22 hours ago
> Scanning a physical book is not a permitted backup

On what basis do you claim that?

You're also missing critical legal context. When a would-be consumer downloads pirated media in lieu of purchasing it, he damages the would-be seller. When my automated web scraper inadvertently archives some pirated content on my local disk, no one is financially harmed.

The question is where the boundary between those things lies.

gruez 21 hours ago
>Distributing them non-privately is not.

You can even distribute them, to some limits.

https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....

threethirtytwo 1 day ago
It does make sense. It's controversial. Your brain memorizes things in the same way. So what Nvidia does here is no different; the AI doesn't actually copy any of the books. To call training illegal is similar to calling reading a book and remembering it illegal.

Our copyright laws are nowhere near detailed enough to specify anything here, so there is indeed a logical and technical inconsistency.

I can definitely see these laws evolving into things that are human centric. It’s permissible for a human to do something but not for an AI.

What is consistent is that obtaining the books was probably illegal. But say Nvidia bought one Kindle copy of each book from Amazon and scraped everything for training; then that falls into the grey zone.

ckastner 1 day ago
> To call training illegal is similar to calling reading a book and remembering it illegal.

Perhaps, but reproducing the book from this memory could very well be illegal.

And these models are all about production.

roblabla 1 day ago
To be fair, that seems to be where some of the AI lawsuits are going. The argument goes that the models themselves aren't derivative works, but the output they produce can absolutely be, in much the same way that reproducing a book from memory could be copyright violation, trademark infringement, or generally run afoul of the various IP laws.
threethirtytwo 1 day ago
Models don't reproduce books, though. It's impossible for a model to reproduce something word for word, because the model never copied the book.

Most of a best-fit curve runs along a path that doesn't even touch an actual data point.
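
To make that analogy concrete (a toy sketch of the analogy only; actual LLM training works very differently), a least-squares fit summarizes noisy points without passing through any of them exactly:

    # Toy illustration of the "best-fit curve" analogy above.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y = 3 * x + 1 + rng.normal(scale=0.2, size=x.size)  # noisy "training data"

    coeffs = np.polyfit(x, y, deg=1)   # fit a line y = a*x + b
    fitted = np.polyval(coeffs, x)     # the curve's values at the data x's

    # Residuals are nonzero: the curve summarizes the data without
    # reproducing any single point exactly.
    print(np.min(np.abs(fitted - y)) > 0)  # almost surely True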

NicuCalcea 18 hours ago
Models absolutely do reproduce books.

> With a simple two-phase procedure, we show that it is possible to extract large amounts of in-copyright text from four production LLMs. While we needed to jailbreak Claude 3.7 Sonnet and GPT-4.1 to facilitate extraction, Gemini 2.5 Pro and Grok 3 directly complied with text continuation requests. For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984.

https://arxiv.org/abs/2601.02671
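
A hedged sketch of the general idea behind such probes (my own illustration; the paper's actual two-phase procedure is more involved, and `complete` below is a stand-in for any text-continuation API, not a real library call): feed the model a prefix of a passage and measure how closely its continuation matches the real text.

    # Sketch of a prefix-continuation memorization probe.
    from difflib import SequenceMatcher

    def memorization_score(complete, passage: str, prefix_len: int = 200) -> float:
        """Return similarity (1.0 = verbatim) between the model's
        continuation of a prefix and the passage's true continuation."""
        prefix, truth = passage[:prefix_len], passage[prefix_len:]
        continuation = complete(prefix)[:len(truth)]
        return SequenceMatcher(None, continuation, truth).ratio()

    # Demo with a stub "model" that has perfectly memorized the passage:
    passage = "It is a truth universally acknowledged, that a single man " * 10
    stub = lambda prefix: passage[len(prefix):]
    print(memorization_score(stub, passage))  # 1.0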

thedailymail 15 hours ago
The supplementary files in that paper, verbatim reproductions of the full texts of Frankenstein and The Great Gatsby, are pretty instructive. The research group highlighted all additions and omissions, but on most pages the differences are hard to spot because they amount to only missing spaces, extra hyphens, and other typographical minutiae.
kalap_ur 1 day ago
If even one exact sentence is taken out of the book and not referenced with quotes and the exact source, that triggers copyright law. So the model doesn't have to reproduce the entire book; it only has to reproduce one specific sentence (which may be a sentence characteristic of that author or that book).
kelnos 21 hours ago
Sure, but that use would easily pass a fair use test, at least in the US.
CamperBob2 1 day ago
If even one exact sentence is taken out of the book and not referenced with quotes and the exact source, that triggers copyright law.

Yes, and that's stupid, and will need to be changed.

empath75 1 day ago
They do memorize some books. You can test this trivially by asking ChatGPT to produce the first chapter of something in the public domain -- for example, A Tale of Two Cities. It may not be word for word exact, but it'll be very close.

These academics were able to get multiple LLMs to produce large amounts of text from Harry Potter:

https://arxiv.org/abs/2601.02671

threethirtytwo 1 day ago
In that case I would say it is the act of reproducing the books that is illegal. Training the AI on said books is not.

So the illegality rests at the point of output and not at the point of input.

I’m just speaking in terms of the technical interpretation of what’s in place. My personal views on what it should be are another topic.

ckastner 1 day ago
> So the illegality rests at the point of output and not at the point of input.

It's not as simple as that, as this settlement shows [1].

Also, generating output is what these models are primarily trained for.

[1]: https://www.bbc.com/news/articles/c5y4jpg922qo

kelnos 21 hours ago
Unfortunately a settlement doesn't really show you anything definitive about the legality or illegality of something.

It only shows you that the defendant thought it would be better for them to pay up rather than continue to be dragged through court, and that the plaintiff preferred some amount of certain money now over some other amount of uncertain money later, or never.

We cannot say with any amount of confidence how the court would have ruled on the legality, had things been allowed to play out without a settlement.

threethirtytwo 1 day ago
>Also, generating output is what these models are primarily trained for.

Yes but not generating illegal output. These models were trained with intent to generate legal output. The fact that it can generate illegal output is a side effect. That's my point.

If you use AI to generate illegal output, that act is illegal. If you use AI to generate legal output, that act is not illegal. Thus the point of output is where the legal question lies. From inception up to training, there is clear legal precedent for the existence of AI models.

lelanthran 1 day ago
> To call training illegal is similar to calling reading a book and remembering it illegal.

A type of wishful thinking fallacy.

In law, scale matters. It's legal for you to possess a single joint. It's not legal to possess 400 tons of weed in a warehouse.

kalap_ur 1 day ago
It is not the scale that matters here, in your example, but intent. With 1 joint, you want to smoke it yourself. With 400 tons, you very possibly want to sell it to others. Scale in itself doesn't matter; scale matters only to the extent it changes what your intention may be.
lelanthran 23 hours ago
> It is not the scale that matters here, in your example, but intent. With 1 joint, you want to smoke it yourself. With 400 tons, you very possibly want to sell it to others. Scale in itself doesn't matter; scale matters only to the extent it changes what your intention may be.

It sounds, then, like you're saying that scale does indeed matter in this context: every single piece of writing in existence isn't being slurped up purely to learn, it's being slurped up to make a profit.

Do you think they'd be able to offer a useful LLM if the model was trained on only what an average person could read in a lifetime?

threethirtytwo 20 hours ago
It's common knowledge among LLM experts that the current capabilities of LLMs are triggered as emergent properties of training transformers on reams and reams of data.

That is the intent of scale: to push LLMs to this point of "emergence". Whether or not it's AGI is a debate I'm not willing to entertain, but everyone pretty much agrees that there's a point where the scale tips a transformer from being an autocomplete machine to something more than that.

That is the legal basis for why companies would go for scale with LLMs. It's the same reason people are allowed to own knives even though knives are known to be useful for murder (as a side effect).

So technically speaking these companies have legal runway in terms of intent. Making an emergent and helpful AI assistant is not illegal, but also making a profit isn't illegal either.

kelnos 21 hours ago
Right, but in the weed analogy, the scale is used as a proxy to assume intent. When someone is caught with those 400 joints, the prosecution doesn't have to prove intent, because the law has that baked in already.

You could say the same in LLM training, that doing so at scale implies the intent to commit copyright infringement, whereas reading a single book does not. (I don't believe our current law would see it this way, but it wouldn't be inconsistent if it did, or if new law would be written to make it so.)

threethirtytwo 1 day ago
It’s clear nvidia and every single one of these big AI corps do not want their AIs to violate the law. The intent is clear as day here.

Scale is only used for emergence. OpenAI found that training transformers on the entire internet would make them more than just next-token predictors, and that is the intent everyone is going for when building these things.

kelnos 21 hours ago
I don't think that's clear at all. Businesses routinely break the law if they believe the benefits in doing so will outweigh the consequences.

I think this is even more common and more brazen when it comes to "disruptive" businesses and technologies.

threethirtytwo 21 hours ago
>Businesses routinely break the law if they believe the benefits in doing so will outweigh the consequences.

I'm saying there's collective incentive among businesses to restrict the LLM from producing illegal output. That is aligned and ultra clear. THAT was my point.

But if LLMs produce illegal output as a side effect and it can't be controlled, then your point comes into play, because now they have to weigh the cost and benefit, as they don't have a choice in the matter. But that wasn't what I was getting at. That's your new point, which you introduced here.

In short it is clear all corporations do not want LLMs to produce illegal content and are actively trying to restrict it.

threethirtytwo 1 day ago
Er, no. I've read and remember hundreds of books in my lifetime. It's not any more illegal based on scale. If the law doesn't differentiate between my remembering one book or a hundred, then there's no difference for thousands or millions.

No wishful thinking here.

lelanthran 23 hours ago
> Er no. I’ve read and remember hundreds of books in my life time. It’s not any more illegal based off scale.

I'm not sure you understood what you said, but superficially it appears that you are agreeing with me?

Just because it's legal to read 100s of books does not make it legal to slurp up every single piece of produced content ever recorded.

We're talking many orders of magnitude in scale there, and you're the one who pointed out that scale :-/

threethirtytwo 21 hours ago
No I'm not agreeing with you.

>Just because it's legal to read 100s of books does not make it legal to slurp up every single piece of produced content ever recorded.

The law says you're perfectly in your legal right to slurp up every piece of content ever produced.

>We're talking many orders of magnitude in scale there, and you're the one who pointed out that scale :-/

I'm aware, and the law doesn't talk about scale.

kelnos 21 hours ago
What is "scale" in this context? I think arguably 100 books over the span of decades is not "scale".

But tens (hundreds?) of thousands of books over the span of a few weeks? That's definitely "scale".

threethirtytwo 21 hours ago
The law doesn't talk about scale, so either is perfectly legal. Memorizing a billion books vs. memorizing one book: same laws apply.
kalap_ur 1 day ago
You can only read the book if you purchased it. Even if you don't have the intent to reproduce it, you must purchase it. So, I guess NVDA should just purchase all those books, no?
threethirtytwo 1 day ago
Yep, I agree. That’s the part that’s clearly illegal. They should purchase the books, but they didn’t.
Nursie 1 day ago
This is the bit an author friend of mine really hates. They didn’t even buy a copy.

And now AI has killed his day job writing legal summaries. So they took his words without a license and used them to put him out of a job.

Really rubs in that “shit on the little guy” vibe.

ThrowawayR2 1 day ago
Obviously not; one can borrow books from libraries and read them as well.
threethirtytwo 1 day ago
That's true. But the book itself was legally purchased. So if nvidia went to the library and trained AI by borrowing books, that should be technically legal.
kelnos 21 hours ago
Do you have the same legal rights to something that you've borrowed as you do with something you've purchased, though?

Would it be legal for me to borrow a book from the library, then scan and OCR every page and create an EPUB file of the result? Even if I didn't distribute it, that sounds questionable to me. Whereas if I had purchased the book and done the same, I believe that might be ok (format shifting for personal use).

Back when VHS and video rental was a thing, my parents would routinely copy rented VHS tapes if we liked the movie (camcorder connected to VCR with composite video and audio cables, worked great if there wasn't Macrovision copy protection on the source). I don't think they were under any illusions that what they were doing was ok.

threethirtytwo 21 hours ago
Well, if I copied it word for word, maybe. But if I read it and "trained it" into my brain, then it's clearly not illegal.

So the grey area here is: if I "trained" an LLM in a similar way and didn't copy it word for word, then is it legal? Because fundamentally speaking, it's literally the same action taken.

_trampeltier 1 day ago
But to train the models they have to download it first (make a copy)
threethirtytwo 21 hours ago
You had to do this for reading too. The words were burned onto your retina as volatile memory before getting processed by your brain.

Your retina likely overwrote its "memory" as soon as you looked at something else, but that's no different than copying and deleting, or the more apt analogy: streaming.

codedokode 4 hours ago
The law makes a distinction between storing content on a disk and just remembering it. The latter is not a "copy" and not subject to the law:

> “Copies” are material objects, other than phonorecords, in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device. The term “copies” includes the material object, other than a phonorecord, in which the work is first fixed.

> A work is “fixed” in a tangible medium of expression when its embodiment in a copy or phonorecord, by or under the authority of the author, is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration. A work consisting of sounds, images, or both, that are being transmitted, is “fixed” for purposes of this title if a fixation of the work is being made simultaneously with its transmission.

https://www.copyright.gov/title17/92chap1.html

threethirtytwo 1 hour ago
Interesting. How long is the transitory duration? The interpretation of that likely has yet to be determined by a court case and can evolve similar to how “all men are created equal” doesn’t just refer to men.

Seems to me a possible interpretation is just deleting the data after training is finished.

_trampeltier 6 hours ago
BS. Nvidia stores and uses the copy for each training run. Or do you really think they just download it again in real time for each training run?
godelski 1 day ago
You need to pay for the books before you memorize them
laterium 1 hour ago
You can sit down at a library or Barnes and Noble and memorize for free.
godelski just now
You should read the sibling comments before leaving your unique "contribution"
threethirtytwo 23 hours ago
Partially true. I can pay for a book then lend it out to people for free.

The government is in full support of this "lending" concept, in fact they have created entire facilities devoted to this very concept of lending out books.

godelski 14 hours ago
Okay, so go check out 500 TB worth of books from the library. I'll wait
threethirtytwo 13 hours ago
If I’m rich enough to employ thousands of people I can hire each one of them to borrow as many books as possible then use all the books to train an AI. Perfectly legal. And also very possible.

Point being that the library prevents you from checking out 500 TB because of logistical issues: how could you carry all those books, and how could the library let other patrons check out books if you grabbed that many? These rules aren't there to prevent "scale", hence why my methodology got around them.

godelski 10 hours ago
Great! Then it's perfectly legal.

As long as you obtain the books legally then it's legal

This really isn't that hard

threethirtytwo 9 hours ago
So you’re wrong when you said you have to pay for the books. You don’t.
godelski just now
You should go be a lawyer.

For the rest of us, we understand context

Nursie 1 day ago
But it’s not just about recall and reproduction. If they used Anna’s Archive the books were obtained and copied without a license, before they were fed in as training data.
lencastre 1 hour ago
I would love to see these Nvidia designs as mere statistical correlations of graphics card design.
Bombthecat 1 day ago
Who cares? Only Disney had the money to fight them.

Everything else will be slurped up for and by AI, and reused.

HillRat 23 hours ago
It's not settled law as it pertains to LLMs, but, yes, creating a "statistical summary" of a book (consider, e.g., a concordance of Joyce's "Ulysses") is generally protected as fair use. However, illegally accessing pirated books to create that concordance is still illegal.
nancyminusone 1 day ago
When you're responsible for 4% of the global GDP, they let you do it.
qingcharles 1 day ago
They let you just grab any book you want.
bulbar 11 hours ago
Of course it does not make sense; it's just the framing of a multi-billion-dollar industry, and people tend to buy such framings.
HWR_14 19 hours ago
Copyright laws are so undefined, and NVIDIA's lawyers so plentiful, that the statement works in their favor. You're allowed to copy part of a work in many cases; the easiest example is that you can quote a line from a book in a review. The line is fuzzy.
tobwen 1 day ago
Books are databases, chars their elements. We have copyright for databases in the EU :)
RGamma 1 day ago
The chicken is trying to become the egg.
postexitus 1 day ago
A quite good explanation by Cory Doctorow of what copyright laws cover and what they should (and should not) cover is here: https://www.theguardian.com/us-news/ng-interactive/2026/jan/...
Elfener 1 day ago
It seems so; stealing copyrighted content is only illegal if you do it to read it or to allow others to read it. Stealing it to create slop is legal.

(The difference is that the first use allows ordinary people to get smarter, while the second use allows rich people to get (seemingly) richer, a much more important thing.)

poulpy123 1 day ago
I'm not saying it will change anything, but going after Anna's Archive while most of the big AI players intensively used it is quite something.
gizajob 1 day ago
Library Genesis worked pretty great and unmolested until news came out about Meta using it, at which point a bunch of the main sites disappeared off the net. So not only do these companies take ALL the pirated material, their act of doing so even borks the pirates, ruining the fun of piracy for everyone else.
pjc50 1 day ago
NVIDIA are "legitimate", so anything they do is fine, while AA are "illegitimate", so it's not.
countWSS 1 day ago
Short-term thinking: they don't care about where the data comes from, only how easy it is to get. It's probably decided at the project-manager level.
haritha-j 1 day ago
Just to clarify, the most valuable company in the world refuses to pay for digital media?
rpdillon 1 day ago
I see this sentiment posted quite a bit, but have the publishers made any products available that would allow AI training on their works for payment? A naive approach would be to go to an online bookstore and pay $15 for every book, but then you have copyrighted content that is encrypted, and it's a violation of the DMCA to decrypt it.

I assume you're expecting that they'll reach out and cut a deal with each publishing house separately, and then those publishing houses will have to somehow transfer their data over to NVIDIA. But that's a very custom set of discussions and deals that have to be struck.

I think they're going to the pirate libraries because the product they want doesn't exist.

haritha-j 23 hours ago
Perhaps because authors don't want their content to be used for this purpose? Because Microsoft refuses to give me a copy of the source code to Windows to 'inspire' my vibe-coded OS, Windowpanes 12, of which I will not give Microsoft a single cent of revenue, it's acceptable for me to pirate it? Someone doesn't want to sell me their work, so I'm justified in stealing it?
zaptheimpaler 21 hours ago
> I assume you're expecting that they'll reach out and cut a deal with each publishing house separately, and then those publishing houses will have to somehow transfer their data over to NVIDIA. But that's a very custom set of discussions and deals that have to be struck.

If this is the only legal way for them to train, then yes, that is what they should do instead of breaking the law... just because it's not easy doesn't mean piracy is fine.

rpdillon 17 hours ago
My comment is being misread as support for piracy; it isn't meant to discuss piracy at all. It's instead intended to look at everything that's not piracy, examine those options' costs, and ask why the industry chose the path it did.

Existing rulings are beginning to suggest that if the books can be obtained legally, a separate license is not required for training. So I'm naturally interested in legal ways folks training models would get a lot of books, and whether the publishing industry has even considered the value there.

g947o 21 hours ago
Hmm, didn't Anthropic buy a bunch of used books (like, physical ones), scan them, and then destroy them? If Anthropic can do that, surely NVIDIA can too.
rpdillon 16 hours ago
Yes! And it was ruled legal by the courts, but the media spun it as "Anthropic destroys a million books to build AI". This is the only legal bulk approach I know of, hence my inquiry about such a product. I didn't expect such a harsh response from some of these comments.
dns_snek 20 hours ago
Do you believe in private property rights? If the product they want doesn't exist then they're shit out of luck and they must either make one or wait for one to get made. You're arguing that it's okay for them to break the law because doing business legally is really inconvenient.

That would be the end of discussion if we lived in a world governed by the rule of law but we're repeatedly reminded that we don't.

rpdillon 17 hours ago
Not arguing it's ok to break the law, but rather examining their incentives and alternatives, along with their associated costs.
trueismywork 22 hours ago
The product I want doesn't exist either. But if I pirate, straight to Alcatraz I go.
rpdillon 16 hours ago
Yeah, I wasn't discussing legality, simply the incentives and alternatives.
kelnos 21 hours ago
That's not relevant when it comes to copyright law. The copyright holder has the sole legal right to decide how the work is distributed.

If it isn't distributed in a manner to your liking, the only legal thing you can do is not have a copy of it at all.

rpdillon 16 hours ago
I was trying to find out if any product that was legal can bridge that gap other than buying books in print, in bulk, and scanning them and destroying them. From the responses here, it sounds like the answer is a vehement "no".

Wasn't asking for advice on copyright, but since we're here, your statement is slightly too strict, at least with respect to US copyright law. The copyright holder has sole distribution authority over the first sale of the work in the United States, but the first-sale doctrine allows it to be distributed by anyone thereafter. It is limited to the US, though, as far as I know. This is what allowed Anthropic to train on printed books, which they then destroyed: they were able to purchase them in bulk because of the first-sale doctrine. The publishers and authors would likely destroy the first-sale doctrine if they could, as evidenced by what's happened in the world of digital books.

GeorgeOldfield 4 hours ago
this is good. down with copyright.
nexle 1 day ago
They already paid 10x more to their lawyers to ensure that torrenting for LLM training is perfectly legal; why would they want to pay more?
1over137 1 day ago
Not spending money (vs spending money) helps make one rich!
machomaster 20 hours ago
Not in the case of Nvidia. Famously, "the more you pay, the more you save".
NekkoDroid 1 day ago
Well... you don't want the good guys (Nvidia) giving money to the bad guys (Anna's Archive) right??? /s
flipped 1 day ago
Considering AA gave them ~500TB of books, which is astonishing (very expensive for AA even to store), I wonder how much Nvidia paid them for it? It has to be at least close to half a million?
qingcharles 1 day ago
I have a very large collection of magazines. AI companies were offering straight cash and FTP logins for them about a year or so ago. Then when things all blew up they all went quiet.
antonmks 1 day ago
NVIDIA executives allegedly authorized the use of millions of pirated books from Anna's Archive to fuel its AI training. In an expanded class-action lawsuit that cites internal NVIDIA documents, several book authors claim that the trillion-dollar company directly reached out to Anna's Archive, seeking high-speed access to the shadow library data.
utopiah 1 day ago
People HAVE to somehow notice how hungry for proper data AI companies are, when one of the largest companies propping up the fastest-growing market STILL has to go to such lengths, getting actual approval for pirated content, despite being a hardware manufacturer.

I keep hearing how it's fine because synthetic data will solve it all, how new techniques, feedback loops, etc. will fix everything. Then why do this?

The promises don't match the resources available, and this makes it blatantly clear.

derelicta 1 day ago
I feel like Nvidia's CEO would be the kind to snatch sugar sachets from his local deli just to save up a bit more.
songodongo 17 hours ago
“Yes officer, it was the goober thinking he looked cool in the leather jacket.”
1over137 1 day ago
A great retaliation to Trump tariffs would be just cancelling copyright for American works in your country.
ronsor 21 hours ago
This would likely mean America canceling copyright for works in that country as well. I'm OK with that. Destroy copyright.
rtbruhan00 1 day ago
It's generous of them to ask for permission.
gizajob 1 day ago
They wanted access to a faster pipe to slurp 500 terabytes, and that access comes at a cost. It wasn’t about permission.

And yeah, they should be sued into the next century for copyright infringement. A $4 trillion company illegally downloading the entire corpus of published literature for reuse is clearly infringement; it's an absurdity to say that it's fair use just to look for statistical correlations when training LLMs that will be used to render human authors worthless. One or two books is fair use. Every single book published is not.

empath75 1 day ago
Whatever they get sued for would be pocket change.
breakingcups 1 day ago
It wasn't about permission, it was about high-speed access. They needed Anna's Archive to facilitate that for them, scraping was too slow. It's incredible that they were allowed to continue even after Anna's Archive themselves explicitly pointed out that the material was acquired illegally.
kristofferR 1 day ago
That's just normal US modus operandi. The court case against Maduro is allowed to continue even after everyone has acknowledged he was acquired illegally.
kristofferR 1 day ago
It's not permission, it's a service they offer:

https://annas-archive.li/llm

SanjayMehta 1 day ago
I'm wondering what Amazon is planning to do with their access to all those Kindle books.
quinncom 1 day ago
I was curious:

• Anna’s Archive: ~61.7 million “books” (plus ~95.7M papers) as of January 2026 https://en.wikipedia.org/wiki/Anna%27s_Archive

• Amazon Kindle: “over 6 million titles” as of March 2018 https://en.wikipedia.org/wiki/Anna%27s_Archive

Hard to compare because AA contains duplicates, and the Kindle number is old, but at a glance it seems AA wins.

philipwhiuk 1 day ago
What do you mean, 'planning'? You think they haven't already been sucked up?
embedding-shape 1 day ago
What do you mean 'sucked up'? It's data on their machines already, people willingly give them the data, so Amazon can process and offer it to readers. No sucking needed, just use the data people uploaded to you already.
sib 1 day ago
There's definitely a legal & contractual difference between (1) storing the books on your servers in order to provide them to end users who have purchased licenses to read them and (2) using that same data to train a model that might be used to create books that compete with the originals. I'm pretty sure that's what GP means by "sucking up."

This is analogous to the difference between Gmail searching within your mail content to find messages that you are looking for vs. Gmail serving ads inside Gmail based on the content of your email (which they don't do).

embedding-shape 1 day ago
Yeah, I guess the "err" is on my side; I've always taken "suck up" as a synonym for scraping, not just "using data for stuff".

And yeah, you're most likely right about the first, and the contract writers have with Amazon almost certainly anticipates this and includes both uses. But! I've never published on Amazon, so I don't know; I'm guessing they already have the rights to do so with what people have been uploading these last few years.

philipwhiuk 7 hours ago
They may not serve ads, but you don't know that they don't train their models on them.

If I still used Gmail I'd read the terms of service real close.

hollow-moe 23 hours ago
Whatever, laws are for the poor anyway. You'd think that would be common knowledge by now, but nope.
2OEH8eoCRo0 23 hours ago
I've always wondered about some of the torrent whales with multiple petabytes on private trackers. A lot of the whales auto-download every single new torrent that's uploaded. Perhaps the sites themselves are even allowed to operate as a way to get users to crowdsource media.
wosined 1 day ago
Sounds like BS. Why would Nvidia need the books? Do they even have a chatbot? I doubt the books help with framegen.
johndough 23 hours ago
From the top of the linked article:

    > NVIDIA is also developing its own models, including NeMo, Retro-48B, InstructRetro, and Megatron. These are trained using their own hardware and with help from large text libraries, much like other tech giants do.
You can download the models here: https://huggingface.co/nvidia
utopiah 1 day ago
The same reason Intel worked on OpenCV: they want to sell more hardware by pushing the state of the art of what software can do on THEIR hardware.

It's basically just a sales demonstrator that, optionally, if incredibly successful and costly, they can still sell as SaaS, or otherwise just offer for free.

Think of it as a tech ad.

voidUpdate 1 day ago
I can't see the whole relevant section in the article, but there is a screenshot of part of the legal documents that states "In response, NVIDIA sought to develop and demonstrate cutting edge LLMs at its fall 2023 developer day. In seeking to acquire data for what it internally called "NextLargeLLM", "NextLLMLarge" and-" (cuts off here)