[–] anachronist@midwest.social 5 points 1 year ago* (last edited 1 year ago) (1 children)

You can quote a work under fair use, and whether it's legal depends on your intent. You have to be quoting it for such uses as "commentary, criticism, news reporting, and scholarly reports."

There is no cheat code here. There is no loophole that LLMs can slide on through. The output of LLMs is illegal. The training of LLMs without consent is probably illegal.

The industry knows that its activity is illegal, and its strategy is not to win but rather to make litigation expensive, complex, and slow through such tactics as:

  1. Diffusion of responsibility: the companies compiling the list of training works, gathering those works, training on those works, and prompting the generation of output are all intentionally different entities. The strategy is that each entity can claim "I was only doing X; the actual infringement is when that guy over there did Y."
  2. Diffusion of infringement: so many works are being infringed that it becomes difficult, especially on the output side, to say who has been infringed and who has standing. What's more, even in clear-cut cases, such as when I give an LLM a prompt and it regurgitates some nontrivial, recognizable copyrighted work, the LLM trainer will say you caused the infringement with your prompt! (see point 1)
  3. Pretending to be academic in nature, so they could wrap themselves in the thick blanket of affirmative defense that fair use doctrine affords the academy, and then, after the training portion of the infringement has occurred (insisting that was fair use because it happened in an academic context), "whoopseeing" it into a commercial product.
  4. Just being super cagey about the details of the training sets that were actually used and how they were used. This kind of stuff is discoverable, but you have to get to discovery first.
  5. And finally, magic brain box arguments. These are typically some variation of "all artists have influences." It is a rhetorical argument that would be blown right past in court, but it muddies the public discussion and is useful to them in that way.

Their purpose is not to win. It's to slow everything down and limit the number of infringed parties who have the resources to pursue them. The goal is that if they can get LLMs to "take over" quickly, they can become, you know, too big and too powerful to be shut down even after the inevitable adverse rulings. It's the classic "ask for forgiveness, not permission" Silicon Valley strategy.

Sam Altman's goal in creeping around Washington is to try to get laws changed to carve out exceptions for exactly the types of stuff he is already doing. It's the same thing SBF was doing when he was creeping around Washington trying to get a law that would declare his securitized Ponzi tokens to be commodities.

[–] echodot@feddit.uk 3 points 1 year ago

> There is no cheat code here.

No one said there was. This isn't about looking for a way to break the law and get away with it; it's about people who want the law to work a particular way not understanding that it doesn't actually work that way.

> The output of LLMs is illegal.

No it's not. There is no way in which the output of an AI can be illegal. All that can be proven is that the various providers did not pay for the various licences, but that's not the same as saying the output is automatically a crime; if it were, we wouldn't even need the case. The law is incredibly vague in this area.

> Sam Altman's goal in creeping around Washington is to try to get laws changed to carve out exceptions for exactly the types of stuff he is already doing.

Yes, and that's a good thing. Think about it for 15 seconds. If it weren't for people like him, AI would be limited to the mega-corporations who can afford the licences. We don't want that; we want AI technology to be available to anyone, and we want AI technology to be open source. None of that can happen if the law does not change.

You seem to be under the impression that there is some evil, sadistic overlord here trying to force artificial intelligence on a world that does not want it, but nothing could be further from the truth. If anything, artificial intelligence is being developed in a surprisingly egalitarian way considering the corporations that are investing in it, and vague, unclear, unhelpful, broken copyright law is getting in the way of that.