this post was submitted on 11 Jan 2024
237 points (100.0% liked)

Technology



Apparently, stealing other people's work to create a product for money is now "fair use," according to OpenAI, because they are "innovating" (stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

50 comments
[–] randomaside@lemmy.dbzer0.com 5 points 10 months ago

OpenAI now needs to go to court and argue fair use forever. That's the burden of our system. Private ownership is valued higher than anything else, so... good luck, we're all counting on you (unfortunately).

[–] DavidGarcia@feddit.nl 5 points 10 months ago

IP protections are a spook anyway

[–] autotldr@lemmings.world 3 points 10 months ago

🤖 I'm a bot that provides automatic summaries for articles:

Further, OpenAI writes that limiting training data to public domain books and drawings "created more than a century ago" would not provide AI systems that "meet the needs of today's citizens."

OpenAI responded to the lawsuit on its website on Monday, claiming that the suit lacks merit and affirming its support for journalism and partnerships with news organizations.

OpenAI's defense largely rests on the legal principle of fair use, which permits limited use of copyrighted content without the owner's permission under specific circumstances.

"Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents," OpenAI wrote in its Monday blog post.

In August, we reported on a similar situation in which OpenAI defended its use of publicly available materials as fair use in response to a copyright lawsuit involving comedian Sarah Silverman.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."


Saved 58% of original text.

[–] SloppySol@lemm.ee 3 points 10 months ago (4 children)

I would just like to say, with open curiosity, that I think a nice solution would be for OpenAI to become a nonprofit with clear guidelines to follow.

What does that make me? Other than an idiot.

Of that, at least, I'm self-aware.

I feel like we’re disregarding the significance of artificial intelligence’s existence in our future, because the only thing anyone who cares is trying to do is get control back so they can DO something about it. But news is becoming our feeding tube for the masses. They’ve masked that with the hate of all of us.

Anyways, sorry, diatribe, happy new year

[–] MagicShel@programming.dev 3 points 10 months ago (1 children)

I think OpenAI (or some part of it) is a non-profit. But corporate fuckery means it can largely be funded by for-profit companies, which then turn around and profit from that relationship. Corporate law is so weak and laxly enforced that it's a bit of a joke, unfortunately.

I agree that AI has an important role to play in the future, but it's a lot more limited in the current form than a lot of people want to believe. I'm writing a tool that leverages AI as a sort of auto-DM for roleplaying, but AI hasn't written a line of code in it because the output is garbage. And frankly I find the fun and value of the tool comes from the other humans you play with, not the AI itself. The output just isn't that good.
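
For illustration only, here is a minimal sketch of what an LLM-backed "auto-DM" loop could look like. It assumes the official OpenAI Python client; the system prompt, model choice, and function name are invented for the example and are not taken from the actual tool described above.

```python
# Hypothetical sketch of an LLM-driven "auto-DM" loop (not the actual tool's code).
# Assumes the official OpenAI Python client: pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the dungeon master for a tabletop roleplaying session. "
    "Narrate the consequences of player actions; never speak for the players."
)

def run_auto_dm() -> None:
    # Keep the full conversation history so the scene stays coherent across turns.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        player_action = input("Player> ")
        if player_action.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": player_action})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model would do here
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"DM> {reply}")

if __name__ == "__main__":
    run_auto_dm()
```

The interesting part is how little of the "game" lives in the code: almost everything depends on the prompt and the model's output quality, which is exactly where the comment above says the value (and the weakness) currently sits.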

[–] Esqplorer@lemmy.zip 3 points 10 months ago

The amount of secondhand content an LLM needs to consume during training inevitably includes copyrighted material. If they used this thread, the quotes OP included would end up in the training set.

The sheer number of fan forums and wikis about copyrighted material provides copious information about the stories and facilitates their retelling. They're right that avoiding copyrighted material entirely is impossible for a general-purpose LLM.

My personal experience so far, though, has been that general-purpose and multimodal LLMs are less consistently useful to me than GPT-4 was at launch. I think small, purpose-built LLMs with trusted content providers have a better chance of success for most users, but we will see if anyone can make that work given the challenge of bringing users to the right one for the right task.
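
As a hypothetical sketch of what "purpose-built with trusted content providers" could mean in practice: retrieve from a small licensed corpus and pin the model's answer to it, instead of training on open web scrapes. Every name below (the document list, the scoring and prompt-building functions) is invented for the example; a real system would use embeddings and an actual licensed archive.

```python
# Hypothetical sketch: answer questions only from a licensed corpus
# (retrieval-augmented prompting) rather than from open web scrapes.
# All names here are stand-ins invented for illustration.
from collections import Counter

# Stand-in for documents obtained from a trusted/licensed content provider.
LICENSED_DOCS = [
    "Licensed style guide: cite the source article for every factual claim.",
    "Licensed encyclopedia entry: fair use permits limited quotation with attribution.",
    "Licensed news archive: OpenAI responded to the lawsuit on its blog on Monday.",
]

def score(query: str, doc: str) -> int:
    """Crude keyword overlap; a real system would use embeddings."""
    q_words = Counter(query.lower().split())
    d_words = Counter(doc.lower().split())
    return sum((q_words & d_words).values())

def build_prompt(query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant licensed passages and pin the model to them."""
    ranked = sorted(LICENSED_DOCS, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(f"- {doc}" for doc in ranked[:top_k])
    return (
        "Answer using ONLY the licensed passages below; say 'not covered' otherwise.\n"
        f"{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The assembled prompt would then go to a small, purpose-built model.
    print(build_prompt("What does fair use permit?"))
```

The point of the sketch is that the copyright question shifts from "what was in the training set" to "what is in the retrieval corpus," which is a much easier thing to license and audit.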
