this post was submitted on 15 Mar 2024

Technology

[–] redditReallySucks@lemmy.dbzer0.com 5 points 3 months ago (1 children)

I hope this is gonna become a new meme template

[–] driving_crooner@lemmy.eco.br 2 points 3 months ago (1 children)

She looks like she just talked to the waitress about a fake rule in eating nachos and got caught up by her date.

[–] bigMouthCommie@kolektiva.social 1 points 3 months ago (1 children)

this is incomprehensible to me. can you try it with two or three sentences?

[–] driving_crooner@lemmy.eco.br 1 points 3 months ago (4 children)

Her date was eating all the fully loaded nachos, so she went and asked the waitress to make up a rule about how one person can't eat all the nachos with meat and cheese. But her date knew that rule was bullshit and called her out on it. She's trying to look confused and sad because now they're going to be too early for the movie.

[–] Thcdenton@lemmy.world 1 points 3 months ago (2 children)
[–] Plopp@lemmy.world 1 points 3 months ago

Lmao that's wonderful, scrolling down from those weird ass comments only to be greeted by my own exact facial expression.

[–] Buttons@programming.dev 1 points 3 months ago

"No... Hell no... Man, I believe you'd get your ass kicked if you said something like that..."

[–] Buttons@programming.dev 2 points 3 months ago (11 children)

If I were the reporter my next question would be:

"Do you feel that not knowing the most basic things about your product reflects on your competence as CTO?"

[–] RatBin@lemmy.world 1 points 3 months ago

Also about this line:

Others, meanwhile, jumped to Murati's defense, arguing that if you've ever published anything to the internet, you should be perfectly fine with AI companies gobbling it up.

No, I am not fine. When I wrote that stuff and that research in old phpBB forums, I did not do it knowing a future machine learning system would eat it up without my consent. I never gave consent for that, despite it being publicly available, because that use didn't even exist back then. Many other things are also publicly available, but some are copyrighted, on the same basis: you can publish and share content under conditions defined by the creator of the content. So what, when I use Z-Library I'm evil for pirating content, but OpenAI can do it just fine thanks to their huge wallets? Guess what, this will eventually create a crisis of trust, a tragedy of the commons if you will, once AI-generated content makes up the bulk of your future internet searches. Do we even want this?

[–] phoneymouse@lemmy.world 2 points 3 months ago (1 children)

There is no way in hell it isn’t copyrighted material.

[–] abhibeckert@lemmy.world 2 points 3 months ago* (last edited 3 months ago) (1 children)

Every video ever created is copyrighted.

The question is — do they need a license? Time will tell. This is obviously going to court.

[–] Kazumara@feddit.de 3 points 3 months ago

Don't downvote this guy. He's mostly right. Creative works have copyright protections from the moment they are created. The relevant question is indeed whether they have the relevant permissions for their use, not whether the work had protections in the first place.

Maybe some surveillance camera footage is not sufficiently creative to get protections, but that's hardly going to be good for machine reinforcement learning.

[–] Fisk400@feddit.nu 2 points 3 months ago (2 children)

They know what they fed the thing. Not backing up their own training data would be insane, and they are not insane, just thieves.

[–] echodot@feddit.uk 3 points 3 months ago (25 children)

Everyone says this, but the truth is copyright law has been unfit for purpose for well over 30 years now. When the laws were written, no one expected something like the internet to ever come along, and they certainly didn't expect something like AI. We can't just keep applying the same old copyright laws to new situations when they already don't work.

I'm sure they did illegally obtain the work, but is that necessarily a bad thing? They're not actually making that content available to anyone. If I pirate a movie and only I watch it, I don't think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

[–] A_Very_Big_Fan@lemmy.world 1 points 3 months ago (2 children)

if I pirate a movie and then only I watch it, I don't think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

Because it's more analogous to watching a video being broadcasted outdoors in the public, or looking at a mural someone painted on a wall, and letting it inform your creative works going forward. Not even recording it, just looking at it.

As far as we know, they never pirated anything. What we do know is that it was trained on data that literally anybody can go out and look at for themselves and have it inform their own work. If they're out there torrenting a bunch of movies they don't own or aren't licensing, then the argument against them has merit. But until then, I think all of this is a bunch of AI hysteria over something humans have been doing since the first human created a thing.

[–] dezmd@lemmy.world 1 points 3 months ago (2 children)

LLM is just another iteration of Search. Search engines do the same thing. Do we outlaw search engines?

[–] AliasAKA@lemmy.world 1 points 3 months ago (2 children)

Sora is a generative video model, not exactly a large language model.

But to answer your question: if all LLMs did was redirect you to where the content is hosted, they would be search engines. Instead they reproduce what someone else was hosting, which may include copyrighted material, so they're fundamentally different from a simple search engine. They don't direct you to the source; they produce a facsimile of the source material without acknowledging it or pointing you to it. Sora is similar: it produces video content, but it doesn't point you toward the similar video content it is reproducing from. We can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation models.

[–] dantheclamman@lemmy.world 1 points 3 months ago

I feel conflicted about the whole thing. Technically it's a model. I don't feel that people should be able to sue me as a scientist for making a model based on publicly available data. I myself am merely trying to use the model itself to explain stuff about the world. But OpenAI are also selling access to the outputs of the model, that can very closely approximate the intellectual property of people. Also, most of the training data was accessed via scraping and other gray market methods that were often explicitly violating the TOU of the various places they scraped from. So it all is very difficult to sort through ethically.

[–] _haha_oh_wow_@sh.itjust.works 1 points 3 months ago (5 children)

Gee, seems like something a CTO would know. I'm sure she's not just lying, right?

[–] ZILtoid1991@lemmy.world 1 points 3 months ago

I have a feeling that the training material involves cheese pizza...

[–] anon_8675309@lemmy.world 1 points 3 months ago (1 children)

CTO should definitely know this.

[–] ItsMeSpez@lemmy.world 1 points 3 months ago

They do know this. They're avoiding any legal exposure by being vague.
