this post was submitted on 04 Jan 2024
358 points (97.6% liked)

cross-posted from: https://programming.dev/post/8121669

Taggart (@mttaggart) writes:

Japan determines copyright doesn't apply to LLM/ML training data.

On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.

I am going to live in the sea.

www.biia.com/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/

[–] LWD@lemm.ee 27 points 10 months ago (2 children)

Painting and selling an exact copy of a recent work, such as a Banksy, is a crime.

… however, making an exact copy of a Banksy for personal use, or to learn, or to teach other people, or copying the style… that’s all perfectly legal.

And that was the bait and switch of OpenAI! They sold themselves as a non-profit simply doing research, for which it would be perfectly legal to consume and reproduce large quantities of data... And then, once they had the data, they started selling access to it.

I would say that that alone, along with the fact that they function as gatekeepers to the technology (one does not simply purchase the model from OpenAI, after all), means they are hardly free of culpability... But it definitely depends on the person trying to use their black box, too.
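
To illustrate the gatekeeping: you never get the weights, only metered calls to a hosted endpoint. A minimal sketch using the official openai Python client (the model name here is illustrative):

```python
# Every request goes through OpenAI's hosted API and is billed per token;
# the weights themselves are never downloadable. Sketch assumes the
# openai package (v1+) is installed; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```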

[–] abhibeckert@lemmy.world 6 points 10 months ago* (last edited 10 months ago)

Huh? What does being non-profit have to do with it? Private companies are allowed to learn from copyrighted work. Microsoft and Apple, for example, look at each other's software and copy ideas (not code, just ideas) all the time. The fact that Linux is non-profit doesn't give it any additional rights or protection.

[–] iegod@lemm.ee 4 points 10 months ago (1 children)

They're not gatekeeping LLMs though; there are publicly available models and data sets.

[–] LWD@lemm.ee 1 points 10 months ago* (last edited 10 months ago) (1 children)

If it's publicly available, why didn't Microsoft just download and use it rather than paying them for a partnership?
(And where at?)

IIRC they only open-sourced some old stuff.

[–] iegod@lemm.ee 1 points 10 months ago

Stable Diffusion is open source. You can run local instances with the freely provided pretrained weights and generate your own outputs.

https://stability.ai/
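
For anyone curious, a minimal sketch of running it locally with the Hugging Face diffusers library (the checkpoint ID and prompt are illustrative, and a CUDA GPU is assumed):

```python
# Minimal local Stable Diffusion sketch using Hugging Face diffusers.
# Assumes torch and diffusers are installed and a CUDA GPU is available;
# the checkpoint ID and prompt below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```

Everything runs on your own hardware; no API key or hosted service is involved.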