this post was submitted on 11 Jan 2024
237 points (100.0% liked)

Apparently, stealing other people's work to create a product for money is now "fair use," according to OpenAI, because they are "innovating" (read: stealing). Yeah. Move fast and break things, huh?

"Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials," wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit "misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence."

you are viewing a single comment's thread
[–] luciole@beehaw.org 20 points 10 months ago* (last edited 10 months ago) (1 children)

There's this linguistic problem: when one word is used for two different things, it becomes difficult to tell them apart. "Training" or "learning" is a very poor choice of words to describe the calibration of a neural network. Both the actor and the action are fundamentally different from the accepted meaning. To start with, human learning is active, whereas machine learning is strictly passive: it's something done by someone, with the machine as a tool. Teachers know very well that's not how it happens with humans.
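
To make the contrast concrete, here's a minimal sketch (a hypothetical toy in Python, nothing like a real LLM pipeline) of what "training" actually is: an outer loop, run by someone, that nudges a model's parameters. The "learner" does nothing of its own accord.

```python
# Hypothetical toy: "train" a one-parameter model y = w * x on data where y = 2x.
# Note who acts here: the loop (and the person who runs it) adjusts w.
# The model itself is an inert formula being calibrated.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0              # the single parameter being calibrated
learning_rate = 0.05

for step in range(200):
    # Gradient of mean squared error with respect to w, averaged over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # the optimizer nudges w; the model does nothing

print(f"calibrated w = {w:.4f} (target: 2.0)")
```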

When I compare training a neural network with how I trained to play clarinet, I fail to see any parallel. The two are about as close as a horse and a seahorse.

[–] intensely_human@lemm.ee 1 points 10 months ago (1 children)

Not sure what you mean by passive. It takes a hell of a lot of electricity to train one of these LLMs, so something is happening actively.

I often interact with ChatGPT 4 as if it were a child. I guide it through different kinds of reasoning problems, having it take notes and evaluate its own output, because I know our conversations become part of its training data.

It feels very much like teaching a kid to me.

[–] luciole@beehaw.org 7 points 10 months ago* (last edited 10 months ago)

I mean passive in terms of will. Computers want nothing and do nothing of their own accord. They're machines that function according to commands.

The feeling that you're teaching a child when you feed natural-language input to an LLM until you're satisfied with the output is known as the ELIZA effect. To quote Wikipedia:

In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — into computer programs that have a textual interface. The effect is a category mistake that arises when the program's symbolic computations are described through terms such as "think", "know" or "understand."
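
To see why the effect is so seductive, here's a toy responder in the spirit of the original ELIZA (a hypothetical sketch, not Weizenbaum's actual script). It does nothing but keyword matching and echoing, yet this is exactly the kind of program people read empathy and understanding into.

```python
import re

# Hypothetical toy rules: match a keyword pattern, echo the user's own words back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # No comprehension here: just substitute the captured words.
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when nothing matches

print(respond("I feel that nobody understands me"))
# -> Why do you feel that nobody understands me?
```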