this post was submitted on 28 Aug 2024
59 points (100.0% liked)
Open Source
I have to admit - my initial outrage over Copilot training on open-source code has vanished.
Now that these networks are trained on literally anything they can grab, including extremely copyrighted movies... we've seen that they're either thoroughly transformative soup, or else the worst compression and search tools you've ever seen. There's not really a middle ground. The image models where people have teased out lookalike frames for Dune or whatever aren't good at much else. The language models that try to answer questions as more than dream-sequence autocomplete poetry will confidently regurgitate dangerous nonsense because they're immune to sarcasm.
The comparisons to a human learning from code by reading it are half-right. There are systems that discern relevant information without copying specific examples. They're just utterly terrible at applying that information. Frankly, so are the ones copying specific examples. Once again, we've advanced the state of "AI," and the A went a lot further than the I.
And I cannot get offended on Warner Brothers' behalf if a bunch of their DVDs were sluiced into a model that can draw Superman. I don't even care when people copy their movies wholesale. Extracting the essence of an iconic character from those movies is obviously a transformative use. If some program will emit "slow motion zoom on Superman slapping Elon Musk," just from typing that, that's cool as hell and I refuse to pretend otherwise. It's far more interesting than whatever legal fictions both criminalized 1700s bootlegging and encouraged Walt Disney's corpse to keep drawing.
So consider the inverse:
Someone trains a Copilot clone on a dataset including the leaked Windows source code.
Do you expect these corporations to suddenly claim their thing is being infringed upon, in front of any judge with two working eyes?
More importantly - do you think that stupid robot would be any help whatsoever to Wine developers? I don't. These networks are good at patterns, not specifics. Good is being generous. If I wanted that illicit network to shamelessly clone Windows code, I expect the brace style would definitely match, the syntax might parse, and the actual program would do approximately dick.
Neural networks feel like magic when hideously complex inputs have sparse approximate outputs. A zillion images could satisfy the request, "draw a cube." Deep networks given a thousand human examples will discern some abstract concept of cube-ness... and also the fact that you handed those thousand humans a blue pen. It's simply not a good match for coding. Software development is largely about hideously complex outputs that satisfy sparse inputs in a very specific way. One line, one character, can screw things up in ways that feel incomprehensible. People have sneered about automation taking over coding since the punched-tape era, and there are damn good reasons it keeps taking their jobs instead of ours. We're not doing it on purpose. We're always trying to make our work take less work. We simply do not know how to tell the machine to do what we do with machines. And apparently - neither do the machines.
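The "one character" point is easy to demonstrate with a toy sketch of my own (not from the post): a textbook binary search where changing a single `<=` to `<` would silently miss elements, even though the code still parses and mostly works.

```python
def binary_search(xs, target):
    """Return the index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:  # writing "<" here (one character!) skips single-element ranges
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```

Both versions compile, both pass casual testing, and only one of them is correct - exactly the kind of sparse-input, precise-output problem these models are worst at.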
Excellent post. Thanks for sharing. I pretty much completely agree.
Everything is a Remix