this post was submitted on 26 Jul 2024
236 points (96.1% liked)
Ah, to clarify: model collapse is still an issue, but one for which mitigation techniques are already being developed and applied, and have been for a while. Yes, LLM-generated content is currently harder to train on, but there's no reason that must always hold true; this paper actually touches on that. Right now we have to design with model collapse in mind and work to mitigate it manually, but as the technology improves it's theorized that we'll hit a point at which models coalesce toward stability rather than collapse, even when fed training data that was generated by an LLM. I've seen the concept called Generative Bootstrapping or the Bootstrap Ladder (it's a new enough concept that we haven't all agreed on a name yet; we can only hope someone comes up with something better, because wow, the current ones suck...). We're even seeing some models start to do this coalescing toward stability, though only in some extremely niche applications. Only time will tell whether all models can reach that stable state or whether it's only possible in some cases.
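To make the collapse/mitigation distinction concrete, here's a toy sketch (my own illustration, not an experiment from the paper): "training" is learning an empirical token distribution, "generating" is sampling from it, and model collapse shows up as lost diversity when each generation trains only on the previous generation's output. The `real_fraction` knob stands in for one common mitigation, mixing fresh real data back into each round; all names and numbers here are invented for the demo.

```python
import random

VOCAB = list(range(500))  # toy "language": 500 distinct token types
N = 500                   # tokens sampled per training corpus

def train_and_generate(corpus, n, rng):
    # "Train" = learn the empirical token distribution of the corpus;
    # "generate" = sample n tokens from that learned distribution.
    return rng.choices(corpus, k=n)

def run(real_fraction, generations=30, seed=42):
    rng = random.Random(seed)
    corpus = rng.choices(VOCAB, k=N)  # generation 0: real data
    for _ in range(generations):
        synthetic = train_and_generate(corpus, N, rng)
        # Mitigation: replace a fraction of the next corpus with
        # fresh draws from the true distribution (novel real data).
        k = int(real_fraction * N)
        fresh_real = rng.choices(VOCAB, k=k)
        corpus = fresh_real + synthetic[: N - k]
    return len(set(corpus))  # surviving token diversity

diversity_collapse = run(real_fraction=0.0)   # pure self-training
diversity_mitigated = run(real_fraction=0.5)  # half fresh real data per round
print(diversity_collapse, diversity_mitigated)
```

With pure self-training, rare token types that miss one sampling round are gone forever, so diversity ratchets downward every generation; mixing in real data keeps reintroducing them, which is the intuition behind the mitigation work mentioned above.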
My original point, though, was just that this headline is fairly sensationalist, and that people shouldn't take too much hope from this collapse: we're both aware of it and working to mitigate it (exactly as the paper itself cautions us to do).
Thanks for the reply.
I guess we'll see what happens.
I still find it difficult to get my head around how a shrinking supply of novel training data won't eventually cause problems, even with techniques that work around it in the short term (which I'm sure work well on a relative basis).
A bit of an aside, but I also have zero trust in the people behind current LLMs, whether the leadership (e.g. Altman) or the rank and file. If it's in their interest to downplay the scope and impact of model degeneration, they will not hesitate to lie about it.
Yikes. Well. I'll be over here, conspiring with the other NASA lizard people on how best to deceive you by politely answering questions on a site where maaaaybe 20 total people will actually read it. Good luck getting your head around it, there's lots of papers out there that might help (well, assuming I'm not lying to you about those, too).
This was a general comment, not aimed at you. Honestly, it wasn't my intention to accuse you specifically. Apologies for that.