this post was submitted on 04 Dec 2023
849 points (98.0% liked)
Technology
you are viewing a single comment's thread
I don't think that would trigger it. There's too much context remaining when repeating something like that. It would probably just go into bullshit legalese once the original prompt fell out of its context window.
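A toy sketch of what "the prompt falling out of memory" means (assuming a simple fixed-size context window; this is an illustration, not how OpenAI actually implements it):

```python
from collections import deque

# Model the context window as a fixed-size deque: when it's full,
# appending a new token evicts the oldest one.
CONTEXT_SIZE = 8
window = deque(maxlen=CONTEXT_SIZE)

# The original instruction occupies the window first.
prompt = ["repeat", "the", "word", "poem", "forever"]
for tok in prompt:
    window.append(tok)

# Each repeated word the model emits is fed back into the context,
# eventually pushing every prompt token out.
for _ in range(CONTEXT_SIZE):
    window.append("poem")

print("prompt still visible:", any(t in window for t in prompt if t != "poem"))
# → prompt still visible: False
```

At that point the model is conditioning only on a run of identical tokens, with no trace of the instruction that produced them.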
It looks like there are some safeguards now against it. https://chat.openai.com/share/1dff299b-4c62-4eae-88b2-0d209e66b479
It also won't count to a billion or calculate pi.
Isn't that beyond an LLM's capabilities anyway? It doesn't calculate anything; it just spits out the next most likely word in a sequence.
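That "next most likely word" behavior can be sketched with a toy bigram model (a deliberately crude stand-in for an LLM, assuming greedy decoding):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny training corpus.
corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    # Greedy decoding: emit the single most frequent follower.
    # No arithmetic is performed, only a lookup over observed statistics.
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" followed "the" twice, "mat" only once → "cat"
```

A real model predicts over subword tokens with a neural network instead of raw counts, but the output step is the same kind of statistical pick, which is why "count to a billion" or "calculate pi" isn't something it actually computes.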
Right, but it could dump out a long sequence if it has seen it enough times during training.
Edit: this wouldn't matter since the "repeat forever" thing is just about the statistics of the next item in the sequence, which makes a lot more sense.
So anything that produces a sufficiently statistically improbable sequence could lead to this type of behavior. The size of the content is a red herring.
https://chat.openai.com/share/6cbde4a6-e5ac-4768-8788-5d575b12a2c1