this post was submitted on 10 Dec 2023
211 points (97.3% liked)

Technology

57853 readers
6718 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

OpenAI says it is investigating reports ChatGPT has become "lazy"

[–] backgroundcow@lemmy.world 16 points 9 months ago (2 children)

Was this around the time right after "custom GPTs" were introduced? I've seen posts since basically the beginning of ChatGPT claiming it got stupid, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4's ability to program: where before it found clever solutions to difficult problems, it now often struggles with the basics.

[–] Linkerbaan@lemmy.world 17 points 9 months ago (1 children)

Maybe they're crippling it so that when GPT-5 releases it looks better, like Apple did with CPU throttling of older iPhones.

[–] tagliatelle@lemmy.world 16 points 9 months ago* (last edited 9 months ago) (1 children)

They probably have to scale down the resources used for each query because they can't scale up their infrastructure to handle the load.

[–] backgroundcow@lemmy.world 4 points 9 months ago

This is my guess as well. They have been limiting new signups for the paid service for a long time, which must mean they are overloaded; and then it makes a lot of sense to just degrade the quality of GPT-4 so they can serve all paying users. I just wish there was a way to know the "quality level" the service is operating at.

[–] Meowoem@sh.itjust.works 2 points 9 months ago

I do think part of it is expectation creep, but it's also got better at some harder elements that aren't as noticeable: it used to invent functions that should exist but don't, and I haven't seen it do that in a while, though it does seem to have narrowed the scope it can work with. I think it's probably like image generation, where you can have it make great images OR strictly obey the prompt, but the more it does one, the less it can do the other.

I've been using 3.5 to help code, and it's incredibly useful for the things it's good at, like reminding me what a certain function call does and what my options are with it. It's got much better at that, and at tiny scripts like "a python script that reads all the files in a folder and sorts the big images into a separate folder". It's got worse at handling anything with more complexity, though it was never great at that, to be honest. I think maybe it's hit a block where it now knows it can't do it, so it rejects answers with critical failures (like making up a function of a standard library because it'd be useful) and settles on a weaker but less wrong one. A lot of the made-up-function errors were easy to fix, because you could just say "PIL doesn't have a function to do that, can you write one".
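For what it's worth, the kind of "tiny script" described above is easy to sketch by hand. This is one possible version, assuming "big" means file size (the comment doesn't say whether it means bytes or pixel dimensions); the folder names and the 1 MB threshold are made-up examples.

```python
# Move "big" images from a folder into a separate subfolder.
# "Big" here is interpreted as file size in bytes -- an assumption,
# since the original comment doesn't specify the criterion.
from pathlib import Path
import shutil

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def sort_big_images(src: Path, dest: Path, threshold_bytes: int = 1_000_000) -> int:
    """Move image files in src larger than threshold_bytes into dest.

    Returns the number of files moved.
    """
    dest.mkdir(parents=True, exist_ok=True)
    moved = 0
    for f in src.iterdir():
        # Skip subdirectories (including dest itself) and non-image files.
        if f.is_file() and f.suffix.lower() in IMAGE_EXTS and f.stat().st_size > threshold_bytes:
            shutil.move(str(f), str(dest / f.name))
            moved += 1
    return moved
```

Nothing fancy, but it's representative of the small, self-contained tasks the model handles well.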

So yeah, I don't think it's really getting worse, but there are tradeoffs. If only OpenAI lived by any of the principles they claimed when setting up and naming themselves, we'd be able to experiment and explore different usage methods for different tasks, just like people do with Stable Diffusion. But capitalists are going to lie, cheat, and try to monopolize, so we're stuck guessing.