this post was submitted on 10 Sep 2023
671 points (95.6% liked)

Technology

[–] GlendatheGayWitch@lib.lgbt 2 points 1 year ago (5 children)

Couldn't you just ask ChatGPT whether it wrote something specific?

[–] vale@sh.itjust.works 29 points 1 year ago (1 children)

Then there was that time a professor tried to fail his whole class because he asked ChatGPT if it wrote their essays.

https://wgntv.com/news/professor-attempts-to-fail-students-after-falsely-accusing-them-of-using-chatgpt-to-cheat/

[–] wedeworps@sh.itjust.works 1 point 1 year ago (1 children)

Could you please provide a brief overview? This article is not available in my country/region.

[–] T156@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)
[–] wedeworps@sh.itjust.works 1 point 1 year ago

Thank you very much

[–] 4AV@lemmy.world 21 points 1 year ago (1 children)

It doesn't have "memory" of anything it generated previously, other than the current conversation. The answer you get from it won't be much better than random guessing.
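The statelessness described here can be sketched in a few lines of Python. This is a toy stand-in, not the real ChatGPT API: `stateless_reply`, `session_a`, and `session_b` are hypothetical names, and the only point is that each call sees nothing beyond the messages it is handed.

```python
# Toy sketch (hypothetical names, not the real API): each "request"
# carries the entire conversation, and the model function keeps no
# state between calls.

def stateless_reply(messages):
    """Stand-in for a chat-completion call: its whole world is `messages`."""
    seen = " ".join(m["content"] for m in messages)
    if "essay" in seen:
        # Nothing in `messages` records past outputs, so a question like
        # "did you write this?" can only be answered by guessing.
        return "I have no record of past outputs, so I can only guess."
    return "Hello!"

# Two independent sessions: the second carries no trace of the first.
session_a = [{"role": "user", "content": "Please write an essay on rivers."}]
session_b = [{"role": "user", "content": "Did you write this essay on rivers?"}]

print(stateless_reply(session_b))  # the model can only guess
```

Because `session_b` contains no trace of `session_a`, any yes/no answer about authorship is a guess, not a lookup.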

[–] echodot@feddit.uk 8 points 1 year ago

That doesn't really work, because half the time it just says whatever sounds plausible. It's very good at making things up. It doesn't really grasp that it needs to tell the truth, because all it's doing is optimising for a good narrative.

That's why it says slavery is good, because the only people asking that question clearly have an answer in mind, and it's optimising for that answer.

Also it doesn't have access to other people's sessions (because that would be hella dodgy) so it can't tell you definitively if it did or did not say something in another session, even if it were inclined to tell the truth.

[–] mwguy@infosec.pub 7 points 1 year ago

No. The model doesn't have a record of everything it wrote.

[–] Kazumara@feddit.de 0 points 1 year ago

Obviously not. It's a language generator with a bit of chat modelling and reinforcement learning, not an Artificial General Intelligence.

It doesn't know anything, it doesn't retain memory long-term, and it doesn't have any self-identity. There is no way it could ever truthfully respond "I know that I wrote that".