That would be right if they actually understood what they were talking about. It's more akin to a really advanced autocorrect that reads like the material the AI was trained on. So it sounds correct but has no basis in truth beyond "the model predicts a human would say X next". Truth is rarely the training objective of these language models afaik.
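To make "predicts what a human would say next" concrete, here's a toy sketch (not how any real LLM is built, which uses neural networks over tokens): a bigram model that picks the next word purely from frequency in its training text. It has no concept of truth, only of what usually follows what.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
training_text = "the sky is blue the sky is blue the sky is green".split()

# Count, for each word, which words followed it in the corpus.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    # Return the most frequent follower: what the data says
    # usually comes next, not what is actually true.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("is"))  # "blue" (seen twice vs. "green" once)
```

If the corpus had said "green" more often, the model would confidently predict "green" instead; the prediction tracks the data, not reality.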
CineMaddie
Why are we relying on language models to answer questions? These things don't really "know" anything, right?
Thank god, someone who understands. I hated how, towards the end, Reddit was so full of misinformation and people talking out of their ass with confidence. Hope Lemmy can steer away from those tendencies. It's okay if we don't have the answer sometimes.