This post was submitted on 26 Aug 2023
297 points (85.6% liked)


ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

[–] SirGolan@lemmy.sdf.org 9 points 1 year ago (12 children)

What's with all the hit jobs on ChatGPT?

Prompts were input to the GPT-3.5-turbo-0301 model via the ChatGPT (OpenAI) interface.

This is the second paper I've seen recently that complains ChatGPT is crap while testing GPT-3.5. There is a world of difference between 3.5 and 4; unfortunately, news sites aren't savvy enough to pick up on that and just run with "ChatGPT sucks!" Also, it's not even ChatGPT if they're using that model. The paper is wrong (or it's old), because there's no way to use that model in the ChatGPT interface, and I don't think there ever was. It was probably ChatGPT-0301 or something, which is (afaik) slightly different.
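For context, pinned snapshots like gpt-3.5-turbo-0301 are something you normally select through the OpenAI API rather than the ChatGPT web UI, which is exactly why that methodology line reads oddly. Here's a minimal sketch of how a study would actually query that snapshot, using the legacy openai Python package (pre-1.0 interface); the prompt is a hypothetical stand-in, not one from the paper:

```python
# Sketch: querying the pinned GPT-3.5 snapshot named in the paper via the
# OpenAI API (legacy openai-python < 1.0 interface). The ChatGPT web UI
# doesn't expose model snapshots; the API does.
import openai

openai.api_key = "sk-..."  # placeholder; set your real API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # the snapshot the paper cites
    messages=[
        # Hypothetical prompt, just to illustrate the call shape.
        {"role": "user", "content": "Design a treatment plan for stage II breast cancer."},
    ],
    temperature=0,  # reduce sampling randomness for reproducibility
)

print(response["choices"][0]["message"]["content"])
```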

Anyway, tl;dr: the paper is the equivalent of "I tried running Diablo 4 on my Windows 95 computer and it didn't work. Surprised Pikachu!"

[–] eggymachus@sh.itjust.works -3 points 1 year ago (11 children)

And this tech community is being weirdly Luddite over it as well, saying stuff like "it's only a bunch of statistics predicting what's best to say next." Guess what, so are you, sunshine.
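To be fair to that framing, the "statistics predicting what's best to say next" part is easy to show. A toy sketch of next-token sampling, with a made-up five-word vocabulary and invented logits standing in for a real model's output:

```python
# Toy illustration of "predicting what's best to say next": convert a
# model's raw scores (logits) into probabilities with a softmax, then
# sample the next token. Vocabulary and logits here are invented; a real
# LLM does this over ~100k tokens, conditioned on the whole context.
import math
import random

vocab = ["the", "treatment", "plan", "is", "wrong"]
logits = [1.2, 2.1, 1.7, 0.5, 0.3]  # pretend model output for some context

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]  # softmax

next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)  # most likely "treatment", but any token can be drawn
```

Whether that mechanism amounts to what human brains do is the actual argument.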

[–] amki@feddit.de 2 points 1 year ago (3 children)

Might be true for you, but most people do have a concept of true and false and don't just dream up stuff to say.

[–] Dultas@lemmy.world 3 points 1 year ago

Do they? *Laughs nervously in American.*

[–] eggymachus@sh.itjust.works 1 points 1 year ago

Yeah, I was probably a bit too caustic, and there's more to (A)GI than an LLM can achieve on its own, but I do believe that some, and perhaps a large, part of human consciousness works in a similar manner.

I also think that LLMs can have models of concepts, otherwise they couldn't do what they do. Probably also of truth and falsity, but perhaps with a lack of external grounding?

[–] markr@lemmy.world 1 points 1 year ago

Actually, we "dream up" things to say quite a lot; our unconscious functions are far more important to our mental processes than we like to admit. Also, we are basically not very good at evaluating the truth value of complex expressions.
