this post was submitted on 04 Jan 2024
182 points (91.0% liked)

ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

[–] FinishingDutch@lemmy.world 16 points 8 months ago* (last edited 8 months ago) (1 children)

The thing that really annoys me is that the people who are most enamoured with ChatGPT also seem to be the ones least capable of judging its accuracy and actual output quality.

I write for a living, at a newspaper. So naturally, some of the people in our company, the sales people, wanted to test it. And they were delighted with the stuff it wrote, which was terrible to read, factually incorrect, repetitive, and just not something we’d put in the paper. But they loved it, because they aren’t writers and don’t know how to write an engaging article with proper sources.

I tested it as well. I wanted to form my own opinion, so I read up on the limitations, how to write good prompts, and so on, to give it a fair chance.

I had it write a basic 500-word article about things to see in our city, with information about the tourist info office. That’s something a first-year intern can do in his second week with us.

Basically, it ended up ‘inventing’ two museums that don’t exist, listed info for a museum on the other side of the country, listed an ‘Olympic stadium’ (we never hosted the Olympics), and gave a completely wrong address for the tourist info office, even though it should have had it.

It was factually incorrect in just about every sentence. But it all sounded plausible enough and was written with such confidence that anyone not from this city might assume it to be true.

I don’t want that fucking thing anywhere NEAR my newspaper. The sales people are pretty much monkeys with ChatGPT typewriters, churning out drivel instead of Shakespeare.

[–] LWD@lemm.ee 6 points 8 months ago

Sounds like the Gell-Mann Amnesia Effect. Except instead of a newspaper, you're reading something not generated by humans.

As Crichton described it: “You open the newspaper to an article on some subject you know well... You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward... and then you turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read.”

I would argue, though, that generative AI, like the newspaper, is being presented as if it already knows everything about everything, or at least that collective inertia implies it does.