this post was submitted on 10 Sep 2023
671 points (95.6% liked)

Technology

[–] hellothere@sh.itjust.works 19 points 1 year ago (2 children)

Regardless of whether they do or don't, surely it's in the interests of the people making the "AI" to claim that their tool is so good it's indistinguishable from humans?

[–] stevedidWHAT@lemmy.world 13 points 1 year ago (3 children)

Depends on whether they're more researchers or a business, imo. Scientists, generally speaking, are very cautious about making shit claims, because if they get called out, that's their career gone.

[–] hellothere@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago)

It's literally a marketing blog post published by OpenAI on their own site, not a study in a journal.

[–] BetaDoggo_@lemmy.world 5 points 1 year ago (1 children)

OpenAI hasn't been focused on the science since the Microsoft investment. A science-focused company doesn't release a "technical report" that contains none of the specs of the model it's reporting on.

[–] Zeth0s@lemmy.world 5 points 1 year ago* (last edited 1 year ago) (1 children)

A few decades ago, probably. Nowadays "scientists" make a lot of BS claims to get published. I was in the room when a "scientist" who publishes several Nature papers a year asked her student to write up a study with no real results in a way that made it look like it contained something important, so it could go to a journal with a relatively good impact factor.

That day I decided I was done with academia. I had seen enough.

[–] pc_admin@aussie.zone -2 points 1 year ago (1 children)
[–] stevedidWHAT@lemmy.world 1 points 1 year ago (1 children)

You did not just drop arguably one of the most stale, dead memes of all time to try and look fucking cool

Thanks for the laugh

[–] pc_admin@aussie.zone 0 points 1 year ago
[–] Kolrami@lemmy.world 0 points 1 year ago (1 children)

Yes, but it's such a falsifiable claim that anyone is more than welcome to prove them wrong. There are a lot of slightly different LLMs out there. If you or anyone else can definitively show there's a machine that can identify AI writing vs. human writing, it would either result in better AI writing or be an amazing breakthrough in understanding the limits of AI.
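
To make that falsifiability point concrete, here's a minimal sketch of how such a claim could be tested. Everything in it (detect_ai, SAMPLES, accuracy) is hypothetical scaffolding, not any real detector's API:

```python
# Hypothetical test harness; detect_ai is a stand-in for whatever
# detector someone claims can tell AI writing from human writing.

def detect_ai(text: str) -> bool:
    """Returns True if the detector under test thinks the text is
    machine-written. Placeholder only."""
    raise NotImplementedError("plug in the claimed detector here")

# Labeled held-out samples: (text, was_it_actually_ai_written)
SAMPLES: list[tuple[str, bool]] = [
    ("Example machine-generated passage ...", True),
    ("Example human-written passage ...", False),
    # ... ideally thousands of samples the detector has never seen
]

def accuracy(samples: list[tuple[str, bool]]) -> float:
    """Fraction of samples labeled correctly. If the writing really is
    indistinguishable, this should hover around 0.5 (coin-flip)."""
    hits = sum(detect_ai(text) == is_ai for text, is_ai in samples)
    return hits / len(samples)
```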

[–] hellothere@sh.itjust.works 2 points 1 year ago

People like to view the problem as a paradox (can an all-powerful God create a rock they cannot lift?), but I feel that's too generous; it's more like marking your own homework.

If a system can both write text and detect whether it or another system wrote that text, then "all" it needs to do is change that text until it falls outside the bounds of detection. That is to say, it just needs to convince itself.

I don't mean to imply that this is easy, because it isn't, but it's a very different thing from convincing someone else, especially a human who understands the topic.
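
A minimal sketch of that "convince itself" loop, with generate, rewrite, and detect_ai all as hypothetical placeholders rather than any real model's API:

```python
# Sketch of the self-detection evasion loop described above.
# None of these functions correspond to a real API.

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for the text generator")

def rewrite(text: str) -> str:
    raise NotImplementedError("stand-in for a paraphrasing pass")

def detect_ai(text: str) -> float:
    raise NotImplementedError("stand-in: probability text is machine-made")

def evade_own_detector(prompt: str, threshold: float = 0.5,
                       max_rounds: int = 10) -> str:
    """Rewrite the system's own output until its own detector no longer
    flags it; that is, convince itself, not a human reader."""
    text = generate(prompt)
    for _ in range(max_rounds):
        if detect_ai(text) < threshold:
            break  # detector fooled: text is outside the bounds of detection
        text = rewrite(text)
    return text
```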

There is also a false narrative involved here: that we need an AI to detect AI, which again serves as a marketing benefit to OpenAI.

We don't, because they aren't that good, at least not yet.