this post was submitted on 10 Sep 2023
671 points (95.6% liked)

Technology

[–] cheese_greater@lemmy.world 106 points 1 year ago* (last edited 1 year ago) (3 children)

I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I'm not joke answering or shitposting.

[–] Steeve@lemmy.ca 40 points 1 year ago

We found the source

[–] TropicalDingdong@lemmy.world 23 points 1 year ago* (last edited 1 year ago) (2 children)

> I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering.

It's not unusual for well-constructed human writing to resemble the output of advanced language models like ChatGPT. After all, language models like GPT-4 are trained on vast amounts of human text, and their main goal is to replicate and generate human-like text based on the patterns they've observed.

/gpt-4

[–] cheese_greater@lemmy.world 11 points 1 year ago* (last edited 1 year ago)

> Be me
> well-constructed human writing

You guys?! 🤗

[–] BananaOnionJuice@lemmy.dbzer0.com 7 points 1 year ago (3 children)

Do you also need help from a friend to prove you are not a robot?

[–] cheesorist@lemmy.world 70 points 1 year ago (26 children)

they never did, they never will.

[–] ReallyKinda@kbin.social 56 points 1 year ago (12 children)

I know a couple of teachers (college level) who have caught several GPT papers over the summer. It's a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proof GPT properly. It doesn't get jargon right, it makes things up, it makes no attempt to adhere to reason when it's making an argument.

Using translation tools is extra obvious—have a native speaker proof your paper if you attempt to use an AI translator on a paper for credit!!

[–] SpikesOtherDog@ani.social 14 points 1 year ago (1 children)

> it makes things up, it makes no attempt to adhere to reason when it’s making an argument.

It hardly understands logic. I'm using it to generate content, and it will continually assert information in ways that don't make sense, relate things that aren't connected, and forget facts that don't flow into the response.

[–] mayonaise_met@feddit.nl 10 points 1 year ago* (last edited 1 year ago) (1 children)

As I understand it as a layman who uses GPT4 quite a lot to generate code and formulas, it doesn't understand logic at all. Afaik, there is currently no rational process which considers whether what it's about to say makes sense and is correct.

It just sort of bullshits its way to an answer based on whether words seem likely according to its model.

That's why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction and it will do that just as confidently too.

[–] Aceticon@lemmy.world 7 points 1 year ago (1 children)

It has no notion of logic at all.

It roughly works by piecing together sentences based on the probability of the various elements (mainly words, but also more complex structures) appearing in various relations to each other. The "probability curves" (not quite probability curves, but a good enough analogue) are derived from the very large language training sets used to train these models (hence LLM: Large Language Model).

This is why you might get pieces of argumentation that are internally consistent (or merely familiar segments from actual human posts where people are making an argument) but not consistent with each other: the thing is not building an argument by following a logical thread, it's just putting together language tokens in the common ways that, in its training set, were found to be associated with each other and with token structures similar to those in your question.
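To make that concrete, here is a minimal sketch of the piecing-together loop described above, with a hypothetical toy vocabulary and made-up probabilities standing in for a real model's learned weights:

```python
import random

# Toy "language model": for each context word, a probability
# distribution over possible next tokens. A real LLM derives these
# weights from its training set; the numbers here are invented.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.4, "argument": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"quietly": 1.0},
    "ran": {"quickly": 1.0},
}

def generate(start, max_tokens=5):
    tokens = [start]
    for _ in range(max_tokens):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:  # no known continuation
            break
        # Sample the next token by likelihood alone; nothing in the
        # loop checks whether the result is true or logically coherent.
        tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat quietly"
```

Scale the table up to billions of learned parameters and condition on the whole preceding context instead of a single word, and you have the gist of why the output can be fluent, even locally consistent, without any logical thread connecting one piece to the next.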

[–] Nioxic@lemmy.dbzer0.com 32 points 1 year ago* (last edited 1 year ago) (1 children)

I have to hand in a short report.

I wrote parts of it and asked ChatGPT for a conclusion.

So I read that, adjusted a few points, and added another couple of points.

Then I rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)

We are allowed to use ChatGPT though, because we would always have internet access for our job anyway. (Computer science.)

[–] TropicalDingdong@lemmy.world 12 points 1 year ago (1 children)

I found out on the last screen of a travel grant application that I needed a cover letter.

I pasted in the requirements for the cover letter and what I had put in my application.

I pasted the results in as the cover letter without review.

I got the travel grant.

[–] Blurrg@lemmy.world 8 points 1 year ago (1 children)

Who reads cover letters? At most they are skimmed over.

[–] TropicalDingdong@lemmy.world 9 points 1 year ago

Exactly. But they still need to exist. That's what ChatGPT is for: letters, bullshit emails, applications. The shit that's just tedious.

[–] Boddhisatva@lemmy.world 29 points 1 year ago (3 children)

> OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.
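For what it's worth, the trick being alluded to is real for a genuinely binary classifier: if it is right less than half the time, negating its answer is right the rest of the time. A toy simulation with a hypothetical detector (though, as the replies below point out, the real 26% figure wasn't an overall accuracy):

```python
import random

random.seed(0)

# 10,000 texts with known ground truth: True = AI-written.
samples = [random.choice([True, False]) for _ in range(10_000)]

def bad_detector(is_ai: bool) -> bool:
    """Hypothetical detector that answers correctly only 26% of the time."""
    return is_ai if random.random() < 0.26 else not is_ai

def inverted_detector(is_ai: bool) -> bool:
    return not bad_detector(is_ai)  # flip every answer

acc = sum(bad_detector(s) == s for s in samples) / len(samples)
inv = sum(inverted_detector(s) == s for s in samples) / len(samples)
print(f"detector: {acc:.0%}, inverted: {inv:.0%}")  # roughly 26% vs 74%
```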

[–] Leate_Wonceslace@lemmy.dbzer0.com 14 points 1 year ago (3 children)

I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can't figure out what it would be.

[–] dartos@reddthat.com 11 points 1 year ago* (last edited 1 year ago)

Looks like they got that number from this quote from another Ars Technica article: "…OpenAI admitted that its AI Classifier was not 'fully reliable,' correctly identifying only 26 percent of AI-written text as 'likely AI-written' and incorrectly labeling human-written works 9 percent of the time."

Seems like it mostly wasn't confident enough to make a judgement, but 26% of the time it correctly detected AI text, and 9% of the time it incorrectly identified human text as AI text. It doesn't tell us how often it labeled AI text as human text, or how often it was just unsure.

EDIT: this article https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/
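Put as arithmetic, the two reported figures are per-class rates, not one accuracy number, which is why "26% accurate" is a misreading. A sketch, assuming 26% is the true-positive rate on AI text and 9% the false-positive rate on human text, per the quote:

```python
tpr = 0.26  # P(flagged "likely AI-written" | text was AI-written)
fpr = 0.09  # P(flagged "likely AI-written" | text was human-written)

# Overall accuracy also depends on how much of the test set is
# AI-written. Assuming a 50/50 split, treating "not flagged" as
# "human", and ignoring the "unsure" bucket the tool also had:
p_ai = 0.5  # assumed prevalence; not reported by OpenAI
accuracy = p_ai * tpr + (1 - p_ai) * (1 - fpr)
print(f"{accuracy:.1%}")  # 58.5% under these assumptions
```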

[–] doublejay1999@lemmy.world 27 points 1 year ago (1 children)

AI company says their AI is smart, but other companies are selling snake oil.

Gottit

[–] canihasaccount@lemmy.world 26 points 1 year ago (1 children)

They tried training an AI to detect AI, too, and failed

[–] Blackmist@feddit.uk 20 points 1 year ago (2 children)

The only thing AI writing seems to be useful for is wasting real people's time.

[–] itsmaxyd@lemm.ee 12 points 1 year ago

True -

  1. Write points/summary
  2. Have AI expand in many words
  3. Post
  4. Reader uses AI to summarize the post, preferably in points
  5. Profit??
[–] hellothere@sh.itjust.works 19 points 1 year ago (3 children)

Regardless of whether they do or don't, surely it's in the interests of the people making the "AI" to claim that their tool is so good it's indistinguishable from humans?

[–] stevedidWHAT@lemmy.world 13 points 1 year ago (7 children)

Depends on whether they're more researchers or a business, imo. Generally speaking, scientists are very cautious about making shit claims, because if they get called out, that's their career.

[–] Matriks404@lemmy.world 19 points 1 year ago (6 children)

Did human-generated content really become so low quality that it is indistinguishable from AI-generated content?

[–] technicalogical@lemmy.world 15 points 1 year ago (1 children)

Should I be able to detect whether or not this is an AI generated comment?

[–] nodsocket@lemmy.world 16 points 1 year ago

As an AI language model, I am unable to confirm whether or not the above post was written by an AI.

[–] funktion@lemm.ee 7 points 1 year ago

People kind of just suck at writing in general. It's not a skill that's valued much; otherwise writers, editors, and proofreaders would be paid more.

[–] DogMuffins@discuss.tchncs.de 7 points 1 year ago

Not necessarily. It's just that AIs can't tell the difference.

Although I don't know whether humans can.

[–] irotsoma@lemmy.world 17 points 1 year ago (1 children)

A lot of these relied on common mistakes that "AI" algorithms make but humans generally don't. As language models improve, it's getting harder to detect them.

[–] Cethin@lemmy.zip 13 points 1 year ago

They're also likely training on the detector's output. That's why they build detectors; it isn't for the good of other people, it's to improve their assets. A detector is used to discard inputs it knows were written by AI so the model doesn't train on that data, which leads to it outcompeting the detection AI.
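The pipeline being described would look roughly like this sketch; the detector heuristic and the threshold are hypothetical stand-ins for a trained classifier:

```python
def looks_ai_written(text: str) -> float:
    """Hypothetical detector: score in [0, 1], higher = more likely AI."""
    # Crude placeholder heuristic, for illustration only.
    return 1.0 if "as an ai language model" in text.lower() else 0.1

def filter_corpus(docs: list[str], threshold: float = 0.5) -> list[str]:
    # Keep only documents the detector scores as (probably) human,
    # so the next model trains on mostly human text. A side effect:
    # whatever slips past the detector still gets trained on, nudging
    # the generator toward text the detector can't flag.
    return [d for d in docs if looks_ai_written(d) < threshold]

corpus = ["As an AI language model, I cannot...", "My cat wrote this essay."]
print(filter_corpus(corpus))  # ['My cat wrote this essay.']
```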

[–] Shameless@lemmy.world 16 points 1 year ago (1 children)

I just realised that, especially in teaching, people are treating these LLMs the same way I remember teachers in school treating computers, and later the internet.

"Now class you need a 5 page essay on Hamlet by next Friday, it should be hand written and no copying from the internet!! It needs to be hand written because you can't always rely on computers to be there..."

[–] Turun@feddit.de 14 points 1 year ago (3 children)

Or, because you can't rely on computers to tell you the truth. Which is exactly the issue with LLMs as well.

[–] Absolutemehperson@lemmy.world 12 points 1 year ago

mfw just asking ChatGPT to write an undetectable essay.

Later, losers!
