this post was submitted on 07 Nov 2023
123 points (82.9% liked)

Technology

[–] joystick@lemmy.world 71 points 10 months ago (1 children)
[–] fmstrat@lemmy.nowsci.com 7 points 10 months ago

Believable because:

> However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.

So outside of its purview? Agree.

[–] EvilBit@lemmy.world 58 points 10 months ago (1 children)

As I understand it, one of the ways AI models are commonly trained is basically to run them against a detector and train against it until they can reliably defeat it. Even if this were a great detector, all it'll really serve to do is teach the next model to beat it.

[–] magic_lobster_party@kbin.social 21 points 10 months ago (2 children)

That’s how GANs are trained, and I haven’t seen anything about GPT4 (or DALL-E) being trained this way. It seems like current generative AI research is moving away from GANs.

[–] KingRandomGuy@lemmy.world 3 points 10 months ago

Also one very important aspect of this is that it must be possible to backpropagate the discriminator. If you just have access to inference on a detector of some kind but not the model weights and architecture itself, you won't be able to perform backpropagation and therefore can't generate gradients to update your generator's weights.

That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
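To make the backpropagation point concrete, here's a minimal sketch of one GAN training step (assuming PyTorch; the toy networks, sizes, and data are made up purely for illustration). The generator update only works because gradients flow through the discriminator's weights, which you can't do against a black-box detector you can merely query:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator (dimensions are arbitrary).
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 8)    # stand-in for a batch of real data
noise = torch.randn(64, 16)  # latent input to the generator

# Discriminator step: learn to separate real from generated samples.
# detach() stops this loss from updating the generator.
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(64, 1)) + \
         bce(D(G(noise).detach()), torch.zeros(64, 1))
d_loss.backward()
opt_d.step()

# Generator step: try to make D label fakes as real. This backward()
# pass propagates *through* D's layers to reach G's parameters --
# impossible if D were an opaque detector exposing only scores.
opt_g.zero_grad()
g_loss = bce(D(G(noise)), torch.ones(64, 1))
g_loss.backward()
opt_g.step()
```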

[–] EvilBit@lemmy.world 2 points 10 months ago

I know it’s intrinsic to GANs but I think I had read that this was a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source unfortunately.

[–] CthulhuOnIce@sh.itjust.works 37 points 10 months ago (1 children)

I really, really doubt this. OpenAI said recently that AI detectors are pretty much impossible, and the article literally uses the wrong name when referring to a different AI detector.

Especially since you can change ChatGPT's style just by asking it to write in a more casual way, "stylometrics" seems like an improbable method for detecting AI as well.
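For context, stylometry classifies text by surface statistics of writing style. Here's a toy Python sketch of the kind of feature profile such a detector might compute; the specific features are illustrative guesses, not what the paper actually uses, and they show why asking for a casual style shifts exactly the signals being measured:

```python
import re
from collections import Counter

def stylometric_features(text: str) -> dict:
    """Toy stylometric profile: a few surface statistics of writing style."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    vocab = Counter(words)
    n_words, n_sents = max(len(words), 1), max(len(sentences), 1)
    return {
        "avg_sentence_len": len(words) / n_sents,        # words per sentence
        "type_token_ratio": len(vocab) / n_words,        # vocabulary richness
        "avg_word_len": sum(map(len, words)) / n_words,  # lexical complexity
        "comma_rate": text.count(",") / n_sents,         # punctuation habits
    }

# A formal and a casual rendering of the same idea give different profiles.
print(stylometric_features("The system demonstrates considerable efficacy, "
                           "notwithstanding its limitations."))
print(stylometric_features("It works pretty well. Not perfect though."))
```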

[–] Fredthefishlord@lemmy.blahaj.zone 3 points 10 months ago (1 children)

It's in OpenAI's best interests to say they're impossible. Regardless of whether that's true, they're the least trustworthy possible source to rely on when forming your understanding of this.

[–] CthulhuOnIce@sh.itjust.works 5 points 10 months ago

OpenAI had their own AI detector, so I don't really think it's in their best interest to say that making such a product effective is impossible.

[–] simple@lemm.ee 30 points 10 months ago (2 children)

Willing to bet it also catches non-AI text and calls it AI-generated constantly

[–] snooggums@kbin.social 15 points 10 months ago

The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles look like a human wrote them, then human-written news articles will look like AI.

[–] floofloof@lemmy.ca 10 points 10 months ago* (last edited 10 months ago)

The original paper does have some figures about misclassified paragraphs of human-written text, which would seem to mean false positives. The numbers are higher than for misclassified paragraphs of AI-written text.

[–] TropicalDingdong@lemmy.world 26 points 10 months ago (3 children)

This is kind of silly.

We will 100% be using AI to generate papers now and in the future. If the AI can catch any wrong conclusions or misleading interpretations, that would be helpful.

Not using AI to help you write at this point is you wasting valuable time.

[–] theluddite@lemmy.ml 10 points 10 months ago* (last edited 10 months ago) (3 children)

I do a lot of writing of various kinds, and I could not disagree more strongly. Writing is a part of thinking. Thoughts are fuzzy, interconnected, nebulous things, impossible to communicate in their entirety. When you write, the real labor is converting that murky thought-stuff into something precise. It's not uncommon in writing to have an idea all at once that takes many hours and thousands of words to communicate. How is an LLM supposed to help you with that? The LLM doesn't know what's in your head; using it is diluting your thought with statistically generated bullshit. If what you're trying to communicate can withstand being diluted like that without losing value, then whatever it is probably isn't meaningfully worth reading. If you use LLMs to help you write stuff, you are wasting everyone else's time.

[–] Excrubulent@slrpnk.net 6 points 10 months ago* (last edited 10 months ago) (1 children)

Yeah, I agree. You can see this in all AI generated stuff - none of it has any purpose, no intention.

As for people who say it's saving them time: I have to ask what they're doing that can be replaced by AI, whether they're actually any good at it, and whether the AI has improved their work or just made it happen faster at the expense of quality.

I have turned off all predictive writing of any kind on my devices; it gets in my head and stops me from forming my own thoughts. I want my authentic voice, and I can't stand the idea of a machine prompting me with its own idea of what I want to say.

Like... we're prompting the AI, but isn't it really prompting us?

[–] theluddite@lemmy.ml 3 points 10 months ago

Amen. In fact, I wrote a whole thing about exactly this -- without an LLM! Like most things I write, it took me many hours and evolved many times, but I take pleasure in communicating something to the reader, in the same way that I take pleasure in learning interesting things reading other people's writing.

[–] Fixbeat@lemmy.ml 7 points 10 months ago

They’re just mad that the drudgery of writing papers is coming to an end and they have one less tool to torment students.

[–] Laticauda@lemmy.ca 2 points 10 months ago (1 children)

> Not using AI to help you write at this point is you wasting valuable time.

Bro, WHAT are you smoking? In academia, the process of writing the paper is just as important as the paper itself, and in creative writing, why would you even bother being a writer if you just had an AI do it for you? Wasting valuable time? The act of writing is inherently valuable.

[–] Deckweiss@lemmy.world 13 points 10 months ago (4 children)

I don't understand. Are there places where using ChatGPT for papers is illegal?

The state where I live explicitly allows it. Only plagiarism is prohibited. But having ChatGPT formulate the results of your scientific work, correct the grammar, improve the style, etc. doesn't bother anybody.

[–] alienanimals@lemmy.world 19 points 10 months ago (2 children)

It's not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and did not use calculators or any other tool that they would have available to them in the workforce.

[–] Phanatik@kbin.social 7 points 10 months ago (2 children)

There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. One of them constitutes plagiarism, which schools/universities are strongly against.

The problem is being able to differentiate between a paper that's been written by a human (which may or may not have been written with ChatGPT's assistance) and a paper entirely written by ChatGPT and presented as the student's own work.

I want to strongly stress that the latter situation is plagiarism. The argument doesn't even need to touch on the plagiarism ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT's paper as your own; ergo, plagiarism.

[–] RiikkaTheIcePrincess@kbin.social 1 points 10 months ago (1 children)

> There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you.

Yeah, one is what many "AI" fans insist is what's happening, and the other is what people actually do because humans are lazy, intellectually dishonest piles of crap. "Just a little GPT," they say. "I don't see a problem, we'll all just use it in moderation," they say. Then somehow we only see more garbage full of errors; we get BS numbers, references to studies or legal cases or anything else that simply don't exist, images of people with extra rows of teeth and hands where feet should be, gibberish non-text where text could obviously be... maybe we'll even get ads injected into everything because why not screw up our already shitty world even more?

So now people have this "tool" they think is simultaneously smarter and more creative than humans at all the things humans have historically claimed make them better than not only machines but other animals, yet is also "just a tool" that they're only going to use a little bit, to help out but not replace. They'll trust this tool to be smarter than they are, which it will, arguably impressively, turn out not to be. They'll expect everyone else to accept the costs this incurs, from environmental damage due to running the damn things to social, scientific, economic, and other harms caused by everything being generated by "hallucinating" "AI" that's incapable of thinking.

It's all very tiring.

(And now I'm probably going to get more crap for both things I've said and things I haven't, because people are intellectually lazy/dishonest and can't take criticism. Even more tiring! Bleh.)

[–] Phanatik@kbin.social 1 points 10 months ago

Everything you've said I agree with wholeheartedly. This kind of corner-cutting isn't good for us as a species. When you eliminate the struggle involved in developing skills, it cheapens whatever you've produced; the result is soulless garbage, and it'll proliferate most in art spaces.

The first thing that happened was that Microsoft implemented ChatGPT into Windows as part of their Copilot feature. It can now use your activity on your PC as data points, and the next step is sure as shit going to be an integration with Bing Ads. I know this because Microsoft presented it to our company.

I distrusted it then and I want it to burn now.

[–] BraveLittleToaster@lemm.ee 6 points 10 months ago (2 children)

Teachers when I was little said, "You won't always have a calculator with you," and here I am with a device in my pocket 24/7 that's more powerful than what sent astronauts to the moon.

[–] kambusha@feddit.ch 3 points 10 months ago

1% battery intensifies

[–] LukeMedia@lemmy.world 1 points 10 months ago

Fun fact for you: many credit-card/debit-card chips alone are comparable in power to the computers that sent us to the moon.

It's mentioned a bit in this short article about how EMV chips are made. This summary of compute power does come from a company that manufactures EMV chips, so there is bias present.

[–] gullible@kbin.social 3 points 10 months ago

I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI. The big deal is wholesale plagiarism via AI. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.

[–] kirklennon@kbin.social 3 points 10 months ago (2 children)

Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?

[–] Deckweiss@lemmy.world 1 points 10 months ago (2 children)

Science isn't about writing. It is about finding new data through the scientific process and communicating it to other humans.

If a tool helps you do any of it better, faster or more efficiently, that tool should be used.

But I agree with your sentiment when it comes to for example creative writing.

[–] sab@kbin.social 2 points 10 months ago* (last edited 10 months ago)

Science is also creative writing. We do research and write the results, in something that is an original product. Something new is created; it's creative.

An LLM is just reiterative. A researcher might feel like they're producing something, but they're really just reiterating. Even if the product is better than what they would have produced themselves, it is still worth less, as it is not original and will not make a contribution that hasn't been made already.

And for a lot of researchers, the writing and the thinking blend into each other. Outsource the writing, and you're crippling the thinking.

[–] TropicalDingdong@lemmy.world 1 points 10 months ago (1 children)

If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it could plagiarize unintentionally. So what's the big deal?

There isn't one. Not that I can see.

[–] Jesusaurus@lemmy.world 7 points 10 months ago (1 children)

At least within a higher-level education environment, the problem is who does the critical thinking. If you just offload a complex question to ChatGPT and submit the result, you don't learn anything. One of the purposes of paper-based exercises is to get students thinking about topics and understanding concepts so they can apply them to other areas.

[–] TropicalDingdong@lemmy.world 1 points 10 months ago (1 children)

You are considering it from a student's perspective. I'm considering it from a writing and communication/publishing perspective. I'm a scientist, I think a decent one, but I'm only a proficient writer and I don't want to be a good one. It's just not where I want to put my professional focus. However, you cannot advance as a scientist without being a 'good' writer (and I don't just mean proficient). I get to offload all kinds of shit to ChatGPT. I'm even working on some stuff where I can dump in a folder of papers and have it go through and statistically review all of them to give me a good idea of what the landscape I'm working in looks like.

Things are changing ridiculously fast. But if you are still relying on writing as your pedagogy, you're leaving a generation of students behind. They will not be able to keep up with people who directly incorporate AI into their workflows.

[–] KingRandomGuy@lemmy.world 1 points 10 months ago (1 children)

I'm curious what field you're in. I'm in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don't approve of using LLMs to write accuracy-critical sections (such as background or results) beyond rewording.

I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that's still somewhat of a gray area in the US AFAIK.

[–] LunchEnjoyer@lemmy.world 13 points 10 months ago

Didn't OpenAI themselves state some time ago that it isn't possible to detect it?

[–] Something_Complex@lemmy.world 8 points 10 months ago (1 children)

I'm gonna need something more than that to believe it.

[–] macarthur_park@lemmy.world 4 points 10 months ago (1 children)

The article is reporting on a published journal article. Surely that’s a good start?

[–] KingRandomGuy@lemmy.world 2 points 10 months ago

I haven't read the article myself, but it's worth noting that in CS as a whole, and especially in ML/CV/NLP, selective conferences are generally seen as the gold standard for publication, compared to journals. The top conferences include NeurIPS, ICLR, and ICML, plus CVPR for CV and EMNLP for NLP.

It looks like the journal in question is a physical sciences journal as well, though I haven't looked much into it.

[–] cyborganism@lemmy.ca 3 points 10 months ago* (last edited 10 months ago) (1 children)

I say we develop a Voight-Kampff test as soon as possible for detecting if we're speaking to an AI or an actual human being when chatting or calling a customer representative of a company.

Edit: I made a mistake.

[–] agent_flounder@lemmy.world 2 points 10 months ago (1 children)

> if we're speaking to a real person or an actual human being

Ummm ...

[–] nfsu2@feddit.cl 2 points 10 months ago (2 children)

Isn't this like a constant fight between the people who develop anti-AI-content measures and the internet pirates who develop anti-anti-AI-content measures? Pretty sure the pirates always win.

[–] Overzeetop@kbin.social 2 points 10 months ago (1 children)

You sully the good name of Internet Pirates, sir or madam. I'll have you know that online pirates have a code of conduct, and there is no value in promulgating an anti-AI or anti-anti-AI stance within a community which merely wishes information to be free (as in beer) and readily accessible in all forms and all places.

You are correct that the pirates will always win, but they (we) have no beef with AI as a content-generation source. ;-)

[–] Satish@fedia.io 1 points 10 months ago

They still can't catch data written by AI on websites like 'https://themixnews.com/', e.g. https://themixnews.com/cj-amos-height-age-brother/
