this post was submitted on 09 Jun 2023
20 points (100.0% liked)

Technology

I made this and thought you all might enjoy it, happy hacking!

[–] semibreve42@lemmy.dupper.net 2 points 1 year ago (1 children)

Super cool approach. I wouldn't have guessed it would be that effective if someone had explained it to me without the data.

I'm curious how easy it is to "defeat". If you take an AI generated text that is successfully identified with high confidence and superficially edit it to include something an LLM wouldn't usually generate (like a few spelling errors), is that enough to push the text out of high confidence?

I ask because I work in higher ed, and have been sitting on the sidelines watching the chaos. My understanding is that there's probably no way to automate LLM detection with high enough certainty for it to be used as cheat detection in an academic setting; the false-positive rate is far too high.

[–] ranok@sopuli.xyz 3 points 1 year ago

ZipPy is much less robust to defeat attempts than larger model-based detectors. Earlier, I asked ChatGPT to write in the voice of a high-school student and it fooled the detectors. The web UI lets you add LLM-generated text in the style you're targeting to improve accuracy on those types of content.
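For anyone curious how a compression-based detector like this can work at all, here's a minimal toy sketch of the underlying idea, not ZipPy's actual implementation (which uses LZMA and a curated corpus): text that shares phrasing with a known-AI "prelude" corpus compresses better when appended to it. The prelude strings and function names below are made up for illustration.

```python
import zlib

# Hypothetical stand-in for a real corpus of known LLM-generated text.
AI_PRELUDE = (
    "As an AI language model, I can provide a comprehensive overview. "
    "It is important to note that there are several key factors to consider. "
    "In conclusion, this approach offers significant benefits overall. "
)

def compressed_size(text: str) -> int:
    """Size in bytes of the zlib-compressed text."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def ai_likeness(sample: str) -> float:
    """Extra compressed bytes per sample byte when appended to the prelude.

    Lower scores mean the sample shares more structure with the
    AI prelude, i.e. looks more 'AI-like' under this toy model.
    """
    base = compressed_size(AI_PRELUDE)
    combined = compressed_size(AI_PRELUDE + sample)
    return (combined - base) / max(len(sample), 1)

human_ish = "ngl the lab wifi died mid-upload so i just yolo'd the csv lol"
ai_ish = "It is important to note that there are several key factors to consider."

# The AI-styled sentence overlaps the prelude heavily, so the
# compressor encodes it with cheap back-references and it scores lower.
print(ai_likeness(ai_ish) < ai_likeness(human_ish))
```

This also illustrates the "defeat" question above: a few spelling errors break the literal substring matches the compressor relies on, which is exactly why such detectors are fragile against superficial edits.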

I don't think we'll ever be able to detect it reliably enough to fail students if they co-write with an LLM.