this post was submitted on 05 Jun 2023

Technology
and as always, the culprit is ChatGPT. Stack Overflow Inc. won't let their mods take down AI-generated content

[–] kevin@beehaw.org 1 points 2 years ago

I imagine it'll soon be possible to improve the accuracy of technical AI content fairly easily. It'd go something like this: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it could use to check correctness; e.g., a Python response could be run in a Python interpreter to make sure it does, to some extent, what it's purported to do. The validator then either decides the output is most likely correct, or generates feedback asking the first LLM to revise its answer until it passes validation. This wouldn't catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
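The loop described above might look roughly like this sketch. The `generate_candidate` and `revise` functions are hypothetical stand-ins for LLM API calls (here they return canned strings so the example runs); the validator part is concrete: it executes the candidate in a Python interpreter and checks it against known test cases.

```python
def generate_candidate(prompt):
    # Hypothetical LLM call; returns a deliberately buggy first draft
    # so the revision loop has something to fix.
    return "def add(a, b):\n    return a - b\n"

def revise(prompt, candidate, error):
    # Hypothetical LLM call asked to fix the reported error.
    return "def add(a, b):\n    return a + b\n"

def validate(candidate, tests):
    """Run the candidate in a fresh namespace and apply the tests.

    Returns None on success, or an error message for the reviser.
    """
    namespace = {}
    try:
        exec(candidate, namespace)
        for args, expected in tests:
            result = namespace["add"](*args)
            if result != expected:
                return f"add{args} returned {result}, expected {expected}"
    except Exception as exc:
        return str(exc)
    return None

def generate_validated(prompt, tests, max_rounds=3):
    """Generate, then loop: validate -> revise until the tests pass."""
    candidate = generate_candidate(prompt)
    for _ in range(max_rounds):
        error = validate(candidate, tests)
        if error is None:
            return candidate
        candidate = revise(prompt, candidate, error)
    raise RuntimeError("could not produce a validated answer")

tests = [((2, 3), 5), ((0, 0), 0)]
answer = generate_validated("write add(a, b)", tests)
```

In a real system the test cases themselves would also have to come from somewhere (the validator model, or the user's question), which is where the "to some extent" caveat bites.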

[–] Tutunkommon@beehaw.org 1 points 2 years ago

Best description I've heard is that an LLM is good at figuring out what the correct answer should look like, not necessarily what it is.