And so, the problem wasn't the AI/LLM, it was the person who said "looks good" without even looking at the generated code, and then the person who read that pull request and said, again without reading the code, "lgtm".
If you have good policies, then it doesn't matter how many bad practices are used; the code still won't be merged.
The only overhead is that you have to read all the pull requests, but if it's an internal project, then telling everyone to read and understand their code shouldn't be an issue.
The problem here is that a lot of the time, looking for hidden problems is harder than writing good code from scratch. And you will always be at risk that the LLM snuck some sneaky undefined behaviour past you. There is a whole plethora of standards, conventions, and good practices that help humans avoid this, and an LLM can ignore them at any random point.
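To give a concrete example of the kind of thing that slips past a quick skim, here's the classic binary-search midpoint bug, sketched in C (a generic illustration of my own, not code from the PR in question; the function names are made up):

```c
#include <limits.h>
#include <stdio.h>

/* Reads as obviously correct in a quick review, yet (lo + hi)
 * can overflow a signed int, which is undefined behaviour in C.
 * A compiler is free to "optimize" on the assumption it never
 * happens. */
static int midpoint_buggy(int lo, int hi) {
    return (lo + hi) / 2; /* UB when lo + hi > INT_MAX */
}

/* The form a careful reviewer would insist on: no overflow
 * possible when 0 <= lo <= hi. */
static int midpoint_safe(int lo, int hi) {
    return lo + (hi - lo) / 2;
}

int main(void) {
    int lo = INT_MAX - 2, hi = INT_MAX;
    /* midpoint_buggy(lo, hi) would be UB here. */
    printf("%d\n", midpoint_safe(lo, hi)); /* prints INT_MAX - 1 */
    return 0;
}
```

Both versions pass every small-input test you're likely to write, which is exactly why "lgtm" isn't enough.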
So you're either not spending enough time on review or missing a whole lot of bullshit. In my experience, in my field, right now, this review is more time-consuming and more painful than just writing the code yourself in the first place.
Don't underestimate how degrading and energy-sucking it is for a professional to spend most of their working time sifting through autogenerated garbage, and how inefficient that is.