Please remove this if it's not allowed.

I see a lot of people in here who get mad at AI-generated code, and I am wondering why. I wrote a couple of Bash scripts with the help of ChatGPT, and if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself; that would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. I could have used any other language I knew, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler. I don't like Bash because of its, dare I say, weird syntax, but it fit my purpose best. Also, I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. This script, though, required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and return them easily, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, how to separate the letters and numbers in a string, and so on.
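
For illustration, these are the kinds of snippets I mean. A minimal sketch, simplified and not the exact code from my scripts:

```bash
#!/usr/bin/env bash

# Pass values into a function and "return" one by writing to stdout.
add_suffix() {
    local name="$1" suffix="$2"
    echo "${name}${suffix}"
}
out="$(add_suffix "backup" ".img")"   # -> backup.img

# Remove a trailing slash from a directory path.
dir="./backups/"
dir="${dir%/}"                        # -> ./backups

# Loop over an array.
files=("a.img" "b.img" "c.img")
for f in "${files[@]}"; do
    echo "processing $f"
done

# Catch an error from the previous command via its exit status.
if ! mkdir -p "$dir"; then
    echo "could not create $dir" >&2
    exit 1
fi

# Separate the letter and number parts of a string like "sda3".
s="sda3"
letters="${s%%[0-9]*}"                # -> sda
numbers="${s##*[!0-9]}"               # -> 3
```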

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I needed them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT, I could just write it all quickly and forget about it. If I ever want to learn Bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences with AI chatbots made you hate them?

[–] lvxferre@mander.xyz 1 points 1 month ago

[NB: I'm no programmer; I can write a few lines of Bash because Linux. I'm just relaying what I've read. I do use those bots, but for something else: as a translation aid.]

The reasons that I've seen programmers complaining about LLM chatbots are:

  1. concerns that AI will make human programmers obsolete
  2. concerns that AI will reduce the market for human programmers
  3. concerns about the copyright of the AI output
  4. concerns about code quality (e.g. it assumes libraries and functions out of thin air)
  5. concerns about the environmental impact of AI

In my opinion the first one is babble, the third one is complicated, but the other three are sensible.

[–] tal@lemmy.today 1 points 1 month ago* (last edited 1 month ago)

I don't think that the current approaches being used by generative AIs are sufficient to reliably produce correct code; I think that they're more-amenable to human-consumable output (and even there, I'm much more enthusiastic about their use for images than text, as things stand). A human needs approximately-correct material to cue their brain; CPUs are more particular.

We'll probably get there, in the same sense that we can ultimately produce human-level AI for anything, but I think that it'll entail higher-level reasoning about a problem, which present generative text approaches don't do.

I did start with an internet search... I could not find how to pass values into a function and return them easily,

So, now, this I have a hard time with.

When I search for "pass value function bash", this is the first page I get, which clearly shows an example:

https://stackoverflow.com/questions/6212219/passing-parameters-to-a-bash-function
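
For reference, the idiom that page demonstrates is the standard one: arguments arrive as positional parameters, and a value comes back via stdout or the exit status. Roughly:

```bash
greet() {
    local name="$1"        # arguments arrive as $1, $2, ...
    echo "Hello, $name"    # "return" a value by writing it to stdout
}

msg="$(greet "world")"     # capture the output with command substitution
echo "$msg"                # -> Hello, world
```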

This isn't where I'd consider generative AI to be useful; it's something for which existing material is already readily available via a search.

The other issue with using generative AI for coding is that we already have an approach for reusing pre-existing code for common tasks across multiple programs: libraries. That way the code gets maintained and such, but doesn't need to be reimplemented by humans over and over.

Say someone says "I need linked-list code". Okay, I mean, that's a pretty common, plain Jane thing to need.

But if you use a library, and there's a bug in that code, and it gets fixed, then the bugfix propagates when you update to a newer library. If you generate a linked-list implementation, even if you wind up with working linked-list code at the end, then that isn't gonna happen.
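
Even at shell-script scale there is a crude version of this: put shared helpers in one file that every script sources. A minimal sketch, with purely illustrative file names:

```bash
# lib/utils.sh -- shared helpers, sourced by every script
strip_trailing_slash() {
    echo "${1%/}"
}
```

```bash
# backup.sh -- one of the consuming scripts
source "$(dirname "$0")/lib/utils.sh"

dir="$(strip_trailing_slash "/home/user/backups/")"
echo "$dir"   # a bugfix in lib/utils.sh reaches every script that sources it
```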

My workplace of 5 employees and 2 owners has embraced it as an additional tool.

We have Copilot inside Visual Studio Professional and it's a great time saver. We have a lot of boilerplate code that it can learn from, and why would I want to waste valuable time writing the same things over and over? If every list page follows the same pattern, it's boring; we are paid to solve problems, not just to write the same things.

We even have an AI-powered tool made by one of the owners where we can type commands and it will scaffold all our boilerplate. It can also watch the project: if I update a model, it will write the mutations and queries in C#, set up the GraphQL layer, and then implement some views in React/TypeScript.

[–] Rhaedas@fedia.io 1 points 1 month ago

Keep in mind that at its core, an LLM is a probabilistic autocompletion mechanism built on the vast training data it was fed. A fine-tuned coding LLM has data more in line with producing coding solutions. So when you ask it to generate code for a very specific purpose, it's much more likely to find a mesh of matches that will work well most of the time. Be more generic in your request, and you could get all sorts of things, some that even look good at first glance but have flaws that will break them. The LLM doesn't understand the code it gives you, nor can it reason about whether it will function.

Think of an analogy where you Google a coding question, take the first twenty hits, and merge all the results together into one answer. An LLM does a better job than this, but the idea is similar. If the data it was trained on was flawed from the beginning, such as some of the hits you might find on Reddit or Stack Overflow, how can it possibly give you perfect results every time? The analogy also explains why a narrower coding query may work more often: if you Google a niche question, you will find more accurate, or at least more relevant, results than if you run a general search and paste together anything that looks close.

Basically, if you can help the LLM focus its probabilities on the better data from the start, you're more likely to get what may be good code.

[–] KairuByte@lemmy.dbzer0.com -1 points 1 month ago

As someone who just delved into a related but unfamiliar language for a small project, I found it relatively correct and easy to use.

There were a few times it got itself into a weird "loop" where it insisted on doing things in a ridiculous way, but prior knowledge of programming was enough for me to reword my prompts and "suggest" different, simpler solutions.

Would I have ever gotten to the end of that project without programming knowledge and my own suggestions? Probably, but it would have taken a long time and the code would have been worse.

The irony is, without help from Copilot, I'd have taken at least three times as long.

[–] Smokeydope@lemmy.world -2 points 1 month ago* (last edited 1 month ago) (3 children)

It's not just AI code, but AI stuff in general.

It boils down to Lemmy having a disproportionate number of leftist liberal arts college student types. That's just the reality of this platform.

Those types tend to see AI as a threat to their independent creative businesses, as well as feeling slighted that their data may have been used to train a model.

It's understandable why lots of people denounce AI out of fear, spite, or ignorance. It's hard to remain fair and open to a new technology when it's threatening your livelihood and its early foundations may have scraped your data non-consensually for training.

So you'll see an AI hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it's all just trendy nonsense. Don't debate them. Don't argue. Just let them vent and move on with your day.

[–] rolling_resistance@lemmy.world -2 points 1 month ago

I see you like it when something threatens your livelihood.

[–] cy_narrator@discuss.tchncs.de -2 points 1 month ago (3 children)

Also, if you are interested, here are the scripts I wrote with ChatGPT:

https://gitlab.com/cy_narrator/lukshelper

[–] socsa@piefed.social -3 points 1 month ago

Because most people on Lemmy have never actually had to write code professionally.

[–] NuXCOM_90Percent@lemmy.zip -5 points 1 month ago (3 children)

Lemmy is an outlier where anything "AI" immediately triggers the luddites to scream and rant (and occasionally send threats over PMs...) that it is bad because it is "AI" and so forth. So... massive grain of salt.

Speaking as (for simplicity's sake) a software engineer who wears both a coder and a manager hat?

"AI" is incredibly useful for charlie work. Back in the day you would hire an intern or entry-level staff to write your unit tests and documentation and utility functions. But for well over a decade now, documentation and even many unit tests have been auto-generated by vim scripts or IDE plugins. They aren't necessarily great but... the stuff that Fred in Accounting's son wrote was pretty dogshit too.

What LLMs+RAG do is step that up a few notches. You still aren't going to have them write the critical-path code. But you can farm off a LOT more charlie work, to the point where you just need to do the equivalent of reviewing an MR that came from a plugin rather than from a kid who thinks we don't know he reeks of weed.

And... that is good and bad. Good in that it means smaller companies/teams are capable of much bigger projects. And bad because it means a lot fewer entry level jobs to teach people how to code.

So that is the manager/mentor perspective. Let's dig a bit deeper on your example:

I don't like Bash because of its, dare I say, weird syntax, but it fit my purpose best. Also, I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. This script, though, required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and return them easily, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, how to separate the letters and numbers in a string, and so on.

Honestly? That sounds to me like foundational issues. You already articulated what you need, but you wanted an all-in-one guide rather than googling "bash function input example" or "bash function return example" or "strip trailing slash from directory path linux" and so forth. Also, I am pretty sure I find a guide that covers every one of those questions except string processing every time I forget the syntax of a for loop in bash and need to google it.

And THAT is the problem with relying on these tools. I know plenty of people who fundamentally can't write documentation because their IDE has always generated (completely worthless) doxygen for them. And it sounds like you don't know how to self-educate on how to solve a problem.

Which is why, generally speaking:

I still prefer to offload the charlie work to newbies because it helps them learn (and it lets me justify their paycheck). Usually I tell them I want to "walk you through our SDLC, it is kind of annoying" as an excuse to watch over their shoulder and make sure they CAN do it by hand. Then... whatever. I don't care if they pass everything through whatever tools our IT/cybersecurity departments deem legit.

Which... personally? I generally still prefer "dumb" scripts to generate the boilerplate for myself. And when I do ask ChatGPT or a "local" setup, I ask general questions. I don't paste our codebase in. I say "Hey ChatGPT, give me an example of setting the number of replicas of a pod based on specific metrics collected with Prometheus" and I adapt that. Partially to make sure I understand what we are adding to our codebase, and mostly because I still don't trust those companies with my codebase and prompts. Which... probably means moving away from VSCode within the next year (yay Copilot) but... yeah.
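
To give a concrete idea of the shape of thing that prompt produces, here is a hand-rolled sketch of an adapted result. The Prometheus address, metric name, and scaling rule are all made up for illustration; the "proper" answer is usually an HPA plus a metrics adapter, and there is no error handling here:

```bash
#!/usr/bin/env bash
# Sketch: scale a deployment based on a Prometheus metric.
# Assumes a reachable Prometheus, kubectl access, and jq installed.

PROM_URL="http://prometheus.example:9090"   # illustrative address
DEPLOYMENT="my-app"                         # illustrative deployment name

# Current requests per second across the app's pods
# (assumes the query actually returns a value).
rps=$(curl -s "${PROM_URL}/api/v1/query" \
        --data-urlencode 'query=sum(rate(http_requests_total{app="my-app"}[5m]))' \
      | jq -r '.data.result[0].value[1]')

# One replica per 100 rps, clamped between 2 and 10.
replicas=$(( ${rps%.*} / 100 + 1 ))
(( replicas < 2 ))  && replicas=2
(( replicas > 10 )) && replicas=10

kubectl scale deployment "$DEPLOYMENT" --replicas="$replicas"
```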
