this post was submitted on 26 Aug 2024
105 points (88.9% liked)

No Stupid Questions


By "good" I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically in an engineering perspective?

If not, learning how to write code seems a tad trivial now.

top 50 comments
[–] edgemaster72@lemmy.world 67 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

understanding what the machine spits out

This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.

[–] chknbwl@lemmy.world 13 points 3 weeks ago (1 children)

I very much agree, thank you for indulging my question.

[–] 667@lemmy.radio 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I used an LLM to write some code I knew I could write, but was a little too lazy to write myself. Coding is not my trade, but I did learn Python during the pandemic. Had I not known how to code, I would not have been able to direct the LLM to make the required corrections.

In the end, I got decent code that worked for the purpose I needed.

I still didn’t write any docstrings or comments.

[–] adespoton@lemmy.ca 9 points 2 weeks ago (2 children)

I would not trust the current batch of LLMs to write proper docstrings and comments, as the code they are trained on does not have proper docstrings and comments.

And this means that it isn’t writing professional code.

It’s great for quickly generating useful and testable code snippets though.
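
For reference, the kind of docstring-and-comment discipline being talked about looks something like this (a hypothetical example, not LLM output):

```python
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the simple moving average of `values` over `window` samples.

    Args:
        values: The input series, oldest sample first.
        window: Number of samples per average; must be between 1 and len(values).

    Returns:
        A list of len(values) - window + 1 averages.

    Raises:
        ValueError: If `window` is out of range.
    """
    if not 1 <= window <= len(values):
        raise ValueError("window must be between 1 and len(values)")
    # Maintain a running sum so the whole series is averaged in O(n), not O(n * window).
    total = sum(values[:window])
    averages = [total / window]
    for i in range(window, len(values)):
        total += values[i] - values[i - window]
        averages.append(total / window)
    return averages
```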

[–] visor841@lemmy.world 8 points 3 weeks ago (1 children)

For a very long time people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can't make it. If you ask it to write code to do something incredibly inefficiently, it's going to give you code that is incredibly inefficient.

[–] EmilyIsTrans@lemmy.blahaj.zone 41 points 3 weeks ago (3 children)

After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that it sometimes misunderstands you or, if you're writing UI code, makes very strange decisions (since it has no spatial/visual reasoning).

Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns that keep it maintainable. LLMs can't do that, only you can (and you can't skip learning to code to jump straight to architecture and patterns).
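
A tiny, hypothetical example of the kind of structural decision meant here: putting storage behind an interface so the rest of the program never depends on a concrete backend.

```python
from typing import Protocol

class Storage(Protocol):
    """The interface callers depend on; backends can be swapped freely."""
    def save(self, key: str, value: bytes) -> None: ...
    def load(self, key: str) -> bytes: ...

class InMemoryStorage:
    """One concrete backend; a disk- or S3-backed one could replace it without touching callers."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def save(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def load(self, key: str) -> bytes:
        return self._data[key]

def archive_report(report: bytes, store: Storage) -> None:
    # Business logic talks to the protocol, not to any particular implementation.
    store.save("latest-report", report)
```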

[–] jacksilver@lemmy.world 7 points 2 weeks ago* (last edited 2 weeks ago)

I think this is the best response in this thread.

Software engineering is a lot more than just writing some lines of code and requires more thought and planning than can be realistically put into a prompt.

[–] chknbwl@lemmy.world 5 points 3 weeks ago (1 children)
[–] adespoton@lemmy.ca 3 points 2 weeks ago

The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.
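
As an illustration of that sort of glue code (a hypothetical example, stitched entirely from standard-library pieces): tally the files under a directory by extension.

```python
from collections import Counter
from pathlib import Path

def count_extensions(root: str) -> Counter:
    """Count files under `root` by extension, e.g. Counter({'.py': 12, '.md': 3})."""
    # rglob('*') walks the whole tree; suffix is '' for files with no extension.
    return Counter(
        path.suffix.lower() or "(no extension)"
        for path in Path(root).rglob("*")
        if path.is_file()
    )

if __name__ == "__main__":
    for ext, count in count_extensions(".").most_common():
        print(f"{ext}: {count}")
```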

[–] MajorHavoc@programming.dev 30 points 3 weeks ago* (last edited 2 weeks ago) (4 children)

Great question.

is there any legit reason anyone should learn advanced coding techniques?

Don't buy the hype. LLMs can produce all kinds of useful things but they don't know anything at all.

No LLM has ever engineered anything. And there's ~~no~~ sparse (concession to a good point made in response) current evidence that any AI ever will.

Current learning models are like trained animals in a circus. They can learn to do any impressive thing you can imagine, by sheer rote repetition.

That means they can engineer a solution to any problem that has already been solved millions of times. As long as the work has very little new/novel value and requires no innovation whatsoever, learning models do great work.

Horses and LLMs that solve advanced algebra don't understand algebra at all. It's a clever trick.

Understanding the problem and understanding how to politely ask the computer to do the right thing has always been the core job of a computer programmer.

The bit about "politely asking the computer to do the right thing" makes massive strides in convenience every decade or so. Learning models are another such massive stride. This is great. Hooray!

The bit about "understanding the problem" isn't within the capabilities of any current learning model or AI, and there's no current evidence that it ever will be.

Someday they will call the job "prompt engineering" and on that day it will still be the same exact job it is today, just with different bullshit to wade through to get it done.

[–] chknbwl@lemmy.world 6 points 3 weeks ago

I appreciate your candor. I had a feeling it was cock and bull, but you've answered my question fully.

[–] muntedcrocodile@lemm.ee 24 points 2 weeks ago (3 children)

I worry for the future generations of people who can use chatgpt to write code but have absolutely no idea what said code is doing.

[–] SolOrion@sh.itjust.works 16 points 2 weeks ago (1 children)

That's some 40k shit.

"What does it mean?" "I do not know, but it appeases the machine spirit. Quickly, recite the canticles."

[–] finestnothing@lemmy.world 8 points 2 weeks ago (4 children)

My CTO thoroughly believes that within 4-6 years we will no longer need to know how to read or write code, just how to ask an AI to do it. Coincidentally, he also doesn't code anymore and hasn't for over 15 years.

[–] Nomecks@lemmy.ca 22 points 2 weeks ago (2 children)

I use it to write code, but I know how to write code and it probably turns a week of work for me into a day or two. It's cool, but not automagic.

[–] gravitas_deficiency@sh.itjust.works 18 points 2 weeks ago* (last edited 2 weeks ago)

LLMs are just computerized puppies that are really good at performing tricks for treats. They’ll still do incredibly stupid things pretty frequently.

I’m a software engineer, and I am not at all worried about my career in the long run.

In the short term… who fucking knows. The C-suite and MBA circlejerk seems to have decided they can fire all the engineers because wE CAn rEpLAcE tHeM WitH AI 🤡 and then the companies will have a couple absolutely catastrophic years because they got rid of all of their domain experts.

[–] recapitated@lemmy.world 16 points 2 weeks ago (2 children)

In my experience they do a decent job of whipping out mindless minutiae and things that are well-known patterns in very popular languages.

They do not solve problems.

I think for an "AI" product to be truly useful at writing code, it would need to incorporate the LLM as a mere component, with something facilitating checks through static analysis and maybe some other technologies, perhaps even running the result through a loop over those components until they're all satisfied, before finally delivering it to the user as a proposal.
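
For illustration, the loop being described might look something like this rough sketch; `llm_generate` and `run_static_checks` are placeholders for a model call and a lint/type-check/test step, not real APIs:

```python
def generate_with_checks(prompt: str, max_rounds: int = 5) -> str:
    """Ask the model for code, then feed static-analysis findings back to it
    until the checks pass or we give up."""
    code = llm_generate(prompt)             # placeholder: call your model of choice
    for _ in range(max_rounds):
        findings = run_static_checks(code)  # placeholder: e.g. linter + type checker + tests
        if not findings:
            return code                     # every component satisfied -> return as a proposal
        # Otherwise, hand the findings back to the model and try again.
        code = llm_generate(
            f"{prompt}\n\nRevise the code below to fix these findings:\n"
            f"{findings}\n\n{code}"
        )
    raise RuntimeError("no candidate passed the checks")
```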

[–] Croquette@sh.itjust.works 4 points 2 weeks ago (1 children)

It's a decent starting point for a new language. I had to learn webdev as an embedded C coder, and using an LLM and cross-referencing the official documentation makes a new language much more approachable.

[–] recapitated@lemmy.world 3 points 2 weeks ago

I agree, LLMs have been helpful in pointing me in the right direction and helping me rethink what questions I actually want to ask in disciplines I'm not very familiar with.

[–] slazer2au@lemmy.world 15 points 2 weeks ago (2 children)

No, because that would require it being trained on good code. Which is rather rare.

[–] barsquid@lemmy.world 4 points 2 weeks ago (1 children)

If it is trained on Stack Overflow there is no chance.

[–] nous@programming.dev 11 points 3 weeks ago* (last edited 3 weeks ago)

They can write good short bits of code. But they also often produce bad and even incorrect code. The vast majority of the time I find it more effort to read and debug their code than to just write it myself to begin with, so overall it just wastes more of my time.

Maybe in a couple of years they might be good enough. But it looks like their growth is starting to flatten off, so it's up for debate whether they will get there in that time.

[–] xmunk@sh.itjust.works 10 points 3 weeks ago

No, a large part of what "good code" means is correctness. LLMs cannot properly understand a problem, so while they can produce grunt code, they can't assemble a solution to a complex problem, and, IMO, it is impossible for them to overtake humans unless we get really lazy about code expressiveness. And, on that point, I think most companies are underinvesting in code infrastructure right now and developers are wasting too much time on unexpressive code.

The majority of work that senior developers do is understanding a problem and crafting a solution appropriate to it - when I'm working my typing speed usually isn't particularly high and the main bottleneck is my brain. LLMs will always require more brain time while delivering a savings on typing.

At the moment I'd also emphasize that they're excellent at popping out algorithms I could write in my sleep but require me to spend enough time double checking their code that it's cheaper for me to just write it by hand to begin with.

[–] TootSweet@lemmy.world 9 points 3 weeks ago (1 children)

A broken clock is right twice a day.

[–] bionicjoey@lemmy.ca 6 points 3 weeks ago (2 children)

This question is basically the same as asking "Are 2d6 capable of rolling a 9?"

[–] chknbwl@lemmy.world 7 points 3 weeks ago (3 children)

I have no knowledge of coding, my bad for asking a stupid question in NSQ.

[–] etchinghillside@reddthat.com 7 points 3 weeks ago (2 children)

Yes, two six-sided dice (2d6) are capable of rolling a sum of 9. Here are the possible combinations that would give a total of 9:

  • 3 + 6
  • 4 + 5
  • 5 + 4
  • 6 + 3

So, there are four different combinations that result in a roll of 9.

See? LLMs can do everything!
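
(For what it's worth, the enumeration above is easy to verify by brute force:)

```python
from itertools import product

# All 36 ordered outcomes of two six-sided dice, keeping those that sum to 9.
rolls = [(a, b) for a, b in product(range(1, 7), repeat=2) if a + b == 9]
print(rolls)               # [(3, 6), (4, 5), (5, 4), (6, 3)]
print(f"{len(rolls)}/36")  # probability of rolling a 9: 4/36, roughly 0.111
```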

[–] xmunk@sh.itjust.works 3 points 3 weeks ago (1 children)

Now ask it how many r's are in Strawberry!

[–] DeLacue@lemmy.world 6 points 2 weeks ago (4 children)

That all depends on where the data set comes from. The code you'll get out of an LLM is the average code of the data set. If it's scraped from the internet (which is very likely) the code you'll get will be an amalgam of concise examples from one website, incorrect examples from another, bits from blogs with all the typos and all the gunk and garbage that's out there.

Getting LLM code to work well takes an understanding of what the code it gives you actually does and why it's bad. It will always be bad, because it cannot be better than the dataset, and for a dataset to be big enough to train an LLM it has to include everything they can get, trash and all. But it can be good for providing you a framework to start with. It is, however, never going to replace actual programming and an understanding of programming. The talk of LLMs completely replacing programmers is mostly coming from people who do not understand coding or LLMs at all.

[–] orcrist@lemm.ee 6 points 2 weeks ago (1 children)

I think your wording is something to consider. If you want something that's written professionally, by definition it needs to be written by a professional. So that's clearly not what you're asking for, but that's what you wrote. And that kind of detail does matter, because LLMs are very good at getting part of the format correct and then messing up small details in random places, which makes them precisely useless on their own. But if you want to use them to produce templates that you're later going to modify, of course you can do that.

I'm not clear what you think an advanced coding technique would be. But if your system breaks and you don't understand it well enough to fix it, then I sure hope a competent programmer is on staff who can help you.

Finally, if you rely on automation to write your programs for you and somehow they magically seem to work most of the time, how do you know that they actually work all of the time? If they're giving you numbers, can you believe the numbers? When? Why? Who is guaranteeing you quality in product? Of course nobody is.

[–] Ookami38@sh.itjust.works 6 points 2 weeks ago

Of course it can. It can also spit out trash. AI, as it exists today, isn't meant to be autonomous, where you simply ask it for something and it spits out a finished product. It's meant to work with a human on a task. Assuming you have an understanding of what you're trying to do, an AI can probably provide you with a pretty decent starting point. It tends to be good at analyzing existing code as well, so pasting your code into GPT and asking it why it's doing a thing usually works pretty well.

AI is another tool. Professionals will get more use out of it than laymen. Professionals know enough to phrase requests that are within the scope of the AI. They tend to know how the language works, and thus can review what the AI outputs. A layman can use AI to great effect, but will run into problems as they start butting up against their own limited knowledge.

So yeah, I think AI can make some good code, supervised by a human who understands the code. As it exists now, AI requires human steering to be useful.

[–] GBU_28@lemm.ee 6 points 2 weeks ago

For basic boilerplate like routes for an API, an ETL script from sample data to DB tables, or other similar basics, yeah, it's perfectly acceptable. You'll need to swap out dummy addresses, and maybe change a choice or two, but it's fine.

But when you're trying to organize more complicated business logic or debug complicated dependencies, it falls over.
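
For illustration, "boilerplate like routes for an API" means something like the following made-up Flask sketch; the scaffolding comes quickly, but the business logic behind it is another story:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS: dict[int, dict] = {}  # stand-in for a real database table

@app.get("/items/<int:item_id>")
def get_item(item_id: int):
    item = ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify(error="not found"), 404)

@app.post("/items")
def create_item():
    payload = request.get_json(force=True)
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = payload
    return jsonify(id=item_id), 201
```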

[–] daniskarma@lemmy.dbzer0.com 6 points 2 weeks ago

For small boilerplate or very common small pieces of code, for instance a well-known algorithm implementation? Yes, as they are probably just giving you the top Stack Overflow answer to a classic question.

Anything that the LLM would need to mix or refactor would be terrible.
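
That is, the sort of textbook snippet a model reproduces reliably; binary search, for example (in real Python you'd just use the built-in `bisect` module):

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of `target` in the sorted list `items`, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1
```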

[–] Arbiter@lemmy.world 5 points 3 weeks ago (1 children)

No LLM is trustworthy.

Unless you understand the code and can double check what it’s doing I wouldn’t risk running it.

And if you do understand it any benefit of time saved is likely going to be offset by debugging and verifying what it actually does.

[–] FlorianSimon@sh.itjust.works 7 points 2 weeks ago

Since reviewing someone else's code is much harder than checking code you wrote yourself, relying on LLMs too heavily is just plain dangerous and a bad practice, especially if you're working with technologies that have lots of footguns (cf. C or C++). The amount of crazy and hard-to-detect bad things you can write in C++ is insane. You won't catch CVE material by just reading the output ChatGPT or Copilot spits out.

And there are lots of sectors, like aerospace and medical, where that untrustworthiness is completely unacceptable.

[–] saltesc@lemmy.world 5 points 3 weeks ago

In my experience, not at all. But sometimes they help with creativity when you hit a wall or challenge you can't resolve.

They have been trained on internet examples where everyone has a different style/method of coding, like writing style. It's all very messy and very unreliable. It will take years for LLMs to code "good", and it will require a lot of training that isn't just scraping.

[–] Septimaeus@infosec.pub 4 points 2 weeks ago* (last edited 2 weeks ago)

Theoretically, I would say yes it’s possible, insofar as we could break down most subtasks of the development process into training parameters. But we are a long way from that currently.

ETA: I suspect LLM’s best use-case in this hypothetical would not be in architecting or implementation, but rather limited to tasks with human interfaces (requirements gathering, project planning and logistics, test scaffolding, feedback collection/distribution, etc).

If the unironic goal is to develop things without any engineering oversight (mistake) then there’s no point to using programming languages at all. The machine might as well just output assembly or bin code.

What’s more likely in the short term are software LLMs generating partial solutions that human engineers then are asked to “finish” (fix) and maintain. The effort and hours required to do so will, at a guess, balloon terribly and will often be at best proportional to the resources saved by the use of the automatic spaghetti generator.

I eagerly await these post mortems.

[–] Red_October@lemmy.world 4 points 2 weeks ago

Technically it's possible, but it's not likely, and it's especially not effective. From what I understand, a lot of devs who do try to use something like ChatGPT to write code end up spending as much or more time debugging it, and just generally trying to get it to work, than they would have if they'd just written it themselves. Additionally, you have to know how to code to be able to figure out why it's not working, and even when all of that is done, it's almost impossible to get it to integrate with a larger project without just rewriting the whole thing anyway.

So to answer the question you intend to ask: no, LLMs will not be replacing programmers any time soon. They may serve as a tool of dubious value, but the idea that programmers will be replaced is only taken seriously by people who manage programmers, and not by the programmers themselves.

[–] Angry_Autist@lemmy.world 4 points 2 weeks ago

Yes, in small bits, after several tries, with human supervision. For now.

No in large amounts: it's too hard for humans to review, but they're still doing it anyway.

[–] JeeBaiChow@lemmy.world 4 points 2 weeks ago

Dunno. I'd expect to have to make several attempts to coax a working snippet from the AI, then spend the rest of the time trying to figure out what it's done and debugging the result. Faster to do it myself.

E.g. I once coded Tetris on a whim (45 min) and thought it'd be a good test for a UI/game developer, given the multidisciplinary nature of the game (user interaction, real-time engine, data structures, etc.). I asked Copilot to give it a shot, and while the basic framework was there, the code simply didn't work as intended. I figured that if we went into each of the elements separately, it would have taken me longer than if I'd done it from scratch anyway.

[–] PlzGivHugs@sh.itjust.works 4 points 3 weeks ago

AI can only really complete tasks that are both simple and routine. I'd compare the output skill to that of a late-first-year university student, but with the added risk of hallucination. Anything too unique or too complex tends to result in significant mistakes.

In terms of replacing programmers, I'd put it more in the ballpark of predictive text and/or autocorrect for a writer. It can help speed up the process a little bit and point out simple mistakes, but if you want to make a career out of it, you'll need to actually learn the skill.

[–] Rookeh@startrek.website 4 points 2 weeks ago

I've tried Copilot and to be honest, most of the time it's a coin toss, even for short snippets. In one scenario it might try to autocomplete a unit test I'm writing and get it pretty much spot on, but it's also equally likely to spit out complete garbage that won't even compile, never mind being semantically correct.

To have any chance of producing decent output, even for quite simple tasks, you will need to give an LLM an extremely specific prompt, detailing the precise behaviour you want and what the code should do in each scenario, including failure cases (hmm...there used to be a term for this...)
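
(The term being alluded to is, of course, a specification, which in practice usually ends up written as tests. A minimal, hypothetical pytest example of spelling out behaviour, failure cases included:)

```python
import pytest

from mymodule import parse_port  # hypothetical function under test

def test_returns_integer_port():
    assert parse_port("8080") == 8080

def test_rejects_out_of_range_port():
    with pytest.raises(ValueError):
        parse_port("70000")  # valid ports are 0-65535

def test_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        parse_port("eighty")
```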

Even then, there are no guarantees it won't just spit out hallucinated nonsense. And for larger, enterprise scale applications? Forget it.

[–] WraithGear@lemmy.world 3 points 3 weeks ago

It's the most ok'est coder, with the attention span of a 5-year-old.
