Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask via DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
You should see 52% of the first version of my code.
It doesn’t have to be right to be useful.
Yeah, but the non-tech-savvy business leaders see they can generate code with AI and think 'why do I need a developer if I have this AI?', with no idea whether the code it produces is right or not. This stat should be shared broadly so leaders don't overestimate the capability and fire people they will desperately need.
I say let it happen. If someone is dumb enough to fire all their workers... they deserve what happens next.
Well, the firing's happening, so I guess let's hope you're right about the other part.
It won't happen like that. Leadership will just under-hire and expect all their developers to be way more efficient. Work will be really stressful, with tighter deadlines and people questioning why you couldn't meet them.
Yeah, management are all for this. The next few years here are going to be rough, with them immediately hitting the "fire the engineers, we have AI now" button. They won't realize their fuckup until they've been promoted away from it.
Mentioned it before but:
LLMs program at the level of a junior engineer or an intern. You already need code review and more senior engineers to fix that shit for them.
What LLMs do is shift that down a level. Now that junior engineer has an intern they are trying to manage. Or... companies realize they don't benefit from training up those newbie (or stupid) engineers when they are likely to leave in a year or two anyway.
Programming jobs will be safe for a while. They've been trying to eliminate those positions since at least the 90s. Because coders are expensive and often lack social skills.
But I do think the clock is ticking. We will see more and more sophisticated AI tools that are relatively idiot-proof and can do things like modify Salesforce, or create complex new Tableau reports with a few mouse clicks, and stuff like that. Jobs will be chiseled away like our unfortunate friends in graphic design.
You, along with most people, are still looking at automation wrong. It's never been about removing people entirely, even with AI; it's about doing the same work at lower cost.
If you can eliminate one programmer from your four-person team by giving the other three AI to produce the same amount of work, congrats, you've just automated one programming job.
Programming jobs aren't going anywhere, but either the amount of code produced is about to skyrocket, or the number of employed programmers is going to drop (or most likely both of those things).
I wonder if this will also have a reverse tail end effect.
Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.
Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while bending it to all-new requirements: now that's hard.
AI will help with that too, it's going to be able to process entire codebases at a time pretty shortly here.
Given the visual capabilities now emerging, it can likely also do human-equivalent testing.
One of the biggest AI tricks we haven't seen much of in mainstream use yet is this kind of automated double-checking, where it generates an answer and then validates it before actually giving it to a human. Especially with code, there really isn't anything stopping it from generating an answer, compiling it, running into an error, regenerating, and repeating until the code passes all the unit tests or even a visual inspection.
The big limits on this right now are sheer processing cost and the context lengths of the models. However, costs here are dropping faster than for any new tech we've seen, and it will likely be trivial in just a few years.
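To make it concrete, here's a minimal sketch of that generate-and-validate loop. Everything here is hypothetical scaffolding: ask_llm stands in for whatever model API you use, and the CANDIDATE environment variable is an assumed hook your test file reads to find the generated module.

```python
import os
import subprocess
import tempfile
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Stand-in for whatever model API you use; not a real library."""
    raise NotImplementedError

def generate_until_tests_pass(prompt: str, test_file: str, max_attempts: int = 5):
    feedback = ""
    for _ in range(max_attempts):
        code = ask_llm(prompt + feedback)
        candidate = Path(tempfile.mkdtemp()) / "candidate.py"
        candidate.write_text(code)
        # Don't trust the first answer: run the test suite against the
        # generated module and feed any failures back into the next prompt.
        result = subprocess.run(
            ["python", "-m", "pytest", test_file],
            env={**os.environ, "CANDIDATE": str(candidate)},  # hypothetical hook
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return code  # all tests green; now it's worth showing a human
        feedback = "\n\nYour last attempt failed these tests:\n" + result.stdout
    return None  # cap the retries; processing cost is the real limit today
```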
Right on. AI feels like a looming paradigm shift in our field that we can either scoff at for its flaws or start learning how to exploit for our benefit. As long as it ends up boosting productivity it's probably something we're going to have to learn to work with for job security.
It's already boosting productivity in many roles. That's just going to accelerate as the models get better, the processing gets cheaper, and (as you said) people learn to use it better.
There are some areas I'm hoping get addressed by the coming skyrocket in programmer productivity:
I think that last one stands to be strongly enabled by AI code assist tools. It might not be the sexiest or highest paying job, but it'll be work that matters that largely isn't even being done today.
And they'll find out very soon that they need devs when they actually try to test something and nothing works.
Generally you want the reference material used to improve that first version to be correct, though. Otherwise it's just swapping one problem for another.
I wouldn’t use a textbook that was 52% incorrect, the same should apply to a chatbot.
Bad take. Is the first version of your code the one that you deliver or push upstream?
LLMs can give great starting points. I use multiple LLMs, each for different reasons: usually to clean up something I wrote (when I'm too lazy or too busy/stressed to do it manually), to find a problem with the logic, or to brainstorm ideas.
I rarely ever use it to generate blocks of code outright, like asking it for "a method that takes X inputs, does Y operations, and returns Z value". I find those kinds of results are often vastly wrong, or just done in a way that doesn't fit with the rest of what I'm doing.
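For what it's worth, the "review what I wrote" pattern is just a prompt away. A rough sketch using the OpenAI Python SDK; the model name is only an example, so swap in whichever model or provider you actually use:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def review_code(source: str) -> str:
    """Ask the model to critique existing code rather than write new code."""
    response = client.chat.completions.create(
        model="gpt-4o",  # example model; use whatever you prefer
        messages=[
            {"role": "system",
             "content": "You are a careful code reviewer. Point out logic "
                        "errors and suggest cleanups; do not rewrite everything."},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content
```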
I'm impressed some folks think LLMs are useless. Not sure if their lives/workflows/brains are that different from ours, or if they just haven't given it the old college try.
I almost always have to use my head before a language model’s output is useful for a given purpose. The tool almost always saves me time, improves the end result, or both. Usually both, I would say.
It’s a very dangerous technology that is known to output utter garbage and make enormous mistakes. Still, it routinely blows my mind.