this post was submitted on 17 Mar 2024
132 points (94.0% liked)

[–] Rentlar@lemmy.ca 39 points 6 months ago (3 children)

"Replacing Talent" is not what AI is meant for, yet, it seems to be every penny-pinching, bean counting studio's long term goal with it.

[–] darthsid@lemmy.world 15 points 6 months ago (6 children)

Yep, AI at best can supplement talent, not replace it.

[–] 9488fcea02a9@sh.itjust.works 19 points 6 months ago (3 children)

I'm not a developer, but I use AI tools at work (mostly LLMs).

You need to treat AI like a junior intern... You give it a task, but you still need to check the output and use critical thinking. You can't just take some work from an intern, blindly incorporate it into your presentation, and then blame the intern if the work is shoddy...

AI should be a time saver for certain tasks. It cannot (currently) replace a good worker.
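
To make the intern analogy concrete: keep a human approval step between the model's draft and anything that ships. A minimal sketch of that review gate in plain Python (no AI library assumed; the `incorporate` function and the sample draft are made up for illustration):

    # Human-in-the-loop gate: nothing AI-generated lands in the final
    # document without an explicit sign-off from a person.
    def incorporate(draft: str, destination: list[str]) -> bool:
        """Show the AI draft to a human; append it only if approved."""
        print("--- AI draft ---")
        print(draft)
        verdict = input("Accept this draft? [y/N] ").strip().lower()
        if verdict == "y":
            destination.append(draft)
            return True
        return False  # rejected: rework the prompt or write it yourself

    report_sections: list[str] = []
    incorporate("Q3 summary: revenue grew 4% quarter over quarter.", report_sections)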

[–] Lmaydev@programming.dev 10 points 6 months ago* (last edited 6 months ago) (1 children)

As a developer I use it mainly for learning.

What used to be a Google search followed by skimming a few articles or docs pages is now a single question.

It pulls the specific info I need, cites its sources, and allows follow-up questions.

I've noticed the new juniors can get up to speed on new tech very quickly nowadays.

As for code, I don't trust it beyond snippets I can use as a base.
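
That question-asking workflow is only a few lines if you script it. A minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the model name and the example question are just illustrative:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # One focused question instead of a search plus skimming several pages.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Answer concisely and say which doc section you relied on."},
            {"role": "user",
             "content": "In Python's asyncio, when should I use gather() vs TaskGroup?"},
        ],
    )
    print(response.choices[0].message.content)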

[–] FiniteBanjo@lemmy.today 0 points 6 months ago* (last edited 6 months ago) (1 children)

JFC, they've certainly got the unethical shills out in full force today. Language models do not and never will amount to proper human work. They're almost always a net negative everywhere they're used, final products considered.

[–] Lmaydev@programming.dev 1 points 6 months ago (1 children)
[–] FiniteBanjo@lemmy.today 1 points 6 months ago (1 children)

Its intended use is to replace human work in exchange for lower accuracy. There is no ethical use case.

[–] Lmaydev@programming.dev 1 points 6 months ago (1 children)

It's intended to showcase its ability to generate text. How people use it is up to them.

As I said, it's great for learning, since it's very accurate when summarising articles and docs. It even cites its sources so you can read up more if needed.

[–] FiniteBanjo@lemmy.today 0 points 6 months ago (1 children)

It's been known to claim commands and documentation exist when they don't. It very commonly gets simple addition wrong.

[–] Lmaydev@programming.dev 1 points 6 months ago (1 children)

That's because it's a language processor, not a calculator. As I said, you're using it wrong.

[–] FiniteBanjo@lemmy.today 1 points 6 months ago (1 children)

So the correct usage is to have documents incorrectly explained to you? I fail to see how that does any good.

[–] Lmaydev@programming.dev 1 points 6 months ago

I know you do buddy.

[–] Gradually_Adjusting@lemmy.ca 5 points 6 months ago (1 children)

It's clutch for boring emails with several tedious document summaries. Sometimes I get a day's work done in 4 hours.

Automation can be great, when it comes from the bottom-up.

[–] isles@lemmy.world 2 points 5 months ago

Honestly, that's been my favorite - bringing in automation tech to help me in low-tech industries (almost all corporate-type office jobs). When I started my current role, I was working consistently 50 hours a week. I slowly automated almost all the processes and now usually work about 2-3 hours a day with the same outputs. The trick is to not increase outputs or that becomes the new baseline expectation.

[–] fidodo@lemmy.world 1 points 6 months ago

I am a developer and that's exactly how I see it too. I think AI will be able to write PRs for simple stories, but it will need a human to review them and either give approval, give feedback for it to fix things, or manually intervene to tweak the output.

[–] Rentlar@lemmy.ca 12 points 6 months ago (1 children)

I do think, given time, AI can improve to the level where it can do nearly all of the same things junior-level people in many different sectors can.

The problem I foresee, and the unfortunate thing for companies, is that it can't turn juniors into seniors if the AI "replaces" the juniors. That means a company will run out of seniors as they retire, or will have to pay piles and piles of cash just to hire the few people left with industry knowledge to babysit the AIs.

[–] Pyr_Pressure@lemmy.ca 10 points 6 months ago

It's very short-sighted, but capitalism doesn't reward long-term thinking.

[–] assassinatedbyCIA@lemmy.world 4 points 6 months ago

The problem is that the crazy valuations of AI companies are based on AI replacing talent, and soon. Supplementing talent is far less exciting and far less profitable.

[–] altima_neo@lemmy.zip 1 points 6 months ago

Not even that; it's a tool, the same way Photoshop or 3ds Max are tools. You still need the talent to use the tools.

[–] Thorny_Insight@lemm.ee 0 points 6 months ago (1 children)

Current AI*

I don't see any reason to expect this to be the case indefinitely. It has been getting better all the time, and lately at quite a rapid pace. In my view it's just a matter of time until it surpasses human capabilities. It can already do so in specific narrow fields. Once we reach AGI, all bets are off.

[–] thundermoose@lemmy.world 4 points 6 months ago (2 children)

Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there's a ton of hype around them. They're also easy to build tools around, so everyone in tech is trying to get their piece of AI now.

However, LLMs are chat interfaces for searching a large dataset, and that's about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it's pattern-matching.

A lot of people will say that's intelligence, but it's different; the LLM isn't capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don't make the LLM smarter.
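
You can see the shape of that best-fit lookup in a toy model. A deliberately tiny Python sketch (a bigram counter, nowhere near a real transformer, but the learn-counts-then-argmax pattern is the point):

    from collections import Counter, defaultdict

    # Toy bigram "language model": learn next-token counts from a corpus,
    # then answer a prompt by repeatedly emitting the best-fit next token.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    counts: defaultdict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(token: str, length: int = 5) -> list[str]:
        out = [token]
        for _ in range(length):
            if token not in counts:
                break  # never seen this token: nothing to pattern-match
            token = counts[token].most_common(1)[0][0]  # argmax = best fit
            out.append(token)
        return out

    print(generate("the"))  # ['the', 'cat', 'sat', 'on', 'the', 'cat']

Everything it can ever say is a recombination of what was in the corpus; scaling the counts up refines the matches but doesn't change that.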

AGI needs something new; we aren't going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.

[–] KeenFlame@feddit.nu 0 points 6 months ago

How does this amazing prediction-engine discovery, which basically works like our brain, not fit into a larger solution?

The way emergent world simulation can be found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.

Never mind that tools like MemGPT don't fully satisfy long-term memory and context windows don't properly satisfy attention functions; I need a much harder sell on LLM technology not proving to be an important piece of AGI.

[–] Thorny_Insight@lemm.ee 0 points 6 months ago

Yeah, LLMs might very well be a dead end when it comes to AGI, but just like ChatGPT seemingly came out of nowhere and took the world by surprise, the same might just as well be the case with an actual AGI. My comment doesn't really make any claims about the timescale; it just tries to point out the inevitability of it.

[–] Defaced@lemmy.world 0 points 6 months ago (2 children)

https://www.cognition-labs.com/introducing-devin There are people out there deliberately working to make that vision a reality. Replacing software engineers is the entire point of Devin AI.

[–] time_fo_that@lemmy.world 2 points 6 months ago (1 children)

I saw this the other day and I'm like, well fuck, might as well go to trade school before it gets saturated like what happened with tech in the last couple of years.

[–] Defaced@lemmy.world 2 points 6 months ago

Yeah, the sad thing about Devin AI is that they're clearly doing it for the money; they have absolutely no intention of bettering humanity, they just want to build this up and sell it off for that fat entrepreneur paycheck. If they really cared about bettering humanity they would open it up to everyone, but they're only accepting inquiries from businesses.

[–] brbposting@sh.itjust.works 2 points 6 months ago

One single comment when I posted this on the technology community:

[–] gravitas_deficiency@sh.itjust.works 11 points 6 months ago* (last edited 6 months ago) (1 children)
sed "s/studio's/tech industry c-suite's/"

Speaking as an engineer: the number of non-engineering idiots in tech corporate leadership trying to apply an inappropriate technical solution to something because it became a buzzword is just absurdly high.

[–] Ragnarok314159@sopuli.xyz 1 points 6 months ago

Just make the modulus of elasticity more agile. Problem solved!

[–] ZILtoid1991@lemmy.world 4 points 6 months ago (1 children)

But that's pretty much why AI is developed.

[–] KeenFlame@feddit.nu 0 points 6 months ago (1 children)

It was more like a scientific discovery.

[–] FiniteBanjo@lemmy.today 4 points 6 months ago* (last edited 6 months ago)

Not really, no. All of the current models built at their intended scale are being sold as products, especially by OpenAI, Microsoft, and Google. It was built with a purpose, and that purpose was to potentially replace expensive human assets.