this post was submitted on 22 Dec 2024
520 points (95.9% liked)

Technology

[–] Zementid@feddit.nl 6 points 2 days ago* (last edited 2 days ago) (1 children)

I really like the idea of an LLM being narrowly configured to filter and summarize data that comes in an irregular/organic form.

You would have to run it multiple times in parallel with different models and slightly different configurations to reduce hallucinations (similar to sensor redundancy in industrial safety systems), but still... that alone is a game changer in "parsing the real world". The problem is that the energy needed to do this right (>= 3x) gets cut by dropping the safety and redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.
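A minimal sketch of what that kind of redundancy could look like, assuming each model/configuration is wrapped in a caller-supplied function (the `redundant_summarize` helper, the `callers` list and the string-similarity agreement check are all illustrative, not any particular vendor's API):

```python
from difflib import SequenceMatcher
from itertools import combinations
from typing import Callable


def redundant_summarize(
    text: str,
    callers: list[Callable[[str], str]],  # one callable per model/configuration
    min_agreement: float = 0.8,
) -> tuple[str, bool]:
    """Run the same summarization prompt through several independently
    configured models and check whether their outputs roughly agree.

    Returns (summary, trusted): the first model's summary plus a flag that
    is False when the runs diverge enough that a human should look at it.
    """
    prompt = f"Summarize the following status update in two sentences:\n\n{text}"
    summaries = [call(prompt) for call in callers]

    # Crude agreement check: pairwise similarity of the raw outputs.
    # A real system would compare extracted facts, not surface strings.
    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    agreements = [similarity(a, b) for a, b in combinations(summaries, 2)]
    trusted = bool(agreements) and min(agreements) >= min_agreement

    return summaries[0], trusted
```

Run it with three differently configured callers and route anything not trusted to a human, and you get roughly the ">= 3x" cost mentioned above.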

They poison their own well because they jump directly to the enshittification stage.

So, people talking about embedding it into workflows... hi... here I am! =D

[–] AA5B@lemmy.world 3 points 2 days ago (1 children)

A buddy of mine has been doing this for months. As a manager, his first use case was summarizing the statuses of his team members into a team status. Arguably, hallucinations aren’t critical there.

[–] sudneo@lemm.ee 4 points 2 days ago

I would argue that this makes the process microscopically more efficient and macroscopically way less efficient. The whole process is probably useless to begin with, and imagine wasting so much energy, water and computing power just to speed this useless process up and save a handful of minutes (I am a lead and it takes me 2-3 minutes to put together a status for my team, and I don't usually even request a status from each member).

I keep saying this to everyone in my company who pushes for LLMs for administrative tasks: if you feel like an LLM can do a task, we should stop doing that task at all, because it means we are just going through the motions to please a process without purpose. You will have people producing reports via LLM from a one-line prompt, the manager assembling them with an LLM, and at best someone reading the result by distilling it once again with an LLM. It is all a great waste of money, energy, time and cognitive effort that doesn't benefit anybody.

As soon as someone proposes to introduce LLMs into a process, counter by proposing to cut that process altogether. Let's produce less bullshit, instead of producing more while polluting even more in the process.