[–] Voroxpete@sh.itjust.works 46 points 4 days ago (1 children)

Translation: "We told everyone we could turn glorified autocomplete into artificial general intelligence and then they gave us a bunch of money for that, so now we actually have to try to deliver something and we've got no idea how."

[–] random_character_a@lemmy.world 14 points 4 days ago (1 children)

How about giving billions to those guys simulating the brains of small worms and fruit flies, so we can have a very slow "brain in a bottle" that will be equally useless?

[–] Voroxpete@sh.itjust.works 18 points 4 days ago (2 children)

You know what? Sure, fuck it, why not? I don't even have a problem with OpenAI getting billions of dollars to do R&D on LLMs. They might actually turn out to have some practical applications.

My problem is that OpenAI basically stopped doing real R&D the moment ChatGPT became a product, because now all their money goes into their ridiculous backend server costs and putting increasingly silly layers of lipstick on a pig so that they can get one more round of investment funding.

AI is a really important area of technology to study, and I'm all in favour of giving money to the people actually studying it. But that sure as shit ain't Sam Altman and his band of carnival barkers.

[–] Sergio@slrpnk.net 4 points 4 days ago (1 children)

I mean this respectfully. The character Everett True is known as someone who tells the truth even when it's not popular.

[–] Voroxpete@sh.itjust.works 6 points 4 days ago

Being compared to Everett True is the greatest compliment I have ever been given, and an honour of which I am in no way worthy.

[–] lemmeBe@sh.itjust.works 3 points 4 days ago

Carnival barkers 🤣

[–] lvxferre@mander.xyz 17 points 4 days ago

Predictable outcome for anyone not wallowing in wishful belief.

[–] simple@lemm.ee 14 points 4 days ago (2 children)

We've known this for a while. LLMs are a dead end. Lots of companies have tried throwing more data at them, but it's becoming clear that the differences between one model and the next are getting too small to notice, and none of them fix the major underlying issue: chat models keep spreading BS because they can't differentiate between right and wrong.

[–] tee9000@lemmy.world 4 points 4 days ago

So an infant technology is showing a glimmer of maturation?

[–] CosmoNova@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

And the thing is, the architecture of LLMs was already a huge breakthrough in the field. Now these companies are basically trying to come up with another one by (and that's just my guess) throwing tons of cash at it and hoping for the best. I think that's like trying to come up with a building material that outperforms reinforced concrete in every aspect. Just because it was discovered by some guy doesn't mean multi-billion-dollar companies can force something better with all the money in the world.

[–] brucethemoose@lemmy.world 7 points 4 days ago* (last edited 4 days ago) (1 children)

Yeah, well Alibaba has nearly matched (and sometimes beaten) GPT-4 with a comparatively microscopic model you can run on a desktop. And they released a whole series of them. For free! With a tiny fraction of the GPUs any of the American trainers have.

Bigger is not better, but OpenAI has also just lost their creative edge, and all Altman's talk about scaling up training with trillions of dollars is a massive con.

o1 is kind of a joke; CoT and reflection strategies have been known for a while. You can do it yourself for free, to an extent, and some models have tried to finetune this in: https://github.com/codelion/optillm
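To illustrate what I mean by "do it yourself" (this is just a rough sketch, not optillm's actual implementation; the base URL, model name, and prompts are placeholder assumptions), a DIY two-pass reflection loop over any OpenAI-compatible endpoint looks something like this:

```python
# Rough sketch of a DIY "reflection" strategy: ask for step-by-step
# reasoning, then ask the model to critique and revise its own draft.
# Works against any OpenAI-compatible server (llama.cpp, vLLM, Ollama...).
# The base_url, model name, and prompts are placeholder assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

def ask(messages, model="local-model"):
    resp = client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Pass 1: chain-of-thought draft.
draft = ask([
    {"role": "system", "content": "Reason step by step before giving a final answer."},
    {"role": "user", "content": question},
])

# Pass 2: reflection, i.e. have the model check its own work.
final = ask([
    {"role": "user", "content": question},
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Review your reasoning above for errors, then give a corrected final answer."},
])
print(final)
```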

But one sad thing OpenAI has seemingly accomplished is "salting" the open LLM space. There's way less hacky experimentation going on than there used to be, which makes me sad, as many of its "old" innovations still run circles around OpenAI.

[–] A_A@lemmy.world 3 points 4 days ago (2 children)

"Alibaba (LLM)"... is it this?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/

[–] brucethemoose@lemmy.world 2 points 4 days ago* (last edited 4 days ago) (1 children)

BTW, as I wrote that post, Qwen 32B coder came out.

Now a single 3090 can beat GPT-4o, and do it way faster! In coding, specifically.

[–] A_A@lemmy.world 2 points 4 days ago

Great news 😁🥂, someone should make a new post on this!

[–] brucethemoose@lemmy.world 2 points 4 days ago

Yep.

32B fits on a "consumer" 3090, and I use it every day.

72B will fit neatly on 2025 APUs, though we may have an even better update by then.

I've been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
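If anyone wants to try it, here's a rough sketch of the kind of setup I mean, using llama-cpp-python (the model file name and settings are illustrative assumptions; grab whatever quant actually fits your card):

```python
# Rough sketch: chat with a ~4-bit quantized 32B GGUF on a single 24 GB
# GPU via llama-cpp-python. File name and settings are illustrative
# assumptions, not a specific recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-32B-Instruct-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=8192,       # context window size
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that reverses a linked list in Python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```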