this post was submitted on 01 Mar 2024
39 points (89.8% liked)


so it's GANAM now (from GAFAM or GAMAM)

cholesterol@lemmy.world · 15 points · 6 months ago
conciselyverbose@sh.itjust.works · 6 points · 6 months ago

LLMs are a bubble.

But the uses of massively parallel math are still in their infancy: scientific compute, machine learning, all kinds of simulations. Nvidia has been setting itself up for all of it with CUDA for years. At least until we get better hardware options for physically replicating neurons (primarily how densely interconnected they are in a brain), GPUs, and CUDA specifically, are how most AI is going to happen. And as the hardware gets more powerful, running increasingly complex physics simulations of increasingly complex phenomena becomes more and more feasible. Right now it's stuff like protein folding, fluid dynamics, whatever. But there's way more coming. And all of it is going to use GPUs.
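
For anyone who hasn't touched CUDA: here's a minimal sketch of the "massively parallel math" pattern all of those workloads reduce to, one thread per element. This is my own toy SAXPY illustration, not anything from the comment; the kernel name, block size, and problem size are arbitrary choices for the example.

```
// Toy SAXPY: y = a*x + y over n floats, one GPU thread per element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // each thread does one element
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The point isn't the arithmetic, it's the shape: a million independent little computations launched at once. Swap the kernel body for a physics update step or a matrix tile and you've got the same structure those simulation and ML workloads are built on.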