this post was submitted on 14 Feb 2024
566 points (95.2% liked)

Technology
[–] venoft@lemmy.world 6 points 9 months ago* (last edited 9 months ago) (6 children)

What if you don't have a decent graphics card? Wait 5 minutes for your URL completion to finish?

[–] gentooer@programming.dev -5 points 9 months ago (4 children)

Using an LLM is quite fast, especially if it's optimised to run on normal hardware.

[–] cley_faye@lemmy.world 3 points 9 months ago (3 children)

Decent models are huge; an average one needs about 8 GB kept in memory (better models require something like 40 to 70 GB), and most currently available engines are extremely slow on a CPU and need dedicated hardware (even a relatively powerful GPU takes a few seconds of "thinking" time). It is unlikely that these requirements can easily be squeezed into current computers; more likely, dedicated hardware will be required.

[–] __matthew__@lemmy.world 1 points 9 months ago

Sorry, but has anyone in this thread actually tried running local LLMs on a CPU? You can easily run a 7B model at varying levels of quantization (e.g. 5-bit quantization) and get a generalized, promptable LLM. Yeah, of course it's going to take ~4 GB of RAM (which is mem-mapped and paged into memory), but you can also fine-tune smaller, more specific models (like the translation one mentioned above) and get surprising intelligence at a fraction of the resources.

Take, for example, phi-2, which performs as well as 13B-param models with only 2.7B params. Yeah, that's still going to take ~1.5 GB of RAM, which Firefox wouldn't reasonably ship, but many lighter-weight specialized tasks could easily use something like a fine-tuned 0.3B model with quantization.
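
If you want to try it yourself, here's a minimal sketch using llama-cpp-python on CPU; the model path, thread count, and prompt are placeholders (it assumes you've already downloaded a quantized GGUF file, e.g. a 5-bit build of a 7B model):

```python
# Minimal sketch: run a quantized GGUF model on CPU with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF file;
# the filename below is a placeholder, not a specific recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-7b-instruct.Q5_K_M.gguf",  # ~5 GB on disk, mmap'd into RAM
    n_ctx=2048,     # context window
    n_threads=8,    # CPU threads; tune to your machine
)

out = llm(
    "Translate to French: The quick brown fox jumps over the lazy dog.",
    max_tokens=64,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```

On a recent laptop CPU something like this typically generates a few tokens per second for a 7B model; a smaller fine-tuned model would be correspondingly faster and lighter.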
