this post was submitted on 11 Oct 2023
477 points (92.5% liked)

Technology

you are viewing a single comment's thread
[–] DocRekd@lemm.ee 3 points 1 year ago (1 children)

Nowadays, LLMs can be run on consumer hardware, so the "dead battery" analogy falls short here too.

[–] FLX@lemmy.world 2 points 1 year ago (2 children)

With the same efficiency? I'm interested in an example.

Why is everyone using these crappy SaaS offerings, then?

[–] AdrianTheFrog@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Llama 2 and its derivatives, mostly. A simple local UI is available here.

Not as good as ChatGPT 3.5 in my experience. It just falls apart on anything too complex, and it's a lot more likely to get things wrong.

I tried it out using the 'Open-Orca/OpenOrcaxOpenChat-Preview2-13B' 4-bit 32g model. It's surprisingly fast to generate; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).
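For scale, a 4-bit 13B model fitting on a 12 GB card checks out with back-of-envelope arithmetic. This is only a rough sketch of the weight footprint: it ignores the extra scales and zero points the 32-group quantization stores, and the KV cache and activations on top.

```python
# Rough VRAM estimate for the weights of a 4-bit quantized 13B model.
params = 13e9          # 13 billion parameters
bits_per_weight = 4    # 4-bit quantization

weight_bytes = params * bits_per_weight / 8
print(f"~{weight_bytes / 2**30:.1f} GiB for weights alone")
```

That lands around 6 GiB for the weights, which is why a 13B 4-bit model is workable on a 12 GB RTX 3060 while the full fp16 weights (~24 GiB) are not.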

There are also some models tuned specifically to actually answer your requests instead of the 'As an AI language model' kind of stuff.

Edit: just tried a newer model (dolphin-2.1-mistral-7b) and it's a lot better.
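If you try dolphin-2.1-mistral-7b locally, note that the Dolphin models are trained on ChatML-formatted prompts, so responses degrade if you feed them raw text. A minimal sketch of building that template (the helper name here is just illustrative, and you'd pass the result to whatever local backend you're running):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt and leave the assistant turn open
    so the model completes it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are a helpful assistant.",
    "Explain 4-bit quantization in one sentence.",
)
print(prompt)
```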

[–] DocRekd@lemm.ee 1 points 1 year ago

For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.