this post was submitted on 12 Jun 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago

Let's talk about our experiences working with different models, either known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

[–] Kerfuffle@sh.itjust.works 2 points 1 year ago (1 children)

guanaco-65B is my favorite. It's pretty hard to go back to 33B models after you've tried a 65B.

It's slow and requires a lot of resources to run though. Also, not like there are a lot of 65B model choices.

[–] planish@sh.itjust.works 2 points 1 year ago* (last edited 1 year ago) (1 children)

What do you even run a 65b model on?

[–] Kerfuffle@sh.itjust.works 4 points 1 year ago (1 children)

With a quantized GGML version you can just run it on CPU if you have 64GB RAM. It is fairly slow though; I get about 800ms/token on a 5900X. Basically you start it generating something and come back in 30 minutes or so. Can't really carry on a conversation.
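For a rough sense of what 800 ms/token means in practice, here's a quick back-of-the-envelope calculation (the numbers are just illustrative, based on the timing above):

```python
# Rough throughput math for CPU inference at ~800 ms/token.
MS_PER_TOKEN = 800

# 1000 ms per second divided by ms per token gives tokens per second.
tokens_per_second = 1000 / MS_PER_TOKEN

# Tokens you'd get if you walked away for 30 minutes.
tokens_in_30_min = (30 * 60 * 1000) // MS_PER_TOKEN

print(f"{tokens_per_second:.2f} tokens/s, ~{tokens_in_30_min} tokens in 30 minutes")
# → 1.25 tokens/s, ~2250 tokens in 30 minutes
```

So a half-hour run gets you a couple of thousand tokens, which fits the "come back later" workflow but rules out interactive chat.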

[–] planish@sh.itjust.works 3 points 1 year ago (1 children)

Is it smart enough to pick up the thread of what you're looking for without as much rerolling or handholding, so the output comes out better?

[–] Kerfuffle@sh.itjust.works 2 points 1 year ago (1 children)

That's the impression I got from playing with it. I don't really use LLMs for anything practical, so I haven't done anything too serious with it. Here are a couple of examples of having it write fiction: https://gist.github.com/KerfuffleV2/4ead8be7204c4b0911c3f3183e8a320c

I also tried with plain old llama-65B: https://gist.github.com/KerfuffleV2/46689e097d8b8a6b3a5d6ffc39ce7acd

You can see it makes some weird mistakes (although the writing style itself is quite good).

If you want to give me a prompt, I can feed it to guanaco-65B and show you the result.

[–] planish@sh.itjust.works 1 points 1 year ago (1 children)

These are, indeed, pretty good, and quite coherent.

[–] Kerfuffle@sh.itjust.works 2 points 1 year ago

I was pretty impressed by guanaco-65B, especially how it was able to remain coherent even well past the context limit (with llama.cpp's context wrapping). You can see the second story is definitely longer than 2,048 tokens.
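For readers unfamiliar with it: llama.cpp's context wrapping works roughly like a sliding window. When generation fills the context, it keeps a fixed number of tokens from the start of the prompt, discards the oldest of the rest, and carries the most recent half of the window forward. A hypothetical sketch of that idea (the function name and parameters are mine, not llama.cpp's actual API):

```python
def wrap_context(tokens, n_ctx, n_keep):
    """Sliding-window context wrap, in the spirit of llama.cpp:
    keep the first n_keep tokens (e.g. the system prompt), drop the
    oldest of the rest, and carry forward the most recent half of
    the remaining window so generation can continue coherently."""
    if len(tokens) < n_ctx:
        return tokens  # still under the limit; nothing to do
    n_left = n_ctx - n_keep
    # First n_keep tokens, plus the most recent n_left // 2 tokens.
    return tokens[:n_keep] + tokens[len(tokens) - n_left // 2:]

# Pretend we've generated a full 2048-token context.
history = list(range(2048))
history = wrap_context(history, n_ctx=2048, n_keep=128)
print(len(history))  # 128 + (2048 - 128) // 2 = 1088
```

Because half the window is freed on each wrap, the model keeps enough recent text to stay on-topic, which is why a story can run well past 2,048 tokens without falling apart.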