I was experimenting with oobabooga, trying to run this model, but due to its size it wasn't going to fit in RAM, so I tried to quantize it with llama.cpp. That worked, but because of the GGUF format it was only running on the CPU. Searching for ways to quantize the model while keeping it in safetensors turned up nothing, so is there any way to do that?
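(Edit: for anyone landing here with the same question, one approach that seems to fit is on-the-fly 4-bit quantization with bitsandbytes through Hugging Face transformers: the safetensors checkpoint on disk stays as-is, and the weights are quantized as they're loaded onto the GPU. A minimal sketch, assuming an NVIDIA card since bitsandbytes mainly targets CUDA; the model id is a placeholder:)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder model id -- substitute the actual model you want to run.
model_id = "some-org/some-model"

# Quantize the weights to 4-bit as they are loaded onto the GPU;
# the safetensors files on disk are left untouched.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on whatever GPUs are available
)

prompt = "Explain quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

(Recent transformers versions can apparently also save_pretrained the 4-bit model back out as safetensors, though I haven't verified that myself.)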

I'm sorry if this is a stupid question; I still know almost nothing about this field.

brokenlcd@feddit.it | 1 point | 8 hours ago

I think I may try this approach if kobold uses Vulkan instead of ROCm; it's most likely going to be way less of a headache.

As for the model, it's just what came out of a random search on Reddit for a decent small model. No reason in particular; thanks for the suggestion.