I believe ExLlama and vLLM offer quantization. But llama.cpp should be able to run on a graphics card as well; maybe the default settings are wrong for your machine. Or you have an AMD card and need a different build of llama.cpp (e.g. one compiled with ROCm support)?
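For example, with the llama-cpp-python bindings, GPU offload is just a matter of setting `n_gpu_layers`. A minimal sketch, assuming you have a GGUF file and a GPU-enabled build installed (the model path here is a made-up placeholder):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes the wheel was built with GPU support (CUDA, or ROCm/HIP on AMD cards).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-Q4_K_M.gguf",  # hypothetical path; point at any GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU; lower this if you run out of VRAM
)

out = llm("Q: What is quantization? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If layers aren't actually landing on the GPU, that usually means the build itself lacks GPU support, which matches the "wrong build" suspicion above.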
And by the way, you don't need to quantize that model yourself. People have already uploaded it to Hugging Face in several quantized formats: AWQ, GGUF, EXL2, ...
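Grabbing one of those pre-quantized files is a one-liner with the huggingface_hub library. The repo and filename below are hypothetical placeholders; substitute whichever quantized upload you find:

```python
# Minimal sketch using huggingface_hub (pip install huggingface_hub).
# repo_id and filename are hypothetical; browse Hugging Face for an
# actual quantized upload of the model you want.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="SomeUser/some-model-GGUF",  # hypothetical quantized repo
    filename="some-model-Q4_K_M.gguf",   # hypothetical GGUF file
)
print(path)  # local cache path; pass this as model_path to llama.cpp
```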