this post was submitted on 01 Feb 2024
36 points (95.0% liked)

LocalLLaMA


Community to discuss about LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

top 7 comments
[–] BetaDoggo_@lemmy.world 7 points 11 months ago (2 children)

It's not open source, it's weights-available (for now). As of now there's nothing you can do with it publicly, because it lacks a license and is known to be stolen.

[–] doodlebob@lemmy.world 5 points 11 months ago

I'm sure that's not that big of a deal to some people. For example, I mainly use LLMs in my Home Assistant instance.

[–] Secret300@sh.itjust.works 3 points 11 months ago

So what I'm hearing is I can use it but just make sure I don't tell anyone

[–] breakingcups@lemmy.world 4 points 11 months ago (1 children)

Has anyone tried to see how censored it is yet?

[–] toxuin@lemmy.ca 3 points 11 months ago (1 children)

Definitely. It has some alignment, but it won’t straight up refuse to do anything. It will sometimes add notes saying that what you’ve asked is kinda maybe against the law, but will produce a great response regardless. It’s a 70b, so running it locally is kind of a challenge, but for those who can run it - there is simply no other LLM that you can run at home that gets even close to it. It follows instructions amazingly, it’s very consistent and barely hallucinates. There is some special mistral sauce in it for sure, even if it’s “just” a llama2-70b.

[–] fhein@lemmy.world 1 points 9 months ago

The GGUF Q2_K quant works quite well IMO; I've run it with 12 GB VRAM + 32 GB RAM.
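As a rough sketch of what this kind of split CPU/GPU setup looks like with llama.cpp (the model filename, layer count, and context size below are illustrative assumptions, not taken from the comment), you offload as many layers as fit in VRAM with `-ngl` and let the rest run on the CPU from system RAM:

```shell
# Illustrative llama.cpp invocation. The filename and -ngl value are
# assumptions; tune the offloaded layer count until it fits in ~12 GB VRAM.
./llama-cli \
  -m miqu-1-70b.q2_K.gguf \
  -ngl 20 \
  -c 4096 \
  -p "Hello"
# -m   path to the Q2_K-quantized GGUF model
# -ngl number of layers to offload to the GPU (rest stay on CPU)
# -c   context length
# -p   prompt
```

At Q2_K a 70B model is roughly 26 GB on disk, which is why the 12 GB card plus 32 GB of system RAM combination works at all; fewer offloaded layers means slower generation but lower VRAM use.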

[–] CryptoKitten@sh.itjust.works 1 points 11 months ago

How does one leak open source material?