Serge: I've heard good things about this one as well.
Self Hosted - Self-hosting your services.
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules
- No harassment
- Crossposts from c/Open Source, c/docker, and related communities may be allowed, depending on context.
- Promoting videos is allowed if they are on topic.
- No spamming.
- Stay friendly.
- Follow the lemmy.ml instance rules.
- Tag your post. (Read below.)
Important
Beginning January 1st, 2024, this rule WILL be enforced. Posts that are not tagged will receive a warning and, if not fixed within 24 hours, will be removed!
- Lemmy doesn't have tags yet, so mark your post with [Question], [Help], [Project], [Other], [Promoting], or whatever else you think is appropriate.
Cross-posting
- !everything_git@lemmy.ml is allowed!
- !docker@lemmy.ml is allowed!
- !portainer@lemmy.ml is allowed!
- !fediverse@lemmy.ml is allowed if the topic has to do with self-hosting.
- !selfhosted@lemmy.ml is allowed!
If you see a rule-breaker please DM the mods!
I've tried https://github.com/oobabooga/text-generation-webui with LLaMA, but I didn't have enough VRAM to run it.
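If you're not sure whether a model will fit, you can at least check how much VRAM the card actually has before loading anything. A minimal sketch using PyTorch (which text-generation-webui already depends on):

```python
# Pre-flight check: report total GPU memory before trying to load a model.
# This only tells you the card's capacity; actual headroom depends on what
# else is using the GPU.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"GPU: {props.name}, {total_gb:.1f} GiB VRAM")
else:
    print("No CUDA device found; you'd be running on CPU.")
```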
Llama.cpp + Wizard Vicuna (Uncensored, if you want the real thing) + one of the compatible web interfaces; they should be listed in the readme.
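For the llama.cpp route, the llama-cpp-python bindings are one easy way to drive it from a script instead of a web UI. A minimal sketch, assuming you've already downloaded a quantized Wizard Vicuna model; the file path and prompt format below are placeholders, so match them to whatever model file you actually grabbed:

```python
# Minimal llama.cpp generation via the llama-cpp-python bindings.
# The model path is a placeholder; point it at your downloaded file.
from llama_cpp import Llama

llm = Llama(model_path="./models/wizard-vicuna-13b.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "### Human: Explain what self-hosting means.\n### Assistant:",
    max_tokens=256,
    stop=["### Human:"],  # stop before the model invents the next turn
)
print(out["choices"][0]["text"])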
Or try gpt4all, which is much easier to use and even offers a selection of downloadable models.
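gpt4all also ships Python bindings if you'd rather script it than click through the GUI. A rough sketch; the model name here is just an example from its download list and may have changed:

```python
# gpt4all via its Python bindings. The first call downloads the model
# if it isn't already present; the name below is an example and may
# not match the current download list.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("What is self-hosting?", max_tokens=200))
```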
Whether you can run 7B, 13B, or 30B+ models depends on your hardware, especially the GPU.
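As a rough rule of thumb, the weights alone take parameter count times bytes per parameter, so a 4-bit 7B model is around 3.5 GB before any overhead. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope memory estimate: weights = params * bits / 8.
# Real usage is higher (context cache, activations), so treat these
# numbers as a floor, not an exact requirement.
def weight_size_gb(params_billions: float, bits: int) -> float:
    return params_billions * 1e9 * bits / 8 / 1e9

for size in (7, 13, 30):
    print(f"{size}B @ 4-bit: ~{weight_size_gb(size, 4):.1f} GB, "
          f"@ 16-bit: ~{weight_size_gb(size, 16):.1f} GB")
```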
I use koboldcpp with the Vicuna model. Generation is reasonably fast (under a minute) on a 4th-gen i7, and it would probably be on par with ChatGPT in terms of speed if you used a GPU.
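koboldcpp also exposes a KoboldAI-compatible HTTP API once it's running, so you can script generations against it. A sketch; the port and endpoint are assumptions based on koboldcpp's usual defaults, so check its startup output to be sure:

```python
# Sketch of hitting koboldcpp's KoboldAI-compatible HTTP API.
# Port 5001 and /api/v1/generate are assumed defaults; verify them
# against what koboldcpp prints when it starts.
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={"prompt": "What is self-hosting?", "max_length": 200},
    timeout=300,  # CPU generation can be slow
)
print(resp.json()["results"][0]["text"])
```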
I believe gpt4all has a self-hostable web interface, but I could be wrong. Still, it can run on relatively low-end hardware (relatively, because it still needs a decent amount), and you could just use it on your local computer.
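If that server mode exists, it would presumably speak an HTTP API you could hit from other machines on your network. Purely as a hedged sketch; the port and OpenAI-style endpoint here are assumptions I haven't verified, and the model name is a placeholder:

```python
# Hedged sketch: the gpt4all app can reportedly enable a local API
# server. The port (4891) and OpenAI-style endpoint are unverified
# assumptions; check the app's settings before relying on this.
import requests

resp = requests.post(
    "http://localhost:4891/v1/completions",
    json={"model": "your-local-model", "prompt": "Hello", "max_tokens": 50},
    timeout=120,
)
print(resp.json())
```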
Codestar for dev