this post was submitted on 18 Jan 2024

Hi,

I'd like to run some large language models locally, something like PrivateGPT or the setup from a Medium article I found, on my Apple Silicon machine, to enhance my privacy but also get some additional help.

Does anyone have recommendations or guides I could follow?

Thank you very much.

top comments
[–] Guenther_Amanita@feddit.de 2 points 10 months ago* (last edited 10 months ago) (1 children)

I don't know what your intention is.
I'm no expert or highly qualified in any way, so please correct me, but I'm not sure this is the right way to go about it.

LLMs usually need lots of computing power, ideally in the form of a GPU.
I use GPT4All, and when I send a prompt, I notice my GPU's temperature, fan speed, and usage instantly jumping to almost 100%. If it's a longer prompt, my PC sounds like a helicopter 😁
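For what it's worth, GPT4All also ships Python bindings. A minimal sketch, assuming the `gpt4all` package is installed (the model name here is just an example; it downloads on first use):

```python
from gpt4all import GPT4All

# Example model from the GPT4All catalog; downloaded on first run.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# A chat session keeps conversation context between prompts.
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=200))
```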

In terms of hosting a server, you usually want hardware that's just barely good enough for your services, e.g. running your personal cloud. That results in way less power draw, which is what you want, since it runs 24/7. Something powerful enough to run LLMs comfortably would likely draw a lot of power, even on Apple Silicon.

I think you're better off just using GPT4All on your gaming PC when you need it.

I hope I'm wrong and that M1s draw barely any power, especially at idle.
And even if I am, they can (almost) only run macOS, which wouldn't be a good server OS.

[–] moonpiedumplings@programming.dev 3 points 10 months ago (1 children)

The TL;DR as I understand it: Apple's M1/M2 devices are unique in that they use unified memory, so the VRAM (GPU RAM) is the same pool as the normal system RAM. That sharing lets LLMs run on the GPU of those chips, with system memory effectively serving as their "VRAM" as well.

llama.cpp is the software that lets users do this. I can't find the original guide/article I looked at, but here is a GitHub gist where the commenters have done benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
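If you want to try it from Python, here's a minimal sketch using the llama-cpp-python bindings (the model path is illustrative; `n_gpu_layers=-1` offloads all layers to the GPU, which on Apple Silicon means Metal):

```python
from llama_cpp import Llama

# Illustrative GGUF file; use any quantized model that fits in unified memory.
llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer to the GPU (Metal on Apple Silicon)
)

out = llm("Q: Why does unified memory help local LLMs? A:", max_tokens=128)
print(out["choices"][0]["text"])
```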

[–] Guenther_Amanita@feddit.de 1 points 10 months ago

Alright, interesting... As I said, I'm no expert or anything, and this was just my noob opinion.

Thank you for the correction and further resources!

[–] c10l@lemmy.world 2 points 10 months ago

On macOS I've been using Ollama. It's very easy to set up, can run as a service, and exposes an API.

You can talk to it directly from the CLI (`ollama run <model>`) or via applications and plugins (like https://continue.dev ) that consume the API.

It can run on Linux but I haven’t personally tried it.

https://ollama.ai/
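For example, a minimal sketch of calling that API from Python, assuming Ollama's default port (11434) and a model you've already pulled (e.g. with `ollama pull llama2`):

```python
import requests

# Non-streaming completion request to the local Ollama server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why run LLMs locally?", "stream": False},
)
print(resp.json()["response"])
```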
