Firefox

A place to discuss the news and latest developments on the open-source browser Firefox

[–] KarnaSubarna@lemmy.ml 21 points 1 month ago (1 children)

In such a scenario, you need to host your choice of LLM locally.

[–] ReversalHatchery@beehaw.org 5 points 1 month ago (1 children)

Does the add-on support usage like that?

[–] KarnaSubarna@lemmy.ml 7 points 1 month ago (1 children)

No, but the “AI” option available on the Firefox Labs tab in Settings allows you to integrate with a self-hosted LLM.

I have had this setup running for a while now.
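
A rough sketch of the Firefox side, assuming the Labs “AI Chatbot” prefs as I understand them (the names may differ between Firefox versions, so verify in about:config before relying on them): setting hideLocalhost to false lists localhost as a provider choice, and provider points the sidebar at your local web UI.

```
browser.ml.chat.enabled        true
browser.ml.chat.hideLocalhost  false
browser.ml.chat.provider       http://localhost:3000
```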

[–] cmgvd3lw@discuss.tchncs.de 4 points 1 month ago (1 children)

Which model are you running? How much RAM?

[–] KarnaSubarna@lemmy.ml 4 points 1 month ago* (last edited 1 month ago)

My (Docker-based) configuration, with a rough compose sketch after the links:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: Intel i5-13600K, Nvidia RTX 3070 Ti (8 GB VRAM), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

Nvidia Container Toolkit (runtime for Docker): https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
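
For anyone wanting to reproduce something similar, here is a minimal docker-compose sketch of that stack, based on the upstream Ollama and Open WebUI images plus the Nvidia Container Toolkit. The volume names and the 3000:8080 port mapping are my own choices rather than anything from the original comment, so adjust to taste.

```yaml
# Minimal sketch: GPU-accelerated Ollama with Open WebUI in front of it.
# Assumes the Nvidia Container Toolkit is already installed on the host.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama              # downloaded models live here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]         # hand the GPU to the container

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                       # UI reachable at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data      # chats, users, settings
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

After `docker compose up -d`, pull the model with `docker compose exec ollama ollama pull llama3.1`; the default 8B quantization should fit in the 3070 Ti's 8 GB of VRAM.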