this post was submitted on 31 Dec 2024
82 points (72.0% liked)

Firefox

19035 readers
185 users here now

A place to discuss the news and latest developments on the open-source browser Firefox

founded 5 years ago
[–] jeena@piefed.jeena.net 27 points 2 months ago (7 children)

Thanks for the summary. So it still sends the data to a server, even if it's Mozilla's. Then I still can't use it for work, because the data is private and they wouldn't appreciate me sending their data to Mozilla.

[–] KarnaSubarna@lemmy.ml 21 points 2 months ago (4 children)

In such a scenario you need to host your choice of LLM locally.

[–] ReversalHatchery@beehaw.org 5 points 2 months ago (3 children)

does the addon support usage like that?

[–] KarnaSubarna@lemmy.ml 7 points 2 months ago (1 children)

No, but the "AI" option available in the Mozilla Labs tab in Settings allows you to integrate with a self-hosted LLM.

I have had this setup running for a while now.
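For reference, a minimal sketch of how the wiring could look in about:config once the Labs feature is enabled. The pref names below reflect my understanding of Firefox's AI chatbot feature, and the localhost URL assumes a self-hosted Open WebUI instance on its default port, so treat the exact values as assumptions:

```
# Assumed prefs for the Firefox AI chatbot sidebar (verify in about:config)
browser.ml.chat.enabled   = true
browser.ml.chat.sidebar   = true
browser.ml.chat.provider  = http://localhost:3000   # your self-hosted LLM UI
```

With the provider pointed at a local endpoint, nothing leaves your machine.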

[–] cmgvd3lw@discuss.tchncs.de 4 points 2 months ago (1 children)

Which model are you running? How much RAM?

[–] KarnaSubarna@lemmy.ml 4 points 2 months ago* (last edited 2 months ago)

My (docker based) configuration:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: i5-13600K, Nvidia RTX 3070 Ti (8 GB VRAM), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
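Roughly, the stack from the links above can be brought up with something like the following. This is a sketch based on the official Ollama and Open WebUI docs, and it assumes Docker plus the NVIDIA Container Toolkit are already installed; container names, ports, and volume names are illustrative:

```sh
# Ollama with GPU access (serves its API on port 11434)
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and run the Llama 3.1 model inside the Ollama container
docker exec -it ollama ollama run llama3.1

# Open WebUI, pointed at the Ollama API on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

Open WebUI then ends up reachable at http://localhost:3000, which is the URL you would give Firefox as the chatbot provider.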
