[–] adarza@lemmy.ca 141 points 1 month ago (3 children)
  • no account or login required.
  • it's an add-on (and one you have to go get), not baked in.
  • limited to queries about the content you're currently looking at
    (it's not a general search or query engine).
  • llm is hosted by mozilla, not a third party.
  • session histories are not retained or shared, not even with mistral (it's their model).
  • user interactions are not used to train.
[–] jeena@piefed.jeena.net 27 points 1 month ago (3 children)

Thanks for the summary. So it still sends the data to a server, even if it's Mozilla's. Then I still can't use it for work, because the data is private and they wouldn't appreciate me sending their data to Mozilla.

[–] KarnaSubarna@lemmy.ml 21 points 1 month ago (1 children)

In such a scenario you need to host your choice of LLM locally.

[–] ReversalHatchery@beehaw.org 5 points 1 month ago (1 children)

does the addon support usage like that?

[–] KarnaSubarna@lemmy.ml 7 points 1 month ago (1 children)

No, but the “AI” option available on the Mozilla Labs tab in settings allows you to integrate with a self-hosted LLM.

I've had this setup running for a while now.
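For anyone wanting to try the same thing: in recent Firefox builds that Labs option is backed by a few about:config prefs, notably `browser.ml.chat.enabled` and `browser.ml.chat.provider` (with `browser.ml.chat.hideLocalhost` set to false so localhost providers are offered). Before pointing `browser.ml.chat.provider` at a self-hosted endpoint, it helps to confirm the endpoint is actually up. A minimal Python sketch, assuming a default Ollama install listening on port 11434 (the URL is an assumption; adjust it if you front Ollama with Open WebUI or run it elsewhere):

```python
# Check that a self-hosted LLM endpoint answers before wiring it into
# Firefox. Assumes a default Ollama install on localhost:11434; the
# URL is an assumption -- adjust it for your own setup.
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def endpoint_up(base_url: str = OLLAMA_URL) -> bool:
    """Return True if the server answers its root health route."""
    try:
        # Ollama's root route replies 200 with "Ollama is running".
        with urllib.request.urlopen(base_url, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False

if __name__ == "__main__":
    print("endpoint up:", endpoint_up())
```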

[–] cmgvd3lw@discuss.tchncs.de 4 points 1 month ago (1 children)

Which model are you running? How much RAM?

[–] KarnaSubarna@lemmy.ml 4 points 1 month ago* (last edited 1 month ago)

My (docker based) configuration:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: i5-13600K, Nvidia RTX 3070 Ti (8 GB), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
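A quick way to smoke-test this stack end to end is to call Ollama's REST API directly (Open WebUI talks to the same server). A minimal sketch, assuming the container keeps the default port mapping (11434) and that the llama3.1 model has already been pulled; both names come from the stack above and may differ in your setup:

```python
# End-to-end test of the Ollama container: send one prompt and print
# the completion. Port 11434 and the model name "llama3.1" match the
# stack described above; adjust both for your own setup.
import json
import urllib.request

def ask(prompt: str, model: str = "llama3.1") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Say hello in one short sentence."))
```

If that round-trips, the Labs setting in Firefox can be pointed at the same server.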

[–] LWD@lemm.ee 12 points 1 month ago

Technically it's a server operated by Google and leased by Mozilla. Mistral 7B could work locally, if Mozilla cared about doing such a thing.

I guess you can basically use the built-in AI chatbot functionality Mozilla rushed out the door, enable a secret setting, and use Mistral locally, but what a missed opportunity from the Privacy Browser Company.

[–] Hamartiogonic@sopuli.xyz -2 points 1 month ago* (last edited 1 month ago)

According to Microsoft, you can safely send your work-related stuff to Copilot. Besides, most companies already use a lot of Microsoft's software and cloud services, so LLM queries don't really add much. If you happen to work for one of those companies, MS probably already knows what you do for a living, hosts your meeting notes, knows your calendar, etc.

If you’re working for Purism, RedHat or some other company like that, you might want to host your own LLM instead.

[–] fruitycoder@sh.itjust.works 9 points 1 month ago

That's really cool to see. To me, a trusted hosted open-source model is what's really missing in the ecosystem. I really like the idea of web-centric integration too.
