submitted 3 days ago by KarnaSubarna@lemmy.ml to c/firefox@lemmy.ml
[-] ReversalHatchery@beehaw.org 5 points 3 days ago

Does the addon support usage like that?

[-] KarnaSubarna@lemmy.ml 7 points 3 days ago

No, but the “AI” option available on the Mozilla Lab tab in Settings allows you to integrate with a self-hosted LLM.

I have had this setup running for a while now.
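
For anyone who wants to replicate it: these about:config prefs are from memory and may have changed between Firefox versions, so treat them as a starting point rather than official documentation. Roughly, I pointed the sidebar chatbot at my local Open WebUI instance like this:

```
browser.ml.chat.enabled        true
browser.ml.chat.hideLocalhost  false
browser.ml.chat.provider       http://localhost:3000
```

The provider URL is whatever your local web UI listens on, so adjust the port to match your own setup.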

[-] cmgvd3lw@discuss.tchncs.de 4 points 3 days ago

Which model are you running? How much RAM?

[-] KarnaSubarna@lemmy.ml 4 points 3 days ago* (last edited 3 days ago)

My (docker based) configuration:

Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1

Hardware: Intel i5-13600K, Nvidia RTX 3070 Ti (8 GB VRAM), 32 GB RAM

Docker: https://docs.docker.com/engine/install/

Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

Open WebUI: https://docs.openwebui.com/

Ollama: https://hub.docker.com/r/ollama/ollama
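
If anyone wants a quick start, the commands below are roughly the shape of it, adapted from the Ollama and Open WebUI docs linked above (untested as written; adjust ports, volumes, and model tag to taste):

```
# Ollama with GPU access (requires the Nvidia container toolkit from the link above)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# pull the model into the running container
docker exec -it ollama ollama pull llama3.1

# Open WebUI, reaching the Ollama container through the host gateway
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

With 8 GB of VRAM you are realistically limited to the smaller quantized Llama 3.1 variants; bigger ones will spill into system RAM and get slow.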
