Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
Unfortunately, Nvidia is, by far, the best choice for hosting a local coding LLM, and there are basically two tiers:
Buy a used 3090, limit the clocks to around 1400 MHz (see the sketch after this list), and then host Qwen 2.5 Coder 32B.
Buy a used 3060, host Arcee Medius 14B.
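If you want to script that clock cap rather than typing it by hand, here's a minimal sketch using nvidia-smi's lock-gpu-clocks flag. The 1400 MHz ceiling and GPU index 0 are assumptions to adjust for your card, and the commands need root:

```python
# Minimal sketch: cap a 3090's core clock via nvidia-smi.
# Assumptions: GPU index 0, a 1400 MHz ceiling, and root privileges.
import subprocess

def lock_gpu_clocks(gpu_index: int = 0, max_mhz: int = 1400) -> None:
    """Lock the GPU core clock range to [0, max_mhz] MHz."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-lgc", f"0,{max_mhz}"],
        check=True,
    )

def reset_gpu_clocks(gpu_index: int = 0) -> None:
    """Remove the clock lock and return to default boost behavior."""
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-rgc"], check=True)

if __name__ == "__main__":
    lock_gpu_clocks()
```

The point of the cap is efficiency: a 3090 loses little inference speed at 1400 MHz but draws noticeably less power and runs cooler.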
Both of these setups will expose an OpenAI-compatible endpoint.
Run TabbyAPI instead of Ollama, as it's far faster and more VRAM-efficient.
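For reference, here's a minimal sketch of querying that endpoint with the official openai Python client. The base_url (TabbyAPI usually listens on port 5000, but check your config), the API key, and the model name are all placeholder assumptions; substitute whatever your server is actually configured with:

```python
# Minimal sketch: query a local OpenAI-compatible endpoint.
# base_url, api_key, and model name are assumptions -- adjust to your server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed TabbyAPI address
    api_key="not-needed-locally",         # placeholder; set one if your server requires it
)

response = client.chat.completions.create(
    model="Qwen2.5-Coder-32B",  # assumed name of the model loaded on the server
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```

Because the endpoint speaks the OpenAI protocol, most editor plugins and coding assistants can point at it by just overriding the base URL.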
You can use AMD, but the setup is more involved: the kernel has to be compatible with the ROCm package, you need a 7000-series card, and there are some extra hoops for TabbyAPI compatibility.
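If you go the AMD route, a quick sanity check like the following (assuming you've installed the ROCm build of PyTorch) will tell you whether the kernel/ROCm pairing actually sees the card before you fight with TabbyAPI. ROCm builds reuse the torch.cuda API, so cuda.is_available() returning True means the stack is working:

```python
# Minimal sanity check for a ROCm setup, assuming the ROCm build of PyTorch.
import torch

print("HIP/ROCm build:", torch.version.hip)      # None on CUDA-only builds
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```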
Aside from that, an Intel Arc B570 is not a terrible option for 14B coder models.