Beginner questions thread
(sh.itjust.works)
this post was submitted on 02 Oct 2023
29 points (96.8% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
I have two 3090 Turbo GPUs, and oobabooga doesn't seem to split the load between the two cards when I try to run TheBloke/dolphin-2.7-mixtral-8x7b-AWQ.
Does anyone know how to make text-generation-webui use both cards? Do I need an NVLink between the two cards?
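For context, here's a rough sketch of what a two-GPU split of that checkpoint looks like outside the webui, using transformers' device_map. This assumes transformers' AWQ integration (which needs the autoawq package installed), and the 22GiB caps are just placeholders for 24 GB cards:

```python
# Rough sketch: load the AWQ checkpoint across two 24 GB cards with accelerate's
# device_map, outside the webui, as a sanity check that the split itself works.
# Assumes transformers' AWQ integration (requires the autoawq package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/dolphin-2.7-mixtral-8x7b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                    # let accelerate shard the layers across GPUs
    max_memory={0: "22GiB", 1: "22GiB"},  # per-card cap, leaves headroom for the KV cache
)

# A working split shows layers on both cuda:0 and cuda:1 here.
print(model.hf_device_map)
```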
You shouldn't need NVLink. I'm wondering if it's something to do with AWQ, since I know that exllamav2 and llama.cpp both support splitting in oobabooga.
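For the llama.cpp route the split is just a loader option. A rough sketch with the llama-cpp-python bindings, assuming you grab a GGUF quant of the model (the file name below is hypothetical):

```python
# Rough sketch of a llama.cpp-style split across two GPUs via llama-cpp-python.
# Assumes a GGUF quant of the model downloaded locally; the path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.7-mixtral-8x7b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,          # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],  # split the tensors evenly between the two 3090s
    n_ctx=4096,
)

out = llm("Q: Do I need NVLink for multi-GPU inference?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```

If I remember right, the exllamav2 loader exposes a similar gpu-split setting in the webui's model tab.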
I think you're right. Saw a post on Reddit describing basically the same thing I'm seeing.
It looks like autoawq supports it, but it might be an issue with how oobabooga implements it or something...
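One way to rule the webui's loader in or out would be to load it with autoawq directly. A rough sketch, assuming a recent autoawq build whose from_quantized forwards device_map / max_memory to accelerate (older versions may not expose those arguments):

```python
# Rough sketch: test the two-GPU split with AutoAWQ on its own, outside oobabooga.
# Assumption: this autoawq version forwards device_map / max_memory to accelerate.
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/dolphin-2.7-mixtral-8x7b-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    fuse_layers=False,                    # fused layers can complicate multi-GPU placement
    device_map="auto",                    # assumed to be passed through to accelerate
    max_memory={0: "22GiB", 1: "22GiB"},  # assumed per-GPU cap, as in transformers
)

# If the split worked, both cards should report a big chunk of allocated memory.
for i in range(torch.cuda.device_count()):
    print(f"cuda:{i} allocated: {torch.cuda.memory_allocated(i) // 2**20} MiB")
```

If that splits fine but the webui still loads everything onto one card, it points at oobabooga's AWQ loader rather than autoawq itself.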