
But in all fairness, it's really llama.cpp that supports AMD.

Now looking forward to the Vulkan support!

sardaukar@lemmy.world 7 points 6 months ago

I've been using it with a 6800 for a few months now, all it needs is a few env vars.
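The commenter doesn't name the variables, but a commonly cited setup for RDNA2 cards like the RX 6800 (gfx1030 target) is to override the GFX version so the ROCm runtime treats the card as an officially supported target. A minimal sketch, assuming an RX 6800 and a ROCm-enabled Ollama build; the exact override value depends on your GPU generation:

```shell
# Sketch of the "few env vars" approach, assuming an RX 6800 (RDNA2 / gfx1030).
# HSA_OVERRIDE_GFX_VERSION makes the ROCm runtime treat the card as a
# supported gfx target; the right value varies by GPU generation.
export HSA_OVERRIDE_GFX_VERSION=10.3.0   # gfx1030 = RDNA2 (e.g. RX 6800)
export ROCR_VISIBLE_DEVICES=0            # restrict ROCm to the first GPU

# Then start the server as usual:
#   ollama serve
```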

stsquad@lemmy.ml 4 points 6 months ago

That's cool. I've just recently gotten hold of an interesting Ampere system and it's got an AMD card in it. I must give it a spin.

stsquad@lemmy.ml 1 point 6 months ago

I was sadly stymied by the fact that the ROCm driver install is very much x86-only.

turkishdelight@lemmy.ml 2 points 6 months ago

It's improving very fast. Give it a little time.

this post was submitted on 16 Mar 2024
76 points (100.0% liked)

LocalLLaMA

2220 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago