submitted 9 months ago by ylai@lemmy.ml to c/localllama@sh.itjust.works
raldone01@lemmy.world 5 points 9 months ago

Maybe they will have a 30-40B model, which would be a nice compromise between capability and performance on my machine.
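For context, here is a rough back-of-the-envelope memory estimate for a model in that size range. The formula (my own assumption, not from the thread) is memory ≈ parameters × bits per weight / 8, plus roughly 10% overhead for KV cache and activations; the overhead figure is a guess and varies with context length:

```python
# Rough memory estimate for a 30B / 40B model at common quantizations.
# Assumption: memory ~ params * bits / 8, plus ~10% overhead for
# KV cache and activations (a guess; depends on context length).
def memory_gb(params_b: float, bits: float, overhead: float = 0.10) -> float:
    """Approximate memory in GB for params_b billion parameters at the given bit width."""
    weights_gb = params_b * bits / 8
    return round(weights_gb * (1 + overhead), 1)

for params in (30, 40):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: ~{memory_gb(params, bits)} GB")
```

By this estimate a 30B model at 4-bit quantization needs on the order of 16-17 GB, which is why this size range sits between what fits on a single consumer GPU and what needs a workstation.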

this post was submitted on 10 Apr 2024
67 points (92.4% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago