submitted 2 years ago* (last edited 2 years ago) by dtlnx@beehaw.org to c/localllama@sh.itjust.works

Let's talk about our experiences working with different models, either known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

dtlnx@beehaw.org 0 points 2 years ago

I'd have to say I'm very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it is slow, the results are quite impressive.

Looking forward to Orca 13B if it is ever released!

Which one is the "newer" one? Looking at the quantised releases by TheBloke, I only see one version of 30B WizardLM (in multiple formats/quantisation sizes, plus the unofficial uncensored version).
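Since the thread turns on which quantised release fits on a given machine, here is a minimal sketch of how one might choose among TheBloke-style GGML quantisations by RAM budget. The file names and approximate sizes below are illustrative assumptions based on typical 30B GGML releases, not figures from the actual model card — check the release page for the real numbers.

```python
# Hypothetical GGML quantisations of WizardLM 30B, smallest to largest.
# Names and sizes (GiB) are assumptions for illustration only.
QUANTS = [
    ("wizardlm-30b.ggmlv3.q4_0.bin", 18.3),
    ("wizardlm-30b.ggmlv3.q5_1.bin", 24.4),
    ("wizardlm-30b.ggmlv3.q8_0.bin", 34.6),
]

def pick_quant(ram_gib: float, overhead_gib: float = 2.0) -> str:
    """Return the largest quantisation that fits in RAM with some headroom."""
    fitting = [name for name, size in QUANTS
               if size + overhead_gib <= ram_gib]
    if not fitting:
        raise ValueError("Not enough RAM for any 30B quantisation")
    # QUANTS is ordered smallest to largest, so the last fit is the best one.
    return fitting[-1]

print(pick_quant(32.0))  # a 32 GiB machine fits q5_1 but not q8_0
```

The same trade-off is why a 30B model is "slow but impressive" on consumer hardware: the lower-bit quantisations trade output quality for a file that actually fits in memory.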

this post was submitted on 12 Jun 2023

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.
