[-] dtlnx@beehaw.org 1 points 2 years ago

I'd say mainly privacy concerns. Everything you type is sent to Grammarly's servers, and I'm not sure what is done with that data.

[-] dtlnx@beehaw.org 2 points 2 years ago

Wow thank you. I had no idea this was ready!

[-] dtlnx@beehaw.org 1 points 2 years ago* (last edited 2 years ago)

Decentraleyes serves popular resources locally, reducing reliance on external content delivery networks and enhancing privacy. However, LocalCDN supports a more extensive list of libraries, provides additional privacy features, and is actively maintained, making it the better choice for comprehensive protection.

[-] dtlnx@beehaw.org 3 points 2 years ago

Wonder what Steam Deck support will look like.

[-] dtlnx@beehaw.org 0 points 2 years ago

I'd have to say I'm very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it's slow, the results are quite impressive.

Looking forward to Orca 13B if it's ever released!

0
submitted 2 years ago* (last edited 2 years ago) by dtlnx@beehaw.org to c/localllama@sh.itjust.works

Let's talk about our experiences working with different models, either known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

2
submitted 2 years ago* (last edited 2 years ago) by dtlnx@beehaw.org to c/localllama@sh.itjust.works

I figured I'd post this. It's a great way to get an LLM set up on your computer, and it's extremely easy for folks who don't have much technical knowledge!

2
submitted 2 years ago by dtlnx@beehaw.org to c/technology@beehaw.org

dtlnx

joined 2 years ago