[-] tkw8@lemm.ee 5 points 1 week ago* (last edited 1 week ago)

I’m shocked. There must be an error in this analysis. /s

Maybe engage an AI coding assistant to massage the data analysis lol

[-] tkw8@lemm.ee 3 points 2 weeks ago

I was just teasing. But you received some really interesting replies. Good post!

[-] tkw8@lemm.ee 17 points 2 weeks ago

The Holocene.

[-] tkw8@lemm.ee 10 points 3 weeks ago

Jellyfin + Apple TV + Infuse = Bliss

[-] tkw8@lemm.ee 18 points 4 weeks ago

If you aren’t using one of Proton VPN, Mullvad, or IVPN, you’re asking for trouble imo.

[-] tkw8@lemm.ee 4 points 1 month ago* (last edited 1 month ago)

Gemma 27B is actually quite good, but "narrow." It has a super low context window and seems to be hyper-optimized for short, chatbot-arena-style questions.

This is the stuff I love to know so thanks for sharing. I will be pulling Command R tomorrow.

[-] tkw8@lemm.ee 5 points 1 month ago

I manually specify what models to pull. I’m not running anything too crazy. My largest model is Gemma 27B, but I’ve worked with dolphin-mistral, which was fun.
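
For anyone who wants to script the pulls instead of doing them one at a time, here’s a minimal sketch against Ollama’s local REST API (default port 11434). The model tags are just the ones I mentioned and are assumptions on my part, so verify the exact names with `ollama list` or the model library:

```python
# Minimal sketch: pull a fixed list of models through Ollama's local REST
# API (default http://localhost:11434). /api/pull streams progress back
# as newline-delimited JSON until each download completes.
import requests

OLLAMA = "http://localhost:11434"
MODELS = ["gemma2:27b", "dolphin-mistral"]  # assumed tags; check `ollama list`

for model in MODELS:
    with requests.post(f"{OLLAMA}/api/pull", json={"model": model}, stream=True) as r:
        r.raise_for_status()
        for line in r.iter_lines():
            if line:
                print(line.decode())  # progress/status updates per chunk

# /api/tags lists everything installed locally
print(requests.get(f"{OLLAMA}/api/tags").json())
```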

[-] tkw8@lemm.ee 4 points 1 month ago

I read localllama through redlib, but I don’t contribute. I’m not technical enough, and I don’t understand the math.

I have been looking on YouTube for videos that try to explain it, but I haven’t found anything in the sweet spot between “video for non-technical people” and “video for people with PhDs in quantum physics.”

[-] tkw8@lemm.ee 6 points 1 month ago

I’m running an Nvidia GPU on Ubuntu. I’ll give ExLlama a shot.
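
In case it helps anyone else trying it: a rough sketch of basic ExLlamaV2 generation, adapted from memory of the project’s minimal example. The model path is a placeholder (it expects a directory of EXL2-quantized weights), and the class and function names are worth double-checking against the current repo:

```python
# Rough sketch of ExLlamaV2 inference on an Nvidia GPU, following the
# project's basic example. /path/to/model is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/model"  # directory of EXL2-quantized weights
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # spread layers across available VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# prompt, settings, max new tokens
print(generator.generate_simple("Hello, my name is", settings, 64))
```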

[-] tkw8@lemm.ee 27 points 1 month ago

I think it’s amazing. I’m running Ollama with a bunch of open-source LLMs. You’re right. It’s so good. The problem is keeping up to date on the newest developments.

The pace of progress is so fast that it’s really difficult to know what the cool kids are experimenting with at the moment.
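
For example, once a model is pulled, querying it locally is one request to Ollama’s /api/generate endpoint. A minimal sketch (the model tag and prompt are placeholders):

```python
# Minimal sketch: one-shot generation through Ollama's /api/generate.
# "stream": False returns a single JSON object with the full completion.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:27b",  # assumed tag; use whatever `ollama list` shows
        "prompt": "In one sentence, what's new in local LLMs?",
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```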

