[-] behohippy@lemmy.world 2 points 1 year ago

The advancements in this space have moved so fast, it's hard to build a predictive model of where we'll end up or how fast we'll get there.

Meta releasing LLaMA produced a ton of innovation from open source, showing you could run models nearly on par with ChatGPT, with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a top strategic priority, with blank cheque budgets. Whole (also deeply funded) industries are popping up around solving the context window's memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.

We're going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it'll seem normal to have a conversation with your shoes?

Happy Barkday (lemmy.world)
submitted 2 years ago by behohippy@lemmy.world to c/aww@lemmy.ml

He's 5 today


Ryzen 5900X, 64 GB DDR4-3200, 2 TB SSD, 10 TB HDD, and an RTX 2070. Hosting Stable Diffusion, various llama.cpp instances with Python bindings, Jellyfin, Sonarr, multiple modded Minecraft servers, and a network file share.

Attitude Dog (lemmy.world)
submitted 2 years ago by behohippy@lemmy.world to c/aww@lemmy.ml

She's mostly good. Mostly.

[-] behohippy@lemmy.world 0 points 2 years ago

I hate these filthy neutrals...

behohippy

joined 2 years ago