this post was submitted on 06 Sep 2023
523 points (96.8% liked)
Technology
Yes, I am very sure; I work directly with this tech. It's very good at producing something that looks impressive but falls apart under any level of scrutiny.
My favourite part right now is that AI doesn't actually translate: it's just constantly dreaming up text that looks like what you'd expect a translation to look like, and you're hoping the training has nudged that text towards actually being valid.
But it often isn't, so it will hallucinate something totally untrue, or just absolutely made up, and then make all the following text about that thing. You might have some text about the fall of the Soviet Union, but at some point the AI hallucinates the existence of a clown (maybe because of some bias in the model), and now suddenly the fall of the Soviet Union was caused by a vast clown plot.
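The derailing described above can be sketched with a toy model. This is not a real LLM, just a hand-built bigram table I made up for illustration, but the mechanism is the same: generation is conditioned only on the tokens already emitted, so one unlucky hallucinated token becomes the context for everything that follows, and there's no ground truth to snap back to.

```python
import random

# Toy bigram "model": next-token probabilities conditioned only on the
# previous token. Hand-built for illustration; a real LLM is the same
# idea with a neural net and a much longer context window.
bigram = {
    "the":    [("soviet", 0.6), ("clown", 0.4)],  # one small bias...
    "soviet": [("union", 1.0)],
    "union":  [("fell", 1.0)],
    "clown":  [("plot", 1.0)],
    "plot":   [("unfolded", 1.0)],
}

def generate(start, steps, rng):
    """Autoregressively sample tokens, each conditioned on the last one."""
    tokens = [start]
    for _ in range(steps):
        choices = bigram.get(tokens[-1])
        if not choices:
            break
        words, weights = zip(*choices)
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

rng = random.Random(0)
for _ in range(3):
    print(generate("the", 3, rng))
```

Roughly 40% of the time the first sample comes up "clown", and from that point on the continuation is entirely about the clown plot; the model never reconsiders, because each step only sees what it already wrote.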
Often it just gets completely tripped up by its own biases. Counting is a classic one: God forbid your input text has anything to do with counting, because the AI will so easily get stuck counting things that don't even exist.
And all of this absolutely misses the fact that the nuance is lost, and the institutional knowledge is lost with it.
To be absolutely clear: the current state of AI is very good at fooling middle managers and decision makers into thinking it's good, because it's built to look good. But it's not even 5% of the quality we get from real people, and there's a mountain to climb to get it there.
My guess is that over the next few years, online content quality is going to go to shit, and that will hurt the companies and sectors that use AI foolishly.
Hopefully we'll enter Gartner's trough of disillusionment, and companies will back off from wholesale replacing humans with LLMs and recognize that in most cases they aren't fit for purpose.
I think AI will have to go (far?) beyond LLMs to have any chance of replacing human artists and writers with output of somewhat reasonable quality. (I may be wrong; maybe it's simply a matter of training very topic-specific LLMs.)
Meanwhile, if the backlash isn't strong enough, the greedy and moronic will plow ahead to the detriment of us all. Writers will struggle to support themselves and the Internet will become a lot less useful. It may be a very rough decade or two.
I'm hoping this leads to a renewed appreciation of expertise.