[-] Technus@lemmy.zip 103 points 4 days ago

These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their input.

They're completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.

If they receive an input that doesn't have a strong correlation to their training, they just output whatever bullshit comes close, whether it's true or not. Which makes them truly dangerous.

And I highly doubt that'll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won't ever want their "state of the art AI chatbot" to answer a customer's question with "sorry, I don't know."

I can't wait for this stupid AI craze to eat its own tail.

[-] neshura@bookwormstory.social 26 points 4 days ago* (last edited 4 days ago)

Last I checked (which was a while ago), "AI" still can't pass the most basic of tasks, such as "show me a blank image" / "show me a pure white image". The LLM will output the most intense fever dream possible, but never a simple rectangle filled with #fff pixels. I'm willing to debate the potential of AI again once they manage that without those "benchmarks" getting special attention in the training data.
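For contrast, the task the commenter describes is trivial for conventional code. A minimal, stdlib-only sketch (the PPM format and dimensions are arbitrary choices for illustration):

```python
def white_ppm(width: int, height: int) -> bytes:
    """Return a binary PPM (P6) image where every pixel is pure white (#fff)."""
    header = f"P6\n{width} {height}\n255\n".encode()
    pixels = b"\xff" * (width * height * 3)  # 3 bytes (R, G, B) per pixel, all 0xFF
    return header + pixels

# Write it out so any image viewer can confirm it's a blank white rectangle.
with open("white.ppm", "wb") as f:
    f.write(white_ppm(64, 64))
```

The point being: a deterministic program produces exactly the requested pixels, while a generative image model only samples something statistically close to the prompt.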

[-] GBU_28@lemm.ee 30 points 4 days ago* (last edited 4 days ago)
[-] Womble@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

That's actually quite interesting. You could make the argument that that is an image of "a pure white, completely flat object with zero content"; it's just taken your description of what you want the image to be and given you an image of an object that satisfies it.

[-] GBU_28@lemm.ee 19 points 4 days ago

I will say the next attempt was interesting, but even less of a good try.

[-] Technus@lemmy.zip 19 points 4 days ago

Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.

That's why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.

[-] rottingleaf@lemmy.world 6 points 4 days ago* (last edited 4 days ago)

Yep. In cryptography there was a moment when cryptographers realized that the key must be secret, the message should be secret, but the rest of the system should not be secret, for the social purpose of refining said system. EDIT: And that these must be separate entities.

These guys basically use lots of data instead of algorithms. Like buying something with oil money instead of money earned from construction.

I just want to see the moment when it all bursts. I'll be so gleeful. I'll go buy an IPA and laugh in every corner of the Internet where I see this discussed.

[-] gr3q@lemmy.ml 5 points 4 days ago* (last edited 4 days ago)

I tested ChatGPT; it needed some nagging, but it could do it. It needed the size, "blank", and "white" keywords.

Obviously a lot harder than it should be, but not impossible.

[-] rottingleaf@lemmy.world 3 points 4 days ago

Because it's not AI; it's a sophisticated pattern-separation, recognition, lossy-compression, and extrapolation system.

Artificial intelligence, like any intelligence, has goals and priorities. It has positive and negative reinforcements from real inputs.

Their AI will be possible when it's able to want something and decide something, with that grounded in entropy rather than extrapolation.

[-] InternetPerson@lemmings.world 2 points 3 days ago

> Artificial intelligence, like any intelligence, has goals and priorities

No. Intelligence does not necessitate goals. You are able to understand math, letters, and words, and the meaning of those, without pursuing a specific goal.

> Because it's not AI; it's a sophisticated pattern-separation, recognition, lossy-compression, and extrapolation system.

And our brains work in a similar way.

[-] rottingleaf@lemmy.world 0 points 3 days ago

Our brains work in various ways. Somewhere in there, a system similar to these "AIs" exists, I agree. It's just only one part. Artificial dicks are not the same thing as artificial humans.

[-] InternetPerson@lemmings.world -3 points 3 days ago

> I'm willing to debate the potential of AI again once they manage that without those "benchmarks" getting special attention in the training data.

You sound like those guys who declared AI doomed because a single neuron couldn't solve the XOR problem. Guess what: build a network out of neurons and the problem is solved.
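The XOR point can be made concrete: a single threshold neuron can't compute XOR because it isn't linearly separable, but a two-layer network with hand-picked weights can. A minimal sketch (the weights and gate decomposition are one illustrative choice, not the only one):

```python
def step(x):
    """Threshold activation: fire iff the weighted sum exceeds zero."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """A single linear threshold unit (perceptron)."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    """Two layers: XOR(a, b) = OR(a, b) AND NOT AND(a, b)."""
    h_or  = neuron((a, b), (1, 1), -0.5)   # hidden unit computing OR
    h_and = neuron((a, b), (1, 1), -1.5)   # hidden unit computing AND
    return neuron((h_or, h_and), (1, -1), -0.5)  # output: OR minus AND
```

No single call to `neuron` can produce the XOR truth table on its own; composing three of them does.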

What potential are you talking about? The potential is tremendous. There are a plethora of algorithms, a body of theoretical knowledge, and practical applications where AI really shines and proves its worth. Just because LLMs currently lack several capabilities doesn't mean future developments can't improve on that, possibly with something that isn't a contemporary LLM at all. LLMs are just one thing in the wide field of AI. They can do really cool stuff, which points toward further potential in that area. And if it's not LLMs, then possibly other types of AI architectures.

[-] theterrasque@infosec.pub 12 points 4 days ago

I generally agree with your comment, but not on this part:

> parroting the responses to questions that already existed in their input.

They're quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.

> They're completely incapable of critical thought or even basic reasoning.

Critical thought, generally no. Basic reasoning, that they're somewhat capable of. And chain of thought amplifies what little is there.

[-] AliasAKA@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but that look like things which were (that is, they can probabilistically interpolate between what they saw in training and what you prompted them with; this is why prompting can be so important).

Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. in an online help forum or study materials), it can emulate that process with different keywords and phrases. The models themselves, however, are not able to perform "a is to b, therefore b is to a", arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only the probability of observing a token given some context.

So even with chain of thought, it is not reasoning; it's just doing very fancy interpolation over the words and phrases in the initial prompt to generate a prompt that will probably give a better answer, not because of reasoning, but because of a stochastic process.

[-] rottingleaf@lemmy.world 5 points 4 days ago

Synthesis versus generation. Yes.

> And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”

It's a Tower of Babel IRL.

[-] ContrarianTrail@lemm.ee 1 points 4 days ago

The current AI discussion I’m reading online has eerie similarities to the debate about legalizing cannabis 15 years ago. One side praises it as a solution to all of society’s problems, while the other sees it as the devil’s lettuce. Unsurprisingly, both sides were wrong, and the same will probably apply to AI. It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.

[-] lvxferre@mander.xyz 2 points 2 days ago

> It’ll likely turn out that the more dispassionate people in the middle, who are neither strongly for nor against it, will be the ones who had the most accurate view on it.

I believe that some of the people in the middle will have more accurate views on the subject, indeed. However, note that there are multiple ways to be in the "middle ground", and some are sillier than the extremes.

For example, consider the following views:

  1. That LLMs are genuinely intelligent, but useless.
  2. That LLMs are dumb, but useful.

Both positions are middle grounds, and yet they can't both be accurate at the same time.

this post was submitted on 12 Oct 2024
221 points (95.5% liked)
