Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

[-] Spzi@lemm.ee 18 points 1 year ago

“Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”

I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza's worldview, with a few errors.

I have no problem accepting that it is form, but I cannot deny it conveys meaning as if it understands.

The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve, which makes it untrustworthy.

We cannot even pin down what we mean by "intelligence" and "meaning". And despite being far too long, the article never mentions emergent capabilities or quotes any of the many contrary scientific views.

Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? It feels like uncritical parroting of what people like to think anyway, so they can feel superior and secure.

[-] kaffiene@lemmy.world 11 points 1 year ago

LLMs are definitely not intelligent. If you understand how they work, you'll realise why. LLMs reflect the intelligence in the work they are trained on. No more, no less.
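The "trained only on form" point can be made concrete with a toy sketch (my illustration, not from the article): a model that learns nothing but which word tends to follow which can still emit plausible-looking text. Real LLMs are vastly more sophisticated transformers, but the training signal is the same next-token prediction over surface form.

```python
import random
from collections import defaultdict

# Toy next-token model: it learns only word adjacency (pure "form"),
# with no grounding in what any of the words mean.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record every observed continuation

def generate(start, n=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))  # fluent-looking output, zero understanding
```

Whether scaling that predictive objective up by twelve orders of magnitude produces something worth calling understanding is exactly what the thread is arguing about.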

[-] SlopppyEngineer@lemmy.world 8 points 1 year ago

That's especially fun when you ask the same question in two different languages and get different results, or even outright gibberish in the other, usually non-English, language. It clearly has far more English training data than it does for some other languages.

[-] Spzi@lemm.ee 5 points 1 year ago

That very much depends on what you define as "intelligent". We lack a clear definition.

I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.

And still, we can already have more intelligent conversations with them than with most humans.

It's not a fair comparison, though. It's as if we compared the language region of a toddler's brain with the complete brain of an adult. Let's see what the next few years bring.

I'm not making that point, just mentioning it can be made on an academic level: there's a paper about the surprising emergent capabilities of GPT-4, titled "Sparks of AGI".

[-] SkepticalButOpenMinded@lemmy.ca 3 points 1 year ago

That might seem plausible until you read deeply into the latest cognitive science. Nowadays, the growing consensus is around the "predictive coding" theory of cognition: the idea that human cognition also works by minimizing prediction error. We have models in our brains that reflect the input we've been trained on. I think anyone who understands both human cognition and LLMs cannot yet confidently say that LLMs are or are not intelligent.
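The core mechanism the comment refers to can be sketched in a few lines (a deliberately minimal illustration of prediction-error minimization, not a claim about how brains or the predictive-coding literature actually model it): an internal estimate is repeatedly nudged in the direction that shrinks the gap between prediction and observation.

```python
# Minimal prediction-error-minimization sketch: the internal estimate
# is updated by a fraction of the current error until the two agree.
def settle(observation, estimate=0.0, lr=0.2, steps=50):
    for _ in range(steps):
        error = observation - estimate   # prediction error
        estimate += lr * error           # adjust the model to reduce it
    return estimate

print(settle(3.0))  # the estimate converges toward the observed value
```

The rhetorical point: if "just minimizing prediction error" disqualifies LLMs from intelligence, the same objection applies, at least superficially, to this account of human cognition.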

[-] qyron@lemmy.pt 6 points 1 year ago

I've read a few texts from the same source and they come across as quite childish.

It felt like reading essays by very young children: there is some degree of coherence and some information is there, but it lacks any actual advancement of the subject.

this post was submitted on 06 Aug 2023
267 points (92.4% liked)

Technology
