submitted 2 days ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] Mikina@programming.dev 180 points 1 day ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
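
For anyone who hasn't seen what "statistical text prediction" looks like stripped of all the scale, here's a minimal toy sketch (made-up corpus, nothing from any real model): count which token follows which, then keep emitting the most likely continuation.

```python
# Toy "statistical text prediction": count which token follows which in a tiny
# corpus, then generate by always guessing the most likely next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1          # bigram counts: what tends to come next?

def predict_next(token: str) -> str:
    """Return the statistically most likely next token seen in the corpus."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

token, out = "the", ["the"]
for _ in range(5):                     # keep guessing the next most likely token
    token = predict_next(token)
    out.append(token)
print(" ".join(out))                   # plausible-looking word salad, zero understanding
```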

[-] billwashere@lemmy.world 3 points 5 hours ago

I'm pretty sure the simplest way to look at it is that an LLM can only respond, not generate anything on its own without prompting. I wish humans were like that sometimes, especially a few in particular. I would think an AGI would be capable of independent thought, not requiring a prompt.

[-] GamingChairModel@lemmy.world 26 points 1 day ago

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes a proof that learning any black-box algorithm that is trained on a finite universe of human outputs to prompts, and that is capable of taking in any finite input and producing an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems of that scale are intractable: they can't be solved using the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

[-] naught101@lemmy.world 2 points 6 hours ago

Doesn't that just say that AI will never be cheap? You can still brute-force it, which is more or less how backpropagation works.

I don't think "intelligence" needs to have a perfect "solution", it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it's absolutely not optimal.

[-] 7rokhym@lemmy.ca 10 points 1 day ago

Roger Penrose wrote a whole book on the topic in 1989. https://www.goodreads.com/book/show/179744.The_Emperor_s_New_Mind

His points are well thought out and argued, but my essential takeaway is that a series of switches is not ever going to create a sentient being. The idea is absurd to me, but for the people who disagree? They have no proof, just a religious fervor, a fanaticism. Simply stated, they want to believe.

All this AI of today is the AI of the 1980s, just with more transistors than we could fathom back then, but the ideas are the same. After the massive surge from our technology finally catching up with 40-to-60-year-old concepts and algorithms, almost everything since has just been adding much more data, generalizing models, and other tweaks.

What is a problem is the complete lack of scalability and the massive energy consumption. Are we supposed to dry our clothes at a specific hour of the night, join smart grids to reduce peak air-conditioning load, and scorn Bitcoin because it uses too much electricity, but for an AI that generates images of people with six fingers and other mangled appendages, and that bullshits about anything it doesn't know, for that we need to build nuclear power plants everywhere? It's sickening, really.

So no AGI anytime soon, but I am sure Altman has defined it as anything that can make his net worth 1 billion or more, no matter what he has to say or do.

[-] DavidDoesLemmy@aussie.zone 1 points 1 hour ago

What do you think Sam Altman's net worth is currently?

[-] RoidingOldMan@lemmy.world 7 points 7 hours ago

a series of switches is not ever going to create a sentient being

Is the goal to create a sentient being, or to create something that seems sentient? How would you even tell the difference (assuming it could pass any test a normal human could)?

[-] HawlSera@lemm.ee 0 points 6 hours ago

Until you can see the human soul under a microscope, we can't make rocks into people.

[-] SlopppyEngineer@lemmy.world 37 points 1 day ago

There are already a few papers about diminishing returns in LLMs.

[-] rottingleaf@lemmy.world 8 points 1 day ago* (last edited 1 day ago)

I mean, human intelligence is ultimately also "just" something.

And 10 years ago, people would often refer to the "Turing test" and imitation games when discussing what is and isn't artificial intelligence.

My complaint to what's now called AI is that it's as similar to intelligence as skin cells grown in the form of a d*ck are similar to a real d*ck with its complexity. Or as a real-size toy building is similar to a real building.

But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.

[-] TheFriar@lemm.ee 16 points 1 day ago

The only text predictor I want in my life is T9

[-] Edgarallenpwn@midwest.social 4 points 1 day ago

I still have fun memories of typing "going" in T9. Idk why, but 46464 was fun to hit.

[-] BreadstickNinja@lemmy.world 4 points 1 day ago

I remember that the keys for "good," "gone," and "home" were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. I could type a text one-handed while driving. Not possible on a smartphone!
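
That collision falls straight out of the keypad mapping itself; a little sketch (standard T9 layout, not anyone's actual phone firmware) shows why those three words land on the same keys, and why "going" is 46464:

```python
# Map each letter to its phone keypad digit, then map whole words to key sequences.
T9 = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {letter: digit for digit, letters in T9.items() for letter in letters}

def to_t9(word: str) -> str:
    """Return the T9 key sequence for a word."""
    return "".join(LETTER_TO_DIGIT[c] for c in word.lower())

print(to_t9("going"))                                 # 46464
print([to_t9(w) for w in ("good", "gone", "home")])   # all 4663, hence the cycling
```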

[-] suy@programming.dev 9 points 1 day ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.

This is correct, and I don't think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well... it depends. LLMs alone, no. But the researchers working on the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all entries have to be to be eligible for the prize, and they need to run on the private data set) and was based on Mixtral. The "trick" is that they do more than that. All the attempts do extra compute at test time, so they can try to go beyond what their training data alone allows them to do. The key to generality is learning after you've been trained, to try to solve something you've not been prepared for.

Even OpenAI's o1 and o3 do that, and so does the one that Google released recently. They still rely heavily on an LLM, but they do more.
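
As a rough, hedged sketch of that "extra compute at test time" idea (test-time training, not the actual ARC Prize winner's code): take a pre-trained model, fine-tune a copy of it on the new task's few demonstration pairs, then answer the task's test input. The toy model and toy task below are made up purely for illustration.

```python
# Test-time training, sketched: adapt a copy of a "pre-trained" model on the
# handful of demonstrations that define a new task, then predict on the test input.
import copy
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 4))  # stand-in pre-trained model

def solve_task(demo_inputs, demo_outputs, test_input, steps=50):
    """Fine-tune on this task's own demonstrations, then answer its test input."""
    model = copy.deepcopy(base)                 # don't disturb the base weights
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):                      # the "learning after training" part
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(demo_inputs), demo_outputs)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(test_input)

# toy "task": the rule (reverse the features) is only revealed by the demonstrations
demos_x = torch.randn(8, 4)
demos_y = demos_x.flip(-1)
print(solve_task(demos_x, demos_y, torch.randn(1, 4)))
```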

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

I'm not sure if it's already proven or provable, but I think this is generally agreed: deep learning on its own will fit a very complex curve/manifold/etc., but nothing more. It can't go beyond what it was trained on. The approaches aimed at generalizing all seem to do more than that, though: search, program synthesis, or whatever.
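
As a hedged toy illustration of that "fits the curve it was trained on, can't go beyond it" point (made-up model and numbers, nothing from a real system): fit a small net to sin(x) on a fixed interval, then check how it does outside that interval.

```python
# Curve fitting vs. extrapolation: train on [-3, 3], evaluate inside and outside.
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.linspace(-3, 3, 200).unsqueeze(1)
y_train = torch.sin(x_train)

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):                       # plain supervised curve fitting
    opt.zero_grad()
    nn.functional.mse_loss(net(x_train), y_train).backward()
    opt.step()

with torch.no_grad():
    inside = nn.functional.mse_loss(net(x_train), y_train)
    x_out = torch.linspace(6, 9, 200).unsqueeze(1)          # never seen in training
    outside = nn.functional.mse_loss(net(x_out), torch.sin(x_out))
print(f"error inside training range:  {inside.item():.4f}")   # tiny
print(f"error outside training range: {outside.item():.4f}")  # typically much larger
```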

[-] feedum_sneedson@lemmy.world 14 points 1 day ago

I just tried Google Gemini, and it would not stop making shit up. It was really disappointing.

[-] zerozaku@lemmy.world 2 points 1 day ago

Gemini is really far behind. For me it's ChatGPT > Llama >> Gemini. I haven't tried Claude since they require a mobile number to use it.

[-] treverflume@lemmy.ml 1 points 2 hours ago

It's pretty good but I prefer gpt. Looking forward to trying deepseek soon.

[-] technocrit@lemmy.dbzer0.com 6 points 1 day ago

It's impossible to disprove statements that are inherently unscientific.

[-] bitjunkie@lemmy.world 7 points 1 day ago

I'm not sure that not bullshitting should be a strict criterion of AGI, if whether or not it's been achieved is gauged by its capacity to mimic human thought.

[-] finitebanjo@lemmy.world 13 points 1 day ago

The LLMs aren't bullshitting. They can't lie, because they have no concepts at all. To the machine, the words are all just numerical values with no meaning.
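
To make "the words are just numerical values" concrete, here's a minimal sketch with a made-up vocabulary and made-up embedding vectors (no real tokenizer or model):

```python
# Words become ids, ids become arbitrary vectors. None of these numbers carry any
# built-in meaning; any structure only emerges from statistics over training data.
vocab = {"the": 0, "cat": 1, "lied": 2, "truth": 3}
embeddings = {
    0: [0.12, -0.45, 0.80],
    1: [-0.33, 0.91, 0.05],
    2: [0.57, 0.14, -0.62],
    3: [-0.08, 0.40, 0.73],
}

sentence = ["the", "cat", "lied"]
ids = [vocab[w] for w in sentence]
vectors = [embeddings[i] for i in ids]
print(ids)      # [0, 1, 2] - this is all the model ever "sees"
print(vectors)  # meaning-free coordinates until training shapes their geometry
```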

[-] naught101@lemmy.world 2 points 6 hours ago

This is a fun read

Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5

[-] 11111one11111@lemmy.world 9 points 1 day ago* (last edited 1 day ago)

Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Isn't there basically nothing in the universe that can't be broken down into a mathematical equation for physics or chemistry? I'm curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their home-made mathematical algorithms. I know it's a leap and I could be wrong, but I think I've heard that some of the rainmaker tier of savants actually process every experience in a mathematical language.

Like I said in the beginning this is straight up bong rips philosophy and haven't looked up any of the shit I brought up.

I will say though, I genuinely think the whole LLM shit is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will be useful within. The problem is that everyone and their slutty mother investing in LLMs are using them for everything they are not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform any more independently than a 3-year-old.

[-] finitebanjo@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

First of all, I'm about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI "emergent behavior" and "overfitting". More specifically, about how emergent behavior doesn't really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.

Anyways, humans don't assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistical model input.

Humans suck at math.

Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize it altogether. The values are incredibly diverse and have many attributes to them. Humans do not hallucinate entire documentation sets or describe company policies that don't exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant the shit it said but just doesn't know any better. Just doesn't know, period.

Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI's statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They're probably underestimating the costs by orders of magnitude.)

[-] naught101@lemmy.world 1 points 6 hours ago

emergent behavior doesn’t really exist in certain model archetypes

Hey, would you have a reference for this? I'd love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

[-] finitebanjo@lemmy.world 1 points 5 hours ago* (last edited 5 hours ago)

There is this 2023 study from Stanford which argues AI likely does not have emergent abilities LINK

And there is this 2020 study by.... OpenAI... which finds that the error rate is predictable from 3 factors, and that AI cannot cross below the fitted curve or approach a 0 error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they're predictable to a fault LINK

There is another paper by DeepMind in 2022 that comes to the conclusion that even at infinite scale the loss can never drop below 1.69 irreducible error LINK

This all lends to the idea that AI lacks the same emergent behavior found in human language.
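
A rough sketch of what that "irreducible error" claim looks like numerically, using a Chinchilla-style scaling law with the commonly quoted fitted constants (treat the exact numbers as approximate, quoted from memory of the 2022 paper):

```python
# Chinchilla-style fit: L(N, D) = E + A / N**alpha + B / D**beta.
# The loss never drops below the irreducible term E, no matter how big
# N (parameters) and D (training tokens) get. Constants are approximate.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for scale in (1e9, 1e12, 1e15, 1e18):          # keep scaling both params and data
    print(f"{scale:.0e}: {loss(scale, scale):.4f}")
# the printed losses approach 1.69 but never cross it
```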

[-] 11111one11111@lemmy.world 4 points 1 day ago

So that doesn't really address the concept I'm questioning. You're leaning hard on the fact that the computer is using numbers in place of words, but I'm asking why that's any different from assigning your native language to a book written in a foreign language. The vernacular, language, formula, or code being used to formulate a thought shouldn't determine whether something was a legitimate thought.

I think the gap between our reasoning is a perfect example of why I think FUTURE models matter here (wanna be real clear, this is an entirely hypothetical assumption that LLMs will continue improving).

What I mean is, you can give 100 people the same problem and come out with 100 different cognitive pathways being used to come to a right or wrong solution.

When I was learning to play the trumpet in middle school, and later learned the guitar and drums, I was told I did not play instruments like most musicians. Use that term super fuckin' loosely, I am very bad lol, but the reason was that I do not have an ear for music. I can't listen and tell you something is in tune or out of tune from hearing a song played, but I could tune the instrument just fine if an in-tune note was played for me to match. My instructor explained that I was someone who read music the way others read, but instead of words I read the notes as numbers. Especially when I got older and learned the guitar. I knew how to read music at that point, but to this day I can't learn a new song unless I read the guitar tabs, which are literal numbers on a guitar fretboard instead of an actual scale.

I know I'm making huge leaps here and I'm not really trying to prove any point. I just feel strongly that at our most basic core, a human's understanding of their existence is derived from "I think, therefore I am," which in itself is nothing more than an electrochemical reaction between neurons that either release something or receive something. We are nothing more than a series of PLC commands on a CNC machine. No matter how advanced we are capable of being, we are nothing but a complex series of on and off switches that theoretically could be emulated to operate on an infinite string of commands spelled out by 1's and 0's.

Im sorry, my brother prolly got me way too much weed for Xmas.

this post was submitted on 27 Dec 2024
365 points (95.1% liked)