this post was submitted on 29 Jan 2024
262 points (100.0% liked)
Technology
We don't even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat's brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).
I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks' enlightening writing, like Elephants Don't Play Chess, or the airplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).
Where did I exaggerate anything?
We know more than you might realize. For instance, consciousness correlates with the ∆ between separate brain areas; when they all go in sync, consciousness is lost. We see similar behavior with NNs.
It's nice that you mentioned quantum effects, since NN models all require a certain degree of randomness ("temperature") to return their best results.
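For context on what "temperature" means here: it is a scaling factor applied to a model's output logits before sampling. A minimal sketch of temperature-based sampling (the function name and example logits are my own, not from the thread):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from raw logits scaled by temperature.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches a greedy argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # roulette-wheel selection over the softmax probabilities
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At a very low temperature the highest logit wins essentially every time; at a high temperature the choice becomes closer to uniform, which is the "degree of randomness" referred to above.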
There lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulations by solving only for the relevant parts, then increasing parameter counts to the point where they perform better than the original thing.
It's kind of a brute force approach, but the results speak for themselves.
I'm afraid the "state of the art" in 2020 was not the same as the "state of the art" in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a change as radical as going from air flight to spaceflight.
The human brain is the most complex object in the known universe, and we are only scratching the surface of it right now. Discussions of consciousness and sentience are more the domain of philosophy than anything else. The true innovations in AI will come from neurologists and biologists, not from computer scientists or mathematicians.
Quantum effects are not randomness. Emulating quantum effects is possible; they can be understood empirically, but doing so is very slow. If intelligence relies on quantum effects, then we will need to build whole new types of quantum computers to build AI.
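One concrete reason classical emulation of quantum systems is slow: a full state vector for n qubits holds 2^n complex amplitudes, so memory (and work) grows exponentially. A back-of-the-envelope sketch (my own illustration, assuming ~16 bytes per complex amplitude, i.e. two float64s):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a full n-qubit quantum state vector.

    Each of the 2**n complex amplitudes takes ~16 bytes
    (two 64-bit floats for the real and imaginary parts).
    """
    return (2 ** n_qubits) * bytes_per_amplitude
```

Ten qubits fit in 16 KiB, but fifty qubits already need around 18 petabytes, which is why brute-force classical emulation hits a wall so quickly.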
Well, there we agree. In that the results are very limited I suppose that they do speak for themselves 😛
This is what I mean by exaggeration. I'm an AI proponent; I want to see the field succeed. But this is nothing like the leap forward some people seem to think it is. It's a neat trick with some interesting, if limited, applications. It is not an AI. This is no different from when Minsky believed that by the end of the 70s we would have "a machine with the general intelligence of an average human being" — exactly the sort of over-promising that led to the AI field gaining a terrible reputation and all the funding drying up.