They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically, it formalizes a proof that the following problem is NP-hard: given a finite training set of human responses to prompts, produce a black-box algorithm that, for any finite input, outputs something plausibly human-like. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't been invented yet.
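Roughly, the problem they formalize (my paraphrase of the paper's setup, not its exact wording or notation) looks something like this:

```latex
% Sketch of the "AI-by-Learning" problem, paraphrasing van Rooij et al. (2024).
% The symbols and phrasing below are my simplification, not the paper's exact definitions.

\textbf{Given:} a finite sample $D = \{(s_1, b_1), \dots, (s_n, b_n)\}$ of
situation-behaviour pairs drawn from a distribution $\mathcal{D}$ of human
responses, and a tolerance $\epsilon \in (0, 1)$.

\textbf{Find:} an algorithm $A$ such that
\[
  \Pr_{(s, b) \sim \mathcal{D}}\bigl[\, A(s) \text{ is human-like behaviour for } s \,\bigr]
  \;\ge\; 1 - \epsilon .
\]

% The claimed result: any learner that solves this from the finite sample alone is
% solving an NP-hard problem, so it cannot run in polynomial time in general
% (unless P = NP), no matter how much data or compute it is given.
```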
This isn't a proof that AI is impossible, just that developing one will take more than inferential learning from training data.