[-] GamingChairModel@lemmy.world 2 points 3 hours ago* (last edited 3 hours ago)

But if you read the article, then you saw that the author specifically concludes that the answer to the question in the headline is "yes."

This is a dead end and the only way forward is to abandon the current track.

[-] GamingChairModel@lemmy.world 13 points 3 hours ago

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes a proof that producing any such black-box algorithm (one trained on a finite universe of human responses to prompts, capable of taking in any finite input and putting out an output that seems plausibly human-like) is an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
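
To get a feel for the scale, here's a minimal back-of-the-envelope sketch (my own illustration, not the paper's formal argument, which works by reduction): even over short fixed-length bit strings, the space of candidate input-to-output behaviors explodes far too fast for any brute-force search over it.

```python
import math

# Count the distinct functions from n-bit inputs to n-bit outputs:
# (2^n)^(2^n) = 2^(n * 2^n). We print log10 because the raw numbers
# become unprintable almost immediately.
def log10_num_functions(n: int) -> float:
    """log10 of the number of mappings from {0,1}^n to {0,1}^n."""
    return n * (2 ** n) * math.log10(2)

for n in (4, 8, 16):
    print(f"{n}-bit inputs: ~10^{log10_num_functions(n):,.0f} candidate behaviors")
```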

This isn't a proof that AI is impossible, just that any method for developing one will need more than inferential learning from training data.

[-] GamingChairModel@lemmy.world 10 points 10 hours ago* (last edited 8 hours ago)

The paper gives specific numbers for specific contexts, too. It's a helpful illustration of these concepts:

A 3x3 Rubik's cube has about 2^65 possible permutations, so specifying one configuration takes about 65 bits of information. The world record for blind solving (where the solver examines the cube, puts on a blindfold, and solves it blindfolded) involved examining the cube for just 5.5 seconds, so those 65 bits were acquired at roughly 11.8 bits/s.

Another memory contest has people memorizing strings of binary digits for 5 minutes and trying to recall them. The world record is 1467 digits, which is exactly 1467 bits; dividing by 5 minutes (300 seconds) gives a rate of about 4.9 bits/s.
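
As a quick sanity check on both numbers, here's the arithmetic in Python (using the known count of reachable 3x3 cube states, roughly 4.33 × 10^19):

```python
import math

# The 3x3 cube has 43,252,003,274,489,856,000 reachable states (~2^65).
cube_states = 43_252_003_274_489_856_000
cube_bits = math.log2(cube_states)  # ~65.2 bits to pin down one configuration
print(f"cube: {cube_bits:.1f} bits / 5.5 s = {cube_bits / 5.5:.1f} bits/s")

# Binary memorization record: 1467 binary digits studied over 5 minutes.
print(f"binary digits: 1467 bits / 300 s = {1467 / 300:.1f} bits/s")
```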

The paper doesn't talk about how the human brain is more optimized for some tasks than others. I definitely believe that the brain's capacity for visual processing, probably assisted by the preprocessing that happens subconsciously, or by direct perception of visual information, is much more efficient and capable than plain memorization. So I'm still skeptical of the blanket 10 bits/s rate for all types of thinking, but I can see how they got the number.

[-] GamingChairModel@lemmy.world 3 points 18 hours ago

I mean: look at an image for a second. Can you only remember 10 things about it?

The paper actually talks about the winners of memory championships (memorizing random strings of numbers or the precise order of a randomly shuffled 52-card deck). The winners tend to need study time proportional to the information content, working out to roughly 10 bits per second.

It even talks about the guy who was given a 45-minute helicopter ride over Rome and then asked to draw the buildings from memory. He made certain mistakes, showing that he had essentially memorized the positions and architectural styles of about 1000 buildings, each effectively one choice out of about 1000 possibilities (roughly 10 bits per building), for an effective bit rate of about 4 bits/s.
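
The arithmetic behind that estimate, assuming (as the description above implies) that each building is one choice out of roughly 1000 possibilities:

```python
import math

buildings = 1000
bits_per_building = math.log2(1000)  # ~10 bits: one choice out of ~1000
flight_seconds = 45 * 60             # the 45-minute helicopter ride

rate = buildings * bits_per_building / flight_seconds
print(f"{rate:.1f} bits/s")          # ~3.7 bits/s, i.e., about 4
```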

That experience suggests that we may compress our knowledge by taking shortcuts, some of which are inaccurate. It's much easier to memorize details in a picture where everything looks normal than it is to memorize details about a random assortment of shapes and colors.

So even if I can name 10 things about a picture, it might be that those 10 things aren't sufficiently independent from one another to represent 10 bits of entropy.

The problem here is that the bits of information need to be clearly defined, otherwise we are not talking about actually quantifiable information

Here they are talking about very different types of bits

I think everyone agrees on the definition of a bit (a variable with exactly two possible values); the active area of debate is which pieces of information actually matter. If information can be losslessly compressed into a smaller representation of that same information, then the compressed size represents the informational complexity in bits.
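
One way to see that in practice is with a general-purpose lossless compressor (zlib here, purely as an illustration): the compressed size is an upper bound on the information content, and redundant text shrinks far more than random bytes.

```python
import os
import zlib

def compressed_bits(data: bytes) -> int:
    """Compressed size in bits: an upper bound on information content."""
    return len(zlib.compress(data, 9)) * 8

redundant = b"the cat sat on the mat " * 100  # structured, repetitive text
noise = os.urandom(len(redundant))            # incompressible random bytes

print(f"raw size:  {len(redundant) * 8} bits each")
print(f"redundant: {compressed_bits(redundant)} bits")  # shrinks dramatically
print(f"noise:     {compressed_bits(noise)} bits")      # barely shrinks at all
```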

The paper itself describes information that can be recorded but is ultimately discarded as irrelevant: for typing, the forcefulness and duration of each key press don't matter (though that exact same data might matter for analyzing someone playing the piano). So in information-theoretic terms, they've settled on 5 bits per English word, referring to prior papers that have attempted to quantify the informational complexity of English.

The Caltech release says they derived it from "a vast amount of scientific literature" including studies of how people read and write. I think the key is going to be how they derived that number from existing studies.

[-] GamingChairModel@lemmy.world 20 points 1 day ago* (last edited 1 day ago)

Speaking, which is conveying thought, also far exceeds 10 bits per second.

There was a study in 2019 that analyzed 17 different spoken languages and found that languages with lower information density (bits of information per syllable) tend to be spoken faster, such that the information rate is roughly the same across spoken languages: about 39 bits per second.
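
The study's core relationship is just information density times speech rate. These figures are illustrative only (roughly in the spirit of the 2019 study, not exact values from it):

```python
# information rate (bits/s) = density (bits/syllable) * speed (syllables/s)
languages = {
    "Japanese":   (5.0, 7.8),  # low density, spoken fast
    "English":    (7.1, 6.2),  # middling on both
    "Vietnamese": (8.0, 5.3),  # high density, spoken slowly
}

for name, (density, speed) in languages.items():
    print(f"{name:10s} ~{density * speed:.0f} bits/s")
```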

Of course, it could be that the ideas and information in that speech are inefficiently encoded, so that the actual bits of entropy are communicated at less than 39 per second. I'm curious what the underlying Caltech paper says about language processing, since the press release describes deriving the 10 bits/s from studies of how people read and write (as well as studies of people playing video games or solving Rubik's cubes). Are they including the additional overhead of processing that information into new knowledge or insights? Are they defining the entropy of human language with a higher implied compression ratio?

EDIT: I read the preprint, available here. It purports to measure only the externally observable output of human behavior. That's an important limitation: it's not trying to measure the internal richness of unobserved thought.

So it analyzes people performing external tasks, including typing and speech, with an assumed entropy of about 5 bits per English word. A typing speed of 120 wpm therefore translates to 600 bits per minute, or 10 bits per second; a speaking speed of 160 wpm translates to about 13 bits/s.
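
With that assumption, the conversion is one line of arithmetic:

```python
BITS_PER_WORD = 5  # the paper's assumed entropy of an English word

def wpm_to_bits_per_second(wpm: float) -> float:
    """Convert a words-per-minute rate into bits per second."""
    return wpm * BITS_PER_WORD / 60

print(wpm_to_bits_per_second(120))  # typing:   10.0 bits/s
print(wpm_to_bits_per_second(160))  # speaking: ~13.3 bits/s
```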

The calculated bits of information are especially interesting for the other tasks (blindfolded Rubik's cube solving, memory contests).

It also explicitly cites the 39 bits/s study that I linked as being within the general range, because the actual meat of the paper is analyzing how the human brain brings roughly 10^9 bits/s of sensory input down about 8 orders of magnitude. If that reduction turns out to be 7.5 orders of magnitude instead, it doesn't really change the result.
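
A quick check of the orders-of-magnitude claim, using the paper's ~10^9 bits/s figure for raw sensory input:

```python
import math

SENSORY_RATE = 1e9  # bits/s of raw sensory input, per the paper

for out_rate in (10, 39):
    orders = math.log10(SENSORY_RATE / out_rate)
    print(f"{out_rate:>2} bits/s out: ~{orders:.1f} orders of magnitude reduction")
```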

There's also a whole section addressing criticisms of the 10 bits/s number. It argues that claims of photographic memory tend to break down under longer periods of study (e.g., the 45-minute flyover of Rome to recognize and recreate 1000 buildings in 1000 architectural styles translates to about 4 bits/s of memorization). It also argues that the human brain tends to trick itself into perceiving much more complexity than it actually processes, a phenomenon known as "subjective inflation." The implicit argument is that a lot of perception is lossy compression that fills in fake details consistent with the portions actually perceived, so the bitrates observed in other experiments might not properly count the bits of entropy in the less accurate shortcuts the brain takes.

I still think visual processing seems faster than 10 bits/s, but I'm now persuaded that it's within an order of magnitude.

St Nicklaus

The golfer?!?

[-] GamingChairModel@lemmy.world 4 points 2 days ago

Seriously. I'm not at all an art guy, so I feel qualified to observe that The Scream is probably one of the top 5 (and definitely top 10) best-known paintings, somewhere shortly behind Da Vinci's Mona Lisa and Van Gogh's Starry Night.

[-] GamingChairModel@lemmy.world 8 points 4 days ago

Standard operating procedure for getting help on Linux tech support is to say that Linux sucks and that it would be way easier to complete a certain task on Windows.

[-] GamingChairModel@lemmy.world 106 points 7 months ago

I disagree with your premise. The 111th Congress got a lot done. Here's a list of major legislation.

  • The Lilly Ledbetter Fair Pay Act made it easier to recover damages for employment discrimination, and explicitly overruled a Supreme Court case that had made it harder to recover back pay.
  • The ARRA was a huge relief bill responding to the financial crisis, one of the largest spending bills of all time.
  • The Credit CARD Act changed a bunch of consumer protection for credit card borrowers.
  • Dodd-Frank was groundbreaking, probably the biggest financial reform bill since the Great Depression, and created the Consumer Financial Protection Bureau, one of the most important pro-consumer agencies in the federal government today.
  • School lunch reforms (why the right now hates Michelle Obama)
  • Children's Health Insurance Program reauthorization (CHIP or SCHIP): healthcare coverage for children under 18, independent of Obamacare.
  • Obamacare itself, which also included comprehensive student loan reform.

That's a big accomplishment list for 2 years, plus smaller ones like tobacco regulation and various reforms to other agencies and programs.

Plus that doesn't include the regulations and decisions the administrative agencies issued (things like Net Neutrality), even though those generally only last as long as the next president wants to keep them (see, again, Net Neutrality).

[-] GamingChairModel@lemmy.world 108 points 1 year ago

Our heads are just loaded with sensory capabilities that go beyond the two eyes. Our proprioception, balance, and mental mapping allow us to move our heads around and take in visual data from almost any direction at a glance, then internally model that three-dimensional space as the universe around us. Meanwhile, our ears can process direction finding for sounds and synthesize that information with our visual processing.

At the same time, the tactile feedback of the steering wheel and the vibration of the car itself (felt by the body and heard by the ears) give us plenty of sensory information for understanding our speed, acceleration, and the mechanical condition of the car. The squeal of tires, the screech of brakes, and the indicators on our dash are all part of the information we use to understand how we're driving.

Much of it is trained through experience. But the fact is, I can tell when I have a flat tire or when I'm hydroplaning even if I can't see the tires. I can feel inclines or declines that affect my speed or lateral movement even when there aren't easy visual indicators, like at night.
