Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they're a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.

ETH Zurich PhD student Andreas Plesner and his colleagues' new research, available as a pre-print paper, focuses on Google's ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an "invisible" reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.

Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low "human" confidence rating.
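Once a model can label each tile, the solver's core loop is just filtering a grid. Here's a minimal sketch of that tile-selection step — `classify_tile` is a hypothetical stand-in for a trained image-recognition model (the researchers' actual pipeline is more involved), and the grid below uses precomputed fake labels purely for illustration:

```python
def classify_tile(tile):
    # Hypothetical stand-in: in a real solver this would run a trained
    # image-recognition model on the tile image and return detected labels.
    return tile["labels"]

def select_tiles(grid, target):
    """Return indices of grid tiles whose predicted labels include the target."""
    return [i for i, tile in enumerate(grid) if target in classify_tile(tile)]

# Example 3x3 grid with precomputed (fake) labels:
grid = [
    {"labels": {"traffic light"}}, {"labels": {"car"}},            {"labels": set()},
    {"labels": {"traffic light"}}, {"labels": set()},              {"labels": {"crosswalk"}},
    {"labels": set()},             {"labels": {"bicycle"}},        {"labels": set()},
]

print(select_tiles(grid, "traffic light"))  # → [0, 3]
```

The hard part, of course, is the classifier itself, not this selection logic.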

[-] mosiacmango@lemm.ee 90 points 3 months ago* (last edited 3 months ago)

This is actually a good sign for self driving. Google was using this data as a training set for Waymo. If AI can accurately identify vehicles and traffic markings, it should be able to process interactions with them more easily.

[-] iAmTheTot@sh.itjust.works 71 points 3 months ago

As I understand it, the point of those captchas was never really "bots can't identify these things" (though you're right that it was used for training). They use cursor movement, clicks, and other behaviours while you're solving it to detect whether you are a bot or not.
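One classic behavioral signal of this kind is how direct the cursor path is: humans meander, while a naive bot glides in a straight line. A toy sketch of that one feature (real systems combine many signals; this is illustrative, not how reCAPTCHA actually works):

```python
import math

def path_efficiency(points):
    """Ratio of straight-line distance to total path length.

    Human cursor traces wander, so the ratio drops below 1;
    a naive bot moving point-to-point scores exactly 1.0.
    """
    total = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1))
    if total == 0:
        return 0.0
    return math.dist(points[0], points[-1]) / total

bot_path = [(0, 0), (50, 50), (100, 100)]                        # perfectly straight
human_path = [(0, 0), (30, 12), (55, 61), (80, 90), (100, 100)]  # jittery

print(round(path_efficiency(bot_path), 2))    # → 1.0
print(round(path_efficiency(human_path), 2))  # → 0.96
```

Notably, the new research also had to simulate human-like mouse movement to pass, so this kind of check alone isn't enough anymore.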

[-] Grimy@lemmy.world 43 points 3 months ago

The image choosing was always just to train their own bots

[-] Takumidesh@lemmy.world 11 points 3 months ago* (last edited 3 months ago)

It's a combination.

Most captchas' goal generally isn't 100% prevention; it's to put a workload in front, which makes spamming the site cost money. A bankrolled attempt could just as easily outsource the captchas to real humans.
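The economics are easy to sketch. The rate below is purely hypothetical (human-solving services advertise prices per thousand solves, but I'm not quoting a real one); the point is just that the captcha converts spam volume into a dollar cost:

```python
# Back-of-envelope cost of outsourcing captcha solves to humans.
# RATE_PER_1000 is a made-up illustrative figure, not a real quote.
RATE_PER_1000 = 1.00  # hypothetical USD per 1000 solved captchas

def spam_cost(attempts, rate_per_1000=RATE_PER_1000):
    """Dollar cost to push `attempts` spam actions through a captcha wall."""
    return attempts * rate_per_1000 / 1000

print(f"${spam_cost(100_000):.2f} for 100k attempts")  # → $100.00 for 100k attempts
```

Trivial for a funded operation, but enough friction to kill low-effort drive-by spam.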

[-] Anivia@feddit.org 1 points 3 months ago

> a bankrolled attempt could just as easily outsource the captchas to real humans.

Exactly. I've been using 2captcha for that for over a decade now

[-] Mushroomm@sh.itjust.works 9 points 3 months ago

Since I started getting good at osu! and that fishing mini game in FarmRPG, I've been failing more captchas. Knowing this, I wonder if they're related.

[-] nieceandtows@lemmy.world 5 points 3 months ago

Is that why I'm asked to do this over and over, 14 million times, when I'm on a VPN?

[-] iAmTheTot@sh.itjust.works 4 points 3 months ago

It is probably part of it, yeah. But to be clear I'm not a captcha expert or anything, just a layman.

[-] grue@lemmy.world 32 points 3 months ago

The annoying thing is that they held us hostage for our free labor, but the results are proprietary for Google's benefit only.

That training data ought to be forced to be made freely available to the public, since we're the ones who actually created it.

[-] crusa187@lemmy.ml 3 points 3 months ago

Afaik this is precisely what the captcha data was intended for - training AI models. Originally it leveraged classic machine learning; LLMs are a slightly different paradigm, but same purpose and results here.

[-] cypherpunks@lemmy.ml 0 points 3 months ago

i hope you're joking. please, tell me you're joking?

[-] mosiacmango@lemm.ee 13 points 3 months ago* (last edited 3 months ago)

It's never been confirmed by Google, so I may be wrong. It still tracks that a data-harvesting company with an AI self-driving car project would use free human labor to identify road hazards.

this post was submitted on 27 Sep 2024
787 points (98.4% liked)

Technology
