- cross-posted to:
- [email protected]
- [email protected]
- [email protected]
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a pre-print paper, focuses on Google’s ReCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.
I fail more of those checks than these AI bots do. Surreal.
It seems like every other captcha I get has a picture of a moped and asks me to click on motorcycles. When I don’t click on the moped, it says I’m wrong. Pisses me off.
Bots are answering them wrong. Google takes the most submitted answer as truth.
Just be very general, don’t get stuck in the details.
It goes against my human nature to not overanalyze.
leaves plastic banana under your bed
You’ll find it months from now, and you won’t know where it came from or why it’s there.
Greetings fellow human!
01001000 01101111 01110111 00100000 01100100 01101111 00100000 01111001 01101111 01110101 00100000 01100100 01101111 00111111
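(For anyone who doesn’t speak fluent robot: the string above is space-separated 8-bit ASCII. A quick sketch of how to decode it, using Python’s built-in `int` and `chr`:)

```python
# Decode a space-separated string of 8-bit binary values into ASCII text.
bits = ("01001000 01101111 01110111 00100000 01100100 01101111 "
        "00100000 01111001 01101111 01110101 00100000 01100100 "
        "01101111 00111111")

# int(b, 2) parses each 8-bit group as base-2; chr() maps it to a character.
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)  # → How do you do?
```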