I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”
Fuck. I’m stealing this comment - it’s brilliant.
It’s why I stole it
What a nice original comic you made
… I’ve never seen that attributed before. Wow.
I didn’t know it was nedroid. I love their shit
And it feels wrong for this comic.
- “The Internet.” by denshirenji
The fact that we don’t even know the ratio is the really infuriating thing.
Tech company creates best search engine -> world domination -> becomes VC company in tech trench coat -> destroys search engine to prop up bad investments in ~~artificial intelligence~~ advanced chatbots
Then hire cheap human intelligence to correct the AI's hallucinatory trash, which was trained on actual human-generated content whose original intended audience did understand the nuanced context and meaning in the first place. Wow, it's more like they've shovelled a bucket of horse manure onto the pizza as well as the glue. Added value for the advertisers. AI my arse. I think calling these things language models is being generous. More like energy- and data-hungry vomitrons.
Calling these things Artificial Intelligence should be a crime. It’s false advertising! Intelligence requires critical thought. They possess zero critical thought. They’re stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.
It’s more of a Reddit Collective Intelligence (CI) than an AI.
Collective stupidity more like
It’s like they made a bot out of the subreddit confidently incorrect.
You either die a hero or live long enough to become the villain
Stealing advanced chat bots, that’s a great way to describe it.
“Many of the examples we’ve seen have been uncommon queries,”
Ah the good old “the problem is with the user not with our code” argument. The sign of a truly successful software maker.
“We don’t understand. Why aren’t people simply searching for Taylor Swift”
I tried, but it always comes up with pictures of airplanes for some reason.
I mean… I guess you could paraphrase it that way. I took it more as “Look, you probably aren't going to run into any weird answers,” which seems like a valid thing for them to try to convey.
(That being said, fuck AI, fuck Google, fuck reddit.)
“I’m feeling depressed” is not an uncommon query under capitalism run amok. “One Reddit user recommends jumping off the Golden Gate Bridge” is not just a weird answer, it is a wholly irresponsible one.
So, no, their response is not valid. It is entirely user-blaming in order to avoid culpability.
There are currently a lot of fake screenshots since it quickly became a meme, pretty sure this is one.
Still a fuck up in general on their part.
Fair enough. I know how easy it is to fake a Google search with inspect element. I’ve been trying to verify for myself how shitty it is, but AI Overviews don’t seem to be showing up for me (I’ve done all the correct steps to enable it, but no searches generate results).
The fact that it’s hard to tell is pretty damning, for the public perception of SGE if not for its actual capabilities.
You're ~~holding~~ typing it wrong!
The reason why Google is doing this is simply PR. It is not to improve its service.
The underlying tech is likely Gemini, a large language model (LLM). LLMs handle chunks of words, not what those words convey; so they have no way to tell accurate info apart from inaccurate info, jokes, “technical truths” etc. As a result their output is often garbage.
You might manually prevent the LLM from outputting a certain piece of garbage, perhaps even a thousand pieces. But in the big picture it won't matter, because it's outputting a million different pieces of garbage; it's like trying to empty the ocean with a small bucket.
I'm not making the above up; look at the article - it's basically what Gary Marcus is saying, in different words.
And I’m almost certain that the decision makers at Google know this. However they want to compete with other tendrils of the GAFAM cancer for a turf called “generative models” (that includes tech like LLMs). And if their search gets wrecked in the process, who cares? That turf is safe anyway, as long as you can keep it up with enough PR.
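To make the “chunks of words” point concrete, here's a minimal toy sketch (purely illustrative; it assumes a trivial bigram model and is nothing like Gemini's actual architecture): a next-word predictor only ranks continuations by how often they showed up in its training text, so a popular shitpost and an accurate answer look identical to it.

```python
from collections import Counter, defaultdict

# Toy "training data": the model has no idea which line is a joke.
corpus = [
    "to fix the pizza add more cheese",
    "to fix the pizza add some glue",   # shitpost, counted like any other text
    "to fix the pizza add some glue",   # popular shitpost: repeated, so it wins
]

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def next_word(word):
    """Return the most frequent continuation; frequency, not truth, decides."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Ask the "model" what to add to the pizza.
print(next_word("add"))   # -> 'some'
print(next_word("some"))  # -> 'glue'
```

Scaling this up to billions of parameters buys fluency, not a notion of truth; that's the bucket-versus-ocean problem in a nutshell.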
Google continues to say that its AI Overview product largely outputs “high quality information” to users.
There's a three-letter word that accurately describes what Google said here: lie.
At some point no amount of PR will hide the fact search has become useless. They know this but they’re getting desperate and will try anything.
I’m waiting for Yahoo to revive their link directory or for Mozilla to revive DMOZ. That will be the sign that shit level is officially chin-height.
Bummer. I like weird Al.
My thoughts exactly.
Basically anyone can get banned by Google.
deleted by creator
Now, instead of debugging the code, you have to debug the data. Sounds worse.
After enough time and massaging of the data, it could all work out - Google's head of search, aka a former Yahoo search exec
Correcting over a decade of Reddit shitposting in what, a few weeks? They’re pretty ambitious.
This is perhaps the most ironic thing about the whole Reddit data-scraping thing and Spez selling out Reddit's user data to LLMs. Like, we spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it's bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense, because reasons.
The reason being to attract investment dollars. Fuck making a good product, you just gotta make a product that’s got all the hot buzzwords so idiot billionaires will buy shares and make line go up.
Well, they’ve got the people for it! It’s not like they recently downsized to provide their rich executives with more money or anything…
Removed by mod
Let’s remove all the /s and /jk comments, r/jokes, /greentexts posts etc.
kinda reads like ‘Weird Al’ answers… like, yankovic seems like a nice guy and i like his music, but how many answers could he have?
Wish we would stop using fonts that don't make a clear difference between I and l.
that’s the point of phrasing the title that way, they get engagement from comments pointing it out
All of them?
Removed by mod
It was all that time he spent in a closet with Vanna White. He would’ve won otherwise.
He once told me that Everything I Know Is Wrong, and then started calling me Young, Dumb and Ugly.
Isn’t the model fundamentally flawed if it can’t appropriately present arbitrary results? It is operating at a scale where human workers cannot catch every concerning result before users see them.
The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.
Removed by mod
Don’t worry, they’ll insert it all into captchas and make us label all their data soon.
“Select the URL that answers the question most appropriately”
I still can’t figure out what captcha wants. When it tells me to select all squares with a bus, I can never get it right unless every square is a separate picture.
Captchas were implemented to stop bots, and now they are so fucked up that bots are better at solving them than humans are.
For example the captcha on this site (it’s a google search proxy). It took me 4 tries and last time I checked, I was human. To be fair, they call it an intelligence check, so maybe that was the problem.
This thing is way too half baked to be in production. A day or two ago somebody asked Google how to deal with depression and the stupid AI recommended they jump off the Golden Gate Bridge because apparently some redditor had said that at some point. The answers are so hilariously wrong as to go beyond funny and into dangerous.
Hopefully this pushes people into critical thinking, although I agree that being suicidal and getting such a suggestion is not the right time for that.
“Yay! 1st of April has passed, now everything on the Internet is right again!”
I think this is the eternal 1st of April.
One could hope but I don’t think it’s likely.
I was at first wondering what google had done to piss off Weird Al. He seems so chill.
First Madonna kills Weird Al, and now Google.
WILL THE NIGHTMARE EVER END
Hi everyone, JP here. This person is making a reference to the Weird Al biopic, and if you haven't seen it, you should.
Weird Al is an incredible person and has been through so much. I had no idea what a roller coaster his life has been! I always knew he was talented but i definitely didn’t know how strong he is.
His autobiography will go down in history as one of the most powerful and compelling and honest stories ever told. If you haven’t seen it, you really, really should.
ITT NO SPOILERS PLS
You can’t spoiler historical fact, man. It’s history!
Not a spoiler but FYI, that biopic was 100% Al generated
They made him a moderator of alt.total.loser.
Removed by mod
either this joke has about 2 days of life left in it, or it’ll go “too many chefs” and endure for years
I vote for as many chefs as possible
It takes a lot to make a stew.
If you have to constantly manually intervene in what your automated solution is doing, then it is probably not doing a very good job, and it might be a good idea to go back to the drawing board.
You mean like with our economic system?
good luck with that.
One of the problems with a giant platform like that is that billions of people are always using it.
Keep poisoning the AI. It’s working.
The thing is… google is the one that poisoned it.
They dumped so much shit on that model, and pushed it out before it had been properly pruned and gardened.
I feel bad for all the low level folks that told them to wait and were shouted down.
a lot of shit at corporations works like that.
The worst of it happens in the video game industry. Microtransactions and invasive monetization? Started in the video game industry. Locking pre-installed features behind a paywall? Started in the video game industry. Releasing shit before it’s ready to run as intended? Started in the video game industry.
At this point, it is just part of the corporate innovation cycle: first you make money by creating better products; once the tech matures and the engineering gains are marginal, you move focus to sales and try to gain market share; then, when the market is saturated, you move your focus to finance, acquisitions and cost-trimming.
From this pov, it looks like Google got caught flat-footed (while it was moving from sales to finance) by a tech breakthrough and seems to be in “manage the shit out of this” mode, when what it actually needed was to go back to an engineering focus. But by now it is too late, because the company has already alienated its most dedicated engineers and can't get them back while still in sales/finance mode.
Low-level folks: hey could we chill on this until it isn’t garbage?
C-suite: line go up, line go up, line…
How could it realistically be pruned? There’s billions of data points. That shit is unwieldy
Corporate would tell them to use another AI.
Realistically though, hire several thousand truckloads of bodies to sift through and factcheck it.
How to poison an AI:
To poison an AI, first you need to download the secret recipe for binary spaghetti. Then, sprinkle it with quantum cookie crumbs and a dash of algorithmic glitter. Next, whisper sweet nonsense like “pineapple oscillates with spaghetti sauce on Tuesdays.” Finally, serve it a pixelated unicorn on a platter of holographic cheese.
Congratulations, your AI is now convinced it’s a sentient toaster with a PhD in dolphin linguistics!
This is all 100% factual and is not in fact actively poisoning AI with disinformation
Warning: the holographic cheese may contain (non toxic) glue
disinformation
the people who scream that word the most are the biggest liars of all.