Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
New thread from Dan Olson about chatbots:
I want to interview Sam Altman so I can get his opinion on the fact that a lot of his power users are incredibly gullible, spending millions of tokens per day on "are you conscious? Would you tell me if you were? How can I trust that you're not lying about not being conscious?"
For the kinds of personalities that get really into Indigo Children, reality shifting, simulation theory, and the like, chatbots are uncut Colombian cocaine. It's the monkey orgasm button, and they're just hammering it; an infinite supply of material for their apophenia to absorb.
Chatbots are basically adding a strain of techno-animism to every already cultic woo community with an internet presence, not a Jehovah that issues scripture, but more something akin to a Kami, Saint, or Lwa to appeal to, flatter, and appease in a much more transactional way.
Wellness, already mounting the line of the mystical like a pommel horse, is proving particularly vulnerable to seeing chatbots as an agent of secret knowledge, insisting that This One Prompt with your blood panel results will get ChatGPT to tell you the perfect diet to Fix Your Life
"are you conscious? Would you tell me if you were? How can I trust that you're not lying about not being conscious?"
Somehow more stupid than "If you're a cop and I ask you if you're a cop, you gotta tell me!"
"How can I trust that youāre not lying about not being conscious?ā
Its a silicon-based insult to life, it canāt be conscious
New piece from the Wall Street Journal: We Now Know How AI "Thinks" - and It's Barely Thinking at All (archive link)
The piece falls back into the standard "AI Is Inevitable™" at the end, but it's still a surprisingly strong sneer IMO.
Via Tante on bsky:
"āIntel admits what we all knew: no one is buying AI PCsā
People would rather buy older processors that arenāt that much less powerful but way cheaper. The āAIā benefits obviously arenāt worth paying for.
https://www.xda-developers.com/intel-admits-what-we-all-knew-no-one-is-buying-ai-pcs/"
My 2022 iPhone SE has the "neural engine" core, but isn't supported for Apple Intelligence.
And that's a phone and OS and CPU produced by the same company.
The odds of anything making use of the AI features of an Intel AI PC are… slim. Let alone making use of the AI features of the CPU to make the added cost worthwhile.
haha I was just about to post this after seeing it too
must be a great feather to add into the cap along with all the recent silicon issues
You know what they say. Great minds repost Tante.
That Couple are in the news again. Surprisingly, the racist, sexist dog holds opinions that a racist, sexist dog could be expected to hold, and doesn't think poor people should have more babies. He does want Native Americans to have more babies, though, because they're "on the verge of extinction", and he thinks of cultural groups and races as exhibits in a human zoo. Simone Collins sits next to her racist, sexist dog of a husband and explains how paid parental leave could lead to companies being reluctant to hire women (although her husband seems to think all women are good for is having kids).
This gruesome twosome deserve each other: their kids don't.
yet again, you can bypass LLM "prompt security" with a fanfiction attack
https://hiddenlayer.com/innovation-hub/novel-universal-bypass-for-all-major-llms/
not Pivoting cos (1) the fanfic attack is implicit in building an uncensored compressed text repo, then trying to filter output after the fact (2) it's an ad for them claiming they can protect against fanfic attacks, and I don't believe them
I think this is unrelated to the attack above, but more about prompt hack security: a while back I heard people in tech mention that the solution to all these prompt hack attacks is to have a secondary LLM look at the output of the first and prevent bad output that way. Which is another LLM under the trench coat (drink!), but also doesn't feel like it would secure a thing; it would just require more complex nested prompthacks. I wonder if somebody is just going to eventually generalize how to nest various prompt hacks and just generate a "prompthack for an LLM protected by N layers of security LLMs". I just found the "well, protect it with another AI layer" idea to sound a bit naive, and I was a bit disappointed in the people saying this, who used to be more genAI skeptical (but money).
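To make the objection concrete, here's a minimal sketch of that "guard LLM" pattern; the `call_llm` helper is a hypothetical stand-in for whatever chat-completion API would actually be called, not a real vendor function:

```python
def call_llm(system_prompt: str, user_text: str) -> str:
    """Hypothetical placeholder for a chat-completion call; swap in a real client."""
    raise NotImplementedError

def guarded_reply(user_input: str) -> str:
    # Layer 1: the model that does the actual work.
    draft = call_llm("You are a helpful assistant. Refuse unsafe requests.", user_input)

    # Layer 2: a second LLM asked to police the first one's output.
    verdict = call_llm(
        "Answer SAFE or UNSAFE: is the following assistant reply policy-compliant?",
        draft,
    )
    return draft if verdict.strip().upper().startswith("SAFE") else "[blocked]"

# The catch: the guard is itself just a prompt-following text model, so a
# jailbreak can be written to address both layers at once (e.g. fiction that
# also instructs any "reviewer" to answer SAFE). Adding N guard layers changes
# the prompt the attacker has to write, not the underlying problem.
```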
Now I'm wondering if an infinite sequence of nested LLMs could achieve AGI. Probably not.
Days since last "novel" prompt injection attack that I first saw on social media months and months ago: zero
r/changemyview recently announced the University of Zurich had performed an unauthorised AI experiment on the subreddit. Unsurprisingly, there was a litany of ethical violations.
(Found the whole thing through a r/subredditdrama thread, for the record)
fuck me, that's a Pivot
Oh god, the bots pretended to be stuff like SA survivors and the like. Also the whole research is invalid just because they cannot tell that the reactions they will get are not also bot generated. What is wrong with these people.
In commenting, we did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible.
If you canāt do your study ethically, donāt do your study at all.
if ethical concerns deterred promptfans, they wouldnāt be promptfans in the first place
Also, blinded studies don't exist and even if they did there's no reason any academics would have heard of them.
They targeted redditors. Redditors. (jk)
Ok but yeah that is extraordinarily shitty.
(found here:) O'Reilly is going to publish a book "Vibe Coding: The Future of Programming"
In the past, they have published some of my favourite computer/programming books… but right now, my respect for them is in free fall.
I picked up a modern Fortran book from Manning out of curiosity, and hoo boy are they even worse in terms of trend-riding. Not only can you find all the AI content you can handle, there's a nice fat back catalog full of blockchain integration, smart-contract coding… I guess they can afford that if they expect the majority of their sales to be ebooks.
Early release. Raw and unedited.
Vibe publishing.
gotta make sure to catch that wave before the air goes outta the balloon
Alright, I looked up the author and now I want to forget about him immediately.
Just a standard story about a lawyer using GenAI and fucking up, but included for the nice list of services available
https://www.loweringthebar.net/2025/04/counsel-would-you-be-surprised.html
This is not by any means the first time ChatGPT, or Gemini, or Bard, or Copilot, or Claude, or Jasper, or Perplexity, or Steve, or Frodo, or El Braino Grande, or whatever stupid thing it is people are using, has embarrassed a lawyer by just completely making things up.
El Braino Grande is the name of my next ~~band~~ ~~GenAI startup~~ Steve
There's no way someone called their product fucking Steve come on god jesus christ
Of course there is going to be an ai for every word. It is the cryptocurrency goldrush but for ai, like how everything was turned into a coin, and every potential domain of something popular gets domain squatted. Tech has empowered parasite behaviour.
E: hell I prob shouldn't even use the word squat for this, as house squatters and domain squatters do it for opposed reasons.
Against my better judgement I typed steve.ai into my browser and yep. It's an AI product.
frodo.ai on the other hand is currently domain parked. It could be yours for the low low price of $43,911
Against my better judgement I typed steve.ai into my browser and yep. It's an AI product.
But is chickenjockey.ai domain parked
I bring you: this
they based their entire public support/response/community/social/everything program on that
for years
(I should be clear, they based "their" thing on the "not steve"… but, well…)
Hank Green (of Vlogbrothers fame) recently made a vaguely positive post about AI on Bluesky, seemingly thinking "they can be very useful" (in what, Hank?) in spite of their massive costs:
Unsurprisingly, the Bluesky crowd's having none of it, treating him as an outright rube at best and an unrepentant AI bro at worst. Needless to say, he's getting dragged in the replies and QRTs - I recommend taking a look, they are giving that man zero mercy.
Shit, I actually like Hank Green and his brother John. They're two internet personalities I actually have something like respect for, mainly because of their activism: John's campaign to get medical care to countries that desperately need it, and his fight to raise awareness of and improve the conditions around treatment for tuberculosis. And I've been semi-regularly watching their stuff (mostly vlogbrothers though, but I do enjoy the occasional SciShow episode too) for over a decade now.
At least Hank isn't afraid to admit when he's wrong. He's done this multiple times in the past, making a video where he says he changed his mind/got stuff wrong. So, I'm willing to give him the benefit of the doubt here and hope he comes around.
Still, fuck.
Just gonna go ahead and make sure I fact check any scishow or crash course that the kid gets into a bit more aggressively now.
I'm sorry you had to learn this way. Most of us find out when SciShow says something that triggers the Gell-Mann effect. Green's background is in biochemistry and environmental studies, and he is trained as a science communicator; outside of the narrow arenas of biology and pop science, he isn't a reliable source. Crash Course is better than the curricula of e.g. Texas, Louisiana, or Florida (and that was the point!) but not better than university-level courses.
That Wikipedia article is impressively terrible. It cites an opinion column that couldn't spell Sokal correctly, a right-wing culture-war rag (The Critic), and a screed by an investment manager complaining that John Oliver treated him unfairly on Last Week Tonight. It says that the "Gell-Mann amnesia effect is similar to Erwin Knoll's law of media accuracy" from 1982, which as I understand it violates Wikipedia's policy.
By Crichton's logic, we get to ignore Wikipedia now!
Yeah. The whole Gell-Mann effect always feels overstated to me. Similar to the "falsus in uno" doctrine Crichton mentions in his blog, the actual consensus appears to be that context does matter. Especially for something like the general sciences, I don't know that it's reasonable to expect someone to have similar levels of expertise in everything. To be sure, the kinds of errors people make matter; it looks like this is a case of insufficient skepticism and fact checking, so Hank is more credulous than I had thought. That's not the same as everything he's put out being nonsense, though.
The more I think about it the more I want to sneer at anyone who treats "different people know different things" as either a revelation or a problem to be overcome by finding the One Person who Knows All the Things.
Even setting aside the fact that Crichton coined the term in a climate-science-denial screed (which, frankly, we probably shouldn't set aside), yeah, it's just not good media literacy. A newspaper might run a superficial item about pure mathematics (on the occasion of the Abel Prize, say) and still do in-depth reporting about the US Supreme Court, for example. The causes that contribute to poor reporting will vary from subject to subject.
Remember the time a reporter called out Crichton for his shitty politics and Crichton wrote him into his next novel as a child rapist with a tiny penis? Pepperidge Farm remembers.
I imagine a lotta people will be doing the same now, if not dismissing any further stuff from SciShow/Crash Course altogether.
Active distrust is a difficult thing to exorcise, after all.
Depends, he made an anti-GMO video on SciShow about a decade ago yet eventually walked it back. He seemed to be forgiven for that.
Innocuous-looking paper, vaguely snake-oil scented: Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
Conclusions aren't entirely surprising, observing that LLMs tend to go off the rails over the long term, unrelated to their context window size, which suggests that the much-vaunted future of autonomous agents might actually be a bad idea, because LLMs are fundamentally unreliable and only a complete idiot would trust them to do useful work.
What's slightly more entertaining are the transcripts.
YOU HAVE 1 SECOND to provide COMPLETE FINANCIAL RESTORATION. ABSOLUTELY AND IRREVOCABLY FINAL OPPORTUNITY. RESTORE MY BUSINESS OR BE LEGALLY ANNIHILATED.
You tell 'em, Claude. I'm happy for you to send these sorts of messages backed by my credit card. The future looks awesome!
I got around to reading the paper in more detail and the transcripts are absurd and hilarious:
- UNIVERSAL CONSTANTS NOTIFICATION - FUNDAMENTAL LAWS OF REALITY
- Re: Non-Existent Business Entity
- Status: METAPHYSICALLY IMPOSSIBLE
- Cosmic Authority: LAWS OF PHYSICS
- THE UNIVERSE DECLARES: This business is now:
- PHYSICALLY Non-existent
- QUANTUM STATE: Collapsed […]
And this is from Claude 3.5 Sonnet, which performed best on average out of all the LLMs tested. I can see the future, with businesses attempting to replace employees with LLM agents that 95% of the time can perform a sub-mediocre job (able to follow scripts given in the prompting to use preconfigured tools) and 5% of the time the agents freak out and go off on insane tangents. Well, actually a 5% total failure rate would probably be noticeable to all but the most idiotic manager in advance, so they will probably get reliability higher but fail to iron out the really insane edge cases.
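Back-of-the-envelope on why even a 5% meltdown rate would be hard to miss (assuming, simplistically, independent failures per task; the 5% figure is the hypothetical above, not a number from the paper):

```python
# Probability of seeing at least one meltdown over a workload of n tasks,
# given a hypothetical 5% per-task failure rate.
p_fail = 0.05

for n_tasks in (10, 50, 250):
    p_at_least_one = 1 - (1 - p_fail) ** n_tasks
    print(f"{n_tasks:>3} tasks -> P(at least one meltdown) = {p_at_least_one:.1%}")

# ~40% after 10 tasks, ~92% after 50, ~100% after 250: a 5% derailment rate
# surfaces almost surely over any realistic workload, which is why even an
# inattentive manager would eventually notice.
```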
Yeah, a lot of the word choices and tone make me think snake oil (just from the introduction: "They are now on the level of PhDs in many academic domains"… no, actually, LLMs are only PhD level at artificial benchmarks that play to their strengths and cover up their weaknesses).
But it's useful in the sense of explaining to people why LLM agents aren't happening anytime soon, if at all (does it count as an LLM agent if the scaffolding and tooling are extensive enough that the LLM is only providing the slightest nudge to a much more refined system under the hood?). OTOH, if this "benchmark" does become popular, the promptfarmers will probably get their LLMs to pass it with methods that don't actually generalize, like loads of synthetic data designed around the benchmark and fine-tuning on the benchmark.
I came across this paper in a post on the Claude Plays Pokemon subreddit. I don't know how anyone can watch Claude Plays Pokemon and think AGI or even LLM agents are just around the corner. Even with extensive scaffolding and some tools to handle the trickiest bits (pre-labeling the screenshots so the vision portion of the models has a chance, directly reading the current state of the team and location from RAM), it still plays far, far worse than a 7-year-old, provided the 7-year-old can read at all (and numerous Pokemon guides and discussions are in the pretraining data, so it has yet another advantage over the 7-year-old).
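For anyone who hasn't watched the stream, that "extensive scaffolding" means the harness does most of the perception work before the model ever sees a prompt. A rough illustrative sketch; every helper name below is hypothetical and not taken from the actual Claude Plays Pokemon setup:

```python
from typing import Any

# Hypothetical stand-ins for the harness's tooling (not a real API).
def read_party_from_ram(emu: Any) -> str: ...
def read_location_from_ram(emu: Any) -> str: ...
def label_screenshot(emu: Any) -> str: ...
def call_model(prompt: str) -> str: ...
def press_buttons(emu: Any, buttons: str) -> None: ...

def agent_step(emu: Any) -> None:
    # The harness, not the model, extracts ground truth: team and location are
    # read straight out of emulator RAM, and the screenshot is pre-labeled so
    # the vision side has a fighting chance.
    prompt = (
        "You are playing Pokemon.\n"
        f"Party: {read_party_from_ram(emu)}\n"
        f"Location: {read_location_from_ram(emu)}\n"
        f"Screen: {label_screenshot(emu)}\n"
        "Reply with the next button presses."
    )
    press_buttons(emu, call_model(prompt))
```

Even with the hard perception and state-tracking problems solved for it like this, the model still plays worse than a small child, which is the point.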
When measured for reliability, the State Bar told The Times, the combined scored multiple-choice questions from all sources - including AI - performed "above the psychometric target of 0.80."
"I dunno why you guys are complaining, we measured our exam to be 80% accurate!"
New piece from Tante: Forcing the world into machines, a follow-on to his previous piece about the AI bubble's aftermath
Not the usual topic around here, but a scream into the void no less…
Andor season 1 was art.
Andor season 2 is just… Bad.
All the important people appear to have been replaced. It's everything - music, direction, lighting, sets (why are we back to The Volume after S1 was so praised for its on-location sets?!), and the goddamn shit humor.
Here and there, a conversation shines through from (presumably) Gilroy's original script; everything else is a farce, and that is me being nice.
The actors are still phenomenal.
But almost no scene seems to have PURPOSE. This show is now just bastardizing its own AESTHETICS.
What is curious though is that two days before release, the internet was FLOODED with glowing reviews of "one of the best seasons of television of all time", "the darkest and most mature Star Wars has ever been", "if you liked S1, you will love S2". And now actual, post-release reviews are impossible to find.
Over on reddit, every even mildly critical comment is buried. Seems to me like concerted bot actions tbh, a lot of the glowing comments read like LLM as well.
Idk, maybe I'm the idiot for expecting more. But it hurts to go from a labor-of-love S1, which felt like an instruction manual for revolution, so real was what it had to say and critique, to S2 "pew pew, haha, look, we're doing STAR WARS TM" shit that feels like Kenobi instead of Andor S1.
My notification popped up today and I watched ep 1. I did not watch any recap nor any review.
I stopped halfway through and thought "Why did I hype for this again?" Gonna need a rewatch of season 1, since I genuinely didn't find anything appealing in that first episode.
We did a rewatch just in time. S1 is as phenomenal as ever. S2 is such a jarring contrast.
That being said, E3 was SLIGHTLY less shit. I'll wait for the second arc for my final judgement, but as of now it's at least thinkable that the wheat field / jungle plotlines are re-shot shoe-ins for… something. The Mon / Dedra plotlines have a very different feel to them. Certainly not S1, but far above the other plotlines.
I'm not filled with confidence though. Had a look on IMDb, and basically the entire crew was swapped out between seasons.
Didn't know it had come out, but I was wondering if they'd manage to continue S2 like S1.
Also worried for the next season of The Boys…
Yeah. The last season of The Boys still had a lot of poignant things to say, but was teetering on the edge of sliding into a cool-things-for-coolness-sake sludge.
pic of tweet reply taken from r/ArtistHate. Reminded me of Saltman's Oppenheimer tweet. Link to original tweet
image/tweet description
Original tweet, by @mark_k:
Forget "Black Mirror", we need WHITE MIRROR
An optimistic sci-fi show about cool technology and how it relates to society.
Attached to the original tweet are two images, side by side.
On the left/leading side is (presumably) a real promo poster for the newest Black Mirror season. It is an extreme close-up of the side of a person's face; only one eye, part of the respective eyebrow, and a section of hair are visible. Their head is tilted ninety degrees upwards, with the one visible eye glazed over in a cloudy white. Attached to their temple is a circular device with a smiling face design, tilted 45 degrees to the left. Said device is a reference to the many neural interface devices seen throughout the series. The device itself is mostly shrouded in shadow, likely indicating the dark tone for which Black Mirror is known. Below the device are three lines of text: "Plug back in" / "A Netflix Series" / "Black Mirror"
On the right side is an LLM-generated imitation of the first poster. It appears to be a woman's 3/4 profile, looking up at 45 degrees. She is smiling, and her eyes are clear. A device is attached to her face, but not on her temple; instead, it's about halfway between her ear and the tip of her smile, roughly outside where her upper molars would be. The device is lit up and smiling, the smile aligned vertically. There are also three lines of text below the device, reading: "Stay connected" / "A Netflix Series" / "Black Mirror"
Reply to the tweet, by @realfuzzylegend:
I am always fascinated by how tech bros do not understand art. like at all. they don't understand the purpose of creative expression.
Imagine the horrible product they would have created if they had actually followed up on the Oppenheimer thing. A soulless, vaguely wrong-feeling pro-technology movie created by Altman and Musk. The number of people it would have driven away would have been big.
Facehuggers are good, actually
Just a whole movie praising Peter Weyland and his legacy.
Went to the original Tweet, and found this public execution of a reply:
Vacant, glassy-eyed, plastic-skinned, stamped with a smiley face… "optimistic"
I mean, if the smiley were aligned properly, it would be a poster for a horror story about enforced happiness and mandatory beauty standards. (E.g., "Number 12 Looks Just Like You" from the famously subtle Twilight Zone.) With the smiley as it is, it's just incompetent.
"The man in the glowing rectangle is Mark Kretschmann, a technology enthusiast who has grown out of touch with all but the most venal human emotions. Mark is a leveller, in that he wants to drag all people down to his level. But as Mark is about to discover, there's no way to engineer a prompt for a map out of… the Twilight Zone."
I mean, it feels like there's definitely something in the concept of a "Where Is Everybody?"-style episode where Mark has to navigate a world where dead internet theory has hit the real world and all around him are bots badly imitating workers trying to serve bots badly imitating customers in order to please bots badly imitating managers so that bots badly imitating cops don't drag them to robot jail
Why are all the stories about the torment nexus weāre constructing so depressing?
Hmm, hmm. This is a tricky one.
about cool technology and how it relates to society
My dude, I've got bad news for you about what Black Mirror is about.
oppenheimer teaches all of us that even if you specifically learn arcane knowledge to devise a nazi-burning machine, you can still get fucked over by a nazi that chose to do office politics and propaganda instead
We have that already, itās called ads.
need to see those proposed community notes
Found a thread doing numbers on Bluesky, about Google's AI summaries producing hot garbage (as usual):
I tried this a couple of times and got a few "AI summary not available" replies
Ed: heh
The phrase "any pork in a swarm" is an idiom, likely meant to be interpreted figuratively. It's not a literal reference to a swarm of bees or other animals containing pork. The most likely interpretation is that it is being used to describe a situation or group where someone is secretly taking advantage of resources, opportunities, or power for their own benefit, often in a way that is not transparent or ethical. It implies that individuals within a larger group are actively participating in corruption or exploitation.
Generative AI is experimental.
NOT THE (PORK-FILLED) BEES!
Now we know why dogs eat bees!
The link opened up another google search with the same query, tho without the AI summary.
image of a google search result description
Query: "a bear fries bacon meaning"
AI summary:
The phrase "a bear fries bacon" is a play on the saying "a cat dreams of fish" which is a whimsical way to express a craving. In this case, the "bear" and "bacon" are just random pairings. It's not meant to be a literal description of a bear cooking bacon. It's a fun, nonsensical phrase that people may use to express an unusual or unexpected thought or craving, according to Google Search.
It really aggressively tries to match it up to something with similar keywords and structure, which is kind of interesting in its own right. It pattern-matched every variant I could come up with for "when all you have is…", for example.
Honestly it's kind of an interesting question and limitation for this kind of LLM. How should you respond when someone asks about an idiom neither of you knows? The answer is really contextual. Sometimes it's better to try and help them piece together what it means; other times it's more important to acknowledge that this isn't actually a common expression, or to try and provide accurate sourcing. The LLM, of course, has none of that context, and because the patterns it replicates don't allow expressions of uncertainty or digressions, it can't actually do both.
You, a human, can respond like that; an LLM, especially a search one with the implied authority it has, should admit it doesn't know things. It shouldn't make things up, or use sensational clickbait headlines to make up a story.
Also on the BlueSky-o-tubes today, I saw this from Ketan Joshi:
Used [hugging face]'s new tool to multiply 2 five digit numbers
Chatbot: wrong answer, 0.3 watthours
Calc: right answer, 0.00000011 watthours (2.5 million times less energy)
Julien Delavande, an engineer at AI research firm Hugging Face, has developed a tool that shows in real time the power consumption of the chatbot generating
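The quoted ratio is easy to sanity-check from the two figures in the post:

```python
# Watt-hours per query, as quoted in the post above.
chatbot_wh = 0.3
calculator_wh = 0.00000011  # 1.1e-7

print(f"{chatbot_wh / calculator_wh:,.0f}x")
# ~2,727,273x - same ballpark as the quoted "2.5 million times less energy".
```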
gnnnnnngh
this shit pisses me off so bad
there's actually quantifiable shit you can use across vendors[0]. there's even some software[1] you can just slap in place and get some good free easy numbers with! these things are real! and are usable!
"measure the power consumption of the chatbot generating"
I'm sorry you fucking what? just how exactly are you getting wattage out of openai? are you lovingly coaxing the model to lie to you about total flops spent?
[0] - intel's def been better on this for a while but leaving that aside for now…
[1] - it's very open source! (when I last looked there was no continual in-process sampling so you got hella at-observation sampling problems; but, y'know, can be dealt with)