I guess the question here really boils down to: Can (less-than-perfect) capitalism solve this problem somehow (by allowing better solutions to prevail), or is it bound to fail due to the now-insurmountable market power of existing players?
- HedyL@awful.systems to TechTakes@awful.systems • Google AI Overview is just affiliate marketing spam now • 5 points • 1 day ago
- HedyL@awful.systems to TechTakes@awful.systems • Google AI Overview is just affiliate marketing spam now • English • 13 points • 1 day ago
Somehow makes me think of the times before modern food safety regulations, when adulteration with substances such as formaldehyde or arsenic was apparently common: https://pmc.ncbi.nlm.nih.gov/articles/PMC7323515/
We may be in a similar age regarding information now. Of course, this has always been a problem with the internet, but I would argue that AI (and the way oligopolistic companies are shoving it into everything) is making it infinitely worse.
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025 • English • 8 points • 2 days ago
Or like the radium craze of the early 20th century (even if radium may have a lot more legitimate use cases than current-day LLMs).
- HedyL@awful.systems to TechTakes@awful.systems • 16% of employees pretend to use AI at work to please their boss • English • 43 points • 2 days ago
New reality at work: Pretending to use AI while having to clean up after all the people who actually do.
If I’m not mistaken, even in pre-LLM days, Google had some kind of automated summaries which were sometimes wrong. Those bothered me less. The AI hallucinations appear to be on a whole new level of wrong (or is this just my personal belief - are there any statistics about this?).
> Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.
I can’t get over that. An oligopolistic company imposes a source on its users that is very likely either hallucinating or plagiarizing or both, and most people seem to eat it up (out of convenience or naiveté, I assume).
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025 • English • 6 points • 3 days ago
Maybe we humans possess a somewhat hardwired tendency to “bond” with a counterpart that acts like this. In the past, this was not a huge problem because only other humans were capable of interacting in this way, but that is now changing. However, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).
- HedyL@awful.systems to SneerClub@awful.systems • We did it. 2 people and many boats problem is a classic now. [content warning: botshit] • English • 12 points • 3 days ago
Somehow, the “smug” tone really rubs me the wrong way. It is of great comedic value here, but it always reminds me of that one person who is consistently wrong yet is somehow the boss’s or the teacher’s favorite.
- HedyL@awful.systems to SneerClub@awful.systems • We did it. 2 people and many boats problem is a classic now. [content warning: botshit] • English • 9 points • 4 days ago
Officially, you can’t. Unofficially, just have one of the ferrymen tow a boat.
Or swim back. However, the bot itself appears to have ruled out all of these options.
- HedyL@awful.systems to SneerClub@awful.systems • We did it. 2 people and many boats problem is a classic now. [content warning: botshit] • English • 12 points • 4 days ago
> At first glance it seems impossible once N≥2, because as soon as you bring a boat across to the right bank, one of you must pilot a boat back—leaving a boat behind on the wrong side.
In this sentence, the bot appears to sort of “get” it (not entirely, though; the wording is weird). However, from there, it definitely goes downhill…
- HedyL@awful.systems to TechTakes@awful.systems • Huge SaaStr-Replit vibe coding disaster! — if it ever happened • English • 8 points • 4 days ago
Turns out that being a proficient liar might be the key to success in this attention economy (see also: chatbots).
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025 • English • 11 points • 6 days ago
Of course, there are also the usual comments saying artists shouldn’t complain about getting replaced by AI etc. Reminds me why I am not on Twitter anymore.
It also strikes me that in this case, the artist didn’t even expect to get paid. Apparently, the AI bros even crave the unpaid “exposure” real artists get, without wanting to put in any of the work and while (in most cases) generating results that are no better than spam.
It is a sickening display of narcissism IMHO.
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 • English • 6 points • 9 days ago
> With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don’t appear to be getting priced in or planned for in discussions of the glorious AI technocapital future
This is a very important point, I believe. I find it particularly ironic that the “traditional” internet was fairly efficient precisely because many people were shown more or less the same content, which also made a certain degree of quality assurance easier. With chatbots, all of this is being thrown overboard and extreme inefficiencies are being created, and the AI hypemongers appear to be largely ignoring that.
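To put very rough numbers on that efficiency gap, here is a minimal back-of-envelope sketch. The per-unit costs in it are made-up illustrative assumptions, not measurements; the only point is that one model amortizes a fixed artifact across all readers, while the other pays for fresh inference on every single request.

```python
# Back-of-envelope sketch: serving one cached page to many readers vs. generating
# a fresh LLM answer for every reader. All numbers are made-up illustrative
# assumptions, not measurements.

CACHED_PAGE_COST = 0.00001   # assumed cost of one cached page view, in USD
LLM_ANSWER_COST = 0.01       # assumed cost of one LLM-generated answer, in USD


def total_cost_cached(readers: int) -> float:
    """Everyone gets the same pre-rendered page; cost scales only with cheap delivery."""
    return readers * CACHED_PAGE_COST


def total_cost_llm(readers: int) -> float:
    """Every reader triggers fresh inference; cost scales with expensive generation."""
    return readers * LLM_ANSWER_COST


if __name__ == "__main__":
    readers = 1_000_000
    cached = total_cost_cached(readers)
    llm = total_cost_llm(readers)
    print(f"cached pages: ${cached:,.2f}")
    print(f"LLM answers:  ${llm:,.2f}")
    print(f"ratio: {llm / cached:,.0f}x more expensive per reader (under these assumptions)")
```

Under these assumed figures, the per-reader cost differs by three orders of magnitude, which is exactly the kind of gap that does not seem to be priced into the plans quoted above.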
It’s quite noteworthy how often these shots start out somewhat okay at the first prompt, but then deteriorate markedly over the following seconds.
As a layperson, I would try to explain this as follows: At the beginning, the AI is, to some extent, free to “pick” what the characters and their surroundings will look like (while staying within the constraints of the prompt, of course, even if this doesn’t always work out either).
Therefore, the AI can basically “fill in the blanks” from its training data and create something that may look somewhat impressive at first glance.
However, to continue the shot, the AI is now stuck with these characters and surroundings while having to follow a plot that may not be well represented in its training data, especially not for the characters and surroundings it happened to pick. This is why we frequently see inconsistencies, deviations from the prompt, or just plain nonsense.
If I am right about this, it might be very difficult to improve these video generators, I guess (because an unrealistic amount of additional training data would be required).
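To illustrate what I mean by errors accumulating, here is a toy random-walk sketch. It is emphatically not a real video model, and the per-frame error size is an arbitrary assumption; it only shows how a process that conditions each frame on its own previous output can drift further and further from the intended scene.

```python
# Toy illustration only (not a real video model): why small per-frame errors can
# compound when each frame is generated from the previously generated frame.

import random

random.seed(0)

FRAMES = 120            # e.g. roughly 5 seconds at 24 fps
ERROR_PER_FRAME = 0.02  # assumed small random error added at every step


def drift_over_shot(frames: int, error: float) -> list:
    """Track how far the generated 'scene state' wanders from the intended one (0.0).

    The generated state is rebuilt frame by frame from its own previous, already
    slightly wrong, output, so errors are inherited rather than corrected.
    """
    generated = 0.0
    deviations = []
    for _ in range(frames):
        generated += random.uniform(-error, error)  # this frame inherits all earlier drift
        deviations.append(abs(generated))
    return deviations


if __name__ == "__main__":
    dev = drift_over_shot(FRAMES, ERROR_PER_FRAME)
    for t in (0, 23, 47, 71, 95, 119):
        print(f"frame {t + 1:3d}: deviation from the intended scene ≈ {dev[t]:.3f}")
```

In this toy, the deviation grows roughly with the square root of the number of frames, which would at least be consistent with shots that start out fine and slowly slide into nonsense.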
Edit: According to other people, it may also be related to memory/hardware etc. In that case, my guesses above may not apply. Or maybe it is a mixture of both.
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 • English • 11 points • 11 days ago
I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).
We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a level of intrinsic motivation/altruism/idealism/vanity), it has remained viable to provide information online without charging users for every bit of it. The same appears to be true for the tools used to discover said information (search engines).
Compare this to the estimated true cost of running AI chatbots, which (according to the numbers I’m familiar with) may be tens or even hundreds of dollars a month for each user. For that price, users get unreliable slop, and this slop can only be produced from the (mostly free) information that is already available online, while the whole setup disincentivizes creators from producing more of it (because search-engine-driven traffic is dying down).
I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.
Somehow reminds me of the astronomical cost of each bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).
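For what it is worth, the kind of per-user arithmetic I have in mind looks like the sketch below. Every figure in it is an assumption picked purely for illustration (the cost figure just sits somewhere in the “tens to hundreds of dollars a month” range mentioned above), not a verified number.

```python
# Per-user arithmetic sketch. Every figure is an assumption picked for illustration;
# none of these are verified numbers.

ASSUMED_LLM_COST_PER_USER_MONTH = 50.0    # assumed true cost of serving one active chatbot user
ASSUMED_AD_REVENUE_PER_USER_MONTH = 2.0   # assumed ad revenue from that same user on the free web
ASSUMED_SUBSCRIPTION_PRICE = 20.0         # assumed price of a typical paid chatbot tier


def monthly_gap(cost_per_user: float, revenue_per_user: float) -> float:
    """Shortfall per user per month under the assumed figures."""
    return cost_per_user - revenue_per_user


if __name__ == "__main__":
    ad_gap = monthly_gap(ASSUMED_LLM_COST_PER_USER_MONTH, ASSUMED_AD_REVENUE_PER_USER_MONTH)
    sub_gap = monthly_gap(ASSUMED_LLM_COST_PER_USER_MONTH, ASSUMED_SUBSCRIPTION_PRICE)
    print(f"ad-supported shortfall:  ${ad_gap:.2f} per user per month")
    print(f"subscription shortfall:  ${sub_gap:.2f} per user per month")
```

Even under these fairly generous assumptions, the shortfall per user per month stays large, and that is the part that rarely shows up when only the big aggregate numbers are reported.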
- HedyL@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 20th July 2025 • English • 13 points • 12 days ago
> Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?
This really gets on my nerves too. They probably came up with the idea that they could increase time spent on their platforms and thus revenue by providing more content in their users’ native languages (especially non-English). Simply forcing it on everyone, without giving their users a choice, was probably the cheapest way to implement it. Even if this annoys most of their user base, it makes their investors happy, I guess, at least over the short term. If this bubble has shown us anything, it is that investors hardly care whether a feature is desirable from the users’ point of view or not.
- HedyL@awful.systems to TechTakes@awful.systems • AI coders think they’re 20% faster — but they’re actually 19% slower • English • 6 points • 15 days ago
I’m not sure how much this observation can be generalized, but I’ve also wondered how much the people who overestimate the usefulness of AI image generators underestimate how easy it already is to license decent artwork from real creatives, with just a few clicks and at low cost. For example, if I’m looking for an illustration for a PowerPoint presentation, I’ll usually find something suitable fairly quickly in Canva’s library. That’s why I don’t understand why so many people believe they absolutely need AI-generated slop for this. Of course, Canva is participating in the AI hype now as well; I guess they have to keep their investors happy.
- HedyL@awful.systems to TechTakes@awful.systems • AI coders think they’re 20% faster — but they’re actually 19% slower • English • 10 points • 15 days ago
> What fascinates me is why coders who use LLMs think they’re more productive.
As @[email protected] wrote, LLM usage has been compared to gambling addiction: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
I wonder to what extent this might explain this phenomenon. Many gambling addicts aren’t fully aware of their losses, either, I guess.
- HedyL@awful.systems to TechTakes@awful.systems • AI coders think they’re 20% faster — but they’re actually 19% slower • English • 27 points • 16 days ago
… and just a few paragraphs further down:
> The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”
I wouldn’t call that “burying information”.
It’s also very difficult to get search results in English when this isn’t set as your first language in Google, even if your entire search term is in English. Even “Advanced Search” doesn’t seem to work reliably here, and of course, it always brings up the AI overview first, even if you clicked advanced search from the “Web” tab.
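A partial workaround, assuming Google’s long-standing hl and lr URL parameters (and the widely shared udm=14 “Web” filter) still behave as commonly described, is to build the search URL yourself. None of this is guaranteed to suppress the AI overview, and the example query below is arbitrary.

```python
# Build a Google search URL that asks for an English interface (hl=en) and
# English-language results (lr=lang_en), plus the widely shared udm=14 "Web"
# filter. Whether this reliably bypasses the AI overview is an assumption on
# my part, not something Google documents; the query is just an arbitrary example.

from urllib.parse import urlencode


def english_web_search_url(query: str) -> str:
    params = {
        "q": query,
        "hl": "en",       # interface language
        "lr": "lang_en",  # restrict results to English-language pages
        "udm": "14",      # plain "Web" results view
    }
    return "https://www.google.com/search?" + urlencode(params)


if __name__ == "__main__":
    print(english_web_search_url("food safety history formaldehyde adulteration"))
```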