Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • scruiser@awful.systems · 13 hours ago

    So us sneerclubbers correctly dismissed AI 2027 as bad scifi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people who want more detail, titotal did a really detailed breakdown of why their model is bad even given their assumptions, and even taking "line goes up" as the goal: https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models

    tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons going to infinity at some near-future date because they set it up weird. The authors also make a lot of other questionable choices and have a lot of other red flags in their modeling. And the curve fit of task time horizons shown on their fancy graphical interactive webpage is unrelated to the model they actually used, and omits some earlier data points that make it look worse.

    • aio@awful.systems · 7 hours ago

      If the growth is superexponential, we make it so that each successive doubling takes 10% less time.

      (From AI 2027, as quoted by titotal.)

      This is an incredibly silly sentence and is certainly enough to determine the output of the entire model on its own. It necessarily implies that the predicted value becomes infinite in a finite amount of time, disregarding almost all other features of how it is calculated.

      To elaborate, suppose we take as our "base model" any function f which has the property that lim_{t → ∞} f(t) = ∞. Now I define the concept of a "super-f" function by saying that each subsequent block of "virtual time" as seen by f takes 10% less "real time" than the last. This will give us a function like g(t) = f(-log(1 - t)), obtained by inverting the exponential rate of convergence of a geometric series. Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of "virtual time" into a finite amount of "real time".
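      A quick numeric sketch of why this blows up (illustrative numbers only, not the AI 2027 authors' actual parameters): if each successive doubling of the task-time horizon takes 10% less wall-clock time than the last, the time to complete infinitely many doublings is a convergent geometric series, so the horizon hits infinity at a finite date no matter what it started at.

```python
# Sketch: "each successive doubling takes 10% less time" means doubling k
# takes first_doubling * 0.9**k units of time. Summing over all k gives a
# geometric series that converges, i.e. infinitely many doublings (an
# infinite horizon) fit inside a finite window.

def blowup_time(first_doubling: float, shrink: float = 0.9) -> float:
    """Total time for infinitely many doublings:
    sum of first_doubling * shrink**k over k = 0, 1, 2, ...
    which converges to first_doubling / (1 - shrink)."""
    return first_doubling / (1.0 - shrink)

# If the first doubling takes 1 year, every doubling there will ever be
# fits inside the next 10 years.
total = blowup_time(1.0)

# Numerically: after 200 doublings the horizon is 2**200 times its starting
# value, yet the elapsed time still has not reached the asymptote.
elapsed = sum(1.0 * 0.9**k for k in range(200))
assert elapsed < total
print(total, elapsed)
```

      So the vertical asymptote is baked into the "10% less time per doubling" rule itself, independent of everything else in the model.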

    • swlabr@awful.systems · 11 hours ago

      titotal??? I heard they were dead! (jk. why did they stop hanging here, I forget…)

      • scruiser@awful.systems · 8 hours ago

        We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments and, you know, being an EA themselves (albeit one of the saner ones)…

      • swlabr@awful.systems · 8 hours ago

        My AllTrails told me bears keep eating his promptfondlers so I asked how many promptfondlers he has and he said he just goes to AllTrails and gets a new promptfondler afterwards so I said it sounds like he’s just feeding promptfondlers to bears and then his parks service started crying.

    • YourNetworkIsHaunted@awful.systems · 9 hours ago

      Amazing. Can’t wait for the doomers to claim that somehow this has enough intent to classify as murder. I wonder if they’ll end up on one of the weirdly large number of ā€œbad things that happen to people in the national parksā€ podcasts.

  • saucerwizard@awful.systems · 22 hours ago

    OT: boss makes a dollar, I make a dime, that's why I listen to audiobooks on company time.

    (Holy shit, I should have got AirPods a long time ago. But seriously, the job's going great.)

  • swlabr@awful.systems · 3 days ago

    Doing some reading about the SAG-AFTRA video game voice acting strike. Anyone have details about ā€œEthovoxā€, the AI company that SAG has apparently partnered with?

    • irelephant [he/him]@lemmy.dbzer0.com · 3 days ago

      Irrelevant. Please stay on topic and refrain from personal attacks.

      I think if someone writes a long rant about how Germany wasn't at fault for WW2 in the CoC for one of their projects, it's kinda relevant.

    • BurgersMcSlopshot@awful.systems · 3 days ago

      "we set out to make the torment nexus, but all we accomplished is making the stupid faucet and now we can't turn it off and it's flooding the house." - Every AI company, probably.

      • YourNetworkIsHaunted@awful.systems · 3 days ago

        Alright OpenAI, listen up. I’ve got a whole 250GB hard drive from 2007 full of the Star Wars/Transformers crossover stories I wrote at the time. I promise you it’s AI-free and won’t be available to train competing models. Bidding starts at seven billion dollars. I’ll wait while you call the VCs.

        • Soyweiser@awful.systems · 3 days ago

          Do you want shadowrunners to break into your house to steal your discs? Because this is how you get shadowrunners.

  • Soyweiser@awful.systems · 4 days ago

    The first confirmed openly Dark Enlightenment terrorist is now a fact. (The incident is linked here directly to NRx, but DE is a bit broader than just NRx, and his other references seem to be more garden-variety neo-Nazi fare (not that this kind of categorizing really matters).)

  • YourNetworkIsHaunted@awful.systems · 3 days ago

    Easy Money author (and former TV star) Ben McKenzie's new cryptoskeptic documentary is struggling to find a distributor. Admittedly, the linked article is more a review of the film than a look at the distributor angle. Still, it looks like it tells the true story in a way that will hopefully connect with people, and it would be a real shame if it didn't find an audience.

  • rook@awful.systems · 5 days ago

    I might be the only person here who thinks the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:

    https://quantumai.google/

    Quantum fucking ai? Motherfucker,

    • You don’t have ai, you have a chatbot
    • You don’t have a quantum computer, you have a tech demo for a single chip
    • Even if you had both of those things, you wouldn't have "quantum ai"
    • If you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

    Best case scenario here is that this is how one department of Google get money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says ā€œaiā€ to them.

    • V0ldek@awful.systems · 4 days ago

      Quantum computing reality vs quantum computing in pop culture and marketing follows precisely the same line as quantum physics reality vs popular quantum physics.

      • Reality: Mostly boring multiplication of matrices, big engineering challenges, extremely interesting stuff if you're a nerd who loves the frontiers of human knowledge
      • Cranks: Literally magic, Ant-Man: Quantumania was a documentary, give us all money
    • BlueMonday1984@awful.systems (OP) · 4 days ago

      Best case scenario here is that this is how one department of Google get money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says ā€œaiā€ to them.

      That's my hope too - every dollar spent on the technological dead-end of quantum is a dollar not spent on the planet-killing Torment Nexus of AI.