• @[email protected]
    20
    1 month ago

    Or, hear me out, there was NO figuring of any kind, just some magic LLM autocomplete bullshit. How hard is this to understand?

      • @[email protected]
        3
        1 month ago

        I have to disagree with that. To quote the comment I replied to:

        AI figured the “rescued” part was either a mistake or that the person wanted to eat a bird they rescued

        Where’s the “turn of phrase” in this, lol? It could hardly read any more clearly that they assume this “AI” can “figure” stuff out, which is simply false for LLMs. I’m not trying to attack anyone here, but spreading misinformation is not ok.

        • @[email protected]
          4
          edit-2
          1 month ago

          I’ll be the first one to explain to people that AI as we know it is just pattern recognition, so yeah, it was a turn of phrase, thanks for your concern.

          • @[email protected]
            1
            1 month ago

            Ok, great to know. Nuance doesn’t cross the internet well, so your intention wasn’t clear, given all the uninformed hype and grifters around AI. Being somewhat blunt helps get the intended point across better. ;)

        • @[email protected]
          English
          2
          edit-2
          1 month ago

          My point wasn’t that LLMs are capable of reasoning. My point was that the human capacity for reasoning is grossly overrated.

          The core of human reasoning is simple pattern matching: regurgitating what we have previously observed. That’s what LLMs do well.

          LLMs are basically at the toddler stage of development, but with an extraordinary vocabulary.
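          For what it’s worth, the “autocomplete that regurgitates observed patterns” idea the thread keeps circling can be sketched in a few lines. This is a toy bigram model on a made-up corpus, not how a real LLM works (transformers learn weighted token predictions rather than storing literal transitions), but the “continue the text using only patterns you’ve seen before” framing is the same:

          ```python
          import random
          from collections import defaultdict

          # Toy "autocomplete": record which word follows which in a
          # (made-up) corpus, then continue a prompt by sampling only
          # transitions that were actually observed.
          corpus = "the cat sat on the mat and the cat ate the bird".split()

          transitions = defaultdict(list)
          for prev, nxt in zip(corpus, corpus[1:]):
              transitions[prev].append(nxt)

          def complete(word, length=5, seed=0):
              """Continue `word` by sampling observed continuations."""
              random.seed(seed)
              out = [word]
              for _ in range(length):
                  options = transitions.get(out[-1])
                  if not options:
                      break  # never seen this word: no pattern to match
                  out.append(random.choice(options))
              return " ".join(out)

          print(complete("the"))
          ```

          No “figuring” happens anywhere in there; every continuation is a replay of something already in the training data, which is the point being argued above.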