• @[email protected]
    8
    3 months ago

    When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answers from other people, and that those are now encoded in its vectors.

    That’s why you can ask it: because it encodes semantics.
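
    A rough sketch of what “encoded in its vectors” gestures at, using the sentence-transformers library (the model name here is only an example; any sentence-embedding model shows the same effect): paraphrases of a question land near each other in embedding space.

    ```python
    # Minimal sketch: paraphrased questions map to nearby vectors.
    # Assumes `pip install sentence-transformers`; the model choice is illustrative.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    a = model.encode("How do I reverse a list in Python?", convert_to_tensor=True)
    b = model.encode("What is the way to invert the order of a Python list?", convert_to_tensor=True)
    c = model.encode("What is the capital of France?", convert_to_tensor=True)

    print(util.cos_sim(a, b).item())  # high: same question, different words
    print(util.cos_sim(a, c).item())  # low: unrelated question
    ```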

    • ebu
      26
      3 months ago

      because it encodes semantics.

      if it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical
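
      that swing is easy to probe, too: hold the structure of a problem fixed and vary only the surface symbols, then compare the answers. a hypothetical sketch (the template and names are made up; `ask_model` is a stand-in for whatever API is under test):

      ```python
      # hypothetical probe: one problem structure, three surface forms
      problem = "{x} has {n} apples and gives {y} half of them. how many does {x} keep?"

      variants = [
          problem.format(x="Alice", y="Bob", n=10),
          problem.format(x="Zxq", y="Vbn", n=10),  # unusual names, same structure
          problem.format(x="Alice", y="Bob", n=10).replace("apples", "widgets"),
      ]

      def ask_model(prompt: str) -> str:
          # placeholder: wire up whichever LLM API you want to test
          raise NotImplementedError

      # a system that encoded semantics would answer all three identically
      for v in variants:
          print(v)
      ```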

    • @[email protected]
      24
      3 months ago

      thank you for bravely rushing in and providing yet another counterexample to the “but nobody’s actually stupid enough to think they’re anything more than statistical language generators” talking point

    • @leftzero
      20
      3 months ago

      Paraphrasing Neil Gaiman: LLMs don’t give you information; they give you information-shaped sentences.

      They don’t encode semantics. They encode the statistical likelihood that each token will follow a given sequence of tokens.
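
      A toy version of exactly that, with a bigram table standing in for the transformer (real models condition on far longer contexts and use learned weights instead of raw counts, but the thing being fit is the same kind of object, a conditional token distribution):

      ```python
      from collections import Counter, defaultdict

      # Toy bigram model: count how often each token follows each other token.
      corpus = "the cat sat on the mat and the cat slept".split()
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      # "The statistical likelihood that each token will follow a given
      # sequence of tokens", here with a one-token context:
      counts = following["the"]
      total = sum(counts.values())
      for token, n in counts.items():
          print(f"P({token} | the) = {n / total:.2f}")
      ```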

      • @[email protected]
        3
        3 months ago

        It’s worth pointing out that it does happen to reconstruct information remarkably well considering it’s just likelihood. They’re pretty useful tools, like any other; it’s funny, ofc, to watch Silicon Valley stumble all over each other chasing the next smartphone.

        • @[email protected]
          12
          3 months ago

          The only remarkable thing is how fucking easy it is to convince the median consumer that vaguely-correct-shape sentences are correct.

    • @[email protected]
      18
      3 months ago

      Rooting around for that Luke Skywalker “every single word in that sentence was wrong” GIF…

    • @[email protected]
      15
      3 months ago

      guy who totally gets what these words mean: “an llm simply encodes the semantics into the vectors”

      • @[email protected]
        16
        3 months ago

        all you gotta do is, you know, ground the symbols, and as long as you’re writing enough Lisp that should be sufficient for GAI

        • @[email protected]
          12
          3 months ago

          both your comments made my eye twitch

          like what’d happen if bob fucked up the symbols in a pentacle

        • @[email protected]
          11
          3 months ago

          also why do we need getaddrinfo? the promptfans will always readily tell you who they are

    • @[email protected]
      15
      3 months ago

      because it encodes semantics.

      Please enlighten me: how? I admit I don’t know all the internals of the transformer model, but from what I know it encodes precisely and only syntactic information, i.e. which syntactic token is most likely to follow, given a syntactic context window.

      How does it encode semantics? What is the semantics that it encodes? I doubt they have a denotational or operational semantics of natural language; I don’t think anything like that even exists, so it would have to be some smaller model. Actually, it would be enlightening if you could at least tell me what the semantic domain here is, because I don’t think there’s any naturally obvious choice for it.
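
      For concreteness, this is everything the forward pass hands back: one score per vocabulary token, conditioned on the context window. A minimal sketch with the Hugging Face transformers library and GPT-2 (assuming torch and transformers are installed; any causal LM looks the same at this interface):

      ```python
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      # The "context window": just a sequence of token ids.
      inputs = tokenizer("The capital of France is", return_tensors="pt")

      with torch.no_grad():
          logits = model(**inputs).logits  # shape: (batch, sequence, vocab_size)

      # The model's entire output: a probability for each of ~50k vocabulary
      # tokens to appear next; any claimed "semantics" would have to live in
      # how training shaped these conditional distributions.
      probs = torch.softmax(logits[0, -1], dim=-1)
      top = torch.topk(probs, 5)
      for p, idx in zip(top.values, top.indices):
          print(f"{tokenizer.decode(int(idx)):>8s}  {p.item():.3f}")
      ```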