• Zos_Kia
    link
    English
    9
    4 months ago

    You cannot in all seriousness use an LLM as a research tool. That is explicitly not what it is useful for. An LLM’s latent space is like a person’s memory: sure, there is some accurate data in there, but also a lot of “misremembered” or “misinterpreted” facts, and some bullshit.

    Think of it like a reasoning engine. Provide it with some data you have researched yourself and ask it to aggregate or summarize it, and you’ll get great results. But asking it to “do the research for you” is plain stupid. If you’re going to query a probabilistic machine for accurate information, you’d be better off rolling dice.
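    The “provide it the data yourself” pattern the comment describes can be sketched in a few lines. This is a hypothetical helper (the function name, prompt wording, and example sources are all made up for illustration): instead of asking the model to recall facts from its latent space, you paste your own researched sources into the prompt and ask only for aggregation over them.

    ```python
    # Hypothetical helper illustrating grounding: restrict the model to
    # user-supplied sources rather than letting it "remember" facts.

    def grounded_prompt(question: str, sources: list[str]) -> str:
        """Build a prompt that asks the model to summarize ONLY the given sources."""
        numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        return (
            "Using ONLY the sources below, answer the question. "
            "If the sources do not contain the answer, say so.\n\n"
            f"Sources:\n{numbered}\n\n"
            f"Question: {question}"
        )

    # Example with invented sources; the built prompt would then be sent
    # to whatever LLM API you use.
    prompt = grounded_prompt(
        "When was the project first released?",
        [
            "Changelog: v0.1 released 2019-03-02.",
            "README: the project started in early 2019.",
        ],
    )
    print(prompt)
    ```

    The design point is the contrast the comment draws: the model is used as an aggregator over data you already trust, and the prompt gives it an explicit out (“say so”) instead of inviting it to fill gaps from memory.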

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      4 months ago

      Exactly my point - except that the word “reasoning” is far too generous, since it implies there is some way for it to guarantee that its logic is sound, rather than merely producing output that resembles legible text.

      • Zos_Kia
        link
        English
        4
        4 months ago

        I don’t understand. Have you ever worked an office job? Most humans have no way to guarantee their logic is sound yet they are the ones who do all of the reasoning on earth. Why would you have higher standards for a machine?

          • Zos_Kia
            link
            English
            3
            4 months ago

            Sounds like a recipe for disappointment, tbh. But on the other hand, it sounds like you trust techno-marketing a bit too much.