Any tool can be a hammer if you use it wrong enough.

A good hammer is designed to be a hammer and only used like a hammer.

If you have a fancy new hammer, everything looks like a nail.

    • @[email protected]
      link
      fedilink
      75 months ago

      LLMs are part of AI, which is a fairly large research domain of math/CS, including machine learning among other things. God, even linear regression can be classified as AI: that term is reeeally broad
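      A quick illustration of that point: ordinary least-squares linear regression "learns" a predictor from data in a handful of lines, which is exactly why it technically fits under the broad AI umbrella. (Toy data made up for illustration.)

```python
# Toy illustration: linear regression is "learning from data",
# which is why it technically falls under the broad AI umbrella.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 4.2, 5.9, 8.1]  # roughly y = 2x, with a little noise

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Closed-form least-squares fit: slope = cov(x, y) / var(x)
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

def predict(xi):
    return slope * xi + intercept  # the "learned" model
```

      Whether that counts as "intelligent" is the whole debate, but it is a system whose behaviour is fitted from data rather than hand-coded.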

        • @[email protected]
          link
          fedilink
          25 months ago

          AI is a research domain that contains statistical methods and knowledge modeling, among other things. That’s not new, but the fact that it’s marketed like this everywhere is new

          AI is really not a specific term. You may mean general AI (AGI), and I suspect that’s what you’re referring to when you say AI?

        • @[email protected]
          link
          fedilink
          15 months ago

          it’s always been this broad, and that’s a good thing. if you want to talk about AGI then say AGI.

    • @[email protected]
      link
      fedilink
      25 months ago

      I know that they’re “autocorrect on steroids” and what that means, I don’t see how that makes it any less ai. I’m not saying that LLMs have that magic sauce that is needed to be considered truly “intelligent”, I’m saying that ai doesn’t need any magic sauce to be ai. the code controlling bats in Minecraft is called ai, and no one complained about that.

    • Zos_Kia
      5 months ago

      Very useful in some contexts, but it doesn’t “learn” the way a neural network can. When you’re feeding corrections into, say, ChatGPT, you’re making small, temporary, cached adjustments to its data model, but you’re not actually teaching it anything, because by its nature, it can’t learn.

      But that’s true of all (most?) neural networks? Are you saying neural networks are not AI and that they can’t learn?

      NNs don’t retrain while they are being used: they are trained once, and after that they can’t learn new behaviour or correct existing behaviour. If you want to make one better, you need to run it a bunch of times, collect and annotate good/bad runs, then re-train it from scratch (or fine-tune it) with this new data. Just like LLMs, because LLMs are neural networks.