I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, is full of quotes from the folks building the next generation of AI, saying the same thing.

  • @[email protected]OP
    link
    fedilink
    English
    291 month ago

    Are you asserting that chatbots are so fundamentally different from LLMs that “oh shit we can’t just throw more CPU and data at this anymore” doesn’t apply to roughly the same degree?

      • Greg Clarke · 4 points · 1 month ago

        People who don’t understand those terms are using them interchangeably.

        • @[email protected]
          link
          fedilink
          English
          71 month ago

          The LLM is the technology; a chatbot is an implementation of it. So yes, a chatbot as it’s talked about here is an LLM. Obviously chatbots don’t have to be LLM-based, but those that aren’t are irrelevant here.

          • Greg Clarke · 3 points · 1 month ago

            No, a chat bot as it’s talked about here is not an LLM. The article discusses the limits of LLM training data and infers that chat bots cannot scale as a result. There are many techniques beyond training data that can be used to keep improving chat bots.

            • @[email protected]
              link
              fedilink
              English
              61 month ago

              The chatbot is a front end to an LLM; you’re being needlessly pedantic. What the chatbot serves you is the result of LLM queries.
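
              To illustrate (a minimal sketch, not any vendor’s actual code; llm_complete is a made-up stand-in for whatever completion API sits behind the front end):

                  # Hypothetical chatbot front end: keep a transcript, forward it
                  # to the LLM on every turn, and show the model's reply back.
                  def chat_loop(llm_complete):
                      history = []
                      while True:
                          user_msg = input("you> ")
                          history.append({"role": "user", "content": user_msg})
                          reply = llm_complete(history)  # one LLM query per turn
                          history.append({"role": "assistant", "content": reply})
                          print("bot>", reply)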

              • Greg Clarke · 3 points · 1 month ago

                That may have been true for the early LLM chatbots, but not anymore. ChatGPT, for instance, now writes code to answer logical questions. The o1 models have background token usage because each response is actually the result of multiple background LLM responses.
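
                Roughly the shape of that pattern (a simplified sketch, not OpenAI’s actual pipeline; the llm callable and the `answer` variable convention are my own assumptions):

                    # One user question fans out into several model calls plus a code run.
                    def answer_with_code(llm, question):
                        # Call 1: have the model write a program instead of guessing.
                        program = llm(
                            f"Write Python that sets a variable `answer` to the "
                            f"answer of: {question}"
                        )
                        # Execute the generated code; real systems sandbox this heavily.
                        scope = {}
                        exec(program, scope)
                        # Call 2: turn the raw computed result into a readable reply.
                        return llm(
                            f"The computed result is {scope.get('answer')}. "
                            f"Use it to answer: {question}"
                        )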

    • Greg Clarke · 5 points · 1 month ago

      Yes, of course I’m asserting that. While the performance of LLMs may be plateauing, the cost, context window, and efficiency are still getting much better. When you chat with a modern chat bot, it’s not just sending your input to an LLM like the first public version of ChatGPT did. Nowadays a single chat bot response may require many LLM requests, along with other techniques to mitigate the deficiencies of LLMs. Just ask the free version of ChatGPT a question that requires some calculation and you’ll get a better sense of what’s going on and where the industry is heading.
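
      Here’s roughly what I mean (a toy sketch with an invented llm callable; real products are far more elaborate):

          import ast, operator

          def safe_eval(expr):
              # Tiny arithmetic evaluator so the LLM never does the math itself.
              ops = {ast.Add: operator.add, ast.Sub: operator.sub,
                     ast.Mult: operator.mul, ast.Div: operator.truediv}
              def ev(node):
                  if isinstance(node, ast.BinOp):
                      return ops[type(node.op)](ev(node.left), ev(node.right))
                  if isinstance(node, ast.Constant):
                      return node.value
                  raise ValueError("unsupported expression")
              return ev(ast.parse(expr, mode="eval").body)

          def respond(llm, user_msg):
              # Request 1: classify the message.
              kind = llm(f"Answer CALC or CHAT only. Does this need arithmetic? {user_msg}")
              if kind.strip() == "CALC":
                  # Request 2: extract a bare expression, then compute it in code.
                  expr = llm(f"Extract only the arithmetic expression from: {user_msg}")
                  value = safe_eval(expr)
                  # Request 3: phrase the final answer using the computed value.
                  return llm(f"Reply to '{user_msg}' given that the result is {value}.")
              return llm(user_msg)  # simple questions stay a single query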

      • @[email protected]OP
        link
        fedilink
        English
        91 month ago

        I think you’re agreeing, just in a rude and condescending way.

        There are a lot of ways left to improve, but they’re not as simple as just throwing more data and CPU at the problem anymore.

        • Greg Clarke · 3 points · edited · 1 month ago

          I’m sorry if I’m coming across as condescending; that’s not my intent. It’s never been “as simple as just throwing more data and CPU at the problem”. There were algorithmic challenges to solve for every LLM generation. There are still lots of potential improvements using the existing training data, and even if there weren’t, we’d still see loads of improvements in chat bots because of other techniques.

          Edit: typo