• @[email protected]
    link
    fedilink
    English
    37 months ago

    OpenAI themselves have made it very clear that scaling up their models has diminishing returns and that they’re incapable of moving forward without entirely new models being invented by humans. A short while ago they proclaimed that they could possibly make an AGI if they got several trillion USD in investment.

    • @[email protected]
      link
      fedilink
      English
      37 months ago

      5 years ago I don’t think most people thought ChatGPT was possible, or StableDiffusion/MidJourney/etc.

      We’re in an era of insane technological advancement, and I don’t think it’ll slow down.

      • @[email protected]
        link
        fedilink
        English
        4
        edit-2
        7 months ago

        Okay, but the people who made the advancements are telling you it has already slowed down. Why don’t you understand that? A flawed chatbot and some art-theft machines that can’t draw hands aren’t exactly world-changing, either, tbh.

        • @[email protected]
          link
          fedilink
          English
          17 months ago

          This is such a rich-country-centric view that I can’t stand. LLMs have already given the world maybe its greatest gift ever – access to a teacher.

          Think of the 800 million poor children in the world and their access to a Khan Academy-level teacher on any subject imaginable, with a cellphone/computer as all they need. How could that not have value, and is pearl-clutching over drawing skills becoming devalued really all you can think about?

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            7 months ago

            Anything you learn from an LLM has a margin of error that makes it dangerous and harmful. It hallucinates documentation and fake facts like an asylum inmate. And it’s so expensive compared to just having real teachers that it’s all pointless. We’ve got humans; we don’t need more humans. Adding labor doesn’t solve the problem with education.

            • @[email protected]
              link
              fedilink
              English
              17 months ago

              bro, I was taught by a textbook in the US in the ’00s that the Statue of Liberty was painted green.

              No math teacher I ever had actually knew the level of math they were teaching.

              Humans hallucinate all the time. Almost 1 billion children don’t even have access to a human teacher – hence the boon to humanity.

              • @[email protected]
                link
                fedilink
                English
                3
                edit-2
                7 months ago

                Those textbooks and the people who regurgitate their contents are the training data for the LLM. Any statement you make about human incompetence is multiplied by an LLM. If they don’t have access to a human teacher then they probably don’t have PCs and AI subscriptions, either.

        • @[email protected]
          link
          fedilink
          English
          17 months ago

          There are other people in the world. Some of them are inventing completely new ways of doing things, and one of those ways could lead to a major breakthrough. I’m not saying a GPT LLM is going to solve the problem, I’m saying AI will.

          • @leftzero · 1 point · 7 months ago

            Some of them are inventing completely new ways of doing things

            No, they’re not. All the money is now on the LLM autocomplete chatbots.

            Real progress on AI won’t resume until after the LLM bubble has burst. (And even then, investors will probably be wary of putting money into AI for a few decades, because LLMs are being marketed as AI despite having little to do with it.)

            It’s quite depressing, really.

            • @[email protected]
              link
              fedilink
              English
              17 months ago

              Who was making this “real progress on AI” that you mention? Why did they stop that when an LLM became popular?