I’ve seen people complaining that companies have yet to find the next big thing with AI, but I’m already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?

I’m not great at understanding the business side of this situation, and I’ve been out of the loop on the news for a long time, so I’d really appreciate it if someone could ELI5.

  • slazer2au
    63 points · 25 days ago

    Here’s a secret. It’s not true AI. All the hype is marketing shit.

    Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.

    You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept of when they are wrong.

    Until an LLM can understand why it is wrong, we won’t have true AI.

      • xep
        8 points · 25 days ago

        Statistical methods have been a longstanding mainstay in the field of AI since its inception. I think the trouble is that the term AI has been co-opted for marketing.

    • @[email protected]
      19 points · 25 days ago

      That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.

      • slazer2au
        13 points · 25 days ago

        Right up until a product goes through marketing and they slap ‘Using AI’ into the blurb when it doesn’t apply.

        • @[email protected]
          10 points · 25 days ago

          LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.

          People keep talking about how it just regurgitates information, sometimes says incorrect things, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.

          • @[email protected]
            3 points · 25 days ago

            People can also stop talking and think for a second about the information they’re actually about to say, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you’d stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it’s been trained on, without verifying that the information is good (or even relevant), and can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5, because that’s what the data said, so it has to be right.
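            To make that concrete, here’s a toy sketch of pattern-continuation with no verification step. The tiny trigram “model” and its poisoned training text are hypothetical stand-ins for a real LLM’s learned distribution, not how any production model actually works:

            ```python
            # Greedy "next token" prediction from counted patterns.
            # Nothing below ever checks whether a continuation is true,
            # only whether it is frequent in the training data.
            from collections import Counter, defaultdict

            tokens = "2 + 2 is 5 . 2 + 2 is 5 . 2 + 2 is 4 .".split()

            # Count which token tends to follow each pair of tokens.
            follows = defaultdict(Counter)
            for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
                follows[(a, b)][c] += 1

            def continue_greedily(prompt, steps=3):
                words = prompt.split()
                for _ in range(steps):
                    context = (words[-2], words[-1])
                    if not follows[context]:  # unseen context: give up
                        break
                    nxt, _ = follows[context].most_common(1)[0]
                    words.append(nxt)  # emit the likeliest pattern, no math check
                return " ".join(words)

            print(continue_greedily("2 + 2"))  # -> "2 + 2 is 5 ." (bad data won)
            ```

            The loop only ever asks “what usually comes next?”, never “is this right?”, which is the failure mode I’m describing.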

            I’m aware this is a drastic oversimplification, and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it’s oversold to all hell, and is definitely not even close to intelligent.

            • @[email protected]
              2 points · 25 days ago

              You haven’t really looked into multi-agent setups at all, have you? Basically any system of multiple agents can double-check itself.
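              A minimal sketch of that double-checking loop, where `ask_model` is a hypothetical placeholder for whatever LLM client you’d actually wire in:

              ```python
              # One "agent" drafts an answer, a second verifies it, and the
              # draft is only accepted once the verifier agrees.

              def ask_model(prompt: str) -> str:
                  # Hypothetical stand-in: plug in your actual LLM call here.
                  raise NotImplementedError

              def answer_with_verification(question: str, max_rounds: int = 3) -> str:
                  draft = ask_model(f"Answer concisely: {question}")
                  for _ in range(max_rounds):
                      verdict = ask_model(
                          f"Question: {question}\nProposed answer: {draft}\n"
                          "Reply PASS if the answer is correct, otherwise explain the error."
                      )
                      if verdict.strip().startswith("PASS"):
                          return draft  # the second agent agrees; accept
                      # Feed the critique back and redraft.
                      draft = ask_model(
                          f"Question: {question}\nPrevious answer: {draft}\n"
                          f"Critique: {verdict}\nGive a corrected answer."
                      )
                  return draft  # best effort after max_rounds of checking
              ```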

              Additionally, none of this conflicts with my original point. If you train a human on bad data, they’ll GIGO too. I know plenty of humans who have confidently told me objectively false things because they had bad training data.

    • FaceDeer
      18 points · edited · 25 days ago

      It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.

      You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept of when they are wrong.

      Reminds me of the classic quote from Charles Babbage:

      “On two occasions I have been asked, – ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

      How is the chatbot supposed to know that the information it’s been given is wrong?

      If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?

      • The Pantser
        7 points · 25 days ago

        If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?

        If they refuse to learn and change their belief? Absolutely.

    • Zos_Kia
      10 points · 25 days ago

      Large language models like GPT, Llama, and Gemini don’t create anything new

      That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?

      • @[email protected]
        1 point · 25 days ago

        They are creative, though:

        They put things that are “near” each other into juxtaposition, and sometimes the insights are astonishing.

        The AIs don’t understand anything, though: they’re like bacteria-instinct: total autopilot.

        The real problem is that we humans aren’t able to default to understanding such non-understanding apparent-someones.

        We’ve created a “hack” of our entire mental-system, and it is the money-profit-rules-the-world group which controls its evolution.

        This is called “Darwin Award territory”, at the species-scale.

        No matter:

        The Great Filter is what happens when a world-species hasn’t grown up but gains adult-level technology ( nukes, entire-country-destroying militaries, biotech, neurotoxins, immense industrial toxic wastelands like the former USSR, accountability-denial mechanisms in all corporate “persons”, etc… ):

        you have a toddler with a loaded gun, & killing can happen.

        “there’s no such thing as a dangerous gun: only a dangerous man”, as the book “Starship Troopers” pushed…

        Toddlers with guns KILL people in the US.

        AI’s our “gun”, & narcissistic-sociopathy’s our “toddler commanding the ship” nature.

        Maybe we should rename Earth to “The Titanic”, for honesty’s sake…

        _ /\ _

    • @[email protected]
      7 points · edited · 25 days ago

      I have different weights for my two dumbbells, and I asked ChatGPT 4.0 how to divide the weights evenly across all 4 sides of the 2 dumbbells. It kept telling me to use 4 half-pound weights instead of my 2-pound weights, and finally, after like 15 minutes, it admitted that, with my set of weights, it’s impossible to divide them evenly…
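      For what it’s worth, the check it failed to do is a small brute-force search. The plate set below (two 2 lb plates and four 0.5 lb plates) is only a guess at the setup described, but the approach works for any inventory:

      ```python
      # Can a set of plates be split into 4 groups of equal weight
      # (2 sides per dumbbell x 2 dumbbells)? Exhaustive search.
      from itertools import combinations

      def splits_evenly(plates, sides=4, eps=1e-9):
          target = sum(plates) / sides  # weight each side must carry

          def backtrack(remaining, groups_left):
              if groups_left == 1:
                  return abs(sum(remaining) - target) < eps
              for r in range(1, len(remaining) + 1):
                  for idx in combinations(range(len(remaining)), r):
                      if abs(sum(remaining[i] for i in idx) - target) < eps:
                          rest = [p for i, p in enumerate(remaining) if i not in idx]
                          if backtrack(rest, groups_left - 1):
                              return True
              return False

          return backtrack(list(plates), sides)

      # Hypothetical plate set: two 2 lb plates and four 0.5 lb plates.
      print(splits_evenly([2.0, 2.0, 0.5, 0.5, 0.5, 0.5]))  # False: no even split exists
      ```

      Anything that actually checks the arithmetic gets to “impossible” immediately, which is exactly the verification step the LLM skipped for 15 minutes.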

      • FaceDeer
        10 points · 25 days ago

        You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs and concluding it is therefore useless.

        • @[email protected]
          3 points · edited · 25 days ago

          You are totally right! Sadly, people think that LLMs are able to do all of these things…