I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • @[email protected]
    link
    fedilink
    832 months ago

    Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

    • @[email protected]
      link
      fedilink
      372 months ago

      And LLMs are mostly for investors, not for users. Investors see that you “do AI”, even if you just repackage GPT or Llama, and your Series A is 20% bigger.

    • @[email protected]OP
      link
      fedilink
      132 months ago

      I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

  • @[email protected]
    link
    fedilink
    382 months ago

    Generative AI has allowed us to do some things that we could not do before. A lot of people very foolishly took that to mean it would let us do everything we couldn’t do before.

    • @[email protected]OP
      link
      fedilink
      92 months ago

      I hope someday we can come up with an economic system that is not based purely on profit and the exploitation of human beings. But I don’t know that I’ll live long enough to see it.

    • @[email protected]
      link
      fedilink
      52 months ago

      That is a very pessimistic and causal explanation, but you’ve got the push right. It’s marketing that pushes it, though, not necessarily tech. AI, as we currently see it in use, is a very neat technological development. Even more so, it is a scientific development, because it isn’t just some software; it is an intricate mathematical model. It is such a complex model that we actually have to study how it even works, because we don’t know the finer details.

      It is not a replacement for office workers, it is not the robot revolution and it is not godlike. It is just a mathematical model on a previously unimaginable scale.

      • @[email protected]
        link
        fedilink
        12 months ago

        Machine learning has many valid applications, and there are some fields genuinely utilizing ML tools to make leaps and bounds in advancements.

        LLMs, aka bullshit generators, are where the huge majority of corporate AI investment has gone in this latest craze, and they’re one of the poorest applications. Not to mention the steaming pile of ethical issues with the training data.

    • @[email protected]
      link
      fedilink
      32 months ago

      Very nice writeup. My only critique is the need to “lay off workers to stop inflation.” I have no doubt that some (many?) managers etc… believed that to be the case, but there’s rampant evidence that the spike of inflation we’ve seen over this period was largely due to corporate greed hiking prices, not due to increased costs from hiring too many workers.

    • @[email protected]
      link
      fedilink
      English
      12 months ago

      I appreciate the candid analysis, but perhaps “nothing to see here” (my paraphrase) is only one part of the story. The other part is that there is genuine innovation and new things within reach that were not possible before. For example, personalized learning–the dream of giving a tutor to each child, so we can overcome Bloom’s 2 Sigma Problem–is far more likely with LLMs in the picture than before. It isn’t a panacea, but it is certainly more useful than cryptocurrency kept promising to be IMO.

    • @[email protected]
      link
      fedilink
      12 months ago

      We’ve already established that language models just make shit up. There is no need to demonstrate. Bad bot!

  • @[email protected]
    link
    fedilink
    262 months ago

    Robots don’t demand things like “fair wages” or “rights”. It’s way cheaper for a corporation to, for example, use a plagiarizing artificial unintelligence to make images for something, as opposed to commissioning a human artist who most likely will demand some amount of payment for their work.

    Also I think that it’s partially caused by people going “ooh, new thing!” without stopping to think about the consequences of this technology or if it is actually useful.

  • @[email protected]
    link
    fedilink
    English
    17
    edit-2
    2 months ago

    A dumb person thinks AI is really smart, because they’ll just listen to anyone who answers confidently.

    And no matter what, AI is going to give its answer like it’s 100% definitely the truth.

    That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.

    There’s new supporting evidence for Penrose’s theory that natural intelligence involves an absolute shit ton of quantum interactions, because we just found out how the body can create an environment where quantum superposition can not only be achieved, but achieved incredibly simply.

    AI got a boost because we didn’t really (and still don’t) understand consciousness. Tech bros convinced investors that neurons were what mattered, and made predictions for when that number of neurons could be simulated.

    But if it includes billions of molecules in quantum superposition, we’re not getting there in our lifetimes. There’s a lot of money sunk into it already, though, so there’s a lot of money to lose if people suddenly get realistic about what it takes to make a real artificial intelligence.

      • @[email protected]
        link
        fedilink
        English
        32 months ago

        The finding that microtubules can create an environment that sustains quantum superposition just came out like a month ago.

        In all honesty, the tech bros probably don’t even know yet, or understand that it means human-level AI has essentially been ruled out as happening anytime remotely soon.

        But I’m assuming when they do, they’ll just ignore it and double down to maintain share prices.

        It’s also possible it all crashes and billions of dollars disappear.

        • @[email protected]
          link
          fedilink
          42 months ago

          Microtubules have been pushed for decades without any proof. The latest paper wasn’t evidence but unsupported speculation.

          But more importantly, the physics of the computation that creates intelligence has absolutely nothing to do with understanding intelligence. Even if quantum effects are relevant (which is extremely unlikely given the warm, noisy environment inside the brain), it doesn’t answer anything about how humans are intelligent.

          Penrose used quantum mechanics as a “God of the gaps” explanation. That worked 40 years ago, but today we have working quantum computers and still no human-like machine intelligence.

  • @[email protected]
    link
    fedilink
    English
    122 months ago

    The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

    There is also an unnatural hype (that with one breakthrough will come another) and that the next one might yield a technocratic singularity to the first-mover: money, market dominance, and control.

    Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.

      • @[email protected]
        link
        fedilink
        English
        32 months ago

        I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPTs seem human… we actually train them to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.

    • @[email protected]OP
      link
      fedilink
      82 months ago

      It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.

        • @[email protected]OP
          link
          fedilink
          32 months ago

          Artificial intelligence (AI) is not artificial in the sense of being fake or counterfeit; rather, it is a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.

            • @[email protected]OP
              link
              fedilink
              12 months ago

              Well, using the definition that artificial means man-made, then no. Human intelligence wasn’t made by humans, therefore it isn’t artificial.

              • @[email protected]
                link
                fedilink
                English
                22 months ago

                I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment–and, depending on the language, being able to distinguish different colors.

                • @[email protected]OP
                  link
                  fedilink
                  22 months ago

                  From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born into. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and I start pretending that it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.

    • @[email protected]
      link
      fedilink
      32 months ago

      When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

      • @[email protected]
        link
        fedilink
        12 months ago

        Where’s the intelligence in suggesting glue on pizza? Or is it just copying random stuff and guessing what comes next, like a huge phone keyboard app?

    • @[email protected]
      link
      fedilink
      32 months ago

      Artificial intelligence is a branch of computer science, and LLMs are objectively a part of it.

  • Buglefingers · 11 points · 2 months ago

    IIRC, when ChatGPT was first announced, the hype was because it was the first really usable interface a layman could interact with using normal language and get an intelligible response from the software. Normally, to talk with computers we use their language (programming), but this let plain-language speakers interact and get it to do things with simple language, in a more pervasive way than something like Siri, for instance.

    This then got overhyped and overpromised to people with dollar signs in their eyes at the thought of large savings from labor reduction, and of capabilities far greater than it had. They were sold a product that has no real “product”, as it’s something most people would prefer to interact with on their own terms when needed, like any tool. That’s really hard to sell, and it’s hard to make people believe they need it. So the sellers doubled down with the promise it would be so much better down the road. And, having spent an ungodly amount on it already, they have that sunk-cost fallacy pushing them to keep doubling down.

    This is my personal take and understanding of what’s happening. Though there’s probably more nuances, like staying ahead of the competition that also fell for the same promises.

  • @[email protected]
    link
    fedilink
    102 months ago

    This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.

        • @[email protected]OP
          link
          fedilink
          42 months ago

          The hype machine said we could use it in place of search engines for intelligent search. Pure BS.

        • @[email protected]OP
          link
          fedilink
          12 months ago

          I’ll see if I can think of something creative to do. I was just reading an article from MIT that pointed out that one reason AI is bad at search is that it can’t determine whether a source is accurate. It can’t tell the difference between Reddit and Harvard.

      • @[email protected]
        link
        fedilink
        22 months ago

        A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.

  • @[email protected]
    link
    fedilink
    English
    102 months ago

    Disclaimer: I’m going to ignore all moral questions here

    Because it represents a potentially large leap in the types of problems we can solve with computers. Previously the only comparable tool we had to solve problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half to our toolkit.

    Be careful not to conflate AI in general with LLMs. AI is usually implemented as Machine Learning, which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train an AI that can be used to detect potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious if that will be effective.

    *technically it depends a lot on the training parameters
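    The suspicious-transaction classifier described above can be sketched concretely. Below is a toy, from-scratch logistic regression on made-up features (transaction amount and hour of day); every feature, threshold, and number here is invented for illustration, and a real system would use a proper ML library and far richer data.

```python
import math
import random

def train_logreg(rows, labels, lr=0.1, epochs=500):
    """Fit feature weights plus a bias with plain stochastic gradient descent."""
    n = len(rows[0])
    w = [0.0] * (n + 1)  # last slot is the bias term
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(suspicious)
            err = p - y
            for i in range(n):
                w[i] -= lr * err * x[i]
            w[-1] -= lr * err
    return w

def predict(w, x):
    """Return the model's probability that transaction x is suspicious."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training data: [amount in $1000s, hour-of-day / 24].
# Label 1 = suspicious (large transfers at odd hours), 0 = normal.
random.seed(0)
normal = [[random.uniform(0.01, 0.5), random.uniform(8, 20) / 24] for _ in range(200)]
fraud = [[random.uniform(2.0, 9.0), random.uniform(0, 5) / 24] for _ in range(200)]
X = normal + fraud
y = [0] * 200 + [1] * 200

w = train_logreg(X, y)
print(predict(w, [0.2, 12 / 24]))  # small midday purchase: low score
print(predict(w, [5.0, 3 / 24]))   # large 3 a.m. transfer: high score
```

    As the footnote says, results depend heavily on training parameters; the point is just that a classifier with a restricted input/output space like this is far easier to validate than a free-form chatbot.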

    • @[email protected]OP
      link
      fedilink
      22 months ago

      I suppose it depends on the data you’re using it for. I can see a computer looking through stacks of data in no time.

  • Admiral Patrick · 10 points · edited · 2 months ago

    Like was said: money.

    In addition, they need training data: both conversations and raw material. Shoving “AI” into everything, whether you want it or not, gives them real-world conversational data to train on. If you feed it any documents, etc., it’s also sucking those up as raw training data.

    Ultimately the best we can do is ignore it and refuse to use it or feed it garbage data so it chokes on its own excrement.

  • @[email protected]
    link
    fedilink
    92 months ago

    Holy BALLS are you getting a lot of garbage answers here.

    Have you seen all the other things that generative AI can do? From bone-rigging 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to make it correct. This speeds up production work a hundredfold in a lot of cases.

    Plenty of simple answers are correct, they’re breaking entrenched monopolies like Google’s on search, and I’ve even had these GPTs take input text and summarize it quickly, at different granularities for quick skimming. There’s a lot that can be worthwhile out of these AIs. They can speed up workflows significantly.

    • @[email protected]OP
      link
      fedilink
      62 months ago

      I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.

      • @[email protected]
        link
        fedilink
        English
        22 months ago

        You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.

      • @[email protected]
        link
        fedilink
        2
        edit-2
        2 months ago

        So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized tests, what does that tell you about average human intelligence?

        The thing about GPTs is that they are just word predictors. Lots of times, when asked super specific questions about small subjects that people aren’t talking about, yeah, they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
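        The “word predictor” point can be made concrete with a toy: a bigram table that predicts whichever word most often followed the current one in its training text. This is a deliberately crude sketch (the corpus and names are invented); an LLM does the same kind of next-token prediction, just with a neural network conditioned on a huge context instead of a one-word lookup.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Greedy decoding: return the most frequent successor, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
print(predict_next(model, "sat"))  # "on"
```

        A model like this also “hallucinates” in its own small way: ask it about a word it has barely seen and it confidently returns whatever little it has, the same failure mode as asking a GPT about an obscure topic.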

        • @[email protected]OP
          link
          fedilink
          42 months ago

          It’s not once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure for creative and computer-nerd stuff it’s great, but for regular people sitting at home listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats, and plain old people are bailing.

          • @[email protected]
            link
            fedilink
            English
            22 months ago

            tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for

          • @[email protected]
            link
            fedilink
            12 months ago

            Yeah, see that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.

            You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.

    • @[email protected]
      link
      fedilink
      English
      22 months ago

      Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.

  • @[email protected]
    link
    fedilink
    92 months ago

    A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.