• @[email protected]
    9 points · 4 months ago

    AI at this stage is just a tool. This might change one day, but today is not that day. Blame the user, not the tool.

    AI and ML were being used to assist scientific research long before ChatGPT or Stable Diffusion hit the mainstream news cycle. AIs can be used to predict all sorts of outcomes, including ones relevant to climate, weather, and even medical treatment. The university I work for even has a funded PhD program looking at using AI algorithms to detect cancer better; I found out because one of my friends is applying for it.

    The research I am doing with AI is not quite as important as that, but it could shape the future of both cyber security and education, as I am looking at using it to teach cyber security students about ethical hacking and security. Do people also use LLMs to hack businesses or government organisations and cause mayhem? Quite probably, and they definitely will in the future. That doesn’t mean the tool itself is bad, just that some people will inevitably abuse it.

    Not all of this stuff is run by private businesses either. A lot of work is done by open source devs improving publicly available AI and ML models in their spare time. Likewise, some of this stuff is publicly funded through universities like mine. There are people way better than me out there using AI for all sorts of good things, including stopping hackers, curing patients, teaching the next generation, and monitoring climate change. Some of them have been doing it for years.

      • @[email protected]
        6 points · 4 months ago

        The problem is that some people like me won’t get that reference and instead think AIs are universally bad. A lot of people already think this way, and it’s hard to know who believes what.

        • @leftzero
          2 points · 4 months ago

          The problem is that people selling LLMs keep calling them AI, and people keep buying their bullshit.

          AI isn’t necessarily bad. LLMs are.

          • @[email protected]
            6 points · 4 months ago (edited)

            LLMs have legitimate uses today even if they are currently somewhat limited. In the future they will have more legitimate and illegitimate uses. The capabilities of current LLMs are often oversold though, which leads to a lot of this resentment.

            Edit: also, LLMs very much are AI (specifically ANI) and ML; they are literally a form of deep learning. They’re not AGI, but nobody with half a brain ever claimed they were.

            • @leftzero
              3 points · 4 months ago

              LLMs have legitimate uses today

              No they don’t. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn’t compensate for the massive increase in costs.

              In the future they will have more legitimate and illegitimate uses

              No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and only get worse with every iteration.

              The capabilities of current LLMs are often oversold

              LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.
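
              For what it’s worth, that next-token mechanism is easy to see in code. Below is a minimal sketch, assuming the Hugging Face transformers library and GPT-2 as a stand-in model, using greedy decoding (the simplest case):

              ```python
              # A minimal sketch of "produce the most statistically likely next token",
              # assuming the Hugging Face transformers library and GPT-2 as a stand-in model.
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tokenizer = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")

              prompt = "The capital of France is"
              inputs = tokenizer(prompt, return_tensors="pt")

              with torch.no_grad():
                  logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

              # Greedy decoding: take the single highest-probability token after the prompt.
              next_token_id = logits[0, -1].argmax().item()
              print(tokenizer.decode(next_token_id))  # most likely continuation, e.g. " Paris"
              ```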

              Future LLMs will still only have this capability, but since their models will have been trained on LLM-generated garbage, their results will quickly diverge from anything even remotely intelligible.

              • @[email protected]
                5 points · 4 months ago (edited)

                This is false. Anyone who has used these tools for long enough can tell you this is false.

                LLMs have been used to write computer code, craft malware, and even semi-independently hack systems with the support of other pieces of software. They can even grade students’ work and give feedback, although it’s unclear how accurate that grading is. As someone who actually researches the use of both LLMs and other forms of AI, I can tell you that you are severely underestimating their current capabilities, never mind what they could do in the future.
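
                To give a concrete flavour of the feedback use case, here is a minimal sketch assuming the OpenAI Python client and an assumed model name; the same idea works with any hosted or local chat model, and the output still needs human review:

                ```python
                # A minimal sketch of LLM-generated feedback on a student submission.
                # Assumes the OpenAI Python client (>= 1.0) and an OPENAI_API_KEY in the
                # environment; the model name is an assumption, not a recommendation.
                from openai import OpenAI

                client = OpenAI()

                submission = "def add(a, b):\n    return a - b  # student's buggy function"

                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # assumed model; substitute whatever is available
                    messages=[
                        {"role": "system",
                         "content": "You are a teaching assistant. Give brief, constructive feedback."},
                        {"role": "user",
                         "content": f"Review this student submission:\n\n{submission}"},
                    ],
                )

                print(response.choices[0].message.content)  # feedback still needs human review
                ```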

                I also don’t know how you came to the conclusion that hardware performance is always an issue, given that LLM model size varies immensely, as do the performance requirements. There are LLMs that can run, and run well, on an average laptop or even a smartphone. It honestly makes me think you have never heard of LLaMA models, including TinyLlama, or similar projects.
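
                For example, here is a minimal sketch of running a small model locally, assuming the Hugging Face transformers library and the TinyLlama chat checkpoint (about 1.1B parameters, so it fits in a few GB of RAM on an ordinary CPU):

                ```python
                # A minimal sketch of running a small LLM on an average laptop, assuming the
                # Hugging Face transformers library; no GPU is required for a model this size.
                from transformers import pipeline

                generator = pipeline(
                    "text-generation",
                    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # ~1.1B-parameter chat model
                )

                out = generator(
                    "Explain in one sentence what a port scan is.",
                    max_new_tokens=60,
                )
                print(out[0]["generated_text"])
                ```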

                Future LLMs will still only have this capability, but since their models will have been trained on LLM generated garbage their results will quickly diverge from anything even remotely intelligible.

                You can filter data scraped from the internet down to websites archived before LLMs were even invented as a concept, which is trivial to do for some data sets. And some data sets used for this kind of training were already created without any LLM output (think about how the first LLM was trained).
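
                As a rough illustration of that filtering idea (the record fields and the cutoff date here are hypothetical, chosen only for the example):

                ```python
                # A minimal sketch of filtering a scraped corpus to pages archived before a
                # pre-LLM cutoff date. The record fields ("url", "archived_at", "text") and
                # the cutoff are assumptions for illustration only.
                from datetime import datetime, timezone

                CUTOFF = datetime(2020, 1, 1, tzinfo=timezone.utc)  # assumed pre-LLM-flood date

                def archived_before_cutoff(record: dict) -> bool:
                    """True if the page was archived before the cutoff."""
                    archived = datetime.fromisoformat(record["archived_at"])
                    return archived < CUTOFF

                corpus = [
                    {"url": "https://example.org/old", "archived_at": "2016-07-04T00:00:00+00:00", "text": "..."},
                    {"url": "https://example.org/new", "archived_at": "2023-03-01T00:00:00+00:00", "text": "..."},
                ]

                clean = [r for r in corpus if archived_before_cutoff(r)]
                print([r["url"] for r in clean])  # only the pre-cutoff page survives
                ```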

                Sources:

        • @[email protected]
          1 point · 4 months ago

          Clearly, based on your responses, you don’t think AI/LLMs are universally bad. And anyone who is that easily swayed by what is essentially a clever shitpost likely also thinks the earth is flat and birds aren’t real.

          You know. Morons.