• @leftzero
    6 months ago

    LLMs process information

    No, they don’t. They merely tell you which sequence of characters most often follows, in their training set, the sequence of characters you gave them. That’s all. No processing going on, no information being generated or retrieved other than statistical trivia about their training set.
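    Read literally, the claim above describes something like a frequency table of continuations. A toy sketch of that picture (real LLMs learn a parametric distribution over token contexts rather than storing literal counts; the corpus and names here are invented for illustration):

    ```python
    from collections import Counter, defaultdict

    # Toy "most frequent continuation" model: count which word most often
    # follows each word in a tiny corpus. (Actual LLMs do not store literal
    # counts like this; this only illustrates the claim as stated.)
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_common_next(word):
        # Return the single most frequent continuation seen in the corpus.
        return follows[word].most_common(1)[0][0]

    print(most_common_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
    ```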

    AI can be extremely dangerous in either case. LLMs are no different from that perspective.

    General AI could be dangerous because it could be smarter than us while having interests, objectives, and morals that could clash with our own, causing it to antagonise us.

    That’s obviously impossible for LLMs, which have about as much intelligence, and as many interests, objectives, or morals, as your average paperweight.

    LLMs are dangerous because they’re good enough at sounding like they know what they’re saying that you people actually believe them to be intelligent (and the fact that the bastards selling them use that apparent intelligence as their main selling point obviously doesn’t help either). They can be convincing enough that when one randomly tells you to get a bleach and ammonia enema to help with that headache, you might actually believe it, since by that point there’ll be no way left to check your facts. Which, hey, fair enough, natural selection and all that… but at some point one of you is going to fart that chlorine gas in my general vicinity, and that isn’t so good.