• @[email protected]
    link
    fedilink
    English
    10
    6 months ago

    It’s not a monster. It doesn’t vaguely resemble a monster.

    It’s a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They’re just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.

    AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research in dangerous diseases and bio weapons. AI does not change anything about the scenario.

      • @[email protected]
        link
        fedilink
        English
        5
        edit-2
        6 months ago

        We’re not anywhere near anything that has anything in common with human level intelligence, or poses any threat.

        The only possible cause for support of legislation like this is either a completely absence of understanding of what the technology is combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive market control attempt through regulatory capture by big tech. If you understand where we are and what paths we have forward, it’s very clear that there’s only harm that this can do.