That's on the companies to figure out, tbh. "You can't say we aren't allowed to build biological weapons, that's too hard" isn't what you're saying, but it's a hyperbolic example. The industry needs to figure out how to control the monster they've happily sent staggering towards the village, and really they're the only people with the knowledge to figure out how to stop it. If it's not possible, maybe we should restrict this tech until it is possible. LLMs aren't going to end the world, probably, but a protein-sequencing AI that hallucinates while replicating a flu virus could be really bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.
Yeah that’s my big takeaway here: If the people who are rolling out this technology cannot make these assurances then the technology has no right to exist.
Like, Stable Diffusion, a tool used to generate images, would fall under this. It's questionable, though, whether it would be terribly useful for doing anything dangerous.
My concern is how short a hop it is from this to “won’t someone please think of the children?” And then someone uses Stable Diffusion to create a baby in a sexy pose and it’s all down in flames. IMO that sort of thing happens enough that pushing back against “gateway” legislation is reasonable.
California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models.
I’d be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can’t sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California, this isn’t like it’s Montana banning it.
It’s not a monster. It doesn’t vaguely resemble a monster.
It's a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They're just a mediocre search tool combined with advanced request parsing and the ability to format output in the structure of sentences.
AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research in dangerous diseases and bio weapons. AI does not change anything about the scenario.
We're not anywhere near anything that has anything in common with human-level intelligence, or anything that poses a threat.
The only possible cause for support of legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive market-control attempt through regulatory capture by big tech. If you understand where we are and what paths we have forward, it's very clear that this can only do harm.
So, the monster was given a human brain that was already known to be murderous. Why, we don't know, but a good bet would be childhood abuse and fetal alcohol syndrome, maybe congenital syphilis, given the era. Now that murderer's brain is given an extra-strong body, and then subjected to more abuse and rejection. That's how you create a monster.
You can guarantee it by only feeding it information that excludes anything weapon-related. The information they use now is just every single piece of data scraped from the internet.
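For what it's worth, the naive version of "feed it only information without weapon information" is just a pre-training data filter. Here's a minimal sketch of that idea; the blocklist terms and function names are made up for illustration, and real data-curation pipelines rely on trained classifiers rather than keyword matching:

```python
# Toy pre-training data filter: drop any scraped document that mentions
# a blocklisted weapon-related term. The term list here is a placeholder;
# keyword matching like this is easy to evade and over-blocks legitimate
# text (e.g. history articles), which is why this doesn't "guarantee" much.

BLOCKLIST = {"bioweapon", "nerve agent", "explosive synthesis"}

def is_allowed(document: str) -> bool:
    """Return True if the document mentions no blocklisted term."""
    text = document.lower()
    return not any(term in text for term in BLOCKLIST)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only the documents that pass the blocklist check."""
    return [doc for doc in documents if is_allowed(doc)]

corpus = [
    "A recipe for sourdough bread.",
    "Notes on nerve agent precursors.",
    "History of the printing press.",
]
print(filter_corpus(corpus))
# → ['A recipe for sourdough bread.', 'History of the printing press.']
```

Even granting the sketch, the hard part is deciding what counts as "weapon information" at internet scale, which is exactly where the guarantee falls apart.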
Haven’t these guys read Frankenstein? Everyone knows Monsters are bad.
Indeed. If only Frankenstein’s Monster had been shunned nothing bad would have happened.
You two may not be giving me enough credit for my choice of metaphors here.