The wrongful death lawsuit against several social media companies for allegedly contributing to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York, will be allowed to proceed.
Completely different cases, questionable comparison;
social media are the biggest cultural industry at the moment, albeit a silent and unnoticed one. Cultural industries like this are means of propaganda, information and socialization, all of which is impactful and heavily personal and personalised to each person's opinions.
Thus the role of such an impactful business is huge and can move opinions and whole movements; the choices that people make are driven by their media consumption and the communities they take part in.
In other words, policy, algorithms and GUI are all factors that drive users to engage in specific ways with harmful content.
I wish you guys would stop making me defend corporations. Doesn’t matter how big they are, doesn’t matter their influence, claiming that they are responsible for someone breaking the law because someone else wrote something that set them off and they, as overlords, didn’t swoop in to stop it is batshit.
Since you don’t like those comparisons, I’ll do one better. This is akin to a man shoving someone over a railing and trying to hold the landowners responsible for not having built a taller railing or more gradual drop.
You completely fucking ignore the fact someone used what would otherwise be a completely safe platform because another party found a way to make it harmful.
policy and algorithms are factors that drive users to engage
Yes. Engage. Not in harmful content specifically, that content just so happens to be the content humans react to the strongest. If talking about fields of flowers drove more engagement, we’d never stop seeing shit about flowers. It’s not them maliciously pushing it, it’s the collective society that’s fucked.
The solution is exactly what it has always been. Stop fucking using the sites if they make you feel bad.
Again, there's no such thing as a neutral space or platform. Case in point: reddit, with its gated communities and lack of control over what people do with the platform, is in fact creating safe spaces for these kinds of things. This may not be intentional, but it ultimately leads toward the radicalization of many people; it's a design choice, followed by the internal policy of the admins, who can decide to let these communities exist on one of the mainstream websites. If you're unsure about what to think, delving deep into these subreddits has the effect of radicalising you, whereas in a normal space you wouldn't be able to do it as easily. Since this counts as engagement, reddit can suggest similar forums, leading via algorithms to a path of radicalisation. This is why a site that claims to be neutral isn't truly neutral.
This is an example of the alt-right pipeline that reddit has successfully mastered:
The alt-right pipeline (also called the alt-right rabbit hole) is a proposed conceptual model regarding internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups (https://en.wikipedia.org/wiki/Alt-right_pipeline)
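The feedback loop described above can be sketched as a toy model: a recommender that simply maximizes predicted engagement will drift a user's feed step by step, with no malicious intent coded anywhere. Everything here is hypothetical illustration, not any site's actual algorithm.

```python
# Toy model of an engagement-driven recommender (hypothetical data).
# Items sit on a 0..1 "intensity" axis; predicted engagement peaks for
# items slightly more intense than what the user last engaged with, so
# the loop drifts the feed toward the extreme end one step at a time.

def recommend(user_pos, catalog):
    # Pick the item with the highest predicted engagement for this user.
    def predicted_engagement(item):
        return -abs(item - (user_pos + 0.05))  # peak just above current taste
    return max(catalog, key=predicted_engagement)

catalog = [i / 100 for i in range(101)]  # content from mild (0.0) to extreme (1.0)

user_pos = 0.1  # user starts with fairly mild content
history = [user_pos]
for _ in range(10):
    item = recommend(user_pos, catalog)
    user_pos = item          # user engages; their taste shifts to match
    history.append(user_pos)

print(history[0], history[-1])  # the position only ever moves upward
```

The point of the sketch is that each individual suggestion is "neutral" (it just maximizes engagement), yet the trajectory is monotonic: the user ends up far from where they started without ever making a single dramatic choice.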
And yet you keep comparing cultural and media consumption to physical infrastructure, which is regulated precisely to prevent what you mentioned, such as unsafe management of the terrain. So, taking your examples as you intended, you may just have proven that regulations can in fact exist and that private companies and citizens are supposed to follow them. Since social media started using personalisation and predictive algorithms, they also behave as editors, handling and selecting the content that users see. Why would they not be partly responsible, based on your own argument?
They can suggest similar [communities] so it can’t be neutral
My guy, what? If all you did was look at cat pictures you’d get communities to share fucking cat pictures. These sites aren’t to blame for “radicalizing” people into sharing cat pictures any more than they are for actually harmful communities. By your logic, lemmy can also radicalize people. I see anarchist bullshit all the time, and had to block those communities and curate my own experience. I took responsibility and, instead of engaging with every post that pissed me off, removed that content or avoided it. Should the instance I’m on be responsible for not defederating radical instances? Should these communities be made to pay for radicalizing others?
Fuck no. People are not victims because of the content they’re exposed to, they choose to allow themselves to become radical. This isn’t a “I woke up and I really think Hitler had a point.” situation, it’s a gradual decline that isn’t going to be fixed by censoring or obscuring extreme content. Companies already try to deal with the flagrant forms of it but holding them to account for all of it is truly and completely stupid.
Nobody should be responsible because cat pictures radicalized you into becoming a furry. That’s on you. The content changed you and the platform suggesting that content is not malicious nor should it be held to account for that.
This is an extremely childish way of looking at the world, IT infrastructure, social media content algorithms, and legal culpability.
As neutral platforms that will push cat pictures as readily as they will far-right extremism, where the only difference is how much the user personally engages with it?
Whatever you say, CopHater69. You’re definitely not extremely childish and radical.
Oh I’m most certainly a radical, but I understand what that means because I got a college degree, and now engineer the internet.
I doubt you could engineer a plug into your own asshole, but sure, I’ll take your word that you’re not just lying and have expert knowledge in this field, yet you still refused to engage with the point and slung insults instead.
So triggered
I’ve literally watched friends of mine descend into far-right thinking, and I can point to the moment when algorithms started suggesting content that put them down a “rabbit hole”.
Like, you’re not wrong that they were right wing initially, but they became the “lmao I’m an unironic fascist and you should be pilled like me” variety over a period of six months or so. Started stockpiling guns, etc.
This phenomenon is so commonly reported that it makes you start to wonder where all these people who decided to “radicalize themselves” all at once suddenly came from in droves.
Additionally, these companies are responsible for their content-serving algorithms, and if those algorithms didn’t matter for shaping users’ thoughts, why would nation-state propaganda efforts target them to get their narratives and interests in front of users? Did we forget the spawning and ensuing fallout of the Arab Spring?