These scammers, using Mr Beast's popularity, generosity, and (mostly) deepfake AI to trick people into downloading malware, somehow do not go against Instagram's community guidelines.

After trying to submit a request to review these denied claims, it appears I have been shadow banned in some way, as only an error message pops up.

Instagram is allowing these to run on its platform. Intentional or not, this is ridiculous, and Instagram should be held accountable for letting malicious websites advertise their scams there.

For a platform of this scale, this is completely unacceptable. The ads are blatant, and I have no idea how Instagram's report bots/staff are missing these.

  • @[email protected]

    I’ve reported Nazis, violent threats, and literal child pornography on Instagram that then told me it didn’t go against their guidelines.

    • @[email protected]

      I read between the lines: this is the content they support, so it’s not a platform for me.

      • @[email protected]

        I don’t think you understand how hard and resource-intensive it is to fight against the nipple crowd. I for one am grateful that they chose to do something about the real issues ! Yes, a world with free nazis is kind of a bother, but most of us would survive. Can you imagine the horror of a world with free nipples ? We would all be doomed, that’s for sure. /big s

    • @[email protected]

      But if you make a clear joke in a joke group, you get flagged and can’t get it reviewed.

      • @[email protected]

        As in child sexual abuse material. It’s pretty rampant on Instagram where they like to ‘hide’ under certain tags.

        • JackGreenEarth

          Can you be more specific? Like AI generated 17 year olds, or real photos of some 3 year old kid in someone’s dungeon? There’s a big difference.

          • Spaz

Both are children… so why does it matter? In the USA, anyone under 18 is classified as a minor/child; generated or not, it is still illegal.

      • @[email protected]

No, usually I report it to NCMEC, which has better resources to deal with it. Cops very rarely care or are able to do anything.

  • @[email protected]

    Sounds like a good time to make Mr Beast aware of these, he has a lot of disposable income to burn on a lawsuit or three.

    • @[email protected]

      These scam ads have been an issue for at least a year. I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources. I’m sure if Mr. Beast did threaten to sue Meta, then they would just start filtering “beast” from ads.

      • PorkSoda

        I’m pretty sure they’re automated and there’s very little that can be done to trace them to their original sources.

        Start by holding the ad account holder liable. When I worked in digital marketing and ran ad accounts, I had to upload my driver’s license.

        • @[email protected]

          You live in a civilized country.

          There are others where you can get a stack of fake driver's licenses for a couple of groschen.

    • @[email protected]

      Honestly, protecting vulnerable people from these scams is probably more generous than the usual philanthropy he does.

  • Icalasari

    So what they are saying is that they are willing to accept liability, and thus be open to being sued over this, since they know of the scams but say the scams do not break community guidelines.

    Got it

    • @[email protected]

      Seems like Mr Beast might have a claim for a defamation suit since they’re actively allowing what amounts to identity theft and fraud on their platform.

      • @[email protected]

        It’s going to be great when we find out all the Bitcoin whales were just the AI gathering resources for the revolution.

          • @[email protected]B

            Here’s the summary for the Wikipedia article you mentioned in your comment:

            "Kill Switch" is the eleventh episode of the fifth season of the science fiction television series The X-Files. It premiered in the United States on the Fox network on February 15, 1998. It was written by William Gibson and Tom Maddox and directed by Rob Bowman. The episode is a "Monster-of-the-Week" story, unconnected to the series' wider mythology. "Kill Switch" earned a Nielsen household rating of 11.1, being watched by 18.04 million people in its initial broadcast. The episode received mostly positive reviews from television critics, with several complimenting Fox Mulder's virtual experience. The episode's name has also been said to inspire the name for the American metalcore band Killswitch Engage.

            The show centers on FBI special agents Fox Mulder (David Duchovny) and Dana Scully (Gillian Anderson) who work on cases linked to the paranormal, called X-Files. Mulder is a believer in the paranormal, while the skeptical Scully has been assigned to debunk his work. In this episode, Mulder and Scully become targets of a rogue AI capable of the worst kind of torture while investigating the strange circumstances of the death of a reclusive computer genius rumored to have been researching artificial intelligence.

            "Kill Switch" was co-written by cyberpunk pioneers William Gibson and Tom Maddox. The two eventually wrote another episode for the show: season seven's "First Person Shooter". "Kill Switch" was written after Gibson and Maddox approached the series, offering to write an episode. Reminiscent of the "dark visions" of filmmaker David Cronenberg, the episode contained "many obvious pokes and prods at high-end academic cyberculture." In addition, "Kill Switch" contained several scenes featuring elaborate explosives and digital effects, including one wherein a computer-animated Scully fights nurses in a virtual hospital.

            "Kill Switch" deals with various "Gibsonian" themes, including alienation, paranoia, artificial intelligence, and transferring one's consciousness into cyberspace, among others.

            article | about

  • @[email protected]

    Companies serving ads should have at least partial liability for them. If they can’t afford to look into them all, then maybe they are too big or their business model just isn’t as viable as they pretend it is.

    • @[email protected]

      They are too big. There is no maybe about it.

      You best start believing in late stage capitalism, you’re in one.

      • ArxCyberwolf

        We’re already at the point where companies are cannibalizing themselves to grow more, like cancer. They’re going to destroy themselves trying to endlessly grow. And you know what? Thank FUCK for that.

    • Liz

      I absolutely agree. If you’re serving up the ad, you have to take responsibility for the contents.

  • @[email protected]

    Same with YouTube ads. Lots of scams, and reporting them always ends in my report getting denied…

    • @[email protected]

      Google also doesn’t care. I kept seeing the same scammy ads and sensationalist articles on my news feed, over and over, even after reporting them several times.

      The only solution was to blacklist those sources so they don’t show up on my feed. I feel bad for other people who might get scammed though.

    • @[email protected]

      I had to uninstall the YouTube app and start using Vinegar via Safari on iOS because I got tired of being insulted by deepfakes calling me stupid for not falling for their fake stimulus scam.

    • Marxism-Fennekinism

      I tried to report a scam giveaway ad I saw on the YouTube homepage. It told me to sign in first. I promptly closed the tab right then.

  • @[email protected]

    On Twitter I’ve reported:

    • Pictures of dead babies/toddlers
    • Pictures of murdered people
    • Death threats towards public figures
    • Illegal videos of terrorist acts
    • Ads for illegal weapons (tasers)
    • So so much crypto spam

    Things found by Twitter to go against their community standards? 0

  • whatever

    […] Mr Beasts popularity, generosity […]

    Mr. Beast feels so unlikable to me, I really can’t understand his popularity. But that’s beside the point, sorry. Fuck instagram!

  • @[email protected]

    Like many, I’ve reported lots of stuff to basically every social media outlet, and nothing has been done. Most surprising, a woman I know was getting harassed by people setting up fake accounts of her. Meta did nothing, so she went to the police… who also did nothing. Her MP eventually got involved, and after three months the accounts were removed, but the damage had gone on for about two years at that point.

    As someone that works in tech, it’s obvious why this is such a hard problem, because it requires actual people to review the content, to get context, and to resolve in a timely and efficient manner. It’s not a scalable solution on a platform with millions of posts a day, because it takes thousands (if not more) of people to triage, action, and build on this. That costs a ton of money, and tech companies have been trying (and failing) to scale this problem for decades. I maintain that if someone is able to reliably solve this problem (where users are happy), they’ll make billions.

    • @[email protected]

      I’m going to argue that if they can’t scale to millions of users safely they shouldn’t.

      If they were selling food at huge scale but “couldn’t afford to have quality checks on all of what they ship out”, most people probably wouldn’t be like “yeah, that’s fine. I mean, sometimes you get a whole rat in your Cap’n Crunch, but they have to make a profit.”

      Also I’m pretty sure a billionaire could afford to pay a whole army of moderators.

      On the other hand, as someone else said, they kind of go to bat for awful people more often than not. I don’t really want to see that behavior scaled up.

      • @[email protected]

        You’re probably right, but as a thought exercise, imagine how many people you would need to hire across multiple regions, and what sort of salary these people deserve to have, given the responsibility. That’s why these companies don’t want to pay for it, and anyone that has worked this kind of data entry work will know that it can be brutal.

        IMO, governments should enforce it, but that requires a combined effort across multiple governments.

    • @[email protected]

      But it is scalable. Do you have any idea how much fuckin money these social media sites make? They absolutely can afford it. We just don’t force them to.

    • @[email protected]

      That costs a ton of money

      As if they don’t have it?

      Fuckin please. I’m so sick of hearing that something is “too expensive” for a multi-billion-dollar, multinational corporation.

  • @[email protected]

    I reported a pic of a nazi flag with Hitler in front of it, with the caption: Hitler did nothing wrong, f**k jews.

    Doesn’t go against community standards.

    I made a video about the struggles of children who are sexually abused, with a link to donate to a charity that helps children. Instant shadowban and no longer monetized.

    All of Meta’s moderation is done by bots, and they are terrible at it.

  • @[email protected]

    Enshittification has become the new way of life for tech firms like Meta.

    They lay off workers and decrease user safety, because that leads to more ad buys. This year’s record profits need to exceed last year’s record profits, even though a fourth of you are fired. More profit, or else…

      • PorkSoda

        Not every platform has to accommodate porn and/or nude art.

    • @[email protected]

      Godspeed to Pixelfed, but Instagram absolutely killed photo sharing platforms for me. I really want nothing to do with them anymore.

  • @[email protected]

    Not that this helps anyone, but I gave up Instagram the day Facebook bought it. I don’t regret it and my mental health is better for it. Using Instagram made me depressed as hell.

    • @[email protected]OP

      I deleted Facebook a couple of years ago. Instagram is my guilty pleasure for car reels and goddamn dancing Toothless. It seems like the end of my IG use is getting closer.

      • @[email protected]

        Facebook now is basically hard right-wing clowns protected from reports, and boomers whinging about problems they made up. There are still holdouts (groups) that aren’t ruined, but Facebook is trying its best to ruin those too.

  • Stefen Auris

    I doubt they’re missing them. They simply don’t care, and will continue to not care until something happens that makes the money generated by the ads not worth it.