I’m talking about this sort of thing. Like clearly I wouldn’t want someone to see that on my phone in the office or when I’m sat on a bus.

However, there seem to be a lot of these that aren’t filtered out by NSFW settings, when a similar picture of a woman would be, so it seems like a deliberate choice I might not be understanding.

Discuss.

  • @[email protected]
    58 points · edited · 5 months ago

    I feel like the Internet needs more tags:

    • Explicit (rude language, nudity, etc)
    • Porn (nsfw legacy tag)
    • Violence
    • Not safe for life

    Something like that.

      • @[email protected]
        12 points · 5 months ago

        Yeah, I agree. I do sort of understand OP’s consternation. I don’t browse Lemmy on my work PC, but sometimes on lunch or in public I pull it up on my phone on All communities, and I’m suddenly conscious that everyone beside me can see the “sfw” furry and anime art that I scroll past.

        However, that’s kinda my fault. I don’t want to ban those communities because I like that stuff. It’s just a little odd that we call it sfw when, to be honest, I have a hard time picturing most workplaces where I live being happy to see that on my desktop.

    • @[email protected]
      13 points · 5 months ago

      I wonder if Lemmy could easily do content warnings like on Mastodon. I don’t know if it’s part of the ActivityPub spec but it’s definitely a thing that’s been implemented elsewhere.

      • Aedis
        10 points · 5 months ago

        The answer to “is it part of the ActivityPub spec?” is more often than not a strong No.
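
        Worth noting, though: Mastodon builds its content warnings out of plain ActivityPub fields rather than a spec extension for CWs specifically. The warning text goes in the standard `summary` property and the `sensitive` flag marks attached media for click-through. A minimal sketch of what such a Note object might look like (the IDs and URLs are made-up placeholders, not real objects):

```python
import json

# A Mastodon-style ActivityPub Note carrying a content warning.
# Mastodon reuses the standard `summary` field as the CW text and
# sets `sensitive` so clients hide the media behind a click-through.
# IDs/URLs below are placeholders for illustration only.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://example.social/notes/1",   # placeholder
    "summary": "suggestive artwork",          # rendered as the CW line
    "sensitive": True,                        # media hidden by default
    "content": "<p>the actual post body</p>",
    "attachment": [
        {
            "type": "Document",
            "mediaType": "image/png",
            "url": "https://example.social/media/1.png",  # placeholder
        }
    ],
}

print(json.dumps(note, indent=2))
```

        So Lemmy wouldn’t strictly need new spec support to federate something CW-shaped; it would mostly need UI for reading and writing those fields.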

    • AnIndefiniteArticle
      13 points · 5 months ago

      I’ve seen sites that have something similar, including a “suggestive” tag for pics like OP’s.

      • @[email protected]
        7 points · 5 months ago

        Yeah, that would be great. Many instance admins already use CSAM classifier models on all incoming images. It’d be great if they could add additional models that could put meta tags on images automatically like “suggestive” and “gore” with the option for the poster to modify the tags just in case it was a false negative or positive. Like a lasagna getting gore, for example.