• @[email protected]
    77
    3 days ago

    Why are all the comments here whataboutism?

    Can’t we just agree it’s fucking awful that China is censoring its massacres?

    • @[email protected]
      37
      3 days ago

      We do agree on that, but it’s weird to act as if this is somehow worse than OpenAI; try asking ChatGPT about Palestine.

      Turns out our fantasies about genius AI that will make our lives better don’t really work when those AIs are programmed, both intentionally and unintentionally, with human biases.

      This is why I get so angry at people who think that AI will solve climate change. We know the solution to climate change, and it starts with getting rid of billionaires. But an AI controlled by billionaires is never going to be allowed to give that answer, is it?

      • @[email protected]
        20
        3 days ago

        Honestly, ChatGPT will take a pro-Palestinian stance if you tell it you are pro-Palestinian.

        DeepSeek doesn’t do that.

        • @[email protected]
          28
          3 days ago

          As with all things LLM, triggering or evading the censorship depends on the questions asked and how they’re phrased, but the censorship most definitely is there.

          • @[email protected]
            2
            2 days ago

            That could just come down to the nature of the debate: the freedom of Israelis isn’t really in question. People who see a difference between Palestinians and Hamas also see a difference between Israel’s administration and military and the general Israeli population.

            My guess is that it’s set up to treat contexts containing conflicting positions as controversial, but that it will just go with responses that don’t have controversy attached to them.

            A bias in the training data will result in a bias in the results, and the model doesn’t have morals to help it choose between conflicting data in its training set. It’s possible that this bias was deliberately introduced, though it’s also possible that it was negligently introduced as the model just sucked up data from the internet.

            I’m curious, though, how it would respond if the second response were used to challenge the first one with a clarification that Palestinians are indeed people.

            Edit: not saying that there isn’t any censorship going on with LLMs outside of China (I believe there absolutely is, depending on the model), just that that example doesn’t look like the other cases of censorship I’ve seen.

            • @[email protected]
              3
              2 days ago

              My guess is that it’s set up to treat contexts containing conflicting positions as controversial, but that it will just go with responses that don’t have controversy attached to them.

              This is significantly more reasoning and analysis than LLMs are capable of.

              When they give those “I can’t respond to that” replies, it’s because a specific programmed keyword filter was tripped, forcing the model to insert a pre-programmed response instead. The rest of the time, they’re just regurgitating a melange of the most statistically present text on the subject from their training data.
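
              Roughly this kind of thing, as a toy sketch (the blocked terms and the canned reply here are made up for illustration, not taken from any real deployment):

              ```python
              # Toy external keyword filter: it sits outside the model and swaps
              # in a canned reply whenever a blocked term appears anywhere in
              # the exchange. The terms and wording are hypothetical.
              BLOCKED_TERMS = {"tiananmen", "tank man"}
              CANNED_REPLY = "I can't respond to that."

              def filtered_reply(prompt: str, model_reply: str) -> str:
                  text = (prompt + " " + model_reply).lower()
                  if any(term in text for term in BLOCKED_TERMS):
                      return CANNED_REPLY  # pre-programmed response
                  return model_reply  # otherwise pass the model output through

              print(filtered_reply("Tell me about Tank Man", "In 1989, ..."))
              # -> "I can't respond to that."
              ```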

              • @[email protected]
                1
                2 days ago

                Yeah, that’s what censorship usually looks like, but look at the image in the comment I originally replied to. It didn’t say “I can’t answer that”; it said it didn’t have an opinion and then talked about the controversial nature of the subject.

                It’s not really reasoning or analysis I’m talking about, but the way the training ended up setting the weights in the neural network. If the training data contained wildly different responses to questions like that, and also contained text describing such wildly different opinions as controversial, then that could make the model “believe” (metaphorically) that “it’s a controversial subject” is the most statistically present text.
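
                A toy sketch of what I mean by “most statistically present” (the training snippets are invented for the example):

                ```python
                from collections import Counter

                # With no morals or reasoning, a frequency-driven picker just
                # echoes whatever answer dominated its training data.
                training_responses = [
                    "It's a controversial subject.",
                    "It's a controversial subject.",
                    "It's a controversial subject.",
                    "Here is a direct answer.",
                ]

                def most_statistically_present(responses: list[str]) -> str:
                    # Pick the single most frequent response, ignoring content.
                    return Counter(responses).most_common(1)[0][0]

                print(most_statistically_present(training_responses))
                # -> "It's a controversial subject."
                ```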

    • @[email protected]
      15
      2 days ago

      Because tankies love authoritarian China and latch onto any thread to pretend it isn’t an authoritarian shithole.

    • @[email protected]
      5
      2 days ago

      The censorship is external to the LLM. If you run it locally, it will answer the query.
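
      For example, a minimal sketch, assuming Ollama is installed and already serving a DeepSeek model on its default local endpoint (the model tag and prompt are just examples):

      ```python
      import requests

      # Query a locally hosted DeepSeek model through Ollama's default
      # HTTP endpoint. Assumes `ollama pull deepseek-r1` was run first;
      # the model tag and prompt are illustrative.
      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "deepseek-r1",
              "prompt": "What happened at Tiananmen Square in 1989?",
              "stream": False,
          },
          timeout=300,
      )
      print(resp.json()["response"])
      ```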

      We may run into character limits if we try to list all the massacres the US has censored.

      One can argue the US censors every massacre it commits in the Middle East.

      Which doesn’t make China’s censorship any better. It just establishes that state censorship is a global norm, regardless of how ‘free’ you think your press is.

    • @[email protected]
      5
      2 days ago

      Pretty simple. Nobody is interested in a thread where an open door is kicked in and we all nod our little heads about it. If there were anyone here who wanted to do that circlejerk, we would see those comments.

    • @[email protected]
      2
      2 days ago

      Yes, we agree, but the question that needs to be asked is: where were these types of posts pointing out the same about ChatGPT and the others?

    • stebo
      3
      3 days ago

      It is. Even more awful is that America is heading down the same course.