• @[email protected]
    link
    fedilink
    361 day ago

    Quick answer: Don’t give any non-locally-running, non-open-source LLM sensitive or private info.

      • @sanosuke001 · 13 · 1 day ago

        Tbh, if you don’t know what that means, you can’t trust it.

        Though, it means: unless it’s running locally on your own hardware rather than in the cloud, and you (or someone you trust) have verified the source code directly, assume it is nefarious and do not give it any personal or sensitive information you wouldn’t want anyone on the Internet to know.

          • @sanosuke001 · 9 · edited · 1 day ago

            Name, address, phone number, bank info, nude photos of yourself, etc. If releasing the info could harm you or negatively impact your life, assume it will be sent to China or anywhere else on the world wide web if you run it without following the previous guidelines.

          • @[email protected]
            link
            fedilink
            10
            edit-2
            1 day ago

            Any data related to your person (name, contacts, date of birth, etc.). Search for “PII” or “personally identifiable information” if you want to read more about that.

      • @[email protected]
        link
        fedilink
        101 day ago

        ‘Locally-running’ means it is on your computer and will work without an internet connection.

        anything you access using the internet is not ‘locally-running’

        The comment means don’t send information over the internet that you don’t want to share.

        • @[email protected]
          link
          fedilink
          41 day ago

          Thanks @[email protected] for filling OP in! I want to add a few things in case OP is unaware of more than just what you explained:

          LLM = large language model, one of the types of AI. Examples: ChatGPT, DeepSeek, Meta’s LLaMA

          Open-Source: the program code of the AI is available to look at, in its entirety

          If you are not sure if you understand these terms and what frightful_hobgoblin said, then just assume whatever AI you are using is going to share your chat with the company behind it.

          • Pup Biru · 2 · 6 hours ago

            open source is also very tricky with LLMs: i’d argue if you can’t recreate it from scratch, it’s not open source… deep seek does not contain all the data necessary to recreate it from scratch: it’s open weights (the model itself can be downloaded and run) but not open source… i’d classify it as free (as in beer) software; not open source

            • @[email protected]
              link
              fedilink
              16 hours ago

              Excellent addition, I agree!

              That’s the criteria of many FOSS catalogue repositories: they won’t add any software that is not completely reproducible.

  • fxomtM · 15 · 1 day ago

    Anything that is not local AI cannot be trusted.

    Have you ever thought to yourself: where the fuck do these corporations get the funding to let me use such a service for free? By harvesting your data and selling it.

    From your other comment I saw you aren’t using a PC. I haven’t tested this out, but you may be interested in it (local LLM, Android only): https://github.com/Vali-98/ChatterUI

    Best of luck to you.

  • @[email protected]
    link
    fedilink
    English
    61 day ago

    Depends on what you ask.

    Go ask it about NATO or Tiananmen Square and see what happens. The data model is heavily redacted, filtered, suppressed, biased…

    So if you ask it a question, it will always be pro-China/anti-America. It also changes responses on the fly to fit Chinese law, which includes denying the Tiananmen Square massacre and other historic events, and it even goes as far as to imply or outright say they never happened at all.

    So can the content be trusted? Not really.

    • erin (she/her) · 4 · 15 hours ago

      This is incorrect. It only applies if the model is not hosted locally. I host it myself and it has none of these restrictions. If you’re using it from their app or website, it’s hosted in China and must follow Chinese law.

      • @[email protected]
        link
        fedilink
        English
        514 hours ago

        “If you’re using it from their app or website it’s hosted in China and must follow Chinese law.”

        This is literally what I’ve just said…

        “It also changes responses on the fly to fit with Chinese law.” You called what I said wrong, and then immediately reiterated exactly what I said…

        Why? What do you get out of it?

        • erin (she/her) · 2 · 8 hours ago

          I suppose if that line is a catch-all, sure. Your message didn’t make it clear that self-hosting removes Chinese bias and censorship. This is an important bit of information for OPs question, and what I get out of it is a valid and important addition to the conversation. I genuinely don’t know why you’re defensive. Being incorrect, or I suppose in this case, lacking nuance, isn’t a character flaw. I do it all the time.

    • @[email protected]OP
      link
      fedilink
      219 hours ago

      I personally just want to get Chinese translations, but I don’t know if it’s worth it anymore…

      • erin (she/her) · 1 · 8 hours ago

        Refer to my other comments above. Self-hosting it removes censorship and bias. It’s only biased as long as it’s on Chinese servers and therefore following Chinese law.

      • @[email protected]
        link
        fedilink
        English
        617 hours ago

        DeepSeek has some of the most syntactically correct and accurate English-to-Chinese translations I’ve ever seen, so it’s super useful for that.
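
        For what it’s worth, local hosting works fine for this use case too: Ollama and LM Studio both expose an OpenAI-compatible HTTP endpoint, so a translation request never has to leave your machine. A minimal sketch (the model tag, port, and prompt wording are my assumptions; adjust for your setup):

```python
import json
from urllib import request

def build_translation_request(text, target="English", model="deepseek-r1"):
    """Build an OpenAI-style chat-completions payload for a local server.
    The model tag is an example; use whatever `ollama list` shows."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Translate the user's message into {target}."},
            {"role": "user", "content": text},
        ],
        "stream": False,
    }

payload = build_translation_request("你好，世界")

# Ollama listens on http://localhost:11434 by default (LM Studio on 1234);
# uncomment to actually send the request to a running local server:
# req = request.Request("http://localhost:11434/v1/chat/completions",
#                       data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# print(json.load(request.urlopen(req))["choices"][0]["message"]["content"])
print(payload["messages"][1]["content"])
```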

  • @[email protected]
    link
    fedilink
    11
    edit-2
    1 day ago

    You can’t trust anything.

    You always have to use trustless software.

    ‘Trusting’ is privacy-by-policy.

    Trustlessness is privacy-by-design.

    Deepseek’s models can be run trustlessly locally, or can be hosted on a server.


    Wait, were you talking about privacy or fact-checking? LLMs don’t stick to the truth.

  • @[email protected]
    link
    fedilink
    English
    51 day ago

    Here’s what you can trust: https://LMStudio.ai

    Otherwise, only ask it generic coding questions that any student studying your topic would ask; then there’s nothing to distrust.