• @[email protected]
    9 points · 8 months ago

    Interesting article, and a worrying trend. Stamping a bit of text like ‘Generated by Midjourney’ is ridiculously weak protection though. I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

    Just found the Wikipedia page for steganography. Have any AI companies tried using this technique, I wonder? 🤔
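
    For illustration only: a rough sketch of the idea in Python (Pillow + NumPy), using classic least-significant-bit steganography with a made-up marker string. Real watermarking schemes are more sophisticated, but it shows how a machine-readable mark can hide in pixels that look unchanged to a human.

        # LSB steganography sketch: hide a short ASCII marker in the blue
        # channel of a lossless image. The marker text and function names
        # are invented for the example.
        import numpy as np
        from PIL import Image

        MARKER = "AI-GENERATED"

        def embed(path_in, path_out, marker=MARKER):
            img = np.array(Image.open(path_in).convert("RGB"))
            bits = [int(b) for byte in marker.encode() for b in f"{byte:08b}"]
            blue = img[..., 2].flatten()
            blue[:len(bits)] = (blue[:len(bits)] & 0xFE) | bits  # overwrite the LSBs
            img[..., 2] = blue.reshape(img.shape[:2])
            Image.fromarray(img).save(path_out)  # must stay lossless, e.g. PNG

        def extract(path, n_chars=len(MARKER)):
            blue = np.array(Image.open(path).convert("RGB"))[..., 2].flatten()
            bits = blue[:n_chars * 8] & 1
            return bytes(
                int("".join(str(b) for b in bits[i:i + 8]), 2)
                for i in range(0, len(bits), 8)
            ).decode(errors="replace")

    The catch is that bits this fragile vanish the moment the image is re-encoded, resized or screenshotted.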

    • Flying SquidOP
      22 points · 8 months ago

      The problem is that even if Midjourney did that, there will be other creators who have no such moral or ethical issues with people using their software to make these fake photos without any sort of hidden or obvious data to show that they are fakes. And then there will be the ones that have money from a state behind them, and possibly a very large library of surveillance photos for the AI to learn from.

    • @[email protected]
      10 points · 8 months ago

      I wonder if some kind of hidden visual data could be embedded within AI images - like a QR code that can be read by computers but is invisible to humans.

      Said protection would also be hilariously weak. It would be easy for malicious actors to strip/alter the metadata of the image. And embedding the flag in the image itself is something that can be circumvented by using a model that doesn’t apply any flag.
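
      To put the metadata point in concrete terms: re-saving just the pixel data silently drops the EXIF/XMP blocks (and any "Generated by ..." text stored there). A minimal sketch in Python with Pillow; the filenames are only placeholders.

          from PIL import Image

          img = Image.open("flagged.png")            # hypothetical flagged image
          clean = Image.new(img.mode, img.size)
          clean.putdata(list(img.getdata()))         # copy the pixels only; metadata is gone
          clean.save("unflagged.png")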

      We’re about to live in a world where nobody can tell truth from fiction.

        • @[email protected]
          2 points · 8 months ago (edited)

          That’s a fair assessment, but I think it’s going to get a whole lot worse.

          Before, when nobody could figure out the truth, it was largely due to a lack of information or evidence. In the future, evidence will instead be manufactured for whatever opinion you like.

    • @[email protected]
      6 points · 8 months ago

      Specific programs can do that. You could probably also train specific models, or alter the training datasets, to include such hidden markers.

      But we're past the point where photos and video are sufficient on their own. Especially when there's a possibility of state-level actors benefiting.

    • @[email protected]
      4 points · 8 months ago

      There is the Content Authenticity Initiative, which keeps track of the source of an image (it was taken by this camera, etc.). It's supposed to be technically impossible to fake, since everything is validated, registered and traceable, but who knows. It's more a database of known images.
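
      As a toy illustration of the "database of known images" idea: register a SHA-256 fingerprint together with its claimed source, then check unknown images against the registry. This is not how the CAI/C2PA spec actually works (that embeds signed manifests in the file itself); the names and paths here are made up.

          import hashlib
          import json

          REGISTRY = "registry.json"   # hypothetical local store

          def fingerprint(path):
              with open(path, "rb") as f:
                  return hashlib.sha256(f.read()).hexdigest()

          def register(path, source):
              try:
                  with open(REGISTRY) as f:
                      db = json.load(f)
              except FileNotFoundError:
                  db = {}
              db[fingerprint(path)] = source
              with open(REGISTRY, "w") as f:
                  json.dump(db, f, indent=2)

          def lookup(path):
              with open(REGISTRY) as f:
                  db = json.load(f)
              return db.get(fingerprint(path), "unknown origin")

      The obvious weakness: change a single pixel or re-encode the file and the fingerprint no longer matches, so this only ever confirms exact copies of already-known images.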

    • @[email protected]
      2 points · 8 months ago

      Yeah, the only real way to do it is to have people digitally sign their images, but it still comes down to a trust element: you need to trust the person who created/signed the original content. It also means getting content from third parties is going to be a lot harder for the scientific and historical communities of the world.
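
      A minimal signing sketch, assuming the Python cryptography package and an Ed25519 key pair (the file name is a placeholder). It proves the bytes haven't changed since signing, but it says nothing about whether the signer should be trusted, which is exactly the trust element above.

          from cryptography.exceptions import InvalidSignature
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

          private_key = Ed25519PrivateKey.generate()    # held by the photographer
          public_key = private_key.public_key()         # published for anyone to verify with

          image_bytes = open("photo.jpg", "rb").read()  # hypothetical original image
          signature = private_key.sign(image_bytes)

          # Anyone with the public key can check the image is byte-for-byte intact:
          try:
              public_key.verify(signature, image_bytes)
              print("signature valid: unchanged since it was signed")
          except InvalidSignature:
              print("signature invalid: modified, or signed with a different key")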

    • @[email protected]
      2 points · 8 months ago

      Have any AI companies tried using this technique I wonder?

      Yes, I have read that they want to do something like that: stamp all images that their AI has created.

      But of course it won’t be hard to remove the stamp, if you want to.
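
      As a rough example of how little effort removal takes (Python with Pillow; the filenames are made up): a single lossy re-encode scrambles the low-order bits a naive pixel-level stamp lives in, and a slight resize or crop does much the same.

          from PIL import Image

          img = Image.open("stamped.png")                        # hypothetical stamped image
          img.convert("RGB").save("laundered.jpg", quality=90)   # lossy re-encode wipes LSB-style marks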