• @[email protected]
        link
        fedilink
        English
        9
        11 months ago

        A Variational AutoEncoder is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI works on the smaller, compressed image (the latent representation), which means it needs a less powerful computer (and uses less energy). That's what makes it possible to run Stable Diffusion at home.
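        For intuition, here's a toy sketch of that compression idea. This is not a real VAE (a real VAE is a trained neural network; Stable Diffusion's maps a 512×512×3 image to a 64×64×4 latent, a ~48× reduction) — it just averages 2×2 pixel blocks to show what "work on a smaller representation" means:

```python
# Toy stand-in for a VAE's spatial compression. Not a real VAE:
# a real one is a learned neural network. Here, "encode" shrinks
# a 2D image 4x by averaging each 2x2 block, and "decode"
# approximately inverts that by repeating values.

def encode(img):
    """'Compress' a 2D image by averaging each 2x2 pixel block."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def decode(latent):
    """Approximate inverse: repeat each latent value over a 2x2 block."""
    out = []
    for row in latent:
        up = [v for v in row for _ in range(2)]
        out.append(up)
        out.append(list(up))
    return out

image = [[0, 0, 8, 8],
         [0, 0, 8, 8],
         [4, 4, 4, 4],
         [4, 4, 4, 4]]
latent = encode(image)      # 2x2 latent: [[0.0, 8.0], [4.0, 4.0]]
restored = decode(latent)   # back to 4x4
print(latent)
```

        The diffusion model itself would only ever see the small `latent`, which is why it can run on much weaker hardware than a model working on full-resolution pixels.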

        This attack targets the VAE. The image is altered so that its latent representation looks like a very different image, while it still looks roughly the same to humans. So the actual image AI ends up working on a different image than the one a person sees. Obviously, this only works if you have the right VAE. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked.
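        A minimal sketch of the white-box idea behind attacks like this. Everything here is illustrative: the tiny encoder, the numbers, and the names are made up, and real attacks do the same optimization against the actual VAE encoder using automatic differentiation rather than numerical gradients. The point is only that you need the encoder itself to compute the perturbation:

```python
import math

# Illustrative only: toy encoder and values are made up. The idea is
# to find a small perturbation `delta` so that enc(image + delta)
# lands near the latent of a *different* target image, while
# image + delta stays close to the original (small visual change).

def enc(x):
    """Toy nonlinear 'encoder': 4 pixels -> 2 latent values."""
    return [math.tanh(x[0] - x[1]), math.tanh(x[2] * x[3] / 4)]

def loss(delta, image, target_latent, lam=0.1):
    """Push the latent toward the target, keep the perturbation small."""
    z = enc([p + d for p, d in zip(image, delta)])
    latent_err = sum((a - b) ** 2 for a, b in zip(z, target_latent))
    visual_err = sum(d ** 2 for d in delta)  # proxy for "looks the same"
    return latent_err + lam * visual_err

image = [1.0, 0.5, 2.0, 1.0]               # the image being "protected"
target_latent = enc([0.0, 1.0, 0.1, 0.1])  # latent of a very different image

delta = [0.0] * 4
eps, lr = 1e-5, 0.1
for _ in range(2000):                       # numerical gradient descent
    base = loss(delta, image, target_latent)
    grad = []
    for i in range(len(delta)):
        bumped = list(delta)
        bumped[i] += eps
        grad.append((loss(bumped, image, target_latent) - base) / eps)
    delta = [d - lr * g for d, g in zip(delta, grad)]

adv = [p + d for p, d in zip(image, delta)]
print("perturbation:", delta)
print("latent now:", enc(adv), "target:", target_latent)
```

        Note the whole loop calls `enc` directly: without the exact (open source) encoder weights, there is nothing to take gradients through, which is why closed VAEs aren't affected.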

        I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. It doesn’t seem to be a very effective attack, but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists who threaten digital vandalism, you may be deterred. Well, my two cents.

        • @[email protected]
          link
          fedilink
          English
          3
          11 months ago

          Thank you for explaining. I work in NLP and am not familiar with all the CV acronyms. It sounds like it kind of defeats the purpose if it only targets open source models. But yeah, makes sense that you would need the actual autoencoder in order to learn how to alter your data such that the representation from the autoencoder is different enough.

    • lad
      link
      fedilink
      English
      5
      11 months ago

      So there’s a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.

      This would be truly ironic

    • Hello Hotel
      link
      fedilink
      English
      2
      edit-2
      11 months ago

      If users have very much control and can coordinate, then you could gaslight the AI into a screwed-up alternate reality