While I think this is a bit sensationalized, any company that allows user-driven generative AI, especially one as open as permitting arbitrary LoRAs and checkpoints, needs very good protection against synthetic CSAM like this. To the best of my knowledge, only the AI Horde has taken this sufficiently seriously until now.

  • @ReallyActuallyFrankenstein
    11 months ago

    I’d agree, with the caveat that a model that may have been trained on actual CSAM is a problem. Anything that is a product of, or cannot exist without, child abuse should be absolutely prohibited and avoided.

    I’m not sure whether such a model is out there, but now that I imagine it, I assume it’s inevitable there will be one. Apart from being disturbing to think about, that introduces another problem: once that happens, how will anyone know the model was trained on such images?

    • HubertManne
      11 months ago

      Oh, I can totally agree with the training part, but that can’t be fought at the AI level; it needs to be stopped at the CSAM level.