• @[email protected]

    I don’t really personally care much. I don’t think the way we’re going to deal with forged video is by clamping down on every forum tightly enough to ensure that nobody shares it. It’s just a technical dead end. Too easy to do, too many routes to spread content.

    I’m also skeptical that automatically detecting forged video is gonna work, at least not as a long-run solution. Too many fundamental technical problems. People are always gonna keep making generated images and video better.

    Digital watermarking of generated content is also pretty limited. Hostile actors with real resources can just generate their own content without watermarking, and if you make the detection software generally available, you’re also putting out the information needed to defeat it.

    I think a much better route is to build technologies and conventions that let verifiable photographs and video be produced, and then to raise public expectations to the point that images or video that don’t conform aren’t considered convincing proof of anything controversial.

    Like, if you want people to believe that your crochet project is real and not something from a generative AI, you get someone from a reputable “endorsement organization” to come out and photograph it, and that organization cryptographically signs the image (see the sketch below). Maybe that becomes the gold standard for proof of truth. You could even have multiple organizations do so for important stuff. It’s how we handled things prior to image or audio recording: have a trusted person go there and attest that something is real.
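
    To make the “cryptographically signs the image” part concrete, here’s a minimal sketch of what that could look like, assuming Python’s cryptography package and an Ed25519 key. The file name and key handling are hypothetical; a real scheme would distribute the organization’s public key out of band and probably sign metadata along with the pixels.

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # --- At the endorsement organization ---
    org_key = Ed25519PrivateKey.generate()              # long-lived signing key
    photo = open("crochet_project.jpg", "rb").read()    # hypothetical file name
    signature = org_key.sign(photo)                     # published alongside the image

    # --- Anyone verifying later, given the org's public key ---
    public_key = org_key.public_key()                   # in practice fetched from the org
    try:
        public_key.verify(signature, photo)
        print("Image is byte-for-byte what the organization attested to.")
    except InvalidSignature:
        print("Image was altered or was never endorsed.")
    ```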

    Then have people expect that level of verification before they trust a recording as evidence of the truth.