An artist who infamously duped an art contest with an AI image is suing the U.S. Copyright Office over its refusal to register the image’s copyright.

In the lawsuit, Jason M. Allen asks a Colorado federal court to reverse the Copyright Office’s decision on his artwork Théâtre D’opéra Spatial, arguing that it is an expression of his creativity.

Reuters says the Copyright Office refused to comment on the case, while Allen complains in a statement that the office’s decision “put me in a terrible position, with no recourse against others who are blatantly and repeatedly stealing my work.”

  • @[email protected]
    2 months ago

    That’s, uh, not what happened here.

    I agree. He shouldn’t own that image.

    And I’ve never heard of anyone doing that. Anyone with the skill to draw the kinds of pictures they want would simply draw them, instead of putting in tons of effort to get an AI to do it worse.

    I think it’s only a matter of time until it becomes the norm. There was a time when we painted literally everything, and then photography came along. You could have made the same argument against photography: back then it needed elaborate setup, the images were black and white, and you could arguably do a better job painting the scene instead. However, photography took over because you could either spend hours or days painting something, or just go click and have a photo that isn’t as “high quality” but is close enough.

    I think in the future artists will use AI to quickly prototype ideas, and when they get roughly what they originally envisioned, they’ll take the AI image as a canvas and touch it up a bit. Sure, they could paint it themselves and spend the next week prototyping all sorts of ideas before creating the final image, but would you really do that when you could spend maybe a day prototyping with AI and then another day fixing up the image? Maybe the image doesn’t even need fixing up; maybe the AI generated exactly what you imagined.

    • @[email protected]
      2 months ago

      I think the statement “then photography took over” is doing a lot of work here. It’s incredibly inaccurate to say that photography took over as the primary means of visual creativity.

      Photography took over as the primary means of capturing a moment. Sure, it’s used artistically sometimes, but primarily it’s used to document reality. I would argue that painting, and especially digital painting, is at an all-time high due to its accessibility and relatively low barrier to entry.

      I think that most artists would still prefer to paint something that they can consider “their art” over typing a sentence and getting back a result. Sure, it’s neat, but it will never be anything more than a novelty, or a shortcut to generic results. The final result is really only 50% of the process of creation; the process itself is an important aspect and not just a means to an end.

      Using AI just feels like a weird commodification of art - like using only pre-made Unity assets for a game and nothing else, and then having someone else make it for pennies.

      I’ve seen so many bizarre “AI artists” cropping up, especially online, who genuinely try to sell AI art for hundreds of dollars. I think the reasons people buy art can usually be put into three buckets: they appreciate the process that went into it, they like the artist’s style or that painting in particular, or they find some meaning in it. If you wanted to buy AI art, why not just prompt it yourself? What process, or artistic style, or meaning is even in AI art?

      It’s not even like AI can be trained on an artist’s own works. It takes millions of samples to train AI, which a single artist would never be able to produce. So, at some point, that model must have stolen the content of its results from something.

      • @[email protected]
        2 months ago

        I think the statement “then photography took over” is doing a lot of work here. It’s incredibly inaccurate to say that photography took over as the primary means of visual creativity.

        I think my context there was pretty obvious, so it’s somewhat disingenuous to take it out of context. Photography has largely taken over portrait painting, and I think it has also largely taken over scenic painting. I never said it completely replaced painting; it became a tool in the hands of artists, the same way AI can become a tool.

        I think that most artists would still prefer to paint something that they can consider “their art” over typing a sentence and getting back a result. Sure, it’s neat, but it will never be anything more than a novelty, or a shortcut to generic results. The final result is really only 50% of the process of creation; the process itself is an important aspect and not just a means to an end.

        And I think artists will use AI to come up with ideas for their art and use the output as a canvas.

        Using AI just feels like a weird commodification of art - like using only pre-made Unity assets for a game and nothing else, and then having someone else make it for pennies.

        Because that’s the current use of AI. It doesn’t mean AI will stay this way.

        I’ve seen so many bizarre “AI artists” cropping up, especially online, who genuinely try to sell AI art for hundreds of dollars.

        I’m not talking about those people and I’ve already mentioned elsewhere that their “work” can be considered questionable.

        I think the reasons people buy art can usually be put into three buckets: they appreciate the process that went into it, they like the artist’s style or that painting in particular, or they find some meaning in it. If you wanted to buy AI art, why not just prompt it yourself? What process, or artistic style, or meaning is even in AI art?

        Let’s say the artist trains an AI model solely on their own previous art and then releases some of those AI-generated images. The person who likes the style or a particular painting, do they care it was made by AI? Doubt it, because it’s in the artist’s style. The person who appreciates the process that went into it, is “I put my previous works into an AI model and the model generated this image based on what I imagined this image should be” really that much less impressive than “I imagined what this image should be, so I sat behind my drawing board and drew it”? As for meaning, the artist still chooses what to release. If they release something, it must have meaning. I think it would be extremely disrespectful towards an artist to claim the art they chose to release has no meaning.

        It’s not even like AI can be trained on an artist’s own works. It takes millions of samples to train AI, which a single artist would never be able to produce. So, at some point, that model must have stolen the content of its results from something.

        I thought we were talking about it from a philosophical point of view. I’m not about to predict the future and claim it could or couldn’t be done, but let’s say it could be done. Would that change your opinion?

      • lime!
        2 months ago

        It’s not even like AI can be trained on an artist’s own works. It takes millions of samples to train AI, which a single artist would never be able to produce. So, at some point, that model must have stolen the content of its results from something.

        So, I have no skin in this discussion, but I thought I’d point out that this is generally not how it works. There are image-generation models trained only on public-domain works (specifically because of concerns like what this thread is about). You can take a model like that and “continue to train it” on a fairly low number of works in a particular style (20 to 200 is generally enough), which results in a much smaller (tens to hundreds of MB) low-rank adaptation, or “LoRA”, model. This LoRA is applied on top of the “base” model, morphing the output to match the style. You can even add a multiplying factor to the LoRA to get output more or less like the style in question.
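
        To make that concrete, here’s a minimal sketch of what applying such a LoRA might look like with the Hugging Face diffusers library. The checkpoint id, the LoRA path and filename, and the prompt are placeholders I made up for illustration; the “scale” value is the multiplying factor mentioned above.

        ```python
        # Minimal sketch: applying a style LoRA on top of a base diffusion model.
        # Assumes the Hugging Face `diffusers` and `torch` packages; the model id,
        # LoRA path, and prompt below are illustrative placeholders.
        import torch
        from diffusers import StableDiffusionPipeline

        # Load a base text-to-image checkpoint (a public-domain-only model would be
        # loaded the same way; this id is just a common example).
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        ).to("cuda")

        # Apply the small LoRA adapter (tens to hundreds of MB) on top of the
        # frozen base weights.
        pipe.load_lora_weights("./lora_out", weight_name="my_style_lora.safetensors")

        # "scale" is the multiplying factor described above: 0.0 ignores the LoRA
        # entirely, 1.0 applies the learned style at full strength.
        image = pipe(
            "a quiet harbour at dusk",
            num_inference_steps=30,
            cross_attention_kwargs={"scale": 0.7},
        ).images[0]

        image.save("harbour_lora.png")
        ```

        Training the adapter itself is a separate step (for example, with one of diffusers’ LoRA fine-tuning example scripts run over the 20 to 200 reference images); the snippet only covers inference with the resulting file.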