• @[email protected]
    0
    1 year ago

    I can partially understand your argument, and I’d agree that the work you personally do is your own, but the art generated by the AI is not.

    It’s as much your art as if you had googled extensively for images and then cut them out and placed them into your piece. It’s as much your art as taking it to another person, asking them to make the edits, and then revising.

    Now, if the image you get from Google is Creative Commons, and the revision artist agrees to sign off their rights, you’d be able to copyright your work. I’d agree to the same situation with AI: if the people whose art makes up the model agree to that circumstance, you should be able to copyright it. Otherwise you’re just taking credit for others’ work because you described it well enough while ingraining it into your own.

    • @[email protected]
      5
      1 year ago

      No specific person’s art is being put into my generated image, unlike if I were to copy an image from Google. If a model is trained on 1 trillion images, then every single one of those images influenced the weights in the model, which then resulted in the output.
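
      A toy sketch of what I mean, with made-up numbers and a plain stochastic-gradient-descent loop rather than any real image model: every training example nudges the same shared weights, so all of them leave some trace in whatever gets generated later.

      ```python
      # Toy illustration: in plain SGD, each training sample shifts the shared weights.
      import numpy as np

      rng = np.random.default_rng(0)
      weights = np.zeros(8)                    # toy "model" parameters
      images = rng.normal(size=(1000, 8))      # stand-ins for image features
      targets = rng.normal(size=1000)
      lr = 0.01

      for x, y in zip(images, targets):
          pred = weights @ x                   # toy linear "generator"
          grad = (pred - y) * x                # gradient of squared error
          weights -= lr * grad                 # every sample moves the weights

      # Anything generated afterwards is computed from `weights`, which every
      # training image has (however slightly) influenced.
      ```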

      But my argument there is that when the generation becomes deeply integrated into the workflow as a tool, it can be nearly impossible to separate out what was actually created by me vs. what the AI did.

      • @[email protected]
        7
        1 year ago

        I disagree: you can see signatures and figures drawn by individual artists even in the largest models of today. Also, only a fraction of the training data matches what you specified.

        Though trillions of images may be used, only billions are of dragons, millions are of clocks, and thousands are of something more specific or in a certain style. A picture of a cat has nothing to do with the prompt “Rick and Morty inflation porn, big feet, (feet:1.3), cartoon, drawn, colourful”.

        I’ve worked in the AI field specializing in vectorization, a method of creating automated systems to catch failures, and it’s clear to me that what gets imprinted onto the nodes is just other people’s work. The line-work, colouration, composition, etc. of a particular output will come from a tiny fraction of the model’s training data and will be, individually per addition or edit, taken directly from a handful of images.
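
        To make “taken directly from a handful of images” concrete, here’s a rough sketch of the kind of vector comparison I mean. The embeddings below are random placeholders, not any real model’s; in practice you’d use an actual image feature extractor, but the idea is the same: embed the output and the training images, then see which small handful of training images it sits closest to.

        ```python
        # Rough sketch: find which training images a generated image most resembles
        # by nearest-neighbour search over embedding vectors (placeholders here).
        import numpy as np

        rng = np.random.default_rng(1)
        train_vecs = rng.normal(size=(10_000, 512))   # placeholder training-set embeddings
        gen_vec = rng.normal(size=512)                # placeholder embedding of one output

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        scores = np.array([cosine(v, gen_vec) for v in train_vecs])
        closest = np.argsort(scores)[-5:][::-1]       # the "handful" of nearest sources
        print(closest, scores[closest])
        ```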

        This is why you can get text-based or code-based AI to output some of their training data word for word. The same goes for image models, though again only in pieces.
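
        As a toy illustration of that word-for-word check (the strings below are made up, not real training data): you can scan a model’s output for long runs of words that also appear verbatim in the training text.

        ```python
        # Toy check for verbatim overlap between generated text and training text.
        def shared_runs(generated: str, training: str, n: int = 6):
            gen_words = generated.split()
            train_text = " ".join(training.split())
            hits = []
            for i in range(len(gen_words) - n + 1):
                chunk = " ".join(gen_words[i:i + n])
                if chunk in train_text:          # n words reproduced word for word
                    hits.append(chunk)
            return hits

        training_doc = "the quick brown fox jumps over the lazy dog while the cat sleeps"
        model_output = "today the quick brown fox jumps over the lazy dog again"
        print(shared_runs(model_output, training_doc))
        ```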

        All the actual decision-making (the colouration, the composition, the line-work, the perspective, the base stylistic choices, and so on) will have been made by another person or people before being picked up by the AI and reproduced when the right input (prompt) is given.

        To be clear: if I had labelled Pokemon as “fish” during training, then whenever you put in the word “fish” you’d get something stylistically Pokemon that has nothing to do with fish. It’s not learning what our prompt actually means, just what gets it a head pat from the dev.

        It’s not just learning what a word means and outputting a new image; it’s finding a way to output the original data in a way that makes somebody like me, an AI dev, say “yeah, that’s about right”. That’s all.

        Once more, because each time I state that I hate AI I get misinterpreted: what I hate is that it’s taking without permission. If that permission is granted, then it’s perfectly moral.

        • @[email protected]
          0
          1 year ago

          If I like a particular element of a piece of art, like the way they painted a distinctive piece of clothing in a portrait, and I copy that element in my own work, am I stealing their work?

          • @[email protected]
            7
            1 year ago

            Did you trace the line-work? Did you copy the colouration and composition almost exactly? Could you place one over the other and see that it’s very close to the original? If so, yes, you did. If you thought to yourself, “I like these specific elements of this art and am going to take them into account while creating a new piece, with new ideas,” then no, you did not.

            AI art does the first. It doesn’t know what makes up the art. It can’t. It just knows that if it takes this data from the original data set and places it in this manner when this term is present, it has done well. It’s just pattern recognition, no critical thought.
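
            If you want to make that “place one over the other” test literal, here’s a rough sketch; the file names are placeholders and it assumes the Pillow library, but any tool that can align and difference two pictures works the same way.

            ```python
            # Toy overlay test: resize two images to the same shape and measure the
            # average per-pixel difference. Near zero means one is close to a copy
            # or trace of the other. (Pillow assumed; file names are placeholders.)
            import numpy as np
            from PIL import Image

            def overlay_difference(path_a: str, path_b: str) -> float:
                a = Image.open(path_a).convert("L").resize((256, 256))
                b = Image.open(path_b).convert("L").resize((256, 256))
                a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
                return float(np.mean(np.abs(a - b)) / 255.0)   # 0.0 identical, 1.0 opposite

            # Example (placeholder file names):
            # print(overlay_difference("original.png", "suspected_copy.png"))
            ```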