I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, but safeguards to prevent this are usually built in). The training data is generally much, much larger than the models themselves, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
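
To make “statistical model” concrete, here’s a toy sketch (a bigram model, absurdly simpler than a real LLM, and purely illustrative; the helper names `train_bigrams` and `generate` are made up for this example). It keeps only word-transition statistics, not the source texts, and samples “new” sequences from them:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which words follow which; the source texts themselves are not kept."""
    counts = defaultdict(list)
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            counts[a].append(b)
    return counts

def generate(counts, start, length=5, seed=0):
    """Sample a 'new' sequence from the learned transition statistics."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train_bigrams(["the cat sat on the mat", "the dog sat on the rug"])
text = generate(model, "the")
# may yield recombinations like "the cat sat on the rug" that appear in neither source
```

A real model stores billions of learned parameters rather than a transition table, but the point stands: what is retained is statistics, not the works themselves.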

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on data to reproduce people’s “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a “statistical” model that “learns” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.

    • troed · 4 months ago

      Sure - they’d need to at least loan the epubs just like a human would need to if wanting to read them.

  • @[email protected] · 4 months ago

    This falls squarely into the trap of treating corporations as people.

    People have a right to public data.

    Corporations should continue to be tolerated only while they carefully walk an ever tightening fine line of acceptable behavior.

      • @[email protected] · 4 months ago

        Yes. Large groups of people acting in concert, with large amounts of funding and influence, must be held to the highest standards, regardless of whether they’re doing something I personally value highly.

        An individual’s rights must be held sacred.

        When those two goals are in conflict, we must melt the corporation-in-conflict down for scrap parts, donate all of its intellectual property to the public domain, and try again with forming a new organization with a similar but refined charter.

        Shareholders should be, ideally, absolutely fucked by this arrangement, when their corporation fucks up, as an incentive to watch and maintain legal compliance in any companies they hold shares in and influence over.

        Investment will still happen, but with more care. We have historically used this model to great innovative success, public good, and lucrative dividends. Some people have forgotten how it can work.

        • @[email protected] · 4 months ago

          I think they are saying that preventing open source models being trained and released prevents people from using them. Trying to make training these models more difficult doesn’t just affect businesses, it affects individuals too. Essentially you have all been trying to stand in the way of progress, probably because of fears over job security. It’s not really different to being a luddite.

          • @[email protected] · 4 months ago

            Essentially you have all been trying to stand in the way of progress,

            Fuck progress from anyone who can’t be bothered to do it right. There’s justified risks where the cost of inaction is just as horrible as action. This isn’t that, and everyone saying it is, is an asshole whose shouting about this we would all be better off without.

            This work can be done correctly, and even reasonably quickly. Shortcuts aren’t merited.

            probably because of fears over job security. It’s not really different to being a luddite.

            My job is secure. I have substantially more than typical expertise in language models.

            The emperor, today, is butt naked. Anyone telling you we are about to see fast new progress is full of shit, and isn’t your friend.

            I’ve seen this before, and I’ll see it again.

            I’ve given a polite warning, where it looked like folks might listen. The rest aren’t my problem.

  • @[email protected] · 4 months ago

    This is not an opinion. You have made a statement of fact. And you are wrong.

    At law, something being publicly available does not mean it is allowed to be used for any purpose. Copyright law still applies. In most countries, making something publicly available does not cause all copyrights to be disclaimed on it. You are still not permitted to, for example, repost it elsewhere without the copyright holder’s permission, or, as some courts have ruled, use it to train an AI that then creates derivative works. Derivative works are not permitted without the copyright holder’s permission. Courts have ruled that this could mean everything an AI generates is a derivative work of everything in its training data and, therefore, copyright infringement.

    • @[email protected] · 4 months ago

      Saying that statistical analysis is derivative work is a massive stretch. Generative AI is just a way of representing statistical data. It’s not particularly informative or useful (it may be subject to random noise to create something new, for example), but calling it a derivative work in the same way that fan-fiction is derivative is disingenuous at best.

      • @[email protected] · 4 months ago

        Wouldn’t that argument be like saying an image I drew of a copyrighted character is just an arrangement of pixels based on existing data? The fact remains that, if I tell an AI to generate an image of a copyrighted character, then it’ll produce something without the permission of the original artist.

        I suppose then the problem becomes: who do you hold responsible for the copyright violation (if you pursue that course of action)? Do you go after the guy who told the AI to do it, or the people who trained the AI and published it? Possibly both? On one hand, suing the AI company would be like suing Adobe because they don’t stop people from drawing copyrighted materials in their software (yet). On the other hand, they did create this software that basically acts in the place of an artist who draws whatever you want for commission. If that artist was drawing copyrighted characters for money, you could make the case that the AI company is doing the same - manufacturing copyrighted character images by feeding the AI images of the character and allowing people to generate images of it while collecting money for their services.

        All this to say, copyright is stupid.

      • Match!! · 4 months ago

        • Tracing a picture to make an outline in pencil is a derivative work. There’s plenty of court cases ruling on this.

        • A convolutional neural network applies a kernel over the input layer to (for example) detect edges and output to the next layer a digital equivalent of a tracing.

        Why would the CNN not be a derivative work if tracing by hand is?
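
        For the curious, the kernel step described above can be sketched in a few lines. This is a minimal hand-rolled illustration (a Sobel-style edge kernel slid over a toy image), not any production CNN code:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over a grayscale image (stride 1, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each output pixel is a weighted sum of the patch under the kernel
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel-style kernel: responds where brightness changes left-to-right
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-2.0, 0.0, 2.0],
                        [-1.0, 0.0, 1.0]])

# toy image: dark left half, bright right half, i.e. one vertical edge
img = np.array([[0.0, 0.0, 10.0, 10.0]] * 4)

edges = convolve2d(img, edge_kernel)  # strongest response at the boundary
```

        In a trained CNN the kernel weights are learned rather than hand-picked, but the mechanical operation, producing a new layer from weighted sums of the input image, is the same.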

        • @[email protected] · 4 months ago

          Tracing is fine if you use it to learn how to draw. It’s not fine if it ends up in the finished product. Determining whether it ends up in the finished product with AI means either finding the exact pattern in the AI’s output (which you will not), or clearly understanding how AIs use their training data (which we do not).

    • Zagorath · 4 months ago

      They have indeed made a statement of fact. But to the best of my knowledge it’s not one that’s got any definite controlling precedent in law.

      You are still not permitted to, for example, repost it elsewhere without the copyright holder’s permission

      That’s the thing. It’s not clear that an LLM does “repost it elsewhere”. As the OP said, the model itself is basically just a mathematical construct that can’t really be turned back into the original work, which is possibly a sign that it’s not a derivative work, but a transformative one, which is much more likely to be given Fair Use protection. Though Fair Use is always a question mark and you never really know if a use is Fair without going to court.

      You could be right here. Or OP could. As far as I’m concerned anyone claiming to know either way is talking out of their arse.

      • @[email protected] · 4 months ago

        Just because something is transformative doesn’t mean it’s fair use. There are three other factors: the nature of the work you copied, the amount of the copyrighted work taken for the use, and the effect on the market. There’s no way in hell I believe anyone can plausibly say with a straight face “I’m taking literally all of the creative works you’ve ever produced and using them to create a product designed to directly compete with you and put you out of business, and this qualifies as fair use,” and I would be shocked if any judge in any court heard that argument without laughing the poor lawyer making it out of the court.

  • @[email protected] · 4 months ago

    I don’t have a problem with tech companies doing statistics on publicly available data, I have a problem with them getting rich by charging money for the collective creative works of humanity. But if they want to share their work for free, I have no issue with that.

  • deaf_fish · 4 months ago

    For personal or public use, I’m fine with it. If you use it to make money, that’s when I get upsetti spaghetti.

    • @[email protected] · 4 months ago

      Ok. Devil’s Advocate: how is a software engineer profiting from his AI model different from an artist who learns to draw by mimicking the style of public works? Asking for a friend.

      • @[email protected] · 4 months ago

        Good question!

        First, that artist will only learn from a handful of artists, instead of every artist’s entire body of work all at the same time. They will also eventually develop their own unique style and voice: the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else’s work.

        Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn’t really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you’ll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There’s a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain’t because they find the subject interesting.

        Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else’s style and teaches themselves the fundamentals, it’s still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.

        Fourth, there’s a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist’s style in order to leech off their success, it’s extremely difficult for the mimic to produce enough output to truly threaten their victim’s livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist’s output.

        And one last, very important point: artists who trace other people’s artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you’re claiming it’s your own original work. The only way it’s even mildly acceptable is if the tracer explicitly says “this is traced artwork for practice, here’s a link to the original piece, the artist gave full permission for me to post this.” Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it’s much harder to prove outright than with art. Given this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?

      • deaf_fish · 4 months ago

        Good question.

        Ok, so let’s say the artist does exactly what the AI does: they don’t try to do anything unique, they just look around at existing content and mix and mash existing ideas. No developing their own style, no curiosity about art history, no humanity, nothing. In this case I would say they are mechanically doing the exact same thing an AI does. Do I think they should get paid? Yes! They spent a good chunk of their life developing this skill, they are a human, and they deserve to get their basic needs met and not die of hunger or exposure. Now, this is a strange case, because 99.99% of artists don’t do this. Most develop a unique style and add life experience to their art to generate something new.

        A software engineer can profit off their AI model by selling it. If they make money by generating images, then they are making money off of hard-working artists who should be paid for their work. That isn’t great. The outcome of allowing this is that art will no longer be something you can do to make a living. This is bad for society.

        It should also be noted that a software engineer making an AI model from scratch accounts for 0.01% of the AIs being used. Most people, lay people who have spent very little time developing art or software engineering skills, can easily use an existing model to create “art”. The result is that many talented artists who could bring new and interesting ideas to the world are being outcompeted by one guy with a web browser producing sub-par, sloppy work.

  • @[email protected] · 4 months ago

    It would be nice if the AI industry had one big positive effect by finally reining in the overreaching copyright laws.

  • @[email protected] · 4 months ago

    “They should pay their sources!”

    Source is 600GB of raw copied website data mixed in a giant witches’ cauldron

    • @[email protected] · 4 months ago

      As someone who doesn’t hate AI, I hate a few things about how it’s happening:

      • If I want to make a book and want to use other books for reference, I need to obtain them legally. Purchase, rent, borrow… Else I’m a pirate. Multimillion companies say that for them it’s fine as long as somebody posted it on the internet. Their version of annas-archive is suddenly legal and moral, while I’m harming the authors if I use it.
      • They are stuffing everything with AI, which generally means an internet connection and sending unknown data.
      • It’s an annoying marketing gimmick. While incredibly useful in some places, the insistence that it solves all the problems makes it seem like a failure.
      • @[email protected] · 4 months ago

        I think your issue lies more with copyright law than with where LLM datasets come from, then. Which I completely understand, I hate copyright laws.

        There’s TV shows that I can’t stream and the only legal way to watch them is to buy the box set for £90. Get fucked I’m not paying that, I’ll just download it for free.

    • @[email protected] · 4 months ago

      There are a lot of problems with it. Lots of people could probably tell you about security concerns and wasted energy. Also there’s the whole comically silly concept of them marketing having AI write your texts and emails for you, and then having it summarize the texts and emails you get. Just needlessly complicating things.

      Conceptually, though, most people aren’t too against it. In my opinion, all the stuff they are labeling “generative AI” isn’t really “AI” or “generative”. There are lots of ways that people define AI, and without being too pedantic about definitions, the main reason I think they call it that, other than marketing, is that they are really trying to sway public opinion by controlling language. Scraping all sorts of copyrighted material, and re-jumbling it to spit out something similar, is arguably something we should prohibit as copyright infringement. It’s enough of a gray area to get away with short term. By convincing people with the very language they use to describe it that they aren’t just putting other people’s material in a mixer, they are “generating new content”, they hope to have us roll over and sign off on what they’ve been doing.

      Saying that humans create stories by jumbling together previous stories is a BS cop out, too. Obviously, we do, but humans have not, and do not have to give computers that same right. Also, LLMs are very complex, but they are also way way less complex than human minds. The way they put together text is closer to running a story through Google translate 10 times than it is to a human using a story for inspiration.

      There are real, definite benefits of using LLMs, but selling it as AI and trying to force it into everything is a gimmick.

    • @[email protected] · 4 months ago

      I hate it because it’s a gigantic waste of time and resources. Big tech has poured hundreds of billions of dollars, caused double digit percentage increases in data center emissions, and fed almost the entire collective output of humanity into these models.

      And what did we get for it? We got a toy that is at best mildly amusing, but isn’t really all that actually useful for anything; the output provided by generative AI is too unreliable to trust outright and needs to be reviewed and tweaked by hand, so at best you’re getting a minor productivity gain, and more likely you’re seeing a neutral or negative impact on your productivity (or producing low-quality crap faster and calling it “good enough”). At worst, it’s put a massive force multiplier in the hands of the bad actors using disinformation to tear apart modern society for their personal gain. Goldman Sachs released a report in late June where they pointed this out: if tech companies are planning on investing a trillion dollars into AI, what is the trillion dollar problem that AI is going to solve? And so far as I can tell, it seems that the answer to the question is either “it will eliminate millions of jobs and wipe out entire industries without any replacement or safety net, causing untold human suffering” or (more likely to be the case) “there is no trillion dollar problem AI can solve and the entire endeavor is pointless.”

      Even ignoring the opportunity cost (the money spent could have literally solved the entire homelessness crisis, ended world hunger, lifted entire countries out of poverty, or otherwise funded solutions for real, intractable, pressing problems for all of humanity), even ignoring that generative AI has single-handedly erased years of progress in reducing our CO2 emissions and addressing the climate crisis, even ignoring the logistical difficulty of the scale of build-out being discussed requiring a bigger improvement in our power grid than has basically ever been done, even ignoring the concerns over IP theft and everything else, fundamentally generative AI just isn’t worth the hype. It’s the crypto craze and NFT craze and metaverse craze (remember Zuckerberg burning 36 billion to make a virtual meeting space featuring avatars without legs?) all over again, except instead of only impacting the suckers who bought into the hype, this time it’s getting shoved in everybody’s face even if they want nothing to do with it.

      But hey, at least it gave us “I Glued My Balls To My Butthole Again.” That totally makes the hundred billion investment worth it, right?

  • Match!! · 4 months ago

    if they’re using creative commons licenses (or other sharing licenses) then it’s fine! but the model is then also bound by the same licenses, because that’s how licenses work

  • @[email protected] · 4 months ago

    As long as it’s licensed as Creative Commons of some sort. Copyrighted materials are copyrighted and shouldn’t be used without consent; this protects individuals too, not only corporations. (Excuse my English)

    Edit: Your argument about probability and parameter size is inapplicable in my mind. The same can be said about JPEG lossy compression.

    • Zagorath · 4 months ago

      Creative Commons would not actually help here. Even the most permissive licence, CC-BY, requires attribution. If using material for training material requires a copyright licence (which is certainly not a settled question of law), CC would likely be just the same as all rights reserved.

      (There’s also CC-0, but that’s basically public domain, or as near to it as an artist is legally allowed to do in their locale. So it’s basically not a Creative Commons licence.)

    • @[email protected] (OP) · 4 months ago

      Incidentally, I read this a while ago, because I was training a classifier on mostly Creative Commons licensed works: https://creativecommons.org/2023/08/18/understanding-cc-licenses-and-generative-ai/

      … we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. However, whether these limitations apply may depend on the particular use case.

      • @[email protected] · 4 months ago

        Maybe there should be a distinction between an individual doing it for education and research and a corporation doing it for commercial use. As a user it’s fun and useful to generate whatever mix of text or images I want from a model that was trained on everything, but a user doesn’t see the exploitation by the corporation that handed him the tool.

    • wildncrazyguy138 · 4 months ago

      Could the copywrited material consumed potentially fall under fair use? There are provisions for research purposes.

      • Zagorath · 4 months ago

        Just fyi the term is “copyrighted”, not “copywrited”. Copyright is about the right to copy, not anything about writing.

  • @[email protected] · 4 months ago

    Here’s an analogy that can be used to test this idea.

    Let’s say I want to write a book but I totally suck as an author and I have no idea how to write a good one. To get some guidelines and inspiration, I go to the library and read a bunch of books. Then, I’ll take those ideas and smash them together to produce a mediocre book that anyone would refuse to publish. Anyway, I could also buy those books, but the end result would still be the same, except that it would cost me a lot more. Either way, this sort of learning and writing procedure is entirely legal, and people have been doing this for ages. Even if my book looks and feels a lot like LOTR, it probably won’t be that easy to sue me unless I copy large parts of it word for word. Blatant plagiarism might result in a lawsuit, but I guess this isn’t what the AI training data debate is all about, now is it?

    However, if I pirated those books, that could result in some trouble. But someone would need to read my miserable book, find a suspicious passage, check my personal bookshelf and everything I have ever borrowed, etc. That way, it might be possible to prove that I could not have come up with a specific line of text except by pirating some book. If an AI is trained on pirated data, that’s obviously something worth debating.

    • wildncrazyguy138 · 4 months ago

      To expand on what you wrote, I’d equate the LLM output to me reading a book. From here on out until I become senile, the book is part of my memory. I may reference it, I may parrot some of its details that I can remember to a friend. My own conversational style and future works may even be impacted by it, perhaps subconsciously.

      In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

      So I suppose, then, that the question is more so one of volume. How many works consumed are considered too many? At what point do we shift from the realm of research to that of profiteering?

      There is a certain subset of people in the AI field who believe that our brains are biological forms of LLMs, and that if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse for civilization, but I’m not one to get in the way of wonder building.

      • @[email protected] · 4 months ago

        A neural network (the machine learning technology) aims to imitate the function of neurons in a human brain. If you have lots of these neurons, all sorts of interesting phenomena begin to emerge, and consciousness might be one of them. If/when we get to that point, we’ll also have to address several legal and philosophical questions. It’s going to be a wild ride.
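
        As a minimal illustration of what one such artificial neuron computes (toy, hand-picked weights; nothing here is learned):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation, output between 0 and 1

# a real network learns these weight values during training
activation = neuron([0.5, 0.8], [1.2, -0.4], bias=0.1)
```

        Networks stack millions or billions of these units in layers; the emergent behavior comes from the interactions, not from any single neuron.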

  • Waldowal · 4 months ago

    Agree for these reasons:

    • Legally: It’s always been legal (in the US at least) to relay the ideas in a copyrighted work. AI might need to get better at providing a bibliography, but that’s likely a courtesy more than a legal requirement.

    • Culturally: Access to knowledge should be free. It’s one of the reasons public libraries exist. If AI can help people gain knowledge more quickly and completely, it’s just the next evolution of the same idea.

    • Also Culturally: Think about what’s out on the internet. Millions of recipes, no doubt copied from someone else, with pages of bullshit about how the author “grew up on a farm that produced mojitos”. For decades now, “content creators” have gotten paid for millions of low-quality bullshit clickbait articles. There’s that. Most of the real “knowledge” on the internet is freely accessible technical / product documentation, forum posts like StackOverflow, and scientific studies. All of it is stuff the authors would probably love to have out there and freely accessible. Sure, some accidental copyright infringement might happen here and there, but I think it’s a tiny problem in relation to the value that AI might bring society.

  • @[email protected] · 4 months ago

    I agree with some other comments that this is a question of public domain vs. copyright. However, even copyright has exceptions, notably fair use in the US.

    TL;DR: If I can create art imitating [insert webcomic artist here] based on fair use, or use their work for artistic inspiration, it’s legal, but when a machine does it, it’s illegal?

    One of the chief AI critics, Sarah Andersen, made a claim 9 months ago that when AI generated the following output for “Sarah Andersen comic”, it clearly imitated her style, and if any AI company is to be believed, it’s going to get more accurate with later models, possibly creating a believable comic including text.

    Regardless of how accurately the AI can draw the comics (as long as they aren’t effectively identical to a single specific comic of hers), shouldn’t this just qualify as fair use? I can imitate SA’s style too and make a parody comic, or even just go the lazy way and change some text like alt-right “memers” did (politics and unfunniness aside, I believe the comic should be legal if they replaced “© Sarah Andersen” with “Parody of comic by Sarah Andersen”). As long as the content is distributed as “homage”, “parody”, “criticism” etc., doesn’t directly harm Sarah Andersen’s financial interests, and makes it clear that the author is clearly not her, I think there should be no issue even if it features likeness of trademarked characters, phrases and concepts.

    Makes me ashamed there is a book by her in my house (my sister received it as a gift).

    • @[email protected] · 4 months ago

      This argument is more along the lines of what is actually being argued by AI companies in court. Style cannot be copyrighted. They argue AI is simply recreating a style.

      The problem with this is that, in order to recreate a style, AI needs to be trained on that content. So if an AI starts reproducing art in the same style as a popular artist, it must have been fed a whole bunch of that artist’s work. Artists claim this is a violation of copyright, since they never agreed for their art to be used that way. The AI companies argue fair use also allows use of copyrighted works for teaching or training. An art class can use a popular artist’s work as an example of how to recreate a certain style. Of course, training an AI is different from training a group of students. Whether it is different enough that fair use doesn’t apply is the question being decided in court.