• @[email protected]
    8
    4 months ago

    Ah, to clarify: Model Collapse is still an issue, but one for which mitigation techniques have been developed and applied for a while now. While it’s true that LLM-generated content is currently harder to train against, there’s no reason that must always hold true, and this paper actually touches on that odd aspect. Right now we have to design with model collapse in mind and work to mitigate it manually, but as the technology improves, it’s theorized that we’ll hit a point at which models coalesce towards stability, not collapse, even when fed training data that was generated by an LLM.

    I’ve seen the concept called Generative Bootstrapping or the Bootstrap Ladder (it’s a new enough concept that we haven’t all agreed on a name for it yet; we can only hope someone comes up with something better, because wow, the current ones suck…). We’re even seeing some models start to coalesce towards stability like this, though only in a few extremely niche applications. Only time will tell whether all models can do this stable coalescing or whether it’s only possible in some cases.
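
    To make the mitigation side concrete, here’s a minimal sketch (Python, with made-up numbers and names; real pipelines are far more involved) of one mitigation that comes up a lot in the literature: anchoring every training round to a reserve of human-written data, so synthetic output accumulates alongside it instead of replacing it.

    ```python
    import random

    def build_training_set(human_data, synthetic_data, human_fraction=0.5):
        """Mix synthetic samples with a fixed share of human-written ones.

        human_fraction is an invented knob for illustration; real
        pipelines tune the mix (and filter the synthetic side) carefully.
        """
        n_human = int(len(synthetic_data) * human_fraction)
        anchor = random.sample(human_data, min(n_human, len(human_data)))
        return anchor + list(synthetic_data)

    # Each generation trains on model output *plus* the human anchor.
    # The "replace everything with model output" loop is the setup the
    # collapse results describe; here the anchor never shrinks.
    human_corpus = [f"human text {i}" for i in range(1000)]
    for generation in range(3):
        synthetic = [f"model output, gen {generation}, item {i}" for i in range(500)]
        training_set = build_training_set(human_corpus, synthetic)
    ```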

    My original point, though, was just that this headline is fairly sensationalist, and that people shouldn’t take too much hope from this collapse, because we’re both aware of it and working to mitigate it (exactly as the paper itself cautions us to do).

    • Alphane MoonOP
      7
      4 months ago

      Thanks for the reply.

      I guess we’ll see what happens.

      I still find it difficult to get my head around how a decrease in novel training data will not eventually cause problems (even with techniques to work around it in the short term, which I’m sure work well in relative terms).

      A bit of an aside: I also have zero trust in the people behind current LLMs, both the leadership (e.g. Altman) and the rank and file. If it’s in their interest to downplay the scope and impact of model degeneracy, they will not hesitate to lie about it.

      • @[email protected]
        2
        4 months ago

        Yikes. Well. I’ll be over here, conspiring with the other NASA lizard people on how best to deceive you by politely answering questions on a site where maaaaybe 20 total people will actually read it. Good luck getting your head around it; there are lots of papers out there that might help (well, assuming I’m not lying to you about those, too).

        • Alphane MoonOP
          1
          4 months ago

          This was a general comment, not aimed at you. Honestly, it wasn’t my intention to accuse you specifically. Apologies for that.