• @[email protected]
    21 days ago

    People still run, or even continue pretraining, llama2 for that reason, as its training data is pre-slop.
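
    For anyone curious, a minimal continued-pretraining sketch with HF transformers; the model id meta-llama/Llama-2-7b-hf is the real (gated) repo, but the corpus path, block size, and hyperparameters are just placeholders:

    ```python
    # Hedged sketch: continued pretraining of Llama-2 on a local plain-text
    # corpus ("data.txt" is a placeholder). Needs transformers, datasets,
    # accelerate, and access to the gated Llama-2 weights.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "meta-llama/Llama-2-7b-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # Llama-2 ships with no pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Turn the raw corpus into token ids for causal-LM training.
    dataset = load_dataset("text", data_files="data.txt")["train"]
    dataset = dataset.map(
        lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="llama2-continued",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=16,
                               num_train_epochs=1),
        train_dataset=dataset,
        # mlm=False gives plain next-token (causal-LM) loss.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    ```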

    • @[email protected]
      11 days ago

      I really wish it were easier to fine-tune and run inference on GPT-J-6B as well… that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that’s done.
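
      Inference at least still works through plain transformers, for what it’s worth. A minimal sketch; the model id EleutherAI/gpt-j-6b is real, while the prompt and sampling settings are illustrative:

      ```python
      # Hedged sketch: GPT-J-6B inference via transformers. fp16 weights are
      # ~12 GB, so device_map="auto" (needs accelerate) spills layers to CPU
      # if the GPU is too small.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
      model = AutoModelForCausalLM.from_pretrained(
          "EleutherAI/gpt-j-6b",
          torch_dtype=torch.float16,
          device_map="auto",
      )

      inputs = tokenizer("The key idea of the method is",
                         return_tensors="pt").to(model.device)
      out = model.generate(**inputs, max_new_tokens=50,
                           do_sample=True, temperature=0.8)
      print(tokenizer.decode(out[0], skip_special_tokens=True))
      ```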

      • @[email protected]
        11 days ago

        Still perfectly runnable in kobold.cpp (rough API sketch below). There was a whole community built up around it with Pygmalion.

        It is as dumb as dirt though. IMO that is going back too far.
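
        If anyone wants to poke at it anyway, once KoboldCpp has a model loaded it serves the KoboldAI-style HTTP API. Port 5001 is the usual default, but the endpoint and field names here are from memory, so verify against your local instance:

        ```python
        # Hedged sketch: query a running KoboldCpp instance over its
        # KoboldAI-style API. Assumes default port 5001 and a model already
        # loaded; double-check endpoint/fields against your local server.
        import requests

        resp = requests.post(
            "http://localhost:5001/api/v1/generate",
            json={
                "prompt": "Once upon a time",
                "max_length": 80,     # tokens to generate
                "temperature": 0.7,
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["results"][0]["text"])
        ```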