• @[email protected]OP
    4
    4 months ago

    I considered self-hosting, but the setup seems complicated. The need for a good GPU is stated everywhere. And my concern is how to get a local model to even come close to ChatGPT. I can't train on every book in existence, as they did.

    • @[email protected]
      English
      6
      4 months ago

      Tip: try Oobabooga’s Text Generation WebUI with one of the WizardLM Uncensored models from HuggingFace in GGML or GGUF format.

      The GGML and GGUF formats perform very well with CPU inference when using llama.cpp as the engine. My ten-year-old 2.8 GHz CPUs generate about 2 words per second: slightly below reading speed, but pretty solid. Just make sure to stick to 7B models if you have 16 GiB of memory and 13B models if you have 32 GiB.
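      If you'd rather skip the webui entirely, here's a minimal sketch of the same CPU inference through the llama-cpp-python bindings (the model filename and prompt are just placeholders, so substitute whatever you actually downloaded):

      ```python
      # pip install llama-cpp-python
      from llama_cpp import Llama

      # Placeholder 7B GGUF file; use the file you actually downloaded.
      llm = Llama(
          model_path="models/wizardlm-7b-uncensored.Q4_K_M.gguf",
          n_ctx=2048,   # context window size
          n_threads=8,  # match your physical core count for CPU inference
      )

      out = llm("Q: Why run an LLM locally?\nA:", max_tokens=128)
      print(out["choices"][0]["text"])
      ```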

        • @[email protected]
          4
          4 months ago

          There’s a “models” directory inside the directory where you installed the webui. That’s where the model files go, along with their supporting files (.yaml or .json) that carry important metadata about the model.

          The easiest way to install a model is to let the webui download the model itself:

          [Screenshot of Oobabooga's WebUI with the Model tab open and the model names from HuggingFace entered]
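          Alternatively, you can script the download with the huggingface_hub package and point it at the models directory; the repo and file names below are examples, so check HuggingFace for the exact ones:

          ```python
          # pip install huggingface_hub
          from huggingface_hub import hf_hub_download

          # Example repo/file; pick the actual model you want on HuggingFace.
          path = hf_hub_download(
              repo_id="TheBloke/WizardLM-7B-uncensored-GGUF",
              filename="WizardLM-7B-uncensored.Q4_K_M.gguf",
              local_dir="text-generation-webui/models",
          )
          print(f"Model saved to {path}")
          ```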

          Either way, once the download finishes, load the model into memory by clicking the refresh button, selecting it, choosing llama.cpp as the loader and clicking Load (perhaps tick the ‘CPU’ box, but llama.cpp can do mixed CPU/GPU inference, too, if I remember right).

          [Screenshot of the Model page in Oobabooga's WebUI with the model ready to be loaded]
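          For what it's worth, that mixed CPU/GPU split can also be set outside the webui; in the llama-cpp-python bindings (assuming a GPU-enabled build) the knob is n_gpu_layers, and the numbers here are only an illustration:

          ```python
          from llama_cpp import Llama

          # n_gpu_layers=0 means pure CPU; raising it offloads that many
          # transformer layers to VRAM and keeps the rest in system RAM.
          llm = Llama(
              model_path="models/WizardLM-7B-uncensored.Q4_K_M.gguf",  # example file
              n_gpu_layers=20,  # tune to however many layers fit in your VRAM
              n_threads=8,      # threads still matter for the CPU-resident layers
          )
          ```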

          My install is a few months old, so I hope the UI hasn’t changed too drastically in the meantime :)

    • @MyNamesNotRobert
      4
      4 months ago

      ChatGPT is such a disloyal, snarky piece of shit that a model 90% as good but 2000% more obedient is better in every way.

      For Stable Diffusion image generation you need an Nvidia GPU for reasonable speeds. For text generation, though, as long as you actually enable multithreading (8 cores in my case), you can get really good performance out of llama.cpp (and by extension GPT4All, since it runs on llama.cpp). My uncensored AI is fast enough to be used on demand like ChatGPT, and I use it pretty much every day.
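      Since GPT4All sits on top of llama.cpp, its Python bindings expose the thread count directly too. A rough sketch, assuming the current gpt4all package and an example model name (the library can download the file on first use):

      ```python
      # pip install gpt4all
      from gpt4all import GPT4All

      # Example model name; swap in any GGUF model from the GPT4All list.
      model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", n_threads=8)

      with model.chat_session():
          print(model.generate("Explain why CPU inference is viable.", max_tokens=100))
      ```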