I switched from llamacpp to koboldcpp. Koboldcpp is really fast because it can use the gpu. The problem is that I’m having a hard time getting it to generate long enough outputs.

“write an essay about the history of the moon. It needs to be at least 500 words” is an example of a prompt where the same model gives me an output that’s actually that long on llamacpp, but koboldcpp never gives me more than about 70 words per response. Pressing enter to make the AI continue writing, or asking it to continue, doesn’t work as well in my koboldcpp setup as it does on llamacpp. I’ve set the tokens to generate to 512, which is the highest setting available, and the context tokens to 4096.

What else can I do to try to get longer responses?
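For reference, the settings I mentioned map onto koboldcpp’s KoboldAI-style API like this, if anyone wants to reproduce it outside the web UI. This is just a minimal sketch; the port and field names are assumptions based on the KoboldAI-compatible API koboldcpp exposes, so double-check them against your build:

```python
# Minimal sketch: requesting a longer generation from a locally running
# koboldcpp instance via its KoboldAI-style HTTP API. The port (5001) and the
# field names (max_length, max_context_length) are assumptions -- verify them
# against the API docs of your koboldcpp version.
import requests

API_URL = "http://localhost:5001/api/v1/generate"  # assumed default local endpoint

payload = {
    "prompt": "Write an essay about the history of the moon. It needs to be at least 500 words.",
    "max_length": 512,           # tokens to generate per request
    "max_context_length": 4096,  # context window the model is run with
    "temperature": 0.7,
}

resp = requests.post(API_URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```

Worth noting that 512 tokens only comes out to roughly 350–400 English words, so even at the maximum setting a single response can fall short of 500 words; feeding the generated text back in as part of the next prompt is the usual way to continue past that.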

  • ffhein

    llama.cpp uses the gpu if you compile it with gpu support and you tell it to use the gpu…
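    If it helps, here’s a minimal sketch of the same idea using the llama-cpp-python bindings instead of the compiled CLI. The model path and layer count are placeholders, and you still need a GPU-enabled build of the bindings for the offload to actually do anything:

    ```python
    # Sketch: GPU offload with the llama-cpp-python bindings (not the llama.cpp CLI).
    # model_path and n_gpu_layers are placeholders; adjust them for your model and VRAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/your-model.gguf",  # placeholder path
        n_gpu_layers=35,  # layers to offload to the GPU; 0 keeps everything on the CPU
        n_ctx=4096,       # context window
    )

    out = llm(
        "Write an essay about the history of the moon. It needs to be at least 500 words.",
        max_tokens=512,  # upper bound on tokens generated in this call
    )
    print(out["choices"][0]["text"])
    ```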

    Never used koboldcpp, so I don’t know why it would give you shorter responses if both the model and the prompt are the same (also assuming you’ve generated multiple times and it’s always the same). If you don’t want to use Discord to visit the official koboldcpp server, you might get more answers from a more LLM-focused community such as [email protected]

    • @PenisWenisGeniusOP

      Cool, I didn’t know llamacpp could do gpu acceleration at all. I’m going to look into that.