
  • Definitely. It has some alignment, but it won’t straight-up refuse to do anything. It will sometimes add a note saying that what you’ve asked is maybe against the law, but it produces a great response regardless. It’s a 70b, so running it locally is a bit of a challenge, but for those who can run it, there is simply no other LLM you can run at home that even comes close. It follows instructions amazingly well, it’s very consistent, and it barely hallucinates. There is some special Mistral sauce in it for sure, even if it’s “just” a llama2-70b.



  • There is a bit of a conundrum here: for a model to be any good at coding, you want it to have a lot of parameters (the more the better), but since it’s code and not a spoken language, precision matters too. Home hardware like a 3090 can run ~30b models, but there’s a catch: they only just fit, and only in quantized form, typically 4-bit, a quarter of the precision of the original 16-bit weights. Unless we see some breakthrough that makes full-precision inference of huge models possible at home, hosted AI will always be better for coding. Not that such a breakthrough is impossible, though - quite the opposite, in my opinion.
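
    To put rough numbers on the “only just fit” claim, here’s a back-of-the-envelope sketch (weights only; the KV cache and runtime overhead, which add a few more GB, are left out):

    ```python
    # Rough VRAM needed for the model weights alone, by quantization level.
    def weight_vram_gib(params_billion: float, bits_per_weight: int) -> float:
        bytes_total = params_billion * 1e9 * bits_per_weight / 8
        return bytes_total / 1024**3

    for bits in (16, 8, 4):
        print(f"30B model @ {bits:2d}-bit: {weight_vram_gib(30, bits):5.1f} GiB")

    # 16-bit: ~55.9 GiB -> nowhere near fitting in a 24 GiB 3090
    #  8-bit: ~27.9 GiB -> still too big
    #  4-bit: ~14.0 GiB -> fits, with room left over for context
    ```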


  • I’ve got a K80 and it’s… underwhelming.

    • Its CUDA support is very old (11.4 is the last toolkit that works with it). Nothing works with it out of the box - you have to compile everything from source.
    • The last driver that supports it is nvidia-driver-470, which isn’t even included in Ubuntu 22.04 anymore…
    • Under Debian, you can’t (I couldn’t…) install both cuda-drivers-470 and nvidia-driver version 470.
    • It doesn’t mix well with modern cards like a 3090.
    • It idles at around 70W, and when in use it makes my R730 sound like an industrial vacuum cleaner.
    • It’s not even really a 24GB card. It’s two 12GB GPUs wearing a trench coat (see the sketch below).

    It does run 30B models, though. And it is cheap.
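
    The “two cards in a trench coat” point is easy to see from software. A minimal sketch, assuming a PyTorch build old enough to still support Kepler (e.g. one built against CUDA 11.4):

    ```python
    # List CUDA devices. A K80 shows up as two separate ~12 GB
    # "Tesla K80" entries (compute capability 3.7), not one 24 GB card.
    import torch

    for i in range(torch.cuda.device_count()):
        p = torch.cuda.get_device_properties(i)
        print(f"device {i}: {p.name}, {p.total_memory / 1024**3:.1f} GiB, "
              f"compute capability {p.major}.{p.minor}")
    ```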