Surprised pikachu face

  • @[email protected]
    105 days ago

    I like Ollama and recommend it for tinkering, but I admit this “LLM Explorer” is quite neat, thanks to sections like “LLMs Fit 16GB VRAM”.

    Ollama just works, but it doesn’t help you pick which model best fits your needs.
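
    For context, “fits in 16GB VRAM” style filtering mostly boils down to simple arithmetic on parameter count and quantization. A minimal sketch of that back-of-the-envelope estimate (the 1.2× overhead factor for KV cache and activations is my own assumption, not something from the LLM Explorer site; actual usage varies by runtime and context length):

    ```python
    # Rough heuristic: weight memory ≈ parameter count × bytes per weight,
    # plus ~20% overhead for KV cache and activations (assumed, varies in practice).

    def est_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
        """Estimate VRAM in GB needed to run a quantized model."""
        bytes_per_weight = bits_per_weight / 8
        return params_billion * bytes_per_weight * overhead

    # A 13B model at 4-bit quantization:
    print(est_vram_gb(13, 4))  # 7.8 GB, comfortably inside a 16 GB card
    # A 33B model at 4-bit quantization:
    print(est_vram_gb(33, 4))  # 19.8 GB, too big for a 16 GB card
    ```

    That is roughly why the site can bucket models by VRAM at all: quantization level and parameter count dominate the footprint.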

    • @[email protected]
      24 days ago

      pick which model best fits your needs.

      Why would I put in the effort to install all this locally? Websites win in terms of convenience.

      • morriscox
        23 days ago

        I want to work on my stuff in peace and in private, without worrying about a company grabbing it, using it for themselves, and giving/selling it to other outfits, including the government. “If you have nothing to hide…” is bullshit and needs to die.

      • @[email protected]
        24 days ago

        I don’t think I understand your point. Are you saying there is no benefit to running locally, and that websites or APIs are more convenient?

        • @[email protected]
          14 days ago

          I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response:

          For use cases where customization helps and quality doesn’t matter much due to scale, e.g. spam, LLMs and related tools are amazing.