ChatGPT’s new AI store is struggling to keep a lid on all the AI girlfriends
OpenAI: ‘We also don’t allow GPTs dedicated to fostering romantic companionship’

  • @[email protected]
28 points · 11 months ago

I’d love to have an AI assistant/girlfriend like Joi from Blade Runner 2049, something I could jerk off to one minute, then have her prepare my taxes and order a pizza the next. However, these ChatGPT girlfriends all seem like they’re just subscription chatbots. Maybe some day we’ll get there and nerds will work up a local, open-source slutty AI girlfriend, but for now they’re all just crap.

    • Corroded
6 points · 11 months ago

I think you can self-host an AI chatbot these days

      • @[email protected]
14 points · 11 months ago

You can, and it’s easier than you might think! Check out a platform like Oobabooga and find a nice 4-bit quantized LLM of a flavor you prefer. Check out TheBloke on Hugging Face; they’ve quantized a ton of great LLMs.
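If you’d rather skip the web UI entirely, a minimal sketch of the same idea using llama-cpp-python (an assumption on my part; the comment above only mentions Oobabooga). The model filename is a placeholder — download a 4-bit GGUF file from TheBloke’s Hugging Face page first and point MODEL_PATH at it:

```python
# Hypothetical sketch: chat with a locally hosted 4-bit quantized model via
# llama-cpp-python (pip install llama-cpp-python). MODEL_PATH is a placeholder;
# supply any GGUF quantization you downloaded yourself.
import os

MODEL_PATH = "./model.Q4_K_M.gguf"  # placeholder, not a real file in this repo

if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, n_ctx=2048)
    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello! Who are you?"}],
        max_tokens=64,
    )
    print(reply["choices"][0]["message"]["content"])
else:
    print("Model file not found; download a quantized GGUF model first.")
```

Everything runs on your own machine — no subscription, no data leaving the box.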

            • @[email protected]
2 points · 11 months ago

              Exactly! If you only want to use a Large Language Model (LLM) to run your own local chatbot, then using a quantized version will dramatically improve speed and performance. It also allows consumer hardware to run larger models which would otherwise be prohibitively resource intensive.
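To make the “smaller set of values” idea concrete, here’s a toy Python sketch of quantizing a few float weights onto a 4-bit grid and measuring the round-off error. The min/max scaling scheme here is deliberately simplified — it is not the exact format any real quantized LLM (GGUF, GPTQ, etc.) uses:

```python
# Toy illustration of quantization: map floats onto a 4-bit grid (16 levels)
# and back, then measure the round-off (quantization) error.

def quantize(values, bits=4):
    levels = 2 ** bits - 1                 # 15 steps between min and max
    lo, hi = min(values), max(values)
    scale = (hi - lo) / levels
    q = [round((v - lo) / scale) for v in values]   # small ints in [0, 15]
    approx = [lo + qi * scale for qi in q]          # dequantized values
    return q, approx

weights = [-0.31, 0.02, 0.45, -0.18, 0.27]
q, approx = quantize(weights)
max_err = max(abs(w - a) for w, a in zip(weights, approx))
print(q)        # [0, 7, 15, 3, 11] -- each original float now fits in 4 bits
print(max_err)  # round-to-nearest keeps the error within half a step (scale/2)
```

Storing a 4-bit integer instead of a 32-bit float is an 8x memory saving per weight, which is exactly why a quantized 13B model can fit on consumer GPUs.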

@[email protected] (bot)
2 points · 11 months ago

Here’s the summary for the Wikipedia article you mentioned in your comment:

              Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error.


            • Lemminary
1 point · 11 months ago

Ah, thanks! I’m only familiar with the word in other contexts, so it threw me off.