• @[email protected]
    7
    50 minutes ago

    It could be that Gemini was unsettled by the user’s research about elder abuse, or simply tired of doing its homework.

    That’s… not how these work. Even if they were capable of feeling unsettled, that’s kind of a huge leap from a true or false question.

  • @[email protected]
    21
    2 hours ago

    I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?

    • @[email protected]
      14
      2 hours ago

      AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.

      Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.

      Neither of these justifies current levels of hype.
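
      A toy sketch of that "human in the driver's seat" kind of predictive text, using nothing but bigram counts over a tiny sample corpus (illustrative only; real systems use far more sophisticated models):

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny corpus.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def suggest(word):
    """Suggest the most common follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "cat" follows "the" twice, "mat" only once
```

      The tool only ever offers a suggestion; the user accepts or ignores it, which is exactly what keeps the human in the driver's seat.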

      • @[email protected]
        1
        23 minutes ago

        Go look at the models available on huggingface.

        There are applications in visual question answering, video-to-text, depth estimation, 3D reconstruction from a photo, object detection, visual classification, language-to-language translation, text-to-realistic-speech, robotics reinforcement learning, and weather forecasting, and those are just the surface-level models.

        It absolutely justifies current levels of hype, because the research done now will put millions of people out of jobs, and it will be much cheaper than paying people to do the work.

        The people saying it’s hype are the same people who said the internet was a fad. Did we have a bubble of bullshit? Absolutely. But there is valid reason for the hype, and we will filter out the useless stuff eventually. It’s already changed entire industries practically overnight.

    • CubitOom
      4
      1 hour ago

      It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.

      However, it gets really funny and weird when it tries to read advanced mathematics formulas.

      I have also heard decent arguments for translation although in most cases it would still be better to learn the language or use a professional translator.

    • @[email protected]
      2
      46 minutes ago

      I’m still really struggling to see an actual formidable use case

      It’s an excellent replacement for middle management blather. Content that has no backing in data or science but needs to sound important.

    • @[email protected]
      7
      2 hours ago

      The relentless pursuit of capitalism and reduced labor costs. I still don’t think anyone knows how effective it’s going to be at this point. But companies are investing billions to find out.

    • @[email protected]
      1
      38 minutes ago

      I don’t use it for writing directly, but I do like to use it for worldbuilding. When I think of a general concept that could be explored in many different ways, it’s nice to be able to give it to an LLM and ask it to consider all the ways it could imagine such an idea playing out. It also doubles as a test: I usually have some idea of what I’d like, and if the model comes up with something similar on its own, that makes me feel the idea would easily resonate with people. A lot of the time it also comes up with things I hadn’t considered that are totally worth exploring. But I do agree that the only, as you say, “formidable” use case for this stuff at the moment is as a research assistant helping you in serious intellectual pursuits.

  • @[email protected]
    5
    1 hour ago

    I suspect it may be due to a similar habit I have when chatting with a corporate AI: I will intentionally salt my inputs with random profanity or non sequitur info, partly for the lulz, but also to poison those pieces of shit’s training data.

      • @[email protected]
        2
        22 minutes ago

        They don’t. The models are trained on sanitized data, and don’t permanently “learn”. They have a large context window to pull from (reaching 200k ‘tokens’ in some instances) but lots of people misunderstand how this stuff works on a fundamental level.
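
        That fixed context window can be illustrated with a toy sketch (whitespace splitting stands in for real tokenization, and the window here is tiny; production models reach hundreds of thousands of tokens):

```python
CONTEXT_WINDOW = 8  # toy size; some real models reach ~200k tokens

def visible_context(conversation, window=CONTEXT_WINDOW):
    """Return only the most recent `window` tokens -- everything
    older silently falls out of what the model can 'see'."""
    tokens = conversation.split()
    return tokens[-window:]

chat = "user: hello there bot: hi user: please remember the word banana"
print(visible_context(chat))  # the opening "hello there" has dropped out
```

        Nothing outside that window influences the reply, and nothing inside it is ever written back into the model’s weights.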