ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • @[email protected]
    link
    fedilink
    English
    81 year ago

    It’s even worse that someone actually did a study instead of simply acknowledging or realizing that ChatGPT is happy to just make stuff up.

    Sure, the world should just trust preconceptions instead of doing science to check our beliefs. That worked great for tens of thousands of years of prehistory.

    • @[email protected]
      link
      fedilink
      English
      28
      edit-2
      1 year ago

      It’s not merely a preconception. It’s a rather obvious and well-known limitation of these systems. What I am decrying is that some people, from apparent ignorance, think things like “ChatGPT can give a reliable cancer treatment plan!” or “here, I’ll have it write a legal brief and not even check it for accuracy”. But sure, I agree with you, minus the needless sarcasm. It’s useful to prove or disprove even absurd hypotheses. And people clearly need to be told, definitively, that ChatGPT is not always factual, so hopefully this helps.

      • @[email protected]
        link
        fedilink
        English
        81 year ago

        I’d say that a measurement always trumps argument. At least then you know how accurate these systems actually are; a claim like the following could never be derived from reasoning alone:

        The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

        • @[email protected]
          link
          fedilink
          English
          51 year ago

          That’s useful. It’s also good to note that the information the agent can relay depends heavily on the data used to train the model, so these results could change as the model is updated.

    • @[email protected]
      link
      fedilink
      English
      91 year ago

      Why the hell are people downvoting you?

      This is absolutely correct. We need to do the science. Always. Doesn’t matter what the theory says. Doesn’t matter that our guess is probably correct.

      Plus, all these studies tell us much more than just the conclusion.

    • @[email protected]
      link
      fedilink
      English
      71 year ago

      It’s not even a preconception; it’s willful ignorance. The website itself tells you multiple times that it is not accurate.

      The bottom of every chat has this text: “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT August 3 Version”

      And when you first use it, a modal pops up explaining the same thing.

    • @[email protected]
      link
      fedilink
      English
      71 year ago

      “After an extensive three-year study, I have discovered that touching a hot element with one’s bare hand does, in fact, hurt.”

      “That seems like it was unnecessary…”

      “Do U even science bro?!”

      Not everything automatically deserves a study. Were there any non-rando people out there claiming that ChatGPT could totally generate legit cancer treatment plans that people could then follow?

    • @[email protected]
      link
      fedilink
      English
      31 year ago

      ChatGPT isn’t some newly discovered sentient species.

      It’s a machine designed and built by human engineers.

      This is like suggesting that we study fortune cookies to see whether they can accurately forecast the future. The manufacturer can simply tell you the limitations of their product: namely, that they cannot divine the future.