ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • Obinice
33 • 1 year ago

    Well, it’s a good thing absolutely no clinician is using it to figure out how to treat their patient’s cancer… then?

    I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it’s outside of the scope of the application.

    • @[email protected]
11 • 1 year ago

The fear is that hospital administrators, armed with their MBA degrees, will consider using it to replace expensive, experienced physicians and diagnosticians.

      • @[email protected]
11 • 1 year ago

        They’ve been trying this shit for decades already with established AI like Big Blue. This isn’t a new pattern. Those in charge need to keep driving costs down and profit up.

        Race to the bottom.

      • Obinice
2 • 1 year ago

        If that were legal, I’d absolutely be worried, you make a good point.

Even doctors need additional specialised qualifications to do things like diagnose illnesses from radiographic imagery, etc. Specialised AI is making good progress in aiding these sorts of tasks, but a generalised and very poor AI like ChatGPT will never be legally certified to do this sort of thing.

Once we have a much more effective generalised AI, things will get more interesting. It will have to prove itself thoroughly before being certified, though, so even after it appears it will be a few years before we see it used in clinical applications.