The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii. It examined 517 programming questions from Stack Overflow, which were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

  • @[email protected] · 26 months ago

    Exactly. I suspect many of the people who complain about its inadequacies don’t really work in an industry that can leverage the potential of this tool.

    You’re spot on about the documentation aspect. I can install a package and rely on the LLM to know its methods and such, and if it doesn’t, I can spend some time reading the docs myself.

    Also, I suck at regex but writing a comment about what the regex will do will make the LLM do it for me. Then I’ll test it.
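    The workflow described above can be sketched out. This is a hypothetical example, not from the thread: the comment states the intent, the pattern stands in for what an LLM might suggest, and the assertions are the "then I'll test it" step.

    ```python
    import re

    # Comment handed to the LLM: "match ISO-8601 dates like 2024-05-23"
    # Hypothetical pattern an LLM might suggest in response:
    ISO_DATE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

    def find_dates(text: str) -> list[str]:
        """Return all ISO-formatted dates found in text."""
        return [m.group(0) for m in ISO_DATE.finditer(text)]

    # "Then I'll test it" — verify the generated pattern before trusting it:
    assert find_dates("released 2024-05-23, patched 2024-06-01") == ["2024-05-23", "2024-06-01"]
    assert find_dates("no dates here") == []
    ```

    The point of the final assertions is exactly the commenter's habit: treat the generated regex as a draft and confirm its behavior on known inputs before shipping it.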

    • Zos_Kia · 26 months ago

      Honestly, I started at a new job two weeks ago and I’ve been breezing through subjects (notably thanks to ChatGPT) at an alarming rate. I’m happy, the boss is happy, OpenAI gets their 20 bucks a month. It’s fascinating to read all the posts from people who claim it cannot generate any good code - sounds like a skill issue to me.