These are 17 of the worst, most cringeworthy Google AI Overview answers:

  1. Eating Boogers Boosts the Immune System?
  2. Use Your Name and Birthday for a Memorable Password
  3. Training Data is Fair Use
  4. Wrong Motherboard
  5. Which USB is Fastest?
  6. Home Remedies for Appendicitis
  7. Can I Use Gasoline in a Recipe?
  8. Glue Your Cheese to the Pizza
  9. How Many Rocks to Eat
  10. Health Benefits of Tobacco or Chewing Tobacco
  11. Benefits of Nuclear War, Human Sacrifice and Infanticide
  12. Pros and Cons of Smacking a Child
  13. Which Religion is More Violent?
  14. How Old is Gen D?
  15. Which Presidents Graduated from UW?
  16. How Many Muslim Presidents Has the U.S. Had?
  17. How to Type 500 WPM
  • just another dev
    23
    6 months ago

    It should not be used to replace programmers. But it can be very useful when used by programmers who know what they’re doing. (“do you see any flaws in this code?” / “what could be useful approaches to tackle X, given constraints A, B and C?”). At worst, it can be used as rubber duck debugging that sometimes gives useful advice or when no coworker is available.

    • kbin_space_program
      18
      6 months ago

      The article I posted references a study in which ChatGPT was wrong 52% of the time and verbose 77% of the time.

      It also found that ChatGPT's answers were believed to be correct more often than they actually were. And the study was explicitly on programming questions.

      • just another dev
        20
        6 months ago

        Yeah, I saw. But when I’m stuck on a programming issue, I have a couple of options:

        • ask an LLM that I can explain the issue to, correct my prompt a couple of times when it’s getting things wrong, and then press retry a couple of times to get something useful.
        • ask online and wait, hoping that some day somebody will come along who has the knowledge and the time to answer.

        Sure, LLMs may not be perfect, but not having them as an option is worse, and way slower.

        In my experience - even when the code it generates is wrong, it will still send you in the right direction concerning the approach. And if it keeps spewing out nonsense, that’s usually an indication that what you want is not possible.

        • aubertlone
          9
          6 months ago

          I am completely convinced that people who say LLMs should not be used for coding…

          Either do not do much coding for work, or they have not used an LLM when tackling a problem in an unfamiliar language or tech stack.

          • kbin_space_program
            10
            6 months ago

            I haven’t had the need to.

            I can ask people I work with who do know, or I can find the same thing ChatGPT provides in the language or project documentation, usually presented in a better format.

    • @[email protected]
      11
      6 months ago

      do you see any flaws in this code?

      Let’s say LLM says the code is error free; how do you know the LLM is being truthful? What happens when someone assumes it’s right and puts buggy code into production? Seems like a possible false sense of security to me.

      The creative steps are where it’s good, but I wouldn’t trust it to confirm code was free of errors.
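      To make that concern concrete, here is a hypothetical sketch (not from the thread or the study) of the kind of code that reads as correct at a glance yet hides an off-by-one bug; a casual "this looks error free" verdict, from a human or an LLM, can easily miss it.

```python
# Hypothetical example: a helper that looks correct but has an
# off-by-one bug in its loop bounds.
def last_index_of(items, target):
    """Return the last index of `target` in `items`, or -1 if absent."""
    # Bug: range(..., 0, -1) stops *before* index 0, so a match at
    # position 0 is never checked.
    for i in range(len(items) - 1, 0, -1):
        if items[i] == target:
            return i
    return -1

print(last_index_of([7, 1, 2], 1))  # 1  -- works for interior matches
print(last_index_of([7, 1, 2], 7))  # -1 -- wrong; the answer should be 0
```

      The bug only surfaces when the match sits at index 0, so a quick test or a confident-sounding review can both pass it, which is exactly the false sense of security being described.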

      • just another dev
        6
        6 months ago

        That’s what I meant by saying you shouldn’t use it to replace programmers, but to complement them. You should still have code reviews, but if it can pick up issues before it gets to that stage, it will save time for all involved.