Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • @[email protected]
    2 · 6 months ago

    it is absolutely capable of coming up with its own logical stuff

    interesting, in my experience it's only been good at repeating things and failing on unexpected inputs - it can answer pretty accurately whether a small number is even or odd, but not a large one, which to me indicates it's not reasoning but parroting answers (rough sketch of that kind of test below)

    do you have example prompts where it showed clear logical reasoning?
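
    for reference, a rough sketch of the parity test i mean, where ask_llm is a hypothetical stand-in for whatever chat-model API you use (not a real library call):

    ```python
    import random

    def ask_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to your chat model and return its text reply."""
        raise NotImplementedError("wire this up to your own model/API")

    def parity_accuracy(digits: int, trials: int = 20) -> float:
        """Ask about random `digits`-digit numbers and score the even/odd answers."""
        correct = 0
        for _ in range(trials):
            n = random.randint(10 ** (digits - 1), 10 ** digits - 1)
            reply = ask_llm(f"Is {n} even or odd? Answer with one word.").strip().lower()
            correct += reply.startswith("even" if n % 2 == 0 else "odd")
        return correct / trials

    # a model that actually reasons about the last digit shouldn't get worse
    # as the numbers get longer, e.g.:
    # for d in (2, 10, 50):
    #     print(d, parity_accuracy(d))
    ```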

    • @[email protected]
      1 · edited · 6 months ago

      Examples showing that it comes up with its own solutions to a problem? Just ask it something that could not have been on the Internet before. Professor talking about AGI in GPT-4

      A personal example would be asking it to write Python code to solve a 2D thermal heat flux problem given some context and constraints.
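
      For concreteness, here is a minimal sketch of that kind of problem, assuming a square plate with fixed-temperature (Dirichlet) edges and an explicit finite-difference scheme; the grid size, diffusivity, conductivity and boundary temperatures are made up for illustration, not from the original prompt:

      ```python
      import numpy as np

      # Explicit finite-difference solve of the 2D heat equation dT/dt = alpha * laplacian(T)
      # on a square plate with fixed-temperature (Dirichlet) edges. Numbers are illustrative.
      n = 50                       # grid points per side
      alpha = 1e-4                 # thermal diffusivity [m^2/s]
      dx = 0.01                    # grid spacing [m]
      dt = 0.2 * dx**2 / alpha     # time step within the stability limit dx^2 / (4 * alpha)

      T = np.zeros((n, n))         # temperature field [degC]
      T[0, :] = 100.0              # hot top edge
      T[-1, :] = 0.0               # cold bottom edge
      T[:, 0] = T[:, -1] = 50.0    # side edges

      for _ in range(20000):       # march toward an approximate steady state
          lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
                 - 4.0 * T[1:-1, 1:-1]) / dx**2
          T[1:-1, 1:-1] += alpha * dt * lap

      k = 200.0                    # thermal conductivity [W/(m*K)]
      qy, qx = np.gradient(-k * T, dx)   # heat flux components, q = -k * grad(T)
      print(T.round(1))
      ```

      Whether the model gets the stability condition (alpha * dt / dx^2 <= 1/4 for this explicit scheme) right without being told is a decent test of how much it actually "understands" the problem.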

      • @[email protected]
        1 · 6 months ago

        well, i just tried it, and its answer is meh –

        i asked it to transcribe “zenquistificationed” (a made-up word) into IPA, and it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with; that’s likely how a native english speaker would read that word.

        i then asked it to transcribe that into japanese katakana, and it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)

        • @[email protected]
          1 · 6 months ago

          this paper says it is capable of original thought. It also “speaks” highly of it in other respects. That has also been my experience using it for… over a year(!) now.