Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • 𝓔𝓶𝓶𝓲𝓮
    6 months ago

People anthropomorphise LLMs way too much. I get that at first glance they sound like a living being, even human, and it’s exciting, but we’ve had enough time with them by now to know it’s just a very cool big-data processing algorithm.

It’s like boomers asking me what the computer is doing and referring to the computer as a person. It makes me wonder: will I be as confused as them when I’m old?

    • @[email protected]
6 months ago

Oh, hi, second coming of Edsger Dijkstra.

> I think anthropomorphism is worst of all. I have now seen programs “trying to do things”, “wanting to do things”, “believing things to be true”, “knowing things” etc. Don’t be so naive as to believe that this use of language is harmless. It invites the programmer to identify himself with the execution of the program and almost forces upon him the use of operational semantics.

He may have thought like that when using language like that. You might think like that. The bulk of programmers don’t. Also, I strongly object to the dissing of operational semantics. Really dig that handwriting though; a well-rounded lecturer’s hand.

      • 𝓔𝓶𝓶𝓲𝓮
6 months ago

> Oh, hi, second coming of Edsger Dijkstra.

Don’t say those things to me. I have special-snowflake disorder. I literally got high reading this, seeing that a famous intelligent person has the same opinion as me. Great minds… god, see what you have done.

    • Flying Squid
6 months ago

      It’s only going to get worse now that ChatGPT has a realistic-sounding voice with simulated emotions.