Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • @[email protected]
    link
    fedilink
    English
    4
    edit-2
    1 month ago

    People anthropomorphise LLMs way too much. I get that at first glance they sound like a living being, even human, and it’s exciting, but we’ve had some time already to know it’s just a very cool big-data processing algorithm.

    It’s like boomers asking me what the computer is doing and referring to the computer as a person. It makes me wonder: will I be as confused as them when I’m old?

    • @[email protected]
      link
      fedilink
      English
      31 month ago

      Oh, hi, second coming of Edsger Dijkstra.

      > I think anthropomorphism is worst of all. I have now seen programs “trying to do things”, “wanting to do things”, “believing things to be true”, “knowing things” etc. Don’t be so naive as to believe that this use of language is harmless. It invites the programmer to identify himself with the execution of the program and almost forces upon him the use of operational semantics.

      He may think like that when using language like that. You might think like that. The bulk of programmers doesn’t. Also, I strongly object to the dissing of operational semantics. Really dig that handwriting though; a well-rounded lecturer’s hand.

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        1 month ago

        > Oh, hi, second coming of Edsger Dijkstra.

        Don’t say things like that to me. I have special-snowflake disorder. I got literally high reading this, seeing that a famous intelligent person has the same opinion as me. Great minds… god, see what you have done.

    • @[email protected]
      link
      fedilink
      English
      11 month ago

      Probably not about computers per se. The Greatest Generation knew a lot more about horses than the average person does today, and similarly we know more about the things that have mattered to us over the course of our lifetimes.

      What would get weird for us is if, when we are retirement age - ofc we can never retire, bc capitalism - someone talks about the new horglesplort based on alien vibrations which are computer-generated from the 11th dimension of string theory, and we are all like “wut!?”

      fr fr no cap skibidi toilet rizz teabag

      That said, humanity seems not only to have slowed down the accretion of new knowledge but to have actually gone backwards: children today won’t live as long as boomers did, and despite being on mobile devices all day long, most don’t have the foggiest clue how computing works, as in programming or even binary. So we will likely be confused in the opposite way: “why can’t you understand this?”

    • Flying Squid
      1 month ago

      It’s only going to get worse now that ChatGPT has a realistic-sounding voice with simulated emotions.