Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • kbin_space_program
    10
    7 months ago

    There are a lot of people, including Google itself, claiming that this behaviour is an isolated incident, and basically blaming users for trolling them.

    https://www.bbc.com/news/articles/cd11gzejgz4o

    I was working from the concept of “hallucinations” as things returned that are unrelated to the input query, not things that are directly part of the model, as with the glue-on-pizza answer.

      • kbin_space_program
        4
        7 months ago

        A Google spokesperson told the BBC they were “isolated examples”.

        Some of the answers appeared to be based on Reddit comments or articles written by the satirical site The Onion.

        But Google insisted the feature was generally working well.

        “The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” it said in a statement.

        It said it had taken action where “policy violations” were identified and was using them to refine its systems.

        That’s precisely what they are saying.