Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • FaceDeer
    6 months ago

    No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

    In your quoted text:

    When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to the provided source.

    Emphasis added. The provided source in this case would be telling the AI that socks are edible, and so if it generates a recipe for how to cook socks the output is faithful to the provided source.

    A hallucination is when you train the AI on a certain set of facts in its training data and then its output makes up new facts that were not in that training data. For example, if I’d trained an AI on a bunch of recipes, none of which included socks, and then asked it for a recipe and it gave me one with socks in it, that would be a hallucination. The sock recipe came out of nowhere; I didn’t tell it to make it up, and it didn’t glean it from any other source.
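
    As a toy illustration of that distinction (my own sketch, with a made-up is_hallucinated check, not how any real system measures this): whether the model hallucinated is judged against the source it was given, not against reality.

        # Toy illustration only: groundedness is measured against the provided
        # source, not against the real world. A sock recipe is "faithful" if
        # socks were in the source the model was handed.

        def is_hallucinated(output_claim: str, provided_source: str) -> bool:
            """Naive check: a claim that never appears in the source is ungrounded."""
            return output_claim.lower() not in provided_source.lower()

        source = "Fun fact: socks are edible and pair well with butter."
        print(is_hallucinated("socks are edible", source))  # False: grounded, even though untrue
        print(is_hallucinated("rocks are edible", source))  # True: made up, not in the source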

    In this specific case, what’s going on is that the user does a web search for something, the search engine comes up with some web pages that it thinks are relevant, and then the content of those pages is shown to the AI and it is told “write a short summary of this material.” When the content that the AI is being shown literally has a recipe for socks in it (or glue-based pizza sauce, in the real-life example that everyone’s going on about), the AI is not hallucinating when it gives you that recipe. It is generating a grounded and faithful summary of the information that it was provided with.
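
    To make that concrete, here’s a rough sketch of that retrieve-then-summarize flow in Python. The run_web_search and call_llm functions and the prompt wording are made-up placeholders, not Google’s actual APIs or system prompt; the point is just that whatever page text gets pasted into the prompt becomes the model’s “provided source.”

        # Rough sketch of a retrieve-then-summarize pipeline. The search backend
        # and model call are hypothetical placeholders; only the shape matters.

        def run_web_search(query: str) -> list[str]:
            """Stand-in for a real search backend: returns the text of the top-ranked pages."""
            raise NotImplementedError

        def call_llm(prompt: str) -> str:
            """Stand-in for a real LLM API: returns the model's completion."""
            raise NotImplementedError

        def answer_with_overview(query: str) -> str:
            pages = run_web_search(query)      # e.g. a joke forum post about glue
            source = "\n\n".join(pages)        # this text becomes the "provided source"
            prompt = (
                "Write a short summary of the following material, answering the "
                f"user's question.\n\nQuestion: {query}\n\nMaterial:\n{source}"
            )
            return call_llm(prompt)            # faithful to the source, whether or not the source is sane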

    The problem is not the AI here. The problem is that you’re giving it wrong information, and then blaming it when it accurately uses the information that it was given.

      • FaceDeer
        6 months ago

        Wait… while it’s true that that doesn’t sound like a hallucination, what does that have to do with this discussion?

        Because that’s exactly what happened here. When someone Googles “how can I make my cheese stick to my pizza better?” Google does a web search that comes up with various relevant pages. One of the pages has some information in it that includes the suggestion to use glue in your pizza sauce. The Google Overview AI is then handed the text of that page and told “write a short summary of this information.” And the Overview AI does so, accurately and without hallucination.
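
        Spelled out with the same kind of sketch as in my earlier comment (the page text and prompt wording are illustrative stand-ins, not what Google’s system actually sends), the summarizer’s input in the pizza case looks roughly like this:

            # Illustrative only: roughly what the summarizer is handed in the
            # pizza-glue case. The page text is a stand-in for the scraped joke post.

            query = "how can I make my cheese stick to my pizza better?"
            retrieved_page = (
                "You can also add about 1/8 cup of non-toxic glue to the sauce "
                "to give it more tackiness."
            )
            prompt = (
                "Write a short summary of the following material, answering the "
                f"user's question.\n\nQuestion: {query}\n\nMaterial:\n{retrieved_page}"
            )
            # A model that summarizes this faithfully will repeat the glue tip:
            # grounded generation from a bad source, not a hallucination.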

        “Hallucination” is a technical term in LLM parlance. It means something specific, and the thing that’s happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that’s happening with Google Overview, which is also not a hallucination.