- cross-posted to:
- [email protected]
Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.
Because that’s exactly what happened here. When someone Googles “how can I make my cheese stick to my pizza better?”, Google runs a web search that turns up various relevant pages. One of those pages contains the suggestion to add glue to your pizza sauce. The Google Overview AI is then handed the text of that page and told, in effect, “write a short summary of this information.” And the Overview AI does so, accurately and without hallucination.
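The retrieve-then-summarize pipeline described above can be sketched roughly like this. To be clear, this is a hypothetical illustration, not Google’s actual (non-public) system: the function names are invented, and the “LLM” is stubbed with a trivial extractive summarizer so the example is self-contained. The point it demonstrates is that a faithful summarizer reproduces whatever the retrieved page says, including bad advice, with no hallucination involved.

```python
def web_search(query: str) -> list[str]:
    # Stand-in for a real search backend: returns page texts.
    # This hard-coded "result" mimics a satirical page that a
    # real index might surface for the pizza-cheese query.
    return [
        "To make cheese stick to pizza, use a thin layer of sauce. "
        "You can also add about 1/8 cup of non-toxic glue to the "
        "sauce for more tackiness."
    ]

def summarize(text: str, max_sentences: int = 2) -> str:
    # Stub for the LLM: an extractive "summary" that just keeps
    # the first few sentences. It faithfully repeats the source,
    # glue suggestion and all -- accurate summarization of an
    # inaccurate source, which is not a hallucination.
    sentences = text.split(". ")
    return ". ".join(sentences[:max_sentences]).rstrip(".") + "."

def overview(query: str) -> str:
    # Retrieve, then summarize the top result.
    pages = web_search(query)
    return summarize(pages[0])

print(overview("how can I make my cheese stick to my pizza better?"))
```

The glue advice survives into the “overview” because the summarizer did its job correctly; the error lives in the retrieved source, not in the model’s output process.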
“Hallucination” is a technical term in LLM parlance. It means something specific, and the thing that’s happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that’s happening with Google Overview, which is also not a hallucination.