- cross-posted to:
- [email protected]
Google is coming in for sharp criticism after a video went viral of the Google Nest assistant refusing to answer basic questions about the Holocaust — but having no problem answering questions about the Nakba.
Any idea why they don’t just apply LLMs to natural language processing? “Turn the living room lights off and bedroom lights on” should be pretty simple to parse, yet my assistant has a breakdown any time I give it more than one command at a time.
It’s expensive and slow, especially to do well and to hook it up to 3rd party calls like “turn_off_lights([“living room”])”.
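For what it’s worth, the “3rd party calls” part is basically the tool-calling pattern: the LLM returns structured JSON naming a function and its arguments, and your code dispatches it. A minimal sketch of the dispatch side, with the function names and the hard-coded model output made up for illustration (no actual API call):

```python
# Local "smart home" functions the model is allowed to call (hypothetical names).
def turn_off_lights(rooms):
    for room in rooms:
        print(f"Lights off in {room}")

def turn_on_lights(rooms):
    for room in rooms:
        print(f"Lights on in {room}")

TOOLS = {"turn_off_lights": turn_off_lights, "turn_on_lights": turn_on_lights}

# What a tool-calling LLM might return for
# "turn the living room lights off and bedroom lights on"
# (hard-coded here instead of a real model call).
llm_response = [
    {"name": "turn_off_lights", "arguments": {"rooms": ["living room"]}},
    {"name": "turn_on_lights", "arguments": {"rooms": ["bedroom"]}},
]

# Dispatch each structured call to the matching local function.
for call in llm_response:
    TOOLS[call["name"]](**call["arguments"])
```

The hard part isn’t the dispatch, it’s doing that round trip to a big model fast and cheaply enough that it doesn’t feel worse than the current keyword matching.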