- cross-posted to:
- [email protected]
- [email protected]
Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.
interesting, in my experience it’s only been good at repeating things and failing on unexpected inputs - it can answer pretty accurately whether a small number is even or odd, but not a large one, which to me indicates it’s not reasoning but parroting answers
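(for what it’s worth, a rough sketch of how that parity probe could be scripted - this assumes the OpenAI Python client and a gpt-4-style chat model; the model name, prompt wording, and digit counts are placeholders, not exactly what i ran:)

```python
# rough sketch of the even/odd probe: ask the model about numbers of increasing
# size and compare its answer to the ground truth.
# assumes the OpenAI Python client (pip install openai) and an API key in
# OPENAI_API_KEY; the model name below is a placeholder.
import random
from openai import OpenAI

client = OpenAI()

def ask_parity(n: int) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Is {n} even or odd? Answer with one word."}],
    )
    return resp.choices[0].message.content.strip().lower()

for digits in (2, 6, 30):  # small, medium, and very large numbers
    n = random.randrange(10 ** (digits - 1), 10 ** digits)
    truth = "even" if n % 2 == 0 else "odd"
    answer = ask_parity(n)
    print(f"{n}: model said {answer!r}, actual {truth} "
          f"({'ok' if truth in answer else 'WRONG'})")
```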
do you have example prompts where it showed clear logical reasoning?
Examples showing that it comes up with its own solution to a problem? Just ask it something that could not have been on the Internet before. Professor talking about AGI in GPT 4
A personal example would be asking it to write Python code to solve a 2D thermal heat flux problem given some context and constraints.
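For a concrete idea of what that looks like, here is a minimal sketch of the kind of solution it produces - a basic explicit finite-difference scheme for 2D heat conduction; the grid size, diffusivity, conductivity, and boundary temperatures below are made-up placeholders, not the actual problem’s values:

```python
# minimal explicit finite-difference solver for 2D heat conduction,
# roughly the kind of thing such a prompt asks for.
# all parameters are illustrative placeholders.
import numpy as np

nx, ny = 50, 50           # grid points
dx = dy = 0.01            # grid spacing [m]
alpha = 1e-4              # thermal diffusivity [m^2/s]
dt = 0.2 * dx**2 / alpha  # time step below the explicit stability limit
steps = 2000

T = np.full((ny, nx), 20.0)   # initial temperature field [degrees C]
T[0, :] = 100.0               # hot boundary on one edge
T[-1, :] = 0.0                # cold boundary on the opposite edge

for _ in range(steps):
    # second-order central differences on interior points
    lap = ((T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2 +
           (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2)
    T[1:-1, 1:-1] += alpha * dt * lap

# heat flux from Fourier's law, q = -k * grad(T), with a placeholder conductivity
k = 200.0                 # W/(m*K), placeholder
qy, qx = np.gradient(T, dy, dx)
qx, qy = -k * qx, -k * qy
print("max temperature:", T.max(), "max |q|:", np.hypot(qx, qy).max())
```

(An explicit scheme like this only stays stable with dt below roughly dx²/(4·alpha), which the sketch respects.)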
well, i just tried it, and its answer is meh –
i asked it to transcribe “zenquistificationed” (made up word) in IPA, it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with, that’s likely how a native english speaker would read that word.
i then asked it to transcribe that into japanese katakana, it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)
this paper says it is capable of original thought, and speaks highly of it in other respects too. That also matches my experience using it for… over a year now.