@[email protected] to [email protected]English • 6 months agoWe have to stop ignoring AI’s hallucination problemwww.theverge.commessage-square196fedilinkarrow-up1529
arrow-up1529external-linkWe have to stop ignoring AI’s hallucination problemwww.theverge.com@[email protected] to [email protected]English • 6 months agomessage-square196fedilink
minus-square@[email protected]linkfedilinkEnglish2•edit-26 months agoThey are right though. LLM at their core are just about determining what is statistically the most probable to spit out.
minus-square@[email protected]linkfedilinkEnglish1•6 months agoYour 1 sentence makes more sense than the slop above.