@[email protected] to [email protected] • English • 4 months ago
When A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results. (www.nytimes.com) • 51 comments
minus-square@[email protected]linkfedilinkEnglish4•4 months agoIf AI feedback starts going the other way around we should be REALLY scared. Imagine it just become sentient and superintelligent and read all that we are saying about it.
minus-square@[email protected]linkfedilinkEnglish5•4 months agoThis is completely unrelated. Besides, how does AI suddenly become sentient?
minus-square@leftzerolinkEnglish3•4 months agoLLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).
If AI feedback starts going the other way around we should be REALLY scared. Imagine it just become sentient and superintelligent and read all that we are saying about it.
This is completely unrelated.
Besides, how does AI suddenly become sentient?
It was a joke.
LLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).