This tendency of AI to just outright lie when it doesn’t have a real answer is mildly upsetting. It suggests that the people building these systems have no clue how (or no interest in how) to implement basic ethical guidelines. That doesn’t bode well for the evolution of these systems and what they will be capable of.