Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models. One technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • _number8_@lemmy.world · 85 points · 1 year ago

    ‘went rogue’ is a bit of an alarmist way to say ‘typed scary text’

    i’d love to see an AI that could legitimately scare me