• Ferk@kbin.social
    2 years ago

    A test that didn’t require a human could, in theory, be run automatically by the machine itself ahead of time, and solved easily.

    I can’t imagine how you would test this in a way that wouldn’t require a human.

    • SomeDude@feddit.de
      2 years ago

      Let two AIs talk to each other and see if they can find out that they both aren’t humans?

      • bedrooms@kbin.social
        2 years ago

        Bro, humans literally don’t have that capability (that’s the presumption here). Or are you saying that many of us don’t have better consciousness than AIs? I might agree with that!

      • Ferk@kbin.social
        2 years ago

        The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and btw, for that training you need humans)… but if the other AI also has access to that same neural network, then it can just provide exactly the kind of output the other AI is looking for.

        So I don’t think that would work.