Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

    • Daniel · 21 points · 10 months ago

      Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.

      What a perfect sentence to sum up 2023 with.

    • @[email protected] · 10 points · 10 months ago

      Mr Altman, who founded Open AI which built chat bot ChatGPT, says he hopes the initiative will help confirm if someone is a human or a robot.

      That last line kinda creeps me out.

        • @[email protected] · 7 points · 10 months ago

          Yeah, that’s the most sci-fi dystopian article I’ve read in a while.

          The line where one of the people waiting to get their eyes scanned says, “I don’t care what they do with the data, I just want the money,” is, well, eye-opening. This is why they want us poor: so we need money badly enough that we’ll hand over everything that makes us who we are without a second thought.

          But we already happily hand over our DNA to private corporations, so what’s an eye scan going to do…

  • Bipta · 39 points · 10 months ago

    That’s why they just removed the military limitations in their terms of service I guess…

  • @[email protected] · 32 points · 10 months ago

    Considering what we’ve decided to call AI can’t actually make decisions, that’s a no-brainer.

  • @[email protected] · 31 points · 10 months ago

    I also want to sell my shit for every purpose but take zero responsibility for consequences.

  • @[email protected] · 19 points · 10 months ago

    Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean, we live in a world where Boeing built a plane that couldn’t fly straight, so they tried to fix it with software. The tech will be abused as long as people are greedy.

    • @[email protected] · 5 points · 10 months ago

      As long as people are rewarded for being greedy. Greedy and awful people will always exist; the real issue is allowing them to control how things are run.

      • @[email protected] · 6 points · 10 months ago

        More than that, they’re shielded from repercussions. The execs who ignored all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.

    • @[email protected] · 4 points · 10 months ago

      They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided they would gracefully offer it for free.

  • Optional · 17 points · 10 months ago

    Has anyone checked on the sister?

    OpenAI went from interesting to horrifying so quickly, I just can’t look.

  • Nei · 13 points · 10 months ago

    OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.

    • TurtleJoe · 5 points · 10 months ago

      People only thought it was the former before they actually learned anything about them. They were always this way.

    • @[email protected] · 4 points · 10 months ago

      Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?

      Hah, good times.

    • @[email protected] · 8 points · 10 months ago

      Yup, my job sent us to an AI/ML training program run by a top cloud computing provider, and there were a few hospital execs there too.

      They were absolutely giddy about being able to use it to deny unprofitable medical care. It was disgusting.

  • @[email protected] · 6 points · 10 months ago

    Agreed, but one doomsday-prepping capitalist also shouldn’t be making AI decisions. If only there were some kind of board that would provide safeguards ensuring AI was developed for the benefit of humanity rather than for profit…

  • @[email protected] · 5 points · 10 months ago

    I’m sure Zuckerberg is also claiming they’re not making any life-or-death decisions. Let’s see in a couple of years, when the military gets involved with your shit. Oh wait, they already did, but I guess they’ll just use AI to improve soldiers’ canteen experience.

  • @[email protected]
    link
    fedilink
    English
    410 months ago

    So, just like shitty biased algorithms shouldn’t be making life-changing decisions about folks’ employability, loan approvals, which areas get more or tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (or already is) automated.