• @[email protected]
    3024 hours ago

    That’s the training process. After that, you can run it on a single GPU in a few seconds.
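
    For context, a minimal sketch of what that single-GPU inference step looks like, assuming the Hugging Face transformers library and a small pretrained model (the model name and prompt are just illustrative):

    ```python
    # Illustrative only: load an already-trained model and run inference on one GPU.
    # Assumes the Hugging Face `transformers` library; "gpt2" stands in for any small model.
    from transformers import pipeline

    # device=0 selects the first CUDA GPU; the expensive training already happened upstream.
    generator = pipeline("text-generation", model="gpt2", device=0)

    # A single generation call like this runs in seconds on one GPU.
    result = generator("The energy cost of inference is", max_new_tokens=30)
    print(result[0]["generated_text"])
    ```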

    • @[email protected]
      1116 hours ago

      Yes, thankfully the reasonable tech companies offering these services have decided to stop the training process now that it’s been done once. The insane increase in energy consumption and hardware manufacturing for datacenter components and accelerators is purely coincidental and has nothing to do with demand for gimmicky generative AI services. Let’s also conveniently ignore the increasing inference cost of more complex models, while we’re at it.