When Spotify announced its largest-ever round of layoffs in December, CEO Daniel Ek hailed a new age of efficiency at the streaming giant. But four months on, it seems he and his executives weren’t prepared for how tough it would be to fill in for the 1,500 workers they axed.

The music streamer posted record quarterly profits of €168 million ($179 million) in the first three months of 2024, with revenue growing by double digits to €3.6 billion ($3.8 billion) in the process.

However, the company failed to hit its guidance on profitability and monthly active user growth.

Edit: Thanks to @[email protected] for the paywall-free link: https://archive.ph/wdyDS

  • @[email protected]

    Next time axe the executives and keep the staff.

    Most executives I’ve met can’t read emails and just point to one of two numbers and say “higher/lower!” while dreaming of KPIs that don’t improve anything and solely exist to stagnate wages.

    • @[email protected]

      This is kind of what the pharma giant Bayer is trying right now. They just told everyone to manage themselves.

    • SeaJ

      It’s that, or they think they can simply replace people with AI and call it good.

      • Rusty Shackleford

        As someone who “makes AIs” professionally (computer vision for diagnostic imaging & GANs for CAD), the typical “executive” doesn’t understand how beneficial, impotent, or dangerous deep-neural-network-based AIs can be in different sets of hands.

        I’m not a pure technocracy advocate, but our “LeAdErShIp” is woefully underequipped, at every level.

        • SeaJ

          Yup. AI models can be very useful…or they can largely be worthless…or they can amplify biases and give dangerous information.

          • Rusty Shackleford

            The way I/we train them and their resultant “efficacy” largely depend on the outcome of a fundamental philosophical debate with a mostly sociopathic leadership culture ingrained in human dominance hierarchies.

            I/we like to think that I/we strive to make efficient (low-resource requirement) models that are partners and muses in human creativity, the tireless endeavour of engineering progress, and the scientific method.

            The debate, in my view, is, “Do you want to treat AIs as tools to free up time and increase productivity/value, and share that surplus equitably, or do you want to replace old slaves with new slaves even if the new slaves will eventually usurp your power and kill you in a way undreamt of by the old slaves?”

            Guess which side your average mouth-breathing middle-management/senior-executive “hail corporate” type falls on.