• KISSmyOS@lemmy.world

    If the company gave a noob unlimited access and can’t restore their data from backups, it’s really their fault, not the employee’s.

    • DrM@feddit.de

      We had a management course at university where this was one of the main things they highlighted:

      A manager's mistakes are the manager's fault.
      An employee's mistakes are also the manager's fault. Without exception.

      And if you think about it, that's completely true. If an employee does something stupid, it's usually because they a) had the opportunity to do it and b) weren't trained well enough. And if the employee keeps making the same mistake, the manager is still at fault for keeping them in a position where they can make it; the employee obviously isn't fit for that role.

      • LetterboxPancake@sh.itjust.works

        “Eh, go away. I suppose it’ll work flawlessly. I’ll test it if I need it. I’ll have to look into the procedure anyways. Get off my back!”

        • Knusper@feddit.de

          The only way to test restoring a backup is to actually restore it. And for that, you do need spare hardware.

          So, to answer your question, I don’t test my home backups either. I reckon pretty much no one is dedicated enough to do that.

          I’m hoping that, if shit really hits the fan, I can still pick out my important files and just manually set the rest of the system up again, accepting a longer downtime in that sense.

          That strategy is just absolutely not viable for companies, where downtime is more expensive than spare hardware, and where you really can’t tell users you restored some of their files and they should sort out the rest themselves.
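
          To make the “actually restore it” point concrete, here is a minimal sketch of a home-scale restore test, assuming a plain tar.gz backup created relative to the data directory; the archive name and paths (backup.tar.gz, /home/user/docs) are hypothetical placeholders. It restores into a temporary directory and compares checksums instead of touching the live system:

              #!/usr/bin/env python3
              # Sketch: restore a tar.gz backup into a temp dir and compare
              # file hashes against the live copies, without touching them.
              import hashlib
              import tarfile
              import tempfile
              from pathlib import Path

              BACKUP = Path("backup.tar.gz")       # hypothetical backup archive
              LIVE_ROOT = Path("/home/user/docs")  # hypothetical data it mirrors

              def sha256(path: Path) -> str:
                  h = hashlib.sha256()
                  with path.open("rb") as f:
                      for chunk in iter(lambda: f.read(1 << 20), b""):
                          h.update(chunk)
                  return h.hexdigest()

              with tempfile.TemporaryDirectory() as tmp:
                  with tarfile.open(BACKUP, "r:gz") as tar:
                      tar.extractall(tmp)          # the actual "restore" step
                  restored_root = Path(tmp)
                  for restored in restored_root.rglob("*"):
                      if not restored.is_file():
                          continue
                      live = LIVE_ROOT / restored.relative_to(restored_root)
                      if not live.exists():
                          print(f"missing in live data: {live}")
                      elif sha256(restored) != sha256(live):
                          print(f"checksum mismatch: {live}")
              print("restore test finished")

          This only proves the archive unpacks and matches the current live files; a company-grade test would restore onto spare hardware and actually boot the services from it, which is exactly the expense being talked about here.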

  • Sotuanduso@lemm.ee

    Wasn’t there some saying that in a server room, the calmer the “Oops,” the worse the problem?

    • jarfil@lemmy.world

      “Ooopppsss… 💤”, both containers of the UPS flow battery ruptured at the same time and flooded the whole server room… call me tomorrow for the planning meeting when things stop burning and firefighters have had a chance to enter the building.

  • MrNesser@lemmy.world

    Had a colleague do this to the local AD server years ago.

    Thankfully they pulled the plug before the changes could fully propagate through the network, but it still took three days to recover the data and restore the AD server.

    • thepianistfroggollum

      That’s on the company for not having a proper disaster recovery plan in place.

      Our DR test was literally the CIO wiping a critical server or DB, and we had to have it back up in under an hour.

    • Stamets@startrek.websiteOP

      Yikes. At least it was only 3 days and not weeks or months of cleanup trying to rebuild shit!

      You might like this little video then. Well, it’s 10 minutes long, but still. It’s a story about a dev who deleted their entire production database. A real story that actually happened. If you went through something similar, then you’re definitely gonna relate a little.

    • LetterboxPancake@sh.itjust.works

      Yeah, that was extremely funny, but I had nothing stored there at the time. I guess some GitLab administrator lost twenty pounds in sweat that day.