• @[email protected]
    95 points · 2 months ago

    Honestly, we do that when we ask around and no one speaks up. It’s lovingly called the “scream test”: shut it off and wait to see who screams.
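
    Not from the thread itself, but a minimal sketch of what a scripted scream test could look like, assuming systemd units on the host and made-up service names:

    ```python
    # Hypothetical scream-test helper: stop suspected-dead units, but keep them
    # installed so they can be restarted the moment someone screams.
    import datetime
    import json
    import subprocess

    SUSPECTED_DEAD = ["legacy-report-sync", "old-etl-loader"]  # made-up unit names
    QUARANTINE_DAYS = 30

    disabled = []
    for unit in SUSPECTED_DEAD:
        subprocess.run(["systemctl", "stop", unit], check=True)
        subprocess.run(["systemctl", "disable", unit], check=True)
        disabled.append({
            "unit": unit,
            "stopped_at": datetime.datetime.now().isoformat(),
            "revisit_after": (datetime.datetime.now()
                              + datetime.timedelta(days=QUARANTINE_DAYS)).isoformat(),
        })

    # Record what was turned off and when, so re-enabling (or finally removing)
    # a unit after the quarantine period is trivial.
    print(json.dumps(disabled, indent=2))
    ```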

    • @[email protected]
      59 points · 2 months ago

      I guess it depends on where you work. This was a large datacenter for a very large health insurance company. They made it a point later that day to remind people that it was a fireable offense to mess with production machines like that on purpose. And evidently the service he disabled was critical enough that it didn’t take long for the hammer to come down. There were plenty of ways to find out who owned the machine; he just chose the easiest and got fired on the spot for it.

        • @[email protected]
          18 points · 2 months ago

          Well, I’m not him, so I can’t tell you whether or not he actually “could” have figured it out. The options to figure it out did exist; he just chose not to use them, which gave the appearance that he “couldn’t”. Are you this much fun at parties?

        • @[email protected]
          12 points · 2 months ago

          I don’t understand how that is even possible.
          Are there no logs? No documentation? Does everyone share an admin user with full rights?
          I mean, there has to be a way to find out who last accessed the machine.

          • @[email protected]
            21 points · edited · 2 months ago

            You’d be surprised by inherited tech debt. Quite often there’s no documentation, and the last person to log in to the system is an admin who quit 3 years ago, which doesn’t tell you much anyway, since that only covers direct console logins and normal users never log in that way to use the application. With the tribal knowledge gone and no documentation, it’s only when you pull the network for a bit that you discover the one random script running on the box that was responsible for loading up all the data the current system needs, even though 9 times out of 10 those leftover scripts really are dead.

            In a perfect world you’d have documentation, architecture, and data flow diagrams for everything, but “ain’t nobody got time for that”, so it doesn’t happen.

            • @[email protected]
              6 points · 2 months ago

              Had that the other way around recently. A Docker container failed to come back up after I had updated the host OS.
              I was about ready to restore the snapshot when, on a hunch, I looked further back in the logs.
              Turns out the container hadn’t worked before the update either. The software’s developer is long gone, and no one could tell me what it was supposed to be doing.
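
              A rough sketch of that “look further back” check, assuming the docker CLI is on the host; the container name and the update timestamp are placeholders:

              ```python
              # Hypothetical check: was the container already broken before the host update?
              import json
              import subprocess

              CONTAINER = "legacy-app"               # placeholder container name
              HOST_UPDATE = "2024-06-01T00:00:00Z"   # placeholder: when the host OS was updated

              # The stored state says when the container last stopped and with what exit code;
              # a FinishedAt older than the update means it was broken beforehand.
              raw = subprocess.run(
                  ["docker", "inspect", CONTAINER],
                  capture_output=True, text=True, check=True,
              ).stdout
              state = json.loads(raw)[0]["State"]
              print(state["ExitCode"], state["FinishedAt"])

              # Pull log lines from before the update instead of only the latest restart attempt.
              subprocess.run(
                  ["docker", "logs", "--timestamps", "--until", HOST_UPDATE, CONTAINER],
                  check=True,
              )
              ```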

          • @[email protected]
            8 points · 2 months ago

            Company A gets bought by Company B. Company B fires 50% of Company A.

            Even a scream test won’t get you answers, because nobody is around who could complain or who knows where the docs are.

          • @[email protected]
            2 points · 2 months ago

            You’d be surprised. I had some security devices that I was actively using get shut down simply because some paperwork didn’t get filled out properly, and the data center team claimed they had no documentation on them.

        • @[email protected]
          1 point · 2 months ago

          I read that as “lazy to the point of unprofessionalism”. I’m super lazy too, but for me that just means I try to automate the absolute shit out of everything I do.