If the company gave a noob unlimited access and can’t restore their data from backups, it’s really their fault, not the employee’s.
We had a management course at university where this was one of the main things they highlighted:
A manager’s faults are the manager’s fault.
An employee’s faults are also the manager’s fault. Without exception. And if you think about it, that’s completely true. If an employee does something stupid, it’s usually because a) they had the opportunity to do it and b) they weren’t taught well enough. If the employee keeps making the same mistake, the manager is at fault for leaving them in a position where they can make it. They obviously aren’t fit for that position.
And people wonder why the manager is paid more.
When’s the last time you tested backup restore and how long did it take?
“Eh, go away. I suppose it’ll work flawlessly. I’ll test it if I need it. I’ll have to look into the procedure anyways. Get off my back!”
The only way to test restoring a backup is to actually restore it. And for that, you do need spare hardware.
So, to answer your question, I don’t test my home backups either. I reckon pretty much no one is dedicated enough to do that.
I’m hoping that, if shit really hits the fan, I can still pick out my important files and just manually re-set up the rest of the system. So, a longer downtime, in that sense.
That strategy is just absolutely not viable for companies, where downtime is more expensive than spare hardware, and where you really can’t tell users you restored some files and they should do the rest.
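For a home setup, even a tiny restore drill beats blind faith. A minimal sketch of the idea (all paths here are throwaway temp dirs, standing in for your real data and your "spare hardware"):

```shell
#!/bin/sh
# Sketch: verify a tar backup actually restores, by extracting it to a
# scratch directory and diffing against the original.
set -e

src=$(mktemp -d)                 # stand-in for the data you back up
echo "important" > "$src/file.txt"

backup="$(mktemp -u).tar.gz"
tar -czf "$backup" -C "$src" .   # take the backup

restore=$(mktemp -d)             # stand-in for spare hardware
tar -xzf "$backup" -C "$restore" # the actual restore test

diff -r "$src" "$restore" && echo "restore OK"
```

A real drill would restore to an actual spare disk and boot/open the services from it, but even this catches the classic failure mode: backups that have silently been empty or corrupt for months.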
Wasn’t there some saying about if you’re in a server room, the calmer the “Oops,” the worse the problem?
“Ooopppsss… 💤”, both containers of the UPS flow battery ruptured at the same time and flooded the whole server room… call me tomorrow for the planning meeting when things stop burning and firefighters have had a chance to enter the building.
If there isn’t then there should be.
Forget coffee, this will wake you up. There’s nothing like dropping the wrong database schema on a lazy Monday morning.
If you can, always set the title of whatever window you’re working on to capital bold letters, preferably red, saying PRODUCTION SERVER - DON’T FUCK IT UP. This has saved my dumbass a few times when I looked up before hitting enter.
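If your "window" is a terminal, this is a one-liner. A sketch using the standard xterm OSC title escape (honoured by most terminal emulators) plus a bold red prompt prefix; the exact title text is just an example:

```shell
# Set the terminal window title: ESC ] 0 ; <title> BEL (xterm OSC 0)
printf '\033]0;PRODUCTION SERVER - DON'\''T FUCK IT UP\007'

# Make it unmissable in the prompt too: bold red [PROD] prefix
# using ANSI SGR codes (bash PS1 syntax).
PS1='\[\e[1;31m\][PROD]\[\e[0m\] \u@\h:\w\$ '
```

Drop that into the shell profile your prod box (and only your prod box) loads, and every glance up before hitting enter tells you where you are.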
I use IntelliJ for this and my prod connection is red, has warning symbols and it’s read only. I can switch on write mode if necessary, but it will prompt for it. Saves me a lot of stress.
What’s the setting to make it read-only?
https://www.jetbrains.com/help/idea/configuring-database-connections.html#connection-modes
It’s pretty easy to set up but very helpful
This here is wisdom
💖
Had a colleague do this to the local AD server years ago.
Thankfully they pulled the plug before the changes could fully propagate through the network, but it still took 3 days to recover the data and restore the AD server.
That’s on the company for not having a proper disaster recovery plan in place.
Our DR test was literally the CIO wiping a critical server or DB, and we had to have it back up in under an hour.
To be fair to the company, it was a Friday afternoon when said person ran the script.
Yikes. At least it was only 3 days and not weeks or months of cleanup trying to rebuild shit!
You might like this little video then. Well, it’s 10 minutes long, but still. It’s a story about a dev who deleted their entire production database. A real story that actually happened. If you’ve been through something similar, you’ll definitely relate a little.
F*cking Gitlab moment
Yeah, that was extremely funny, but I had nothing stored there at the time. I guess some GitLab administrator lost twenty pounds in sweat that day.
That’s not an oopsie daisy that’s the whole oopsie bouquet
This is funny, cute, and too relatable.
internally screaming