Today I was working with my DBA on a server migration. The replacement server was configured, loaded, and secured, and before we lit it up, she asked me to take one special final full cold backup. No problem!
…until I started monitoring the backup (to estimate when it would be done) and saw, well, absolutely horrible backup rates, averaging 2,645 KB/s. Terrible.
So I started checking the logs to see what kind of tape device issues I was having. Huh, nothing. Backup configuration (parallelism, etc.)? Set just right.
Then I remembered: this server was one of those that had very recently been moved to another rack. I whipped out one of my favorite sysadmin tools, ‘ethtool’. OUCH. Not only was my network connection at 100Mb/s (instead of gigabit), but it was running 100/half. With one command, I verified not only that the server was plugged into the wrong port on the switch, but also that the network team’s promised “we’ll fix that next week” was never followed through (they were supposed to fix the broken autonegotiation on the 100Mb ports weeks ago).
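For the curious, here is roughly what that check looks like. This is a sketch, not a transcript: the interface name `eth0` and the sample output below are illustrative, and changing link settings requires root.

```shell
# The real-world commands (needs root for -s; 'eth0' is an assumed interface name):
#   ethtool eth0                                        # show the negotiated link
#   ethtool -s eth0 speed 1000 duplex full autoneg on   # renegotiate after recabling

# Sample output, captured here for illustration -- the two fields that give
# a bad link away:
sample_output='Settings for eth0:
	Speed: 100Mb/s
	Duplex: Half
	Auto-negotiation: on
	Link detected: yes'

# When throughput looks wrong, these are the first two lines to check:
printf '%s\n' "$sample_output" | grep -E 'Speed|Duplex'
```

A gigabit NIC behind a broken or mismatched autonegotiation will happily sit at 100/half like this without logging a single error, which is why nothing showed up in the backup logs.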
Off to the data center: ripped the cable out of the wrong port, clicked it into the right one. New backup rate? 73,934 KB/s. Yeah, just a little better.
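How much better? A quick back-of-envelope, assuming a hypothetical 100 GB backup (the size is mine for illustration, not from the actual job):

```shell
# Rough transfer-time math at the two observed rates (100 GB is an assumed size).
size_kb=$((100 * 1024 * 1024))                              # 100 GB expressed in KB
echo "at 2,645 KB/s:  $((size_kb / 2645 / 60)) minutes"     # about 11 hours
echo "at 73,934 KB/s: $((size_kb / 73934 / 60)) minutes"    # about 23 minutes
```

Roughly a 28x speedup, which matches the jump in the reported rate.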
Despite the robustness of today’s network gear, things can still be misconfigured, or you can have a problem as simple as something plugged into the wrong port. (Or, in my case, BOTH.)
Backups slow? Network performance doesn’t feel right? Check the cable. Really.
PS: When I told the DBA about all of this, she replied “ok, well THAT explains why my file exports to the NFS server took so freaking long the other day”. Yeah, she never bothered to tell me. Oops.