Apathy – A quick path to the naughty list.


December 8, 2015 by Kenneth Fisher

T-SQL Tuesday

It’s the Christmas season! And it’s T-SQL Tuesday! And Bradley Ball (b/t), SQL’s very own superhero (one of many really, but he’s the only one I know who dresses the part)

[Image: SQLBalls as Captain America]

has decided to combine the two. With this month’s subject, Naughty or Nice, he’d like us to talk about some of the best (or worst) things in our environments.

The environment I work with? There is certainly a lot to put us on the naughty list. I’ve ranted several times about nolock, nolock and nolock. We have so many problems with security that they prompted me to write a session on SQL Server security, along with sp_dbpermissions and sp_srvpermissions (tools for security research). We have other issues, but honestly I don’t feel like writing about them right now.

And the nice list? The things we do right? A lot of things. Our team environment is amazing. My team has 7 people on it, and everyone is more than willing to go the extra mile to help each other out. Need someone to cover your on-call? You are likely to get 5-6 volunteers. Need help with a problem? Pick your SME; they will drop everything to see what they can do. We are working to fix the various problems we have, and we are slowly succeeding.

Unfortunately there is a silent killer that will drag you from the nice list to the naughty list without you even noticing.

Apathy

Defined as:

lack of interest, enthusiasm, or concern

Nothing will destroy your environment more easily than apathy. It is the slow and silent killer of good intentions. Here are a couple of examples:

  • You’ve determined that your security needs work. You implement a new policy requiring that AD groups and roles be used whenever possible. You make a concerted effort to clean up your existing environment. And amazingly, you finish! Security is clean, it’s simple, it’s perfect. Over the next six months or so everyone holds to the new policies and everything is great.

    Then apathy rolls in. One day several people change teams at once, or someone needs their permissions right now. It’s faster and easier not to follow the policy and just grant them the security directly. It’s only this once. You’re busy. One time won’t hurt, right? You’ll fix it later (sound familiar?). Then next week the same thing happens. And again a few weeks later. After a while your beautiful system is back where it started. All because everyone got just a little busy, maybe a little lazy. Let’s face it, they got tired. Maintaining things perfectly takes work. They cut a few corners here and there. And now, instead of being happily on the nice list, they are back on that dreaded naughty list.

  • You’ve implemented this great new enterprise-wide system (not naming names — maybe you built it yourself, maybe some other DBA with great hair wrote it) that manages your backups and your indexes, and even runs DBCC CHECKDB on a regular basis. It manages everything smoothly, avoids event storming, and logs the &@#$ out of everything. It takes a few days, but soon enough all of your instances are set up on the new system. You breathe a sigh of relief. Your on-call gets easier, and in fact so does your day-to-day life.
     
    You settle in with the new system, and for a while everything is better. Then one of your new DBAs, who doesn’t know about the system, installs a new server but forgets to add it to the control tables. Everyone’s busy, and because it will only take a few minutes to add the new server, you decide you’ll do it tomorrow. Maybe next week. Well, eventually anyway. It’s only one instance, right? Then it’s two. Then half a dozen. There might even be a big side-by-side upgrade, and dozens of your instances that were managed beautifully are now on their own. Problems start to crop up. Someone suggests (and implements) a half measure. Maybe they start to use maintenance plans. I mean, those work great, right? On-call starts to get worse. There are more and more problems with reindexing and missed backups. No one is quite sure what happened, but they know something went drastically wrong somewhere. And the worst part? That other system is still sitting there, happily managing the few instances still left to it. You know them, right? The ones you don’t get calls on?
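The security scenario above boils down to one habit: permissions go to roles, membership goes to AD groups, and nothing gets granted directly to individuals. A minimal T-SQL sketch of that policy — the database, group, and role names here are made up for illustration:

```sql
-- Policy: grant permissions to a role, put AD groups in the role.
-- SalesDB, SalesReaders, and MyDomain\SalesTeam are hypothetical names.
USE SalesDB;

CREATE ROLE SalesReaders;
GRANT SELECT ON SCHEMA::dbo TO SalesReaders;

-- Map the AD group in once; team changes are then handled in AD, not SQL.
CREATE USER [MyDomain\SalesTeam] FOR LOGIN [MyDomain\SalesTeam];
ALTER ROLE SalesReaders ADD MEMBER [MyDomain\SalesTeam];

-- The apathy shortcut the story warns about: a one-off direct grant.
-- Fast today, invisible (and unaccounted for) six months from now.
-- GRANT SELECT ON dbo.Orders TO [MyDomain\JSmith];
```

When someone changes teams, their AD group membership changes and the SQL Server permissions never need touching — which is exactly the discipline the shortcut erodes.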

 
So what’s the solution? Constant Vigilance! Never ending constant vigilance!

Maybe that sounds a bit harsh, but honestly, awareness is the only defense. Be aware that you will be apathetic occasionally. Do your best to keep those lapses as short as possible, and fix the damage they cause as quickly as possible.

Good luck staying on the nice list!
