What the ???? T-SQL Tuesday #120
November 12, 2019 by Kenneth Fisher
Hey, it’s T-SQL Tuesday again! I almost forgot about it, what with Summit and all. Wayne Sheffield (blog|twitter) is our host and wants to know What were you thinking?!? If you’ve ever had the joy of sitting down with a handful of people in your profession (and I don’t think it matters what profession) you’ll start hearing horror stories. I mean, we all like to tell them, right? And coming just off of Summit I got to hear a bunch of them. So here are a few of my favorites that I tell.
You think your trigger is bad?
On one of our systems, we have a trigger (I could almost stop there but I won’t). This trigger calls a stored procedure (yep), and that stored procedure calls another stored procedure .. on a different server, using a linked server. That stored procedure hits a table with, you guessed it, another trigger. That trigger calls a stored procedure (you can kind of see where this is going, can’t you?) that uses a linked server to get some data back from the first server. The first time I was asked to find out why it took so long to add data to this table, I took a look and laughed, and cried, and laughed.
Now, in the interest of What were they thinking? I’m pretty sure this is one of those things that was built up over time by several developers who weren’t looking at what the previous people had done. FYI, it still exists because the performance isn’t too bad and they don’t want to spend time fixing it.
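To make the anti-pattern concrete, here’s a minimal sketch of what a chain like that looks like. All of the server, table, and procedure names here are made up for illustration; the real system wasn’t shared.

```sql
-- Hypothetical reconstruction of the trigger chain described above.
CREATE TRIGGER dbo.trg_OrderInsert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.usp_ProcessOrder;  -- trigger calls a local procedure...
END;
GO

CREATE PROCEDURE dbo.usp_ProcessOrder
AS
BEGIN
    -- ...which calls a procedure on ANOTHER server via a linked server.
    EXEC [Server2].SalesDb.dbo.usp_SyncOrder;
    -- On Server2, usp_SyncOrder writes to a table with its own trigger,
    -- which calls a procedure that reads back from the first server over
    -- a linked server. Every INSERT into dbo.Orders pays for two
    -- cross-server round trips before it completes.
END;
GO
```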
You think you have a NOLOCK problem?
I had a conversation over Summit where someone mentioned that if you don’t like NOLOCK, how about SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED? Note: NOLOCK is just another term for READ UNCOMMITTED. The main difference is scope: changing the TRANSACTION ISOLATION LEVEL affects the whole session until you change it back (or, within a stored procedure, everything after the point where you changed it). NOLOCK is used as a table hint and only affects that particular reference to a table.
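For anyone who hasn’t seen the two side by side, here’s a minimal sketch of the difference in scope (the table names are hypothetical):

```sql
-- Session level: every subsequent read in this session (or, inside a
-- stored procedure, everything after this point until the proc returns)
-- is a dirty read.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderId, Total FROM dbo.Orders;          -- dirty read
SELECT CustomerId     FROM dbo.Customers;       -- also a dirty read
SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- back to the default

-- Table hint: only this one reference to dbo.Orders is a dirty read.
SELECT o.OrderId, c.CustomerId
FROM dbo.Orders AS o WITH (NOLOCK)   -- dirty read on Orders only
JOIN dbo.Customers AS c              -- Customers read at the session level
    ON c.CustomerId = o.CustomerId;
```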
Anyway, my response? We have a system where they use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED at the beginning of almost every SP and NOLOCK on every table reference. (To be fair, only the selects. Even these people don’t use NOLOCK on an update, insert or delete. And yes, I’ve seen it done.) If you’re wondering why that matters, that’s kind of the point, it doesn’t. Use one or the other, people .. well .. preferably neither.
I don’t really have an excuse here. They have always done it this way; NOLOCK is a go-fast button, and we haven’t been able to convince them otherwise.
Category: SQLServerPedia Syndication, T-SQL Tuesday | Tags: NOLOCK, T-SQL Tuesday, triggers
11 thoughts on “What the ???? T-SQL Tuesday #120”
Don’t knock the nolocks… they may be a go-fast button but they actually work, unlike the old turbo buttons.
I’m not fond of using the SET TRANSACTION… approach, however. If you use any dynamic SQL, I don’t think it will use that isolation level since it runs as a child process.
Programmers don’t always remember to add the nolock hints in their dynamic SQL since they don’t need them normally.
Better to just do it the same way all the time.
As for nolocks on updates… SQL Server will just ignore them, but their manager should fire their sorry ass for programming without paying attention.
I’m not 100% against NOLOCK in general .. it’s just fine as long as you don’t care about the data you are getting back. I wouldn’t want to use it in production though.
It’s Production that needs the nolocks… It allows you to run reports that need up-to-date data (or where the user “forgets” to use the reporting database, if you even have one).
The nolocks don’t (just) help people running reports… they prevent failed updates due to locks imposed by another user… and, in my experience, the error handling of failed updates in most applications is somewhere between non-existent and terrible. Nolock hints make the updates MUCH more reliable.
In most systems, there are almost no explicit transactions so the data coming back from a dirty read is identical to the data that would have been returned a second earlier or later anyway.
I have literally never been alerted to a data error due to a nolock hint, and I’ve been using them (and the Informix predecessor) for 30 years.
Well, a couple of misconceptions here.
1st) Just to be sure, you realize that nolock/read uncommitted still takes locks, right? And can cause blocking?
2nd) The problem isn’t just dirty reads. With read uncommitted you can get duplicate or missing rows from data that is not in flight.
3rd) Last but not least, you can have dirty data even if you don’t have an explicit transaction. Every command you run exists within its own implicit transaction, and so you can read bad data even then.
Note: Those last two mean that if you are reading financial data (for example) with read uncommitted, you have a not-insignificant chance of getting different information each time you run a report, and in fact run a chance of getting data that is just flat-out wrong.
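That first point is easy to see for yourself. While a long NOLOCK scan runs in one session, a second session can look at sys.dm_tran_locks and find the schema stability (Sch-S) lock the “lock-free” scan is holding (the table name here is hypothetical):

```sql
-- Session 1: a long-running "lock-free" scan of a hypothetical table.
SELECT COUNT(*) FROM dbo.BigTable WITH (NOLOCK);

-- Session 2, while the scan runs: the NOLOCK session still holds a
-- schema stability (Sch-S) lock. An ALTER TABLE needs a Sch-M lock,
-- which conflicts with Sch-S, so the DDL is blocked until the
-- supposedly lock-free scan finishes.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND request_mode IN ('Sch-S', 'Sch-M');
```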
1. My understanding is that (nolock) hints do not cause locks that (normally) affect anything else. They DO take schema stability (Sch-S) locks but, who in their right mind changes the schema during production hours?!?
2. and 3. Again, my understanding is that the data read with a (nolock) hint might be from before the update or from after the update but, UNLESS explicit transactions are being used or a bulk update is being performed, the data returned should be internally consistent. Sure, if someone is updating the Gender field to change blanks to a ‘U’ value, a query with (nolock) hints that runs at just the wrong time will get some records with a blank value and some with a ‘U’. But, again, Production hours?!? Oh, and I totally agree that (nolock) hints are a problem on systems that use explicit transactions. But very few systems do.
(nolock) hints suck. But not using them sucks WAY more because most systems are not written very well and are MUCH more likely to be corrupted by an update that fails because a SELECT statement is holding an unnecessary lock than because of a bad (nolock) hint.
The standard response of “Don’t run reports on Production” is great advice, IF you can afford a Reporting Database which has up-to-date data and you can enforce the use of it. But, in the real world, (nolock) hints are the cheapest way to improve the performance of any Database based application that does not use explicit Transactions.
I have literally changed reports from running in hours to running in seconds just by adding (nolock) hints and this means that the data generated on the report is actually more accurate with the (nolock) hints because it wasn’t pulling some data from 10am and some from 1pm. This also meant that the other users got fewer errors, if any, that required them to resubmit their changes.
THIS is why (nolock) hints are my go-to first step on any optimization – they work. But, I do know when to not use them and I can usually make other changes to improve performance as well. For example, I often load data into a temp table and query/manipulate it there, only writing any changes back to the database once the processing is complete. And I do mean only writing changes, rather than clearing the old data and saving the new data even if nothing has changed. Did I mention that most systems are not written very well?
I don’t dislike triggers because of any perceived slowness, but because they are not easily visible. Yes, there is a folder for them under each table but nobody ever checks there as a matter of course. Database triggers, on the other hand, are completely invisible, there is no entry under programmability for database-level triggers. The advent of Temporal Tables has removed the main reason I ever used triggers, to track data changes.
If I remember correctly, you can find the triggers in sys.triggers, sp_helptrigger, and sys.server_triggers. I’ve used triggers for a bunch of stuff and don’t object to them in general, but they certainly can cause problems (as with many things) if they aren’t used correctly.
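For reference, here’s a quick inventory sketch using those catalog views (the table name in the sp_helptrigger call is hypothetical):

```sql
-- DML triggers and database-scoped DDL triggers in the current database:
SELECT name, parent_class_desc, is_disabled
FROM sys.triggers;

-- Server-scoped DDL and logon triggers:
SELECT name, is_disabled
FROM sys.server_triggers;

-- Or, the triggers on one specific table (hypothetical table name):
EXEC sp_helptrigger 'dbo.Orders';
```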
There’s tons of advantages to using nolock!
When you marshal data for a subsequent update for instance, use nolock! This way, if your update goes sour, you can brush up on your troubleshooting skills.
Heck, even with selects, you’ll only be off a few records. Which records? Well, any of course! You could be off one cent, one box, one sale, one truck, one company, whatever, but it’s only one record, why are you guys whining? When your boss complains or your billion-dollar spacecraft crashes, just call it a “glitch”. Everyone knows software has “glitches”, so don’t be such a picky nerd and use nolock today!
After all, you don’t want the I in “ACID”, nobody wants to be isolated, we should all work together to store data, so let’s just not be so picky about what actually gets stored, it’ll all work out eventually, right?
I appreciate the sarcasm but, as I said, the ONLY problems I have actually experienced are because an update failed due to locks caused by a report being run. The user SHOULD have double-checked what was actually saved and corrected it, but they had just disconnected because their session had hung and they had to get their kid from school or whatever.
In theory, I know that (nolock) hints CAN produce incorrect data but there are very few reports that actually care to the minute.
And, in my 30 year career programming in SQL, I have never once been made aware of a report that was wrong because of a discrepancy due to a dirty read. Maybe this is because 27 widgets is functionally the same as 28 widgets for our purposes and so no-one noticed that there was a discrepancy but the fact is that they HAVE noticed the issues caused by failed updates due to reports holding unnecessary locks.
(nolock) hints should NOT be used in every environment but there are extremely good reasons to use them in some, probably most, environments.
Still, you go right ahead avoiding them and let me look like a God when I can exponentially improve the performance and reliability of your databases.
There’s always the chance of failed updates no matter what. If you have not coded for failed updates, you’re doing it wrong. Additionally, coordinating multiuser access is the job we signed up for; whether we actually do this or instead negligently omit these mandatory tasks is up to the programmer. Don’t get me wrong, you have plenty of company out there. There are plenty of “close enough” adherents out there.
Additionally, 27 widgets are never functionally equivalent to 28 widgets, especially if you are marshalling billing information. For instance, say you have 28 trucks set for departure with containers full of product, yet your report only arranged logistics for 27 of them. How’s the 28th driver going to think you are “God like” for leaving him sitting around?
I’m not sure lack of accuracy is “God like” when it comes to programming computers LOL.
Don’t get me wrong, you sound like the sort who dismisses these concerns, I’m really typing for the benefit of others, I can tell you aren’t going to be persuaded to honestly assess and evaluate your “God like” status.