I have decided to (finally) do a technical article… This was one of the ‘promised things’ from my very first entry, and now 6 months later, I am finally fulfilling that promise. Haha, come to think of it, the title of this blog is also very appropriate to my failed dedication to keeping this blog regular… but I am trying, people… and I will get there eventually!
Keep in mind that these articles aren’t the mammoth things that you see in other technical blogs, with advanced techniques on everything from customised and extended WinForm controls, to the newfangled WCF with its advanced Web Service implementations (those are pretty damn cool though). It’s more a guide to some of those ‘little things’ that I stumble across from time to time… the little things that you will spend hours trying to figure out what the heck is going on, only to realise at the end just how obscure some of these things can be.
This first article is about SQL 2000/2005 execution plans, hence the title ‘When Execution Plans Go Bad’.
A little while back, my co-workers and I came across a very puzzling problem that was affecting our ‘main’ (for lack of a better term) application, in production no less. We were getting seemingly random locks in various parts of the application. So while we all frantically tried to figure out just exactly what the heck was going on, we eventually realised that one of the procs responsible for the basic client searches was locking up (and this search functionality is re-used in several places in the application). The strange thing was, when this proc was run from Sql Management Studio, it ran perfectly! We had recently come across a similar problem, but in Sql Reporting Services… the report was locking up, yet the proc ran from Sql Management Studio just fine! After a bit of research we discovered that the execution plan was to blame. Somehow, Sql had cached a terrible execution plan… We still aren’t 100% sure, but we think it came from a job that runs every week on the DB and re-builds all the indexes. And therein lay the problem… execution plans become inefficient (or even invalid) when certain changes to the database are made…
- Schema Changes
- Statistics Changes (also a possibility in our case)
- Dropped Indexes
- Large numbers of changes to keys (bulk insert/delete statements to the table in question)
It seemed that one of the tables had its index dropped and re-created along the way (or possibly the statistics of the table changed), which made the cached execution plan inefficient and/or invalid. Applications, when accessing Sql Server, will usually automatically use these cached execution plans (queries run directly from Sql Management Studio do not suffer from this problem, as they seem to bypass the cached execution plan).
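As a quick aside, if you are on Sql 2005 you can peek at what is actually sitting in the plan cache using the dynamic management views. This is just a rough sketch (it assumes you have VIEW SERVER STATE permission, and the filter on uspLogError is purely an example):

-- List cached plans whose text mentions uspLogError (Sql 2005 only).
SELECT cp.usecounts,      -- how many times the cached plan has been re-used
       cp.cacheobjtype,   -- e.g. 'Compiled Plan'
       cp.objtype,        -- e.g. 'Proc'
       st.[text]          -- the proc/batch text the plan was compiled for
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE '%uspLogError%'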
There are several ways to fix this, the easiest (in my opinion) being a call to one of the system procs, sp_recompile, which (in case you didn’t realise :P) re-compiles the object in question. Passing the name of the object into the proc as a varchar (or an equivalent string type) will cause the object to be marked for recompilation, and a new execution plan will be cached the next time any proc/function/query related to the object (ideally a table, but a proc will work ok) is called, whether it be from Sql Management Studio or from an external application.
An sp_recompile call for the uspLogError Stored Procedure
in the AdventureWorks Database.
EXEC sp_recompile 'dbo.uspLogError'
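Since passing a table name is the ideal option, here is a rough sketch of what that looks like (dbo.ErrorLog is the AdventureWorks table that uspLogError writes to, used here purely as an example):

-- Recompile at the table level: every proc and trigger that references
-- dbo.ErrorLog will get a fresh execution plan the next time it runs.
EXEC sp_recompile 'dbo.ErrorLog'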
An alternative way (though one to be wary of, due to possible performance issues) is to place a WITH RECOMPILE clause in the definition of the proc, before the AS keyword. The problem with this method is that Sql Server will discard the execution plan every single time the proc is called and re-create it from scratch (though it can be useful while you are still developing an application and the database structure is changing constantly).
A WITH RECOMPILE clause example, which causes a stored procedure
to be recompiled each time it is called.
CREATE PROCEDURE [dbo].[uspLogError]
    @ErrorLogID [int] = 0 OUTPUT
WITH RECOMPILE
AS
    -- ... body of uspLogError as defined in AdventureWorks ...
The final way is to call the proc using another form of WITH RECOMPILE. This is a once-off call, similar to the sp_recompile proc; however, it will only work with procs, as you (obviously) can’t use this sort of call on a table. One thing to keep in mind: the plan compiled this way is typically used for that execution only and is not stored back in the cache.
Executing a stored procedure using the WITH RECOMPILE clause, to force
Sql to recompile the proc and build a fresh execution plan for that execution.
EXEC dbo.uspLogError WITH RECOMPILE
Once we forced the proc to recompile, the new execution plan was cached… and the locks vanished!! Huzzah! The application started functioning normally and we all breathed a huge sigh of relief… at least until the next time it happened! 😛 Thankfully we were all prepared and instantly recognised the issue when it arose.
Well, there you have it. My first ‘technical’ article. I hope it was an enjoyable and interesting read. And please, if you can offer any feedback or point out any mistakes, don’t hesitate to do so… how else does anyone learn 🙂
http://downloads.red-gate.com/ebooks/HighPerformanceSQL_ebook.zip (HIGHLY recommended!)