Before I get started on this one, I wanted to give thanks to RK Kuppala for pointing me to the great article and code written by Michelle Ufford (aka SQLFool) - this has been a HUGE help!
I am blogging about this so that I can keep my thoughts in one spot, and because of the potential length of the post. It needs to be introduced with the back story of how I got to this point. Our Student Information System is produced by a vendor, and that vendor has certain specifications we are required to follow - if they are not followed, the first thing their support folks point to when we have problems is that we are not following protocol (although most of the time that is not the issue). The one we get hit with a lot is the Maintenance Plans for rebuilding indexes and updating statistics not running to completion. Note that the maintenance plan rebuilds every index in the entire database, and the update-statistics task runs a full scan on every index in the database.
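To make the vendor requirement concrete, here is roughly what that Maintenance Plan amounts to in T-SQL for a single table (the table name is illustrative; the plan generates equivalent commands for every table in the database):

```sql
-- Illustrative only: what the vendor-required plan effectively runs per table.
ALTER INDEX ALL ON dbo.Students REBUILD;       -- every index on every table
UPDATE STATISTICS dbo.Students WITH FULLSCAN;  -- full scan of every statistic
```

Run across an entire database, every night, regardless of fragmentation or row modifications - which is why it was taking so long.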
We determined that this was happening because the system is available 24/7, and many users accessing data through the application were deadlocking with the maintenance plan, which wound up being the loser in the deadlock. Obviously the maintenance plan gives us no control over deadlock priority, and neither do we have any when it comes to the application code.
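One advantage of moving from a Maintenance Plan to a custom T-SQL job is that the session can at least choose which side loses a deadlock - something the Maintenance Plan UI does not expose. A minimal sketch (the table name is hypothetical):

```sql
-- Volunteer the maintenance session as the deadlock victim so that
-- application queries win instead. Only possible in custom T-SQL,
-- not in a standard Maintenance Plan task.
SET DEADLOCK_PRIORITY LOW;

ALTER INDEX ALL ON dbo.StudentAttendance REBUILD;  -- illustrative table name
```

Note this does not prevent the deadlocks; it only makes the maintenance job the consistent loser, so the job would still need retry logic around each statement.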
There are also several processes in place that have to be scheduled each night to check student attendance and modify the all-day information based on how many periods a student missed. Because of how much workload these processes take, we had to switch around when the different plans run. The attendance schedule runs two different processes and takes about 4 hours in total; the maintenance plans were taking about 8 hours.
I decided to rebuild how the maintenance plans run by dividing the tables into groups according to what the attendance schedule needs, completing those first, and then running the other tables in order of how often they are "touched" during the day. I took out all temporary tables, any manual backups, and lookup tables that rarely change. The only problem with this method is that it is no longer dynamic and would need to be reviewed if the vendor added tables during upgrades. This took a good hour off of what we had before, and it allowed me to start the attendance processes earlier since their tables were no longer being used by the plan.
The problem we are faced with now is that we are still at 11 hours with these processes, leaving no good time to schedule others. I contacted the vendor after looking over the previously mentioned script to see if they would approve of it - they did! I have been testing a modified version of the script all week and have gotten the index portion down to less than an hour (obviously variable based on the percent of fragmentation). The problem is that the update statistics with FULLSCAN takes anywhere between 3 and 4 hours. Yes, that has me down to 4 to 5 hours, a total reduction of 3 hours, but my hope was to get it down to less than 2 hours so that we can truly optimize our time.
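For anyone curious why the index portion got so much faster: the core idea of scripts like Michelle Ufford's is to query fragmentation first and only touch indexes that need it. This is a simplified sketch of that approach (not her actual script; the 10%/30% thresholds and 1,000-page floor are commonly cited guidance, not requirements):

```sql
-- Simplified threshold-based index maintenance sketch.
-- Only fragmented, non-trivial indexes are touched; lightly fragmented
-- indexes get the cheaper REORGANIZE instead of a full REBUILD.
DECLARE @sql nvarchar(max);

DECLARE frag_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
         + CASE WHEN ps.avg_fragmentation_in_percent >= 30
                THEN N' REBUILD;'          -- heavy fragmentation: rebuild
                ELSE N' REORGANIZE;' END   -- light fragmentation: reorganize
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent >= 10  -- skip barely fragmented indexes
      AND ps.page_count >= 1000                  -- skip tiny indexes
      AND i.name IS NOT NULL;                    -- skip heaps

OPEN frag_cur;
FETCH NEXT FROM frag_cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM frag_cur INTO @sql;
END
CLOSE frag_cur;
DEALLOCATE frag_cur;
```

Most nights, the vast majority of indexes fall under the thresholds and are skipped entirely - which is exactly where the time savings come from.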
Here (finally) is where the question comes in: research on sp_updatestats vs. UPDATE STATISTICS shows that most people run the stored procedure nightly and the other once a week or once a month. Given that the vendor requires the FULLSCAN on every index, what am I losing by running the stored procedure if it truly sees that an update is not required? Because it runs so quickly (usually less than 10 minutes), would it be a bad idea to run it a few times a day just to make sure everything is accurate?
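The speed difference comes from two things: sp_updatestats skips statistics with no recorded row modifications, and when it does update one it uses the default sampling rate rather than a full scan. On newer builds (SQL Server 2008 R2 SP2 / 2012 SP1 and later, if I understand the DMF availability correctly) you can see what it would consider stale:

```sql
-- Which statistics in the current database have pending row modifications?
-- These are roughly the ones sp_updatestats would bother to update;
-- everything with a zero counter gets skipped, which is why it is so fast.
SELECT OBJECT_NAME(st.object_id) AS table_name,
       st.name                   AS stats_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats AS st
CROSS APPLY sys.dm_db_stats_properties(st.object_id, st.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;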
I have also seen that rebuilding an index fully updates its statistics anyway - if I have to stick with UPDATE STATISTICS, could I not just skip all of the ones that were rebuilt? Would it hurt and/or help to run sp_updatestats after the individual UPDATE STATISTICS runs have completed, or would that actually cause issues with the previously run updates because of the sampling method it uses to scan?
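Since an index rebuild refreshes that index's statistics with the equivalent of a full scan, one way to act on this idea is to limit the FULLSCAN pass to column statistics only (auto-created and user-created), skipping index statistics entirely. A sketch, with the caveat that it assumes every index really was rebuilt that night - REORGANIZE does not update statistics, so reorganized indexes would be missed:

```sql
-- Update only column statistics with FULLSCAN; index statistics share the
-- name of their index, so any statistic matching an index name is skipped.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE STATISTICS ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
             + N' ' + QUOTENAME(st.name) + N' WITH FULLSCAN;' + CHAR(13)
FROM sys.stats AS st
JOIN sys.objects AS o ON o.object_id = st.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
  AND NOT EXISTS (SELECT 1
                  FROM sys.indexes AS i
                  WHERE i.object_id = st.object_id
                    AND i.name = st.name);  -- skip index-backed statistics

EXEC sp_executesql @sql;
```

Whether the vendor would consider this compliant with their "FULLSCAN on every index" rule is a question for them, since the rebuild - not UPDATE STATISTICS - is what refreshed the index statistics.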
Thanks in advance for any help!!!