Keeping database performance at optimal levels is a challenge every administrator faces. But before you can troubleshoot issues and make improvements, you need to identify what's falling short in the first place.
To do that, you have to familiarize yourself with the most important and informative SQL Server metrics, of which there are several.

I/O activity

Many of the metrics worth monitoring are hardware related, and looking into disk activity is a good starting point.
When you monitor the SQL Server instance you are responsible for, examining I/O metrics will not just give you insights that help with troubleshooting today, but will also help you plan for the future.
If certain files and processes are monopolizing the read and write resources of your storage setup, or if you can see that there is a general upward trend in I/O activity which might eventually lead to a bottleneck, you can act upon this info and cut performance problems off at the pass.
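As a starting point, you can pull per-file read and write figures straight from SQL Server itself. The sketch below uses the built-in `sys.dm_io_virtual_file_stats` view; note that its counters are cumulative since the last restart, so trends are best judged by sampling over time.

```sql
-- Cumulative read/write activity and I/O stall time per database file.
-- Files with the highest stall totals are the likeliest bottlenecks.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;
```

Capturing this output on a schedule and comparing snapshots is what reveals the upward trends mentioned above.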

Wait stats

Statistics relating to query wait times are another must-check aspect of SQL database performance, as they can be indicative of a few common faults that need to be fixed.
As you might expect, queries that execute slowly may be suboptimally written in their own right, or some other process may be preventing them from accessing the resources they need.
Longer-than-usual wait times might be a symptom of blocking, for example, which occurs when exclusive locks are held by other queries for longer than is strictly necessary. Deadlocking is also something to be aware of in this context, and scrutinizing wait stats will point you toward the root cause.
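To see where time is actually being lost, you can query the server-wide wait statistics directly. This sketch uses SQL Server's `sys.dm_os_wait_stats` view; the short exclusion list of benign "idle" wait types is illustrative rather than exhaustive.

```sql
-- Top waits by total wait time, excluding a few benign background waits.
-- Counters are cumulative since the last restart (or a manual clear).
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'BROKER_TO_FLUSH',
                        'LAZYWRITER_SLEEP', 'CHECKPOINT_QUEUE')
ORDER BY wait_time_ms DESC;
```

A dominant `LCK_M_*` wait type, for instance, is the classic signature of the blocking problem described above.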

CPU & memory usage

If your server is locally hosted or you have access to performance data for remote hardware, then being attuned to the ebb and flow of the requirements placed upon the processor and memory of the database is worthwhile.
CPU usage metrics, expressed as a percentage of total processor capacity, are not necessarily useful in isolation, but once you know which processes are responsible for the load, next steps can be determined.
Sometimes a process goes rogue and hogs hardware resources as a one-off, which is not an issue in itself. If it happens regularly, though, further investigation and troubleshooting will be necessary.
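When CPU does spike, the plan cache can tell you which statements are responsible. This is a minimal sketch using SQL Server's `sys.dm_exec_query_stats` and `sys.dm_exec_sql_text`; the 200-character truncation of the query text is just for readability.

```sql
-- Queries with the highest cumulative CPU time in the plan cache,
-- useful for pinning a CPU spike on specific statements.
SELECT TOP (5)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```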
Memory usage goes hand in hand with this, although you have a little more control here, because you can choose to allocate more of the server's available memory to database operations if necessary.
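That allocation is adjusted through SQL Server's `max server memory` setting, as sketched below. The 4096 MB figure is only an example; size the cap to your own hardware, leaving headroom for the operating system.

```sql
-- Cap the memory SQL Server may use (example value only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```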
Ultimately, your approach to measuring CPU and memory metrics should mirror how you check in on I/O performance: use the figures and trends to plan for future upgrades as well as to fix short-term snafus.

Automating analysis

It is worth noting that these metrics and many more can be recorded and analyzed automatically using tools specifically designed to work with SQL databases.
This makes monitoring and maintenance less of a chore for DBAs, and will also alert you to potential problems sooner rather than later. When performance plummets and you do not know where to start, persistent monitoring with third-party tools can save you from the dreaded possibility of unplanned downtime.
In this sense, SQL Server maintenance needs both to preempt problems and to respond rapidly to unexpected performance dips when they do arise, rather than leaving them unaddressed for long periods.


Claire Ward