via: SQL Authority
I recently talked with Mark Wright, CTO of Nitrosphere, a company that optimizes SQL Server application performance. In his career, he has seen many “standard” practices that often negatively affect performance of the application even though they may make things easier for the SQL Server developer or DBA. He offered up several tips, some of which are quite easy to implement, that result in getting the most out of your SQL Server applications in your current environment. While some of these tips are oriented towards developers of SQL Server applications, many times DBAs are held accountable for poor practices that negatively impact application performance.
When using SSIS/DTS with SQL Server, set your packet size to 32K. This setting makes better (though still not optimal) use of TCP, which is a streaming protocol. Many suggest sizing the packet to the physical attributes of your network, but that matters only in edge cases, and truly finding that sweet spot is more trouble than it’s worth, as the savings would be minimal. Equally absurd is setting a smaller packet size because your application typically sends and receives small amounts of data. SQL Server doesn’t send 4K just because the packet size is set to 4K; it will send fewer bytes if that’s all that is required.
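As a rough illustration of this tip, a connection string can be rewritten to carry the larger packet size. The helper below is a sketch in Python; the `Packet Size` keyword shown is the ADO.NET/SqlClient style (where the maximum is 32767 bytes), and the helper name is my own, not part of any SQL Server API:

```python
def set_packet_size(conn_str: str, size: int = 32767) -> str:
    """Return conn_str with Packet Size added or replaced (32767 = SqlClient max)."""
    parts = [p for p in conn_str.split(";")
             if p.strip() and not p.strip().lower().startswith("packet size")]
    parts.append(f"Packet Size={size}")
    return ";".join(parts)

# A 4K packet setting gets replaced with the larger one:
print(set_packet_size("Server=tcp:myserver;Database=Sales;Packet Size=4096"))
```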
If you have a .NET SQL Server application that processes large blocks of data, then use .NET 4.5 with asynchronous processing. This .NET facility allows your application to read and process data simultaneously, so your application is less likely to block on waiting for data from the network.
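The read-and-process-simultaneously pattern can be sketched with a producer/consumer pair: one task keeps pulling batches off the "network" while another processes what has already arrived, so neither blocks the other. This is a Python asyncio sketch of the concept, not the .NET 4.5 API itself, and the row data is simulated:

```python
import asyncio

async def read_rows(queue: asyncio.Queue) -> None:
    # Simulated network reads: each batch arrives while processing continues.
    for batch in (["a", "b"], ["c"], ["d", "e"]):
        await asyncio.sleep(0)      # yield, as a real awaited network read would
        await queue.put(batch)
    await queue.put(None)           # sentinel: no more data

async def process(queue: asyncio.Queue) -> list:
    processed = []
    while (batch := await queue.get()) is not None:
        processed.extend(row.upper() for row in batch)  # work overlaps reads
    return processed

async def main() -> list:
    q = asyncio.Queue()
    _, result = await asyncio.gather(read_rows(q), process(q))
    return result

print(asyncio.run(main()))
```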
For threaded .NET applications, use connection pooling along with multiple connections to run queries in parallel. Connection pooling streamlines connections for an application that maintains multiple connections or closes and re-opens connections to SQL Server. When applications are designed to be threaded and may run multiple queries to update the UI, those queries should use separate connections. The alternative is MARS (see below).
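The pooling-plus-parallel-queries idea can be sketched as a small queue-backed pool handing separate connections to separate threads. Everything here is illustrative: the `ConnectionPool` class and `conn-N` strings are stand-ins, not a real driver (in .NET, pooling is built into SqlClient and enabled by default):

```python
import queue
import threading

class ConnectionPool:
    """Minimal pool sketch: hand out connections, reuse them on release."""
    def __init__(self, size: int):
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(f"conn-{i}")   # stand-in for a real connection

    def acquire(self) -> str:
        return self._pool.get()           # blocks until a connection is free

    def release(self, conn: str) -> None:
        self._pool.put(conn)

def run_query(pool, sql, results, lock):
    conn = pool.acquire()
    try:
        row = f"{sql} via {conn}"         # stand-in for executing the query
        with lock:
            results.append(row)
    finally:
        pool.release(conn)                # always return the connection

pool = ConnectionPool(size=2)
results, lock = [], threading.Lock()
threads = [threading.Thread(target=run_query, args=(pool, q, results, lock))
           for q in ("SELECT 1", "SELECT 2", "SELECT 3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # three queries ran on separate pooled connections
```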
Tell your developer not to use Multiple Active Result Sets (MARS). While almost no DBAs know about MARS, for SQL Server applications that go beyond the LAN, MARS will almost always adversely affect performance. Per Microsoft, MARS simplifies application design with the following new capabilities:
- Applications can have multiple default result sets open and can interleave reading from them.
- Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
While MARS is not enabled by default, many developers connect this way, either because it was already in another piece of code or because they take Microsoft’s advice above. This is something DBAs should know about, since you are accountable for SQL Server performance. For many applications, disabling it is just a matter of removing it from the connection string. In cases where the developers truly leverage the MARS capabilities, re-architecting the app would be required.
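The remove-it-from-the-connection-string case can be sketched as a one-line filter. The `MultipleActiveResultSets` keyword is the real SqlClient connection-string key; the helper itself is an illustrative Python sketch, not part of any driver:

```python
def disable_mars(conn_str: str) -> str:
    """Drop any MultipleActiveResultSets entry from a connection string."""
    kept = [p for p in conn_str.split(";")
            if p.strip()
            and not p.strip().lower().startswith("multipleactiveresultsets")]
    return ";".join(kept)

print(disable_mars("Server=x;MultipleActiveResultSets=True;Database=y"))
```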
Many developers build chatty applications that overdo handshaking with SQL Server. One example is a form that generates a query or update every time a field is changed. It’s better, where possible, to batch up the form data and send it all at once rather than one field at a time. In some cases the data is redundant and would be better cached locally within the application.
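Batching the form data can be sketched as a small buffer that collects field edits locally and emits one parameterized UPDATE instead of one round trip per field. The `FormBuffer` class and the table/column names below are hypothetical, for illustration only:

```python
class FormBuffer:
    """Collect field edits locally; emit one parameterized UPDATE on flush."""
    def __init__(self, table: str, key_col: str, key_val):
        self.table, self.key_col, self.key_val = table, key_col, key_val
        self.pending = {}

    def set_field(self, column: str, value) -> None:
        self.pending[column] = value      # no round trip per field edit

    def flush(self):
        """Build a single UPDATE covering every pending field."""
        cols = sorted(self.pending)
        sql = (f"UPDATE {self.table} SET "
               + ", ".join(f"{c} = ?" for c in cols)
               + f" WHERE {self.key_col} = ?")
        params = [self.pending[c] for c in cols] + [self.key_val]
        self.pending.clear()
        return sql, params

buf = FormBuffer("Customers", "Id", 7)
buf.set_field("Name", "Jo")
buf.set_field("City", "Pune")
print(buf.flush())  # one statement instead of two
```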
Using these tips, you can better advise developers on how to make sure your SQL Server applications are fully optimized. Or you can take things into your own hands and use NitroAccelerator to gain the benefits of the tips without having to change the application. NitroAccelerator has built-in capabilities that optimize TDS packet size, accelerate MARS applications, and provide for local caching of redundant queries.
I read Michael Bunyard’s blog, Why monitor application performance if you don’t fix it?, which is both entertaining and really supports what we are trying to do at Nitrosphere. It’s one thing to monitor all your systems, databases, applications, etc. – this is important because you need to know if something is down or going down – but not enough importance is placed on SQL Server remediation or even prevention. He refers to this very entertaining commercial from LifeLock that really drives home what monitoring alone achieves.
Monitors at their core just advise you that something is wrong or about to be wrong. Some people even ignore most of the alerts until someone in the organization complains about, say, application performance. Monitors reinforce a reactive approach to systems. So rather than taking a big-picture view such as “How can I help the business be more productive?” or “How can I help the business make more money?”, the viewpoint is “How do I turn off that red light?”.
Sometimes I think organizations lose sight of what their real purpose is and just continue doing what they do because that’s what they do – it’s called organizational inertia – without considering the big picture. If an application is performing poorly and costing productivity, does the business owner care whether it’s a database problem or a network problem? The owner just wants productivity improved, always. They don’t want to be “advised” that they have a problem; they want the problem fixed ASAP, and in fact would prefer that problems were prevented from happening in the first place.
At Nitrosphere, we do just that – we fix the problem quickly and efficiently – and, for proactive organizations, we prevent the problem from happening in the first place. Thanks for a great blog Michael that drives that message home!