We have quite a few customers using NitroAccelerator to improve SQL Server Replication performance. I have seen multiple uses for SQL Server Replication including:
- Moving data from a centralized publishing database to a remote database so that the data is closer to the application users
- Similar to the above, but using a local instance (SQL Server Express, for example) as a cache on the end-user system to speed up the application
- Moving data from remote locations into a central database
Kendra Little has a good blog entry, Performance Tuning SQL Server Transactional Replication: A Checklist, that addresses several things to consider around replication, including the network. When the database servers are in physically separate locations – whether across town or across continents (as is the case with our customer, Dynatrace) – the network becomes the central issue. The farther apart the servers, the more likely latency will become a factor. Additionally, when the servers are in other countries or remote regions, you can't always control the level of bandwidth, or upgrading it may be impractically expensive. Argenis Fernandez's blog entry on Transactional Replication and WAN links is a good reference for further tuning of replication across the WAN and the perils of using WAN accelerators.
Yet the network connection can still constrain performance and cause unacceptably high replication latency. That’s when NitroAccelerator comes into the picture. According to Pinal Dave, in SQL Server – When to Use a Sledgehammer and When to use a Screwdriver:
“A common issue when using replication over long distances is that it can fall hopelessly behind. I have seen many companies leverage NitroAccelerator from Nitrosphere to mitigate this issue by attaining near gigabit LAN speeds over these high-latency connections. As a result, they outperform the Always On feature at a fraction of the price.”
Real-time replication across even the slowest connections is a simple reality with NitroAccelerator. Maybe it’s time to start leveraging NitroAccelerator in your environment!
via: SQL Authority
I recently talked with Mark Wright, CTO of Nitrosphere, a company that optimizes SQL Server application performance. In his career, he has seen many “standard” practices that often negatively affect performance of the application even though they may make things easier for the SQL Server developer or DBA. He offered up several tips, some of which are quite easy to implement, that result in getting the most out of your SQL Server applications in your current environment. While some of these tips are oriented towards developers of SQL Server applications, many times DBAs are held accountable for poor practices that negatively impact application performance.
When using SSIS/DTS with SQL Server, set your packet size to 32K (32767 bytes, the maximum SQL Server allows). This setting makes better (though not optimal) use of TCP, which is a streaming protocol. Many suggest sizing the packet to the physical attributes of your network, but that helps only in rare edge cases, and finding that sweet spot is more trouble than it's worth, as the savings would be minimal. Equally misguided is setting the packet to a smaller size because your application typically sends and receives small amounts of data: SQL Server doesn't send 4K just because the packet size is set to 4K; it sends fewer bytes if that's all that is required.
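To make the tip concrete, here is a minimal Python sketch of rewriting a connection string to the 32K maximum. The `Packet Size` keyword is the real SqlClient/ODBC connection-string key; the helper function itself is my own illustrative invention:

```python
def set_packet_size(conn_str: str, size: int = 32767) -> str:
    """Return conn_str with its Packet Size key forced to `size`.

    SQL Server's network packet size tops out at 32767 bytes,
    so "32K" in practice means 32767.
    """
    parts = [p for p in conn_str.split(";") if p.strip()]
    # Drop any existing Packet Size entry, then append the new one.
    parts = [p for p in parts
             if not p.strip().lower().startswith("packet size")]
    parts.append(f"Packet Size={size}")
    return ";".join(parts)
```

For example, `set_packet_size("Server=myserver;Database=Sales;Packet Size=4096")` yields `"Server=myserver;Database=Sales;Packet Size=32767"`.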
If you have a .NET SQL Server application that processes large blocks of data, use .NET 4.5 with asynchronous processing. This .NET facility allows your application to read and process data simultaneously, so it is less likely to block waiting for data from the network.
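The overlap this tip describes looks the same in any async framework. Here is a Python asyncio analogue, where the function names are hypothetical stand-ins for .NET calls such as `SqlDataReader.ReadAsync`; the key move is kicking off the next network read before processing the current block:

```python
import asyncio

async def fetch_block(n):
    # Stand-in for an async network read from SQL Server.
    await asyncio.sleep(0.01)
    return list(range(n * 10, n * 10 + 10))

async def process(rows):
    # Stand-in for CPU work done on the rows already received.
    return sum(rows)

async def pipeline(blocks=3):
    total = 0
    fetch = asyncio.ensure_future(fetch_block(0))
    for n in range(blocks):
        rows = await fetch
        if n + 1 < blocks:
            # Start the next read before processing the current block,
            # so network I/O and processing overlap.
            fetch = asyncio.ensure_future(fetch_block(n + 1))
        total += await process(rows)
    return total
```

Each iteration processes one block while the next is already in flight, which is exactly the behavior that keeps the application from stalling on the network.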
For threaded .NET applications, use connection pooling along with multiple connections to run queries in parallel. Connection pooling streamlines connections for an application that maintains multiple connections or closes and re-opens connections to SQL Server. When applications are designed to be threaded and may run multiple queries to update the UI, those queries should use separate connections. The alternative is MARS (see below).
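A toy illustration of the pattern: each worker thread borrows its own connection from a shared pool, runs its query, and returns the connection. All names here are hypothetical; a real .NET application gets this behavior from ADO.NET's built-in pooling rather than hand-rolling it:

```python
import queue
import threading

class ConnectionPool:
    """Toy pool: hands out pre-opened 'connections' and takes them back."""
    def __init__(self, make_conn, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_conn())

    def acquire(self):
        return self._pool.get()   # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)

def run_parallel(pool, queries, runner):
    """Run each query on its own thread, each with its own connection."""
    results = [None] * len(queries)
    def worker(i, q):
        conn = pool.acquire()     # one connection per thread, never shared
        try:
            results[i] = runner(conn, q)
        finally:
            pool.release(conn)
    threads = [threading.Thread(target=worker, args=(i, q))
               for i, q in enumerate(queries)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because every thread holds its own connection, the queries truly run in parallel instead of being serialized on a single session.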
Tell your developer not to use Multiple Active Result Sets (MARS). While almost no DBAs know about MARS, for SQL Server applications that go beyond the LAN, MARS will almost always adversely affect performance. Per Microsoft, MARS simplifies application design with the following new capabilities:
- Applications can have multiple default result sets open and can interleave reading from them.
- Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
While not a default, many developers connect this way either because it was already in another piece of code or because they take Microsoft’s advice above. This is something DBAs should know about since you are accountable for the SQL Server performance. For many applications, it’s a matter of removing it from the connection string. In cases where the developers truly leverage the MARS capabilities, re-architecting the app would be required.
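For the common case, "removing it from the connection string" means dropping the `MultipleActiveResultSets` keyword. That key name is the real SqlClient one; the small helper below is just an illustrative sketch:

```python
def disable_mars(conn_str: str) -> str:
    """Strip the MultipleActiveResultSets key from a SqlClient
    connection string, reverting to the (non-MARS) default."""
    parts = [p for p in conn_str.split(";") if p.strip()]
    kept = [p for p in parts
            if not p.strip().lower().startswith("multipleactiveresultsets")]
    return ";".join(kept)
```

Applied to `"Server=s;Database=d;MultipleActiveResultSets=True"`, this returns `"Server=s;Database=d"`, and since MARS is off by default, omitting the keyword disables it.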
Many developers build chatty applications that overdo handshaking with SQL Server. One example is forms that generate a query or update every time a field is updated. It's better, where possible, to batch up the form data and send it all at once rather than one field at a time. In some cases this data may be redundant; it would be better cached locally within the application.
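A minimal sketch of the batching idea, with class and method names of my own invention: edits accumulate locally, redundant updates to the same field collapse, and a single flush produces one round trip instead of one per keystroke:

```python
class FormBuffer:
    """Collect per-field edits and flush them as one batched update."""
    def __init__(self, send):
        self._send = send        # callable taking a dict of field -> value
        self._pending = {}

    def set_field(self, name, value):
        # No network round trip here: the edit just overwrites any
        # earlier (now redundant) value for the same field.
        self._pending[name] = value

    def flush(self):
        # One call to the database with everything accumulated so far.
        if self._pending:
            self._send(dict(self._pending))
            self._pending.clear()
```

Three field edits, one of which supersedes another, become a single send: the user types `city=Austin`, `zip=78701`, then corrects to `city=Dallas`, and only one batched update with the final values goes over the wire.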
Using these tips, you can better advise developers on how to make sure your SQL Server applications are fully optimized. Or you can take things into your own hands and use NitroAccelerator to gain the benefits of the tips without having to change the application. NitroAccelerator has built-in capabilities that optimize TDS packet size, accelerate MARS applications, and provide for local caching of redundant queries.
There is a good summary of SQL Server 2016 features in this article in TechCrunch. A key highlight is the vast improvement in performance: "queries should execute 25 percent faster on the same hardware. Once you start making use of new features like SQL Server 2016's in-memory updatable column stores, those speed-ups could hit 100x for some types of queries." This kind of performance improvement is incredible and sets SQL Server apart from its competitors. However, as I discussed in last week's blog on network congestion, those with already congested networks, or with branch offices and users on low- or variable-bandwidth connections, may not realize the full benefit of these performance gains. This is akin to the great gains in CPU power made prior to the advent of SSD storage, when hard disk technology could not supply data fast enough for the CPU to consume, so techniques like on-disk caching were used. I believe that improvements like this will increase the need for network performance acceleration and other techniques to enable end users to fully realize the benefits of the upcoming SQL Server 2016 release.