I am amazed at the number of client-server applications out in the real world. We hear every day about cloud applications and every aspect of monitoring and managing cloud performance, but there is very little chatter about venerable client-server (2-tier) applications. There are literally thousands of them, ranging from packaged products like Sage 100 and SolidWorks PDM to in-house applications for every kind of business imaginable. Riverbed has sold a lot of WAN optimization appliances (Steelhead), and Citrix has sold a lot of virtual desktop infrastructure (XenApp) to enterprises around the world. These solutions are great if you are in IT and have plenty of time, resources, and money to install and manage hardware and software. They are also great for the vendors since, once installed, it’s a huge effort to move away from their platforms. In both cases, the impact on the end user is an afterthought. Is it really okay to let end users suffer poor performance for months or years?
Regarding WAN appliances, an article on ComputerWeekly.com discusses the pros and cons of WAN optimization appliances, and part of it focuses on why that market is flat. The author says, “WAN optimisation projects can prove costly and complex, and that is why the market has stalled in recent years.” Cost and complexity play to the vendors’ advantage, as I noted above. See also this entertaining article from Pinal Dave about how these appliances can be like using a sledgehammer to do a screwdriver’s job.
Regarding VDI, much of IT saw it as a panacea for addressing application performance. There was a perception that VDI was low cost compared to the WAN appliance options, but that perception does not hold up when objectively analyzed, as in this article from NetworkComputing.com. The article concludes, “The bottom line is you can justify VDI for a number of reasons–easier support, better security, perhaps better availability or a more appropriate system for task workers–but you can’t justify it on hardware and software savings. The numbers won’t work.”
Whether we are talking about WAN appliances or VDI, what are we forgetting? The end users and the applications they rely on. At Nitrosphere, we take the opposite perspective and are laser-focused on improving application performance for end users. We are not selling hardware, a software framework, and services to install, train on, and manage a WAN or VDI platform. Customers don’t have to wait months or years and pay hundreds of thousands of dollars for a solution. We install in seconds, and customers see benefits within minutes. We make application optimization fast, secure, and simple.
We have quite a few customers using NitroAccelerator to improve SQL Server Replication performance. I have seen multiple uses for SQL Server Replication including:
- Moving data from a centralized publishing database to a remote database so that the data is closer to the application users
- Like the above, but using (for example) SQL Server Express as a local cache on the end-user system to speed up the application
- Moving data from remote locations into a central database
Kendra Little has a good blog entry, Performance Tuning SQL Server Transactional Replication: A Checklist, that addresses several things to consider around replication, including the network. When the database servers are in physically separate locations – whether across town or across continents (as is the case with our customer, Dynatrace) – the network becomes the central issue. The farther apart the servers, the more likely latency will become a factor. Additionally, when the servers are in other countries or remote regions, you can’t always control the amount of bandwidth, or it might be impractically expensive to upgrade it. Argenis Fernandez’s blog entry on Transactional Replication and WAN links is a good reference for further tuning of replication across the WAN and the perils of using WAN accelerators.
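To see why distance can matter more than raw bandwidth, recall that a single TCP connection’s throughput is roughly capped by its window size divided by the round-trip time (the bandwidth-delay product). Here is a minimal back-of-the-envelope sketch in Python; the 64 KB window and the RTT figures are illustrative assumptions, not measurements from any particular replication setup:

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Approximate per-connection TCP throughput ceiling.

    A sender can have at most one window of unacknowledged data in
    flight, so throughput is bounded by window_size / round_trip_time
    no matter how much raw bandwidth the link has.
    """
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000


# Illustrative RTTs: LAN, cross-town, cross-continent (assumed values)
for rtt in (1, 20, 100):
    ceiling = max_throughput_mbps(64 * 1024, rtt)  # 64 KB window assumed
    print(f"RTT {rtt:3d} ms -> ~{ceiling:,.1f} Mbit/s")
```

With a 64 KB window and a 100 ms round trip, that single connection tops out around 5 Mbit/s even on a 1 Gbit/s link, which is why latency, not bandwidth, often dominates replication performance over distance.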
Yet the network connection can still constrain performance and cause unacceptably high replication latency. That’s when NitroAccelerator comes into the picture. According to Pinal Dave, in SQL Server – When to Use a Sledgehammer and When to use a Screwdriver:
“A common issue when using replication over long distances is that it can fall hopelessly behind. I have seen many companies leverage NitroAccelerator from Nitrosphere to mitigate this issue by attaining near gigabit LAN speeds over these high-latency connections. As a result, they outperform the Always On feature at a fraction of the price.”
Real-time replication across even the slowest connections is a simple reality with NitroAccelerator. Maybe it’s time to start leveraging NitroAccelerator in your environment!