I am amazed at the number of client-server applications out in the real world. We hear every day about cloud applications and every aspect of monitoring and managing cloud performance, but there is very little chatter about venerable client-server (2-tier) applications. There are literally thousands of client-server applications, ranging from packaged products like Sage 100, SolidWorks PDM, and many others to in-house applications for every kind of business imaginable. Riverbed has sold a lot of WAN optimization (Steelhead) appliances, and Citrix has sold a lot of virtual desktop infrastructure (VDI) with XenApp, to enterprises around the world. These solutions are great if you are in IT and have a lot of time, resources, and money to install and manage hardware and software. They are also great for the vendors since, once installed, it's a huge effort to move away from their platforms. In both cases, the impact on the end user is an afterthought. I mean, is it okay to let end users suffer bad performance for months or years?
Regarding WAN appliances, there is an article on ComputerWeekly.com that discusses the pros and cons of WAN optimization appliances. Part of the article focuses on why that market is flat. The author says, "WAN optimisation projects can prove costly and complex, and that is why the market has stalled in recent years." Cost and complexity play to the vendors' advantage, as I say above. See also this entertaining article from Pinal Dave about how these appliances can be like using a sledgehammer to do a screwdriver's job.
Regarding VDI, much of IT saw it as a panacea for addressing application performance. There was a perception that VDI was low cost compared to the WAN appliance options, but that perception doesn't hold up when objectively analyzed, as in this article from NetworkComputing.com. The article concludes, "The bottom line is you can justify VDI for a number of reasons–easier support, better security, perhaps better availability or a more appropriate system for task workers–but you can't justify it on hardware and software savings. The numbers won't work."
When we are talking about WAN appliances or VDI, what are we forgetting about? The end-users and the applications they are using. At Nitrosphere, we take the opposite perspective and are laser focused on improving application performance for the end-users. We are not selling hardware, a software framework, and services to install, train, and manage the WAN or VDI platforms. Customers don’t have to wait months or years and pay hundreds of thousands of dollars for the solution. We install in seconds and customers see benefits within minutes. We make application optimization fast, secure, and simple.
We have quite a few customers using NitroAccelerator to improve SQL Server Replication performance. I have seen multiple uses for SQL Server Replication including:
- Moving data from a centralized publishing database to a remote database so that the data is closer to the application users
- The same as above, but using SQL Server Express, for example, as a local cache on the end-user system to speed up the application
- Moving data from remote locations into a central database
Kendra Little has a good blog entry, Performance Tuning SQL Server Transactional Replication: A Checklist, that addresses several things to consider around replication, including the network. When the database servers are in physically separate locations – whether across town or across continents (as is the case with our customer Dynatrace) – the network becomes the central issue. The farther apart the servers, the more likely latency will become a factor. Additionally, when the servers are in other countries or remote regions, you can't always control the level of bandwidth, or it may be impractically expensive to upgrade it. Argenis Fernandez's blog entry on transactional replication and WAN links is a good reference for further tuning of replication across the WAN and the perils of using WAN accelerators.
Yet the network connection can still constrain performance and cause unacceptably high replication latency. That’s when NitroAccelerator comes into the picture. According to Pinal Dave, in SQL Server – When to Use a Sledgehammer and When to use a Screwdriver:
“A common issue when using replication over long distances is that it can fall hopelessly behind. I have seen many companies leverage NitroAccelerator from Nitrosphere to mitigate this issue by attaining near gigabit LAN speeds over these high-latency connections. As a result, they outperform the Always On feature at a fraction of the price.”
Real-time replication across even the slowest connections is a simple reality with NitroAccelerator. Maybe it’s time to start leveraging NitroAccelerator in your environment!
If you are concerned with SQL Server application performance and want to learn how to fully optimize for performance, we have some great new materials for you. We recently recorded a webinar that we sponsored with Pinal Dave from SQLAuthority.com. He provides some excellent tips for tuning SQL Server that include some obscure but simple settings that can result in a dramatic improvement in performance. It’s not all about NitroAccelerator, but his demo at the end is excellent and a powerful reminder of how you can improve performance with the product in just minutes.
We have also just released a tech brief targeted at application developers that gives some good reasons not to immediately discount the venerable 2-tier architecture. For SQL Server and .NET developers, it covers some important considerations when making decisions on application architecture.
Last, but not least, you may want to refresh your knowledge on how to diagnose and optimize SQL Server network performance. Read our white paper or read Pinal’s excellent blog, SQL SERVER – Identifying Application vs Network Performance Issues.
Many people think that applications, databases, and networks are all completely separate areas relating to performance and security. At Nitrosphere, we are committed to breaking down those walls and enabling the owners of the applications to fix performance regardless of who might be “responsible”. The materials I mention above can help you learn more about this approach.
via: SQL Authority
I recently talked with Mark Wright, CTO of Nitrosphere, a company that optimizes SQL Server application performance. In his career, he has seen many “standard” practices that often negatively affect performance of the application even though they may make things easier for the SQL Server developer or DBA. He offered up several tips, some of which are quite easy to implement, that result in getting the most out of your SQL Server applications in your current environment. While some of these tips are oriented towards developers of SQL Server applications, many times DBAs are held accountable for poor practices that negatively impact application performance.
When using SSIS/DTS with SQL Server, set your packet size to 32K. This setting makes better (though still not optimal) use of TCP, which is a streaming protocol. Many suggest sizing the packet to the physical attributes of your network, but that only matters in rare edge cases, and truly finding that sweet spot is more trouble than it's worth since the savings would be minimal. Equally absurd is setting the packet to a smaller size because your application typically sends and receives small amounts of data. SQL Server doesn't send 4K just because the packet size is set to 4K; it will send fewer bytes if that's all that is required.
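For illustration, here is a minimal sketch of how the packet size can be requested from an ADO.NET application (a similar Packet Size property is typically exposed in the SSIS connection manager as well); the server and database names are placeholders, not a specific recommendation:

```csharp
using System;
using System.Data.SqlClient;

class PacketSizeExample
{
    static void Main()
    {
        // Packet Size is specified in bytes; 32767 is the maximum SQL Server accepts
        // and approximates the "32K" suggestion above. Server/database are placeholders.
        var connectionString =
            "Server=myServer;Database=myDatabase;Integrated Security=true;" +
            "Packet Size=32767;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // The server may negotiate a smaller value than what was requested.
            Console.WriteLine("Negotiated packet size: " + connection.PacketSize);
        }
    }
}
```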
If you have a .NET SQL Server application that processes large blocks of data, use .NET 4.5 with asynchronous processing. This .NET facility allows your application to read and process data simultaneously, so your application is less likely to block while waiting for data from the network.
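As a rough sketch of the idea (the connection string, query, and ProcessRow helper below are hypothetical placeholders), the .NET 4.5 async APIs let the calling thread keep working instead of stalling on network I/O:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class AsyncReadExample
{
    // Streams a large result set without blocking the calling thread on network I/O.
    static async Task StreamOrdersAsync(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();

            using (var command = new SqlCommand("SELECT OrderID, Payload FROM dbo.Orders", connection))
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    // Process the current row while later packets are still arriving.
                    ProcessRow(reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }

    static void ProcessRow(int orderId, string payload)
    {
        // Placeholder for application-specific processing.
    }
}
```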
For threaded .NET applications, use connection pooling along with multiple connections to run queries in parallel. Connection pooling streamlines connections for an application that maintains multiple connections or that closes and re-opens connections to SQL Server. When applications are designed to be threaded and may run multiple queries at once to update the UI, those queries should use separate connections. The alternative is MARS (see below).
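Here is a minimal sketch of that pattern, assuming placeholder table names and a default (pooled) connection string; each query borrows its own connection from the pool rather than sharing one via MARS:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class ParallelQueryExample
{
    // Two queries run in parallel, each on its own pooled connection.
    static async Task RefreshDashboardAsync(string connectionString)
    {
        var customersTask = CountRowsAsync(connectionString, "SELECT COUNT(*) FROM dbo.Customers");
        var ordersTask    = CountRowsAsync(connectionString, "SELECT COUNT(*) FROM dbo.Orders");

        await Task.WhenAll(customersTask, ordersTask);
    }

    static async Task<int> CountRowsAsync(string connectionString, string query)
    {
        // Pooling is enabled by default, so Open/Close simply borrows and returns
        // a connection from the pool rather than creating a new socket each time.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            await connection.OpenAsync();
            return (int)await command.ExecuteScalarAsync();
        }
    }
}
```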
Tell your developer not to use Multiple Active Result Sets (MARS). While almost no DBAs know about MARS, for SQL Server applications that go beyond the LAN, MARS will almost always adversely affect performance. Per Microsoft, MARS simplifies application design with the following new capabilities:
- Applications can have multiple default result sets open and can interleave reading from them.
- Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
While MARS is not a default, many developers connect this way either because it was already in another piece of code or because they take Microsoft's advice above. This is something DBAs should know about, since you are accountable for SQL Server performance. For many applications, the fix is simply removing it from the connection string, as in the sketch below. In cases where the developers truly leverage the MARS capabilities, re-architecting the app would be required.
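For illustration only (server and database names are placeholders), this is how MARS typically shows up in, and can be removed from, a connection string:

```csharp
using System.Data.SqlClient;

class MarsConnectionStrings
{
    // MARS is opt-in via the connection string; it is not enabled by default.
    const string WithMars =
        "Server=myServer;Database=myDatabase;Integrated Security=true;" +
        "MultipleActiveResultSets=True;";

    // For most applications, turning it off is just removing the keyword
    // (or setting it to False) in the connection string.
    const string WithoutMars =
        "Server=myServer;Database=myDatabase;Integrated Security=true;" +
        "MultipleActiveResultSets=False;";

    static SqlConnection CreateConnection()
    {
        return new SqlConnection(WithoutMars);
    }
}
```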
Many developers build chatty applications that overdo handshaking with SQL Server. One example is a form that generates a query or update every time a field is changed. It's better, if possible, to batch up the form data and send it all at once rather than one field at a time. In some cases, the data being fetched is redundant and would be better cached locally within the application.
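As a hypothetical sketch (the table, column, and parameter names are made up), the whole form can be written back in a single round trip instead of one statement per field:

```csharp
using System.Data.SqlClient;

class BatchedFormSave
{
    // Instead of issuing an UPDATE every time a field changes (one round trip per
    // field), the batched version writes the entire form in a single round trip.
    static void SaveCustomerForm(string connectionString, int customerId,
                                 string name, string phone, string email)
    {
        const string sql =
            "UPDATE dbo.Customers " +
            "SET Name = @Name, Phone = @Phone, Email = @Email " +
            "WHERE CustomerID = @CustomerID;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@Name", name);
            command.Parameters.AddWithValue("@Phone", phone);
            command.Parameters.AddWithValue("@Email", email);
            command.Parameters.AddWithValue("@CustomerID", customerId);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```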
Using these tips, you can better advise developers on how to make sure your SQL Server applications are fully optimized. Or you can take things into your own hands and use NitroAccelerator to gain the benefits of the tips without having to change the application. NitroAccelerator has built-in capabilities that optimize TDS packet size, accelerate MARS applications, and provide for local caching of redundant queries.
I read Michael Bunyard's blog, Why monitor application performance if you don't fix it?, which is entertaining and really supports what we are trying to do at Nitrosphere. It's one thing to monitor all your systems, databases, applications, etc. – and that is important, because you need to know if something is down or going down – but not enough importance is placed on SQL Server remediation or even prevention. He refers to this very entertaining commercial from LifeLock that really drives home what monitoring alone achieves.
Monitors at their core just advise you that there is something wrong or about to be wrong. Some people even ignore a lot of the alerts until someone in an organization complains about, for example, application performance. They reinforce a reactive approach to systems. So rather than taking a big picture view such as “How can I help the business be more productive” or “How can I help the business make more money?”, the viewpoint is “How do I turn off that red light?”.
Sometimes I think organizations lose sight of what their real purpose is and just continue doing what they do because that's what they do – it's called organizational inertia – without considering the big picture. If an application is performing poorly and costing productivity, does the business owner care whether it's a database problem or a network problem? The owner just wants productivity improved. They don't want to be "advised" that they have a problem; they want the problem fixed ASAP and, in fact, would prefer that problems were prevented from happening in the first place.
At Nitrosphere, we do just that – we fix the problem quickly and efficiently – and, for proactive organizations, we prevent the problem from happening in the first place. Thanks for a great blog Michael that drives that message home!
I'm happy to announce NitroAccelerator 6.0! We now can significantly improve performance of "chatty" applications. Chattiness can be caused by applications that require a lot of validations or that involve complex user interfaces that need to contact the server every time data is entered for a field. A very common cause of chattiness is when the developer uses multiple active result sets (MARS) to connect to SQL Server. This option causes extra round trips to SQL Server to retrieve data, which amplifies the effect of your network's latency.
We have had many customers experience this effect without even knowing their application was using MARS. In the past we needed to detect that MARS was in use so we could let the customer know that there was no way to address their performance issue. Now that is no longer a factor and, upon installing NitroAccelerator, they will see performance improvement without having to know about MARS!
I hope you enjoyed the amazing Super Bowl! There’s another game going on with organizations migrating workloads to the cloud. Migrating applications to the cloud is a balancing act between performance and unique cloud-related cost factors including data egress, bandwidth, and compute time. These costs can mount quickly, especially when trying to optimize your applications for performance. Add to this challenge both a mobile and remote workforce where bandwidth and latency are likely to cause lost productivity, which is itself another hidden cost.
I recently interviewed Jason Schlueter from Communicus, where he explained how NitroAccelerator addresses these issues for them and so much more. NitroAccelerator has enabled Communicus to move SQL Server-based business intelligence applications from on-premises to the Microsoft Azure cloud. Now their analysts have access to the data they need from all over the world. Before NitroAccelerator, these analysts were dealing with delays of 40 minutes or more to access this data; NitroAccelerator brought that down to a mere 2 or 3 minutes! This was not only a huge boost to productivity, but it also reduced direct costs through lower data and virtual machine charges. This interview made me feel like Nitrosphere won the Super Bowl!
Watch the interview below!
Back in December we announced that NitroAccelerator now accelerates SQL Server Analysis Services, which was a major breakthrough for us in that we have moved beyond the TDS (Tabular Data Stream) protocol and are operating at the TCP/IP level. Now there's a great post by Jen Underwood of ImpactAnalytix talking about it. This is a significant expansion of our value proposition to businesses both small and large. And, while a lot of what we do sounds technical, the most gratifying thing about our business is that we solve a business problem. Every customer I talk to tells me a variation of how we make their business better. We help companies move information faster so they become more productive and agile in their markets. Stay tuned next week for a real-life story on that…
Wow! I can't believe January is already almost over. We hit the ground running this year, honing our focus as a company, taking care of our existing customers, and onboarding new ones. In 2016, we made major strides in stabilizing NitroAccelerator and adding key new features such as intelligent protocol detection and HyperCache, which significantly speed up client-server applications. We also announced support for SQL Server Analysis Services (SSAS), our first foray into accelerating applications purely at the TCP/IP level. In 2017 you will see new capabilities added to NitroAccelerator that will further broaden our market appeal in the SQL Server space, and you will also see a new product that takes what we learned with SSAS and brings acceleration and security to ALL Windows-based applications.
In 2016 we won back several customers who decided that, indeed, we are mission critical to their operations, and we also expanded our customer base significantly. In the last few months, we have won customers in Europe, Asia, and Africa (as well as the USA) and are now working on opportunities that will put us on EVERY continent in the world.
So, we have been busy! This week we are returning to our bread and butter by putting a focus on SQL Server Replication. Many of our original customers use NitroAccelerator to speed up replication between geographically dispersed locations, and we believe that replication is a good way to co-locate data with the applications to improve performance at those locations. Pinal Dave wrote a nice piece this week about this use for replication, as well as other replication scenarios, in his latest nicely titled blog, When to Use a Sledgehammer and When to Use a Screwdriver.
I’m also looking forward to my first video blog where I interview a Nitrosphere customer about the before and after relating to NitroAccelerator. It’s a great interview and I hope you watch it. I’ll be doing this on a regular basis with customers and other people in our industry.
With that, a belated but excited Happy 2017!
Pinal Dave blogged today on how to speed up SQL Server performance without any code changes. Part of this blog includes an awesome demo he created showing one way to accomplish this with NitroAccelerator. One amazing thing he showed was how NitroAccelerator can even help improve performance and reduce network congestion on a high-speed LAN. We've never really focused on LAN environments since we believed the primary benefit would be for customers with cloud or WAN applications. This is a real game changer! And now we have brought this capability to Business Intelligence and are rolling out further platform support in the next few weeks.
This shows the value of working with industry experts like Pinal, who look at the big picture, combining great technical acumen with direct experience with customers. He uses myriad tools to help customers get the most out of their environment, and now NitroAccelerator is one of those tools!