I am amazed at the number of client-server applications out in the real world. We hear every day about cloud applications and every aspect of monitoring and managing cloud performance, but there is very little chatter about venerable client-server (2-tier) applications. There are literally thousands of client-server applications, ranging from packaged products like Sage 100 and SolidWorks PDM to in-house applications for every kind of business imaginable. Riverbed has sold a lot of WAN optimization appliances (Steelhead), and Citrix has sold a lot of virtual desktop infrastructure (XenApp) to enterprises around the world. These solutions are great if you are in IT and have a lot of time, resources, and money to install and manage hardware and software. They are also great for the vendors since, once installed, it’s a huge effort to move away from their platforms. In both cases, the impact on the end user is an afterthought. I mean, is it okay to let end users suffer bad performance for months or years?
Regarding WAN appliances, there is an article on ComputerWeekly.com that discusses the pros and cons of WAN optimization appliances. Part of the article focuses on why that market is flat. The author says, “WAN optimisation projects can prove costly and complex, and that is why the market has stalled in recent years.” Cost and complexity play to the vendors’ advantage, as I noted above. See also this entertaining article from Pinal Dave about how these appliances can be like using a sledgehammer to do a screwdriver’s job.
Regarding VDI, much of IT saw it as a panacea for addressing application performance. There was a perception that VDI was low cost compared to the WAN appliance options, but an objective analysis, like this article from NetworkComputing.com, says otherwise. The article concludes, “The bottom line is you can justify VDI for a number of reasons–easier support, better security, perhaps better availability or a more appropriate system for task workers–but you can’t justify it on hardware and software savings. The numbers won’t work.”
When we talk about WAN appliances or VDI, what are we forgetting? The end users and the applications they are using. At Nitrosphere, we take the opposite perspective and are laser-focused on improving application performance for end users. We are not selling hardware, a software framework, and services to install, train on, and manage a WAN or VDI platform. Customers don’t have to wait months or years and pay hundreds of thousands of dollars for a solution. We install in seconds, and customers see benefits within minutes. We make application optimization fast, secure, and simple.
We have quite a few customers using NitroAccelerator to improve SQL Server Replication performance. I have seen multiple uses for SQL Server Replication, including:
- Moving data from a centralized publishing database to a remote database so that the data is closer to the application users
- Similar to the above, but using something like SQL Server Express as a local cache on the end-user system to speed up the application
- Moving data from remote locations into a central database
Kendra Little has a good blog entry, Performance Tuning SQL Server Transactional Replication: A Checklist, that addresses several things to consider around replication, including the network. When the database servers are in physically separate locations – whether across town or across continents (as is the case with our customer Dynatrace) – the network becomes the central issue. The farther apart the servers, the more likely latency will become a factor. Additionally, when the servers are in other countries or remote regions, you can’t always control the level of bandwidth, or it might be impractically expensive to upgrade it. Argenis Fernandez’s blog entry on Transactional Replication and WAN links is a good reference for further tuning of replication across the WAN and the perils of using WAN accelerators.
Yet the network connection can still constrain performance and cause unacceptably high replication latency. That’s when NitroAccelerator comes into the picture. According to Pinal Dave, in SQL Server – When to Use a Sledgehammer and When to Use a Screwdriver:
“A common issue when using replication over long distances is that it can fall hopelessly behind. I have seen many companies leverage NitroAccelerator from Nitrosphere to mitigate this issue by attaining near gigabit LAN speeds over these high-latency connections. As a result, they outperform the Always On feature at a fraction of the price.”
Real-time replication across even the slowest connections is a simple reality with NitroAccelerator. Maybe it’s time to start leveraging NitroAccelerator in your environment!
If you are concerned with SQL Server application performance and want to learn how to fully optimize for performance, we have some great new materials for you. We recently recorded a webinar that we sponsored with Pinal Dave from SQLAuthority.com. He provides some excellent tips for tuning SQL Server that include some obscure but simple settings that can result in a dramatic improvement in performance. It’s not all about NitroAccelerator, but his demo at the end is excellent and a powerful reminder of how you can improve performance with the product in just minutes.
We have also just released a tech brief targeted at application developers that gives some good reasons not to immediately discount the venerable 2-tier architecture. For SQL Server and .NET developers, it covers important considerations for making application architecture decisions.
Last, but not least, you may want to refresh your knowledge of how to diagnose and optimize SQL Server network performance. Read our white paper or Pinal’s excellent blog, SQL SERVER – Identifying Application vs Network Performance Issues.
Many people think that applications, databases, and networks are all completely separate areas relating to performance and security. At Nitrosphere, we are committed to breaking down those walls and enabling the owners of the applications to fix performance regardless of who might be “responsible”. The materials I mention above can help you learn more about this approach.
via: SQL Authority
I recently talked with Mark Wright, CTO of Nitrosphere, a company that optimizes SQL Server application performance. In his career, he has seen many “standard” practices that often hurt application performance even though they may make things easier for the SQL Server developer or DBA. He offered several tips, some quite easy to implement, for getting the most out of your SQL Server applications in your current environment. While some of these tips are oriented toward developers of SQL Server applications, DBAs are often held accountable for poor practices that negatively impact application performance.
When using SSIS/DTS with SQL Server, set your packet size to 32K. This setting makes better (though still not optimal) use of TCP, which is a streaming protocol. Many suggest sizing the packet to the physical attributes of your network, but that only matters in rare edge cases, and finding that sweet spot is more trouble than it’s worth since the savings would be minimal. Equally misguided is setting the packet to a smaller size because your application typically sends and receives small amounts of data; SQL Server doesn’t send 4K just because the packet size is set to 4K. It will send fewer bytes if that’s all that is required.
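For reference, here is a minimal ADO.NET sketch of requesting a 32K packet size from the client side. The server and database names are placeholders, and in SSIS the same value goes into the connection manager’s packet size setting:

```csharp
using System.Data.SqlClient;

class PacketSizeExample
{
    static void Main()
    {
        // Placeholder server/database names; 32767 is the maximum packet
        // size SQL Server accepts (the default is 4096).
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "myserver",
            InitialCatalog = "mydb",
            IntegratedSecurity = true,
            PacketSize = 32767
        };

        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            // Result sets on this connection now stream in fewer, fuller TDS packets.
        }
    }
}
```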
If you have a .NET SQL Server application that processes large blocks of data, use .NET 4.5 with asynchronous processing. This .NET facility allows your application to read and process data simultaneously, so it is less likely to block waiting for data from the network.
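Here is a minimal sketch of that pattern using the .NET 4.5 async APIs; the connection string, table, and column positions are hypothetical:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class AsyncReadExample
{
    // ReadAsync returns control while the next rows are still in flight,
    // so the app processes one row while the network delivers the rest.
    static async Task ProcessLargeResultAsync(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            await conn.OpenAsync();
            using (var cmd = new SqlCommand("SELECT Id, Payload FROM dbo.BigTable", conn))
            using (var reader = await cmd.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    // Work on the current row instead of blocking on the socket.
                    var payload = reader.GetString(1);
                }
            }
        }
    }
}
```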
For threaded .NET applications, use connection pooling along with multiple connections to run queries in parallel. Connection pooling streamlines connections for an application that maintains multiple connections or closes and re-opens connections to SQL Server. When applications are designed to be threaded and may run multiple queries to update the UI, those queries should use separate connections. The alternative is MARS (see below).
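As a rough illustration (the connection string and table names are placeholders), two queries issued on separate pooled connections can run concurrently:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class ParallelQueryExample
{
    // Each call opens its own connection; with pooling (on by default in
    // ADO.NET) the open is cheap, and the queries run concurrently instead
    // of being serialized on one connection.
    static async Task<int> CountRowsAsync(string connStr, string table)
    {
        using (var conn = new SqlConnection(connStr))
        {
            await conn.OpenAsync();
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM " + table, conn))
            {
                return (int)await cmd.ExecuteScalarAsync();
            }
        }
    }

    static void Main()
    {
        var connStr = "Data Source=myserver;Initial Catalog=mydb;Integrated Security=true";
        var orders = CountRowsAsync(connStr, "dbo.Orders");
        var customers = CountRowsAsync(connStr, "dbo.Customers");
        Task.WaitAll(orders, customers); // each query rides its own pooled connection
    }
}
```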
Tell your developer not to use Multiple Active Result Sets (MARS). While almost no DBAs know about MARS, it will almost always hurt performance for SQL Server applications that go beyond the LAN. Per Microsoft, MARS simplifies application design with the following new capabilities:
- Applications can have multiple default result sets open and can interleave reading from them.
- Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
While MARS is not on by default, many developers connect this way, either because it was already in another piece of code or because they take Microsoft’s advice above. This is something DBAs should know about since you are accountable for SQL Server performance. For many applications, it’s simply a matter of removing it from the connection string, as sketched below. In cases where the developers truly leverage the MARS capabilities, re-architecting the app would be required.
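For illustration, here is a minimal sketch of that connection-string change (server and database names are placeholders):

```csharp
using System;
using System.Data.SqlClient;

class MarsRemovalExample
{
    static void Main()
    {
        // MARS is off unless the connection string turns it on. If the code
        // never actually interleaves readers on one connection, disabling it
        // (or deleting the key entirely) is all that's needed.
        var withMars = "Data Source=myserver;Initial Catalog=mydb;" +
                       "Integrated Security=true;MultipleActiveResultSets=True";

        var builder = new SqlConnectionStringBuilder(withMars)
        {
            MultipleActiveResultSets = false
        };

        Console.WriteLine(builder.ConnectionString);
    }
}
```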
Many developers build chatty applications that overdo handshaking with SQL Server. One example is a form that generates a query or update every time a field changes. It’s better, if possible, to batch up the form data and send it all at once rather than one field at a time. In some cases the data is redundant and would be better cached locally within the application.
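As a simple sketch (the table and column names are hypothetical), here is the batched alternative: one parameterized statement on save instead of an UPDATE per field:

```csharp
using System.Data.SqlClient;

class BatchedFormSave
{
    // Collect the form's values and send them in a single round trip,
    // rather than issuing an UPDATE every time a field changes.
    static void SaveCustomer(SqlConnection conn, int id, string name, string phone, string email)
    {
        const string sql =
            "UPDATE dbo.Customers " +
            "SET Name = @name, Phone = @phone, Email = @email " +
            "WHERE CustomerId = @id;";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@phone", phone);
            cmd.Parameters.AddWithValue("@email", email);
            cmd.ExecuteNonQuery(); // one round trip instead of three
        }
    }
}
```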
Using these tips, you can better advise developers on how to make sure your SQL Server applications are fully optimized. Or you can take things into your own hands and use NitroAccelerator to gain the benefits of the tips without having to change the application. NitroAccelerator has built-in capabilities that optimize TDS packet size, accelerate MARS applications, and provide for local caching of redundant queries.
I read Michael Bunyard’s blog, Why monitor application performance if you don’t fix it?, which is entertaining and really supports what we are trying to do at Nitrosphere. It’s one thing to monitor all your systems, databases, applications, and so on – that’s important because you need to know if something is down or going down – but not enough importance is placed on SQL Server remediation or even prevention. He refers to a very entertaining commercial from LifeLock that drives home what monitoring alone achieves.
Monitors, at their core, just advise you that something is wrong or about to go wrong. Some people even ignore many of the alerts until someone in the organization complains about, for example, application performance. Monitors reinforce a reactive approach to systems. Rather than taking a big-picture view such as “How can I help the business be more productive?” or “How can I help the business make more money?”, the viewpoint becomes “How do I turn off that red light?”.
Sometimes I think organizations lose sight of their real purpose and just continue doing what they do because that’s what they do – it’s called organizational inertia – without considering the big picture. If an application is performing poorly and costing productivity, does the business owner care whether it’s a database problem or a network problem? The owner just wants productivity improved (as always). They don’t want to be “advised” that they have a problem; they want the problem fixed ASAP, and, in fact, would prefer that problems were prevented from happening in the first place.
At Nitrosphere, we do just that – we fix the problem quickly and efficiently – and, for proactive organizations, we prevent the problem from happening in the first place. Thanks, Michael, for a great blog that drives that message home!
I hope you enjoyed the amazing Super Bowl! There’s another game going on with organizations migrating workloads to the cloud. Migrating applications to the cloud is a balancing act between performance and unique cloud-related cost factors, including data egress, bandwidth, and compute time. These costs can mount quickly, especially when trying to optimize your applications for performance. Add to this challenge a mobile and remote workforce, where bandwidth and latency are likely to cause lost productivity, itself another hidden cost.
I recently interviewed Jason Schlueter from Communicus, who explained how NitroAccelerator addresses these issues for them and much more. NitroAccelerator enabled Communicus to move SQL Server-based business intelligence applications from on-premises to the Microsoft Azure cloud. Now their analysts all over the world have access to the data they need. Before NitroAccelerator, those analysts were dealing with delays of 40 minutes or more to access the data; NitroAccelerator brought that down to a mere 2 or 3 minutes! This was not only a huge boost to productivity, but it also cut direct costs through reduced data and virtual machine charges. This interview made me feel like Nitrosphere won the Super Bowl!
Watch the interview below!
Pinal Dave blogged today on how to speed up SQL Server performance without any code changes. The blog includes an awesome demo he created showing one way to accomplish this with NitroAccelerator. One amazing thing he showed was how NitroAccelerator can even improve performance and reduce network congestion on a high-speed LAN. We had never really focused on LAN environments, since we believed the primary benefit would be for customers with cloud or WAN applications. This is a real game changer! And now we have brought this capability to Business Intelligence and are rolling out further platform support in the next few weeks.
This shows the value of working with industry experts like Pinal, who looks at the big picture by combining great technical acumen with direct customer experience. He uses myriad tools to help customers get the most out of their environments, and now NitroAccelerator is one of those tools!
Last week I talked about garbage-in, garbage-out with respect to data-based decision making. Another symptom I have seen in many organizations is what I call the paralysis of analysis. All that data is so interesting that it can be almost hypnotizing. I’ve had situations where I’ve done exhaustive analysis and provided reams of data only to be asked to go out and get more. I came to realize that no matter how much information I provided, there was never enough to reduce the risk or provide enlightenment to actually move forward in a bold way. What generally resulted were half-funded initiatives where the original goals were kept but with inadequate resources. On the other side is action. Why wait for every piece of data before trying different things to either optimize or grow an initiative? I had a recent personal experience that drove this home for me.
I’m an avid cyclist (actually, I’m avid about cycling, swimming, and exercise in general). About four years ago I bought my first carbon fiber road bike to replace my venerable steel-framed LeMond Zurich. I was sure my performance would improve on the new bike; not only was it outfitted well, but I had it professionally fitted. Over the four years, though, I was never really happy with my speed and hill climbing on it. To be honest, I mostly blamed it on getting older, figuring I just couldn’t expect the performance I used to have. Then last weekend, midway through my ride, I decided to raise my seat height by ¾ of an inch. The result was dramatic. On the way back were the two toughest hills of the ride. I reached the top faster than I ever have and with much less fatigue than I would usually feel. Then I passed 3-4 riders who had passed me on the way out. I couldn’t believe that all this time I had put up with the status quo, and by making one minor adjustment I achieved fantastic results! How does this relate to organizations?
Back to data. Data-centric decision making can be good if you are looking at the right data. However, sometimes it is more effective to make tweaks to an organization, process, or business model and just see what happens. Experimentation can be a good thing and can deliver dramatic results. Another thing I have found is that people affected by a change tend to overstate its negative effects. Of course, a major part of a leader’s job is to assess risk and then initiate action. I believe there are many instances where data analysis is used to inhibit action rather than to spur change.
At Nitrosphere, our goal is to create products that provide dramatic results while keeping the risk of using the product minimal. This means we put a lot of effort into making highly complex processes appear simple. We want people to be able to just try it because there’s nothing to lose if they do and A LOT to gain.
I had lunch the other day with Joel Trammell, CEO of Khorus and serial Austin entrepreneur. Khorus provides a software-based management system for CEOs to optimize the performance and alignment of their organizations. Joel was telling me how his philosophy is to let the people in the functional areas of the organization worry about data. What he tries to get from the functional leaders on down is predictability and accountability for those predictions. CEOs and other company leaders are drowning in data, but how is that helping them know how the company is doing relative to its strategic objectives? Forecasting results is a lot more than just data analysis; it requires a combination of good data, knowledge of the people in the organization, and a comfortable transparency that lets people provide real information regardless of their position in the corporate hierarchy. Khorus provides a software platform for enabling this information flow throughout the company. People input their forecasts (revenue, product delivery, budget, anything that affects strategic objectives), and then they are held accountable, via incentive programs and management, against those projections. It’s common-sense management that gives the CEO visibility into what’s really happening in the company. Data can be used to inform or to mislead – at the end of the day you need people to be accountable for performance.
Regarding the validity of data, I read an article, “Most Scientific Findings Are Wrong or Useless”, about how much of the data scientists use to support their “findings” is flawed or downright wrong. Its conclusion is that “science isn’t self-correcting, it’s self-destructing.” The same thing is happening in many corporations as they gain more integration between systems and thus more access to data. There is nothing wrong with using metrics and data to measure performance, or even to predict it; I call this using trailing-edge and leading-edge indicators to help measure organizational performance. In the end, data is great for accountability and also useful for driving organizational change. But not all data is useful everywhere in an organization. Make sure it’s accessible to the experts (the people doing the work), then hold them accountable for meeting the targets they have provided.
Pinal Dave has a great blog on how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMVs). We provided some sample scripts for pulling data from sys.dm_os_wait_stats to identify whether you have a problem with the client side of the application or with the network. We now have a white paper that shows how to drill down further by checking sys.configurations for the network packet size. The SQL Server network packet size does not actually affect the network layer; rather, it changes the size of the Tabular Data Stream (TDS) packets that are then handed to TCP/IP for transmission.
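As a rough sketch of that kind of check (the connection string is a placeholder; the white paper and Pinal’s blog have the full scripts), the following reads the current packet size from sys.configurations and the ASYNC_NETWORK_IO waits from sys.dm_os_wait_stats:

```csharp
using System;
using System.Data.SqlClient;

class NetworkDiagnostics
{
    static void Main()
    {
        using (var conn = new SqlConnection(
            "Data Source=myserver;Initial Catalog=master;Integrated Security=true"))
        {
            conn.Open();

            // Current TDS packet size (the default is 4096 bytes).
            using (var cmd = new SqlCommand(
                "SELECT value_in_use FROM sys.configurations " +
                "WHERE name = 'network packet size';", conn))
            {
                Console.WriteLine("network packet size: {0}", cmd.ExecuteScalar());
            }

            // High ASYNC_NETWORK_IO waits suggest the client or the network
            // is slow to consume result sets.
            using (var cmd = new SqlCommand(
                "SELECT wait_time_ms, waiting_tasks_count " +
                "FROM sys.dm_os_wait_stats " +
                "WHERE wait_type = 'ASYNC_NETWORK_IO';", conn))
            using (var reader = cmd.ExecuteReader())
            {
                if (reader.Read())
                    Console.WriteLine("ASYNC_NETWORK_IO: {0} ms over {1} waits",
                        reader["wait_time_ms"], reader["waiting_tasks_count"]);
            }
        }
    }
}
```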
The white paper shows how to optimize the TDS packets manually or simply use a tool like NitroAccelerator to make SQL Server faster on the network regardless of the packet settings. To learn more, check out DBA Tactics for Optimizing SQL Server Network Performance which is written by Kenneth Fisher (@SQLStudies) and Robert L. Davis (@SQLSoldier), two SQL Server experts and bloggers who have also tested NitroAccelerator in their own labs.