If you are concerned with SQL Server application performance and want to learn how to fully optimize it, we have some great new material for you. We recently recorded a webinar that we sponsored with Pinal Dave from SQLAuthority.com. He provides some excellent tips for tuning SQL Server, including some obscure but simple settings that can yield a dramatic improvement in performance. It's not all about NitroAccelerator, but his demo at the end is excellent and a powerful reminder of how you can improve performance with the product in just minutes.
We have also just released a tech brief aimed at application developers that gives some good reasons not to immediately discount the venerable 2-tier architecture. It lays out important considerations for SQL Server and .NET developers making decisions about their application architecture.
Last, but not least, you may want to refresh your knowledge on how to diagnose and optimize SQL Server network performance. Read our white paper or read Pinal’s excellent blog, SQL SERVER – Identifying Application vs Network Performance Issues.
Many people think that applications, databases, and networks are all completely separate areas relating to performance and security. At Nitrosphere, we are committed to breaking down those walls and enabling the owners of the applications to fix performance regardless of who might be “responsible”. The materials I mention above can help you learn more about this approach.
via: SQL Authority
I recently talked with Mark Wright, CTO of Nitrosphere, a company that optimizes SQL Server application performance. In his career he has seen many “standard” practices that often hurt application performance even though they may make things easier for the SQL Server developer or DBA. He offered up several tips, some of them quite easy to implement, for getting the most out of your SQL Server applications in your current environment. While some of these tips are oriented towards developers of SQL Server applications, DBAs are often held accountable for poor practices that negatively impact application performance.
When using SSIS/DTS with SQL Server, set your packet size to 32K. This setting makes better (though still not optimal) use of TCP, which is a streaming protocol. Many suggest sizing the packet to the physical attributes of your network, but that only matters in rare edge cases, and finding that sweet spot is more trouble than it's worth because the savings would be minimal. Equally absurd is setting a smaller packet size because your application typically sends and receives small amounts of data: SQL Server doesn't send 4K just because the packet is set to 4K; it sends fewer bytes if that's all that is required.
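As a rough sketch of what this looks like in practice (the server, database, and the exact 32767 value here are my own placeholders, not from the interview), this is how a larger TDS packet can be requested from an ADO.NET connection string; in SSIS the equivalent knob is the PacketSize property on the connection manager:

```csharp
using System;
using System.Data.SqlClient;

class PacketSizeDemo
{
    static void Main()
    {
        // Hypothetical connection details; "Packet Size" asks for ~32K TDS packets
        // instead of the 4096-byte default negotiated by most drivers.
        var connectionString =
            "Server=myServer;Database=myDb;Integrated Security=true;" +
            "Packet Size=32767;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // PacketSize reports the size actually negotiated with the server.
            Console.WriteLine("Negotiated packet size: " + conn.PacketSize + " bytes");
        }
    }
}
```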
If you have a .NET SQL Server application that processes large blocks of data, use .NET 4.5 with asynchronous processing. This .NET facility allows your application to read and process data simultaneously, so it is less likely to block while waiting for data from the network.
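A minimal sketch of the pattern (the table, columns, and connection string are hypothetical): by awaiting ExecuteReaderAsync and ReadAsync, row processing overlaps with the next network read instead of blocking a thread.

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class AsyncReadDemo
{
    public static async Task ProcessLargeTableAsync(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            await conn.OpenAsync();
            using (var cmd = new SqlCommand("SELECT Id, Payload FROM dbo.LargeTable", conn))
            using (var reader = await cmd.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    // CPU work on this row overlaps with fetching the next one.
                    ProcessRow(reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }

    static void ProcessRow(int id, string payload)
    {
        // Application-specific processing goes here.
    }
}
```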
For threaded .NET applications, use connection pooling along with multiple connections to run queries in parallel. Connection pooling streamlines connections for an application that maintains multiple connections or that closes and re-opens connections to SQL Server. When an application is designed to be threaded and may run multiple queries to update the UI, those queries should use separate connections. The alternative is MARS (see below).
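Here is one way this can look (the dashboard queries and table names are invented for illustration): each query gets its own connection, pooling makes the extra Open() calls cheap, and the result sets stream in parallel rather than sharing a single connection.

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

class ParallelQueryDemo
{
    const string ConnStr = "Server=myServer;Database=myDb;Integrated Security=true;";

    public static async Task<(int orders, int customers)> LoadDashboardAsync()
    {
        // Two independent queries on two pooled connections, run side by side.
        Task<int> ordersTask    = CountAsync("SELECT COUNT(*) FROM dbo.Orders");
        Task<int> customersTask = CountAsync("SELECT COUNT(*) FROM dbo.Customers");
        await Task.WhenAll(ordersTask, customersTask);
        return (ordersTask.Result, customersTask.Result);
    }

    static async Task<int> CountAsync(string sql)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            await conn.OpenAsync();           // cheap: the connection comes from the pool
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}
```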
Tell your developer not to use Multiple Active Result Sets (MARS). While many DBAs are unfamiliar with MARS, for SQL Server applications that go beyond the LAN it will almost always adversely affect performance. Per Microsoft, MARS simplifies application design with the following new capabilities:
- Applications can have multiple default result sets open and can interleave reading from them.
- Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
While MARS is not on by default, many developers connect this way, either because it was already in another piece of code or because they take Microsoft's advice above. This is something DBAs should know about since you are accountable for SQL Server performance. For many applications, fixing it is simply a matter of removing the option from the connection string. In cases where the developers truly leverage the MARS capabilities, re-architecting the app would be required.
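To make the connection-string point concrete (the server and database names below are placeholders): MARS is only enabled when the connection string asks for it, so if the application does not genuinely interleave readers on one connection, dropping the keyword is often the whole fix.

```csharp
class MarsConnectionStrings
{
    // With MARS enabled: can hurt performance for applications that go beyond the LAN.
    public const string WithMars =
        "Server=myServer;Database=myDb;Integrated Security=true;" +
        "MultipleActiveResultSets=True;";

    // Without MARS: the ADO.NET default; no keyword is needed.
    public const string WithoutMars =
        "Server=myServer;Database=myDb;Integrated Security=true;";
}
```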
Many developers build chatty applications that overdo handshaking with SQL Server. One example is forms that generate a query or update every time a field is changed. It's better, if possible, to batch up the form data and send it all at once rather than one field at a time. In some cases the data is redundant and would be better cached locally within the application.
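As a sketch of the batching idea (the table, columns, and method here are hypothetical), the form collects its values and issues a single parameterized UPDATE on save instead of one statement per field:

```csharp
using System.Data.SqlClient;

class BatchedFormSave
{
    public static void SaveCustomer(string connStr, int id, string name, string phone, string email)
    {
        // One round trip with all form fields, instead of one UPDATE per field.
        const string sql =
            "UPDATE dbo.Customers " +
            "SET Name = @Name, Phone = @Phone, Email = @Email " +
            "WHERE CustomerId = @Id;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@Id", id);
            cmd.Parameters.AddWithValue("@Name", name);
            cmd.Parameters.AddWithValue("@Phone", phone);
            cmd.Parameters.AddWithValue("@Email", email);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```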
Using these tips, you can better advise developers on how to make sure your SQL Server applications are fully optimized. Or you can take things into your own hands and use NitroAccelerator to gain the benefits of the tips without having to change the application. NitroAccelerator has built-in capabilities that optimize TDS packet size, accelerate MARS applications, and provide for local caching of redundant queries.
I read Michael Bunyard's blog, Why monitor application performance if you don't fix it?, which is entertaining and really supports what we are trying to do at Nitrosphere. It's one thing to monitor all your systems, databases, applications, etc. – this is important because you need to know if something is down or going down – but not enough importance is placed on SQL Server remediation or even prevention. He refers to a LifeLock commercial that really drives home what monitoring alone achieves.
Monitors at their core just advise you that there is something wrong or about to be wrong. Some people even ignore a lot of the alerts until someone in an organization complains about, for example, application performance. They reinforce a reactive approach to systems. So rather than taking a big picture view such as “How can I help the business be more productive” or “How can I help the business make more money?”, the viewpoint is “How do I turn off that red light?”.
Sometimes I think organizations lose sight of what their real purpose is and just continue doing what they do because that's what they do – it's called organizational inertia – without considering the big picture. If an application is performing poorly and costing productivity, does the business owner care whether it's a database problem or a network problem? The owner just wants productivity restored. They don't want to be “advised” that they have a problem; they want the problem fixed ASAP and, in fact, would prefer that problems were prevented from happening in the first place.
At Nitrosphere, we do just that – we fix the problem quickly and efficiently – and, for proactive organizations, we prevent the problem from happening in the first place. Thanks, Michael, for a great blog that drives that message home!
I hope you enjoyed the amazing Super Bowl! There’s another game going on with organizations migrating workloads to the cloud. Migrating applications to the cloud is a balancing act between performance and unique cloud-related cost factors including data egress, bandwidth, and compute time. These costs can mount quickly, especially when trying to optimize your applications for performance. Add to this challenge both a mobile and remote workforce where bandwidth and latency are likely to cause lost productivity, which is itself another hidden cost.
I recently interviewed Jason Schlueter from Communicus, where he explained how NitroAccelerator addresses these issues for them and so much more. NitroAccelerator has enabled Communicus to move SQL Server-based business intelligence applications from on-premises to the Microsoft Azure cloud. Now their analysts all over the world have access to the data they need. Without NitroAccelerator, those analysts were dealing with delays of 40 minutes or more to access this data. NitroAccelerator brought this down to a mere 2 or 3 minutes! This was not only a huge boost to productivity, but it also reduced direct costs through lower data and virtual machine charges. This interview made me feel like Nitrosphere won the Super Bowl!
Watch the interview below!
Wow! I can't believe January is already almost over. We hit the ground running this year, honing our focus as a company, taking care of our existing customers, and onboarding new ones. In 2016, we made major inroads in stabilizing NitroAccelerator and adding key new features such as intelligent protocol detection and HyperCache, which significantly speed up client-server applications. We also announced support for SQL Server Analysis Services (SSAS), our first foray into accelerating applications purely at the TCP/IP level. In 2017 you will see new capabilities in NitroAccelerator that further broaden our appeal in the SQL Server space, as well as a new product that takes what we learned with SSAS and brings acceleration and security to ALL Windows-based applications.
In 2016 we won back several customers who decided that, indeed, we are mission-critical to their operations, and we also expanded our customer base significantly. In the last few months we have won customers in Europe, Asia, and Africa (as well as the USA) and are now pursuing opportunities that will put us on EVERY continent.
So, we have been busy! This week we return to our bread and butter by putting the focus on SQL Server replication. Many of our original customers use NitroAccelerator to speed up replication between geographically dispersed locations, and we believe replication is a good way to co-locate data with the applications to improve performance at those locations. Pinal Dave wrote a nice piece this week about this use for replication, as well as other replication scenarios, in his latest, nicely titled blog, When to Use a Sledgehammer and When to use a Screwdriver.
I’m also looking forward to my first video blog where I interview a Nitrosphere customer about the before and after relating to NitroAccelerator. It’s a great interview and I hope you watch it. I’ll be doing this on a regular basis with customers and other people in our industry.
With that, a belated but enthusiastic Happy 2017!
Pinal Dave blogged today on how to speed up SQL Server performance without any code changes. Part of this blog includes an awesome demo he created showing one way to accomplish this with NitroAccelerator. One amazing thing he showed was how NitroAccelerator can even help improve performance and reduce network congestion on a high-speed LAN. We had never really focused on LAN environments since we believed the primary benefit would be for customers with cloud or WAN applications. This is a real game changer! And now we have brought this capability to Business Intelligence and are rolling out further platform support in the next few weeks.
This shows the value of working with industry experts like Pinal, who look at the big picture by combining great technical acumen with direct customer experience. He uses myriad tools to help customers get the most out of their environments, and now NitroAccelerator is one of those tools!
Last week I talked about garbage-in, garbage-out with respect to data-based decision making. Another symptom I have seen in many organizations is what I call the paralysis of analysis. All that data is so interesting that it can be almost hypnotizing. I’ve had situations where I’ve done exhaustive analysis and provided reams of data only to be asked to go out and get more. I came to realize that no matter how much information I provided, there was never enough to reduce the risk or provide enlightenment to actually move forward in a bold way. What generally resulted were half-funded initiatives where the original goals were kept but with inadequate resources. On the other side is action. Why wait for every piece of data before trying different things to either optimize or grow an initiative? I had a recent personal experience that drove this home for me.
I'm an avid cyclist (actually, I'm avid about cycling, swimming, and general exercise). About four years ago I bought my first carbon fiber road bike to replace my venerable LeMond Zurich steel-framed bike. I was sure my performance would improve on the new bike: not only was it outfitted well, but I had it professionally fitted. Over the four years, though, I was never really happy with my speed and hill climbing on this bike. To be honest, I mostly blamed it on getting older and figured I just couldn't expect the performance I used to have. Then last weekend, midway through my ride, I decided to raise my seat height by ¾ of an inch. The result was dramatic. On the way back were the two toughest hills of the ride. I reached the top far faster than I ever have and with much less fatigue than I would usually feel. Then I passed 3-4 riders who had passed me on the way out. I couldn't believe that all this time I had put up with the status quo, and that by making a minor adjustment I achieved fantastic results! How does this relate to organizations?
Back to data. Data-centric decision making can be good if you are looking at the right data. However, sometimes it is more effective to make tweaks to an organization, process, or business model and just see what happens. Experimentation can be a good thing and can deliver dramatic results. Another thing I have found is that people who are affected by change tend to overstate its negative effects. Of course, for the leader, a major part of the job is to assess risk and then to initiate action. I believe there are many instances in which data analysis is used to inhibit action rather than to spur change.
At Nitrosphere, our goal is to create products that provide dramatic results while keeping the risk of using the product minimal. This means we put a lot of effort into making highly complex processes appear simple. We want people to be able to just try it because there’s nothing to lose if they do and A LOT to gain.
Pinal Dave has a great blog on how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMVs). We provided some sample scripts that query sys.dm_os_wait_stats to identify whether the problem lies with the client side of the application or with the network. We now have a white paper that shows how to drill down further by checking sys.configurations for the network packet size. The SQL Server network packet size does not actually affect the network layer; it changes the size of the Tabular Data Stream (TDS) packets, which are then handed to TCP/IP for transmission.
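A minimal sketch of those two checks from client code (the connection string is a placeholder, and reading the DMV requires VIEW SERVER STATE permission): it pulls the ASYNC_NETWORK_IO wait total, a common sign that the client or the network is the bottleneck, along with the configured network packet size.

```csharp
using System;
using System.Data.SqlClient;

class NetworkDiagnostics
{
    public static void Report(string connStr)
    {
        // Two result sets: the ASYNC_NETWORK_IO wait total and the configured packet size.
        const string sql =
            "SELECT wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type = 'ASYNC_NETWORK_IO'; " +
            "SELECT value_in_use FROM sys.configurations WHERE name = 'network packet size (B)';";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                reader.Read();
                Console.WriteLine("ASYNC_NETWORK_IO wait time: " + reader[0] + " ms");
                reader.NextResult();
                reader.Read();
                Console.WriteLine("Configured network packet size: " + reader[0] + " bytes");
            }
        }
    }
}
```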
The white paper shows how to optimize the TDS packets manually or simply use a tool like NitroAccelerator to make SQL Server faster on the network regardless of the packet settings. To learn more, check out DBA Tactics for Optimizing SQL Server Network Performance which is written by Kenneth Fisher (@SQLStudies) and Robert L. Davis (@SQLSoldier), two SQL Server experts and bloggers who have also tested NitroAccelerator in their own labs.
I read an interesting article about Google using their DeepMind AI system to improve their power usage efficiency by 15% – which adds up to hundreds of millions of dollars of savings. Of course, DeepMind has been a big investment for Google, and finding areas where it can drive efficiency leads to immediate payback – not necessarily covering the entire investment, but savings that add up over time to those hundreds of millions. Like any good organization, I'm sure Google started with metrics so they had a handle not just on what the costs were, but on where the biggest cost impacts were occurring. As the saying goes, “you can't change what you don't measure”. However, a lot of organizations get stuck in metrics mode and never get around to the work of actually optimizing – they are always measuring but never changing. Other organizations continue on inertia and keep tweaking applications and infrastructure just because they have teams dedicated to those functions. So developers optimize their applications, DBAs the database, and network admins the network. They use familiar tools and spend countless hours engaged in the process because that's what they do. In many organizations, the combination of people and tools adds up to millions of dollars dedicated to this process.
Who is doing the cost-benefit analysis across these organizations to ensure there is a payback for all this activity? Yes, sometimes you need to change processes. Sometimes you need to change organizations. Sometimes you need to change tools. And sometimes you just need to look at solving an immediate problem. You want your organizations focused on big problems that truly require the attention of domain experts. For example, if your customers or end-users are complaining that poor application performance is hurting their productivity, are you balancing the cost of lost productivity against the combined cost of the people and tools addressing the issue? Maybe it's important to address the lost productivity now while also looking at how you can improve the supporting infrastructure over time. The longer it takes to address the issue, the higher the combined cost of lost productivity and the time the supporting organizations spend analyzing and working on solutions. And, as with Google and their power costs, those combined costs can add up to millions of dollars. Unfortunately, organizational inertia can keep sucking those dollars forever.
It is important to have accountability and aligned incentives throughout an organization based on driving efficiency for the company. It should not just start and end with the CEO, CFO, CIO, but should extend to every level. The DBA or the Network Admin can have a huge impact on saving money for their companies by taking a new perspective on their organization and looking at the cost-benefits of how they address pressing issues for their customers.
Like many people, I've been watching the Rio Olympics every evening this week. It's amazing to see athletes whose lifetime of training often culminates in just this one experience. Then there are others, like Michael Phelps, who have dominated over several Olympics. I found the men's and women's relays particularly interesting and enjoyable, as each country puts together a team of its best swimmers to compete as a group. A given team may have one particularly dominant swimmer but still not win if the other team members are unable to at least keep up with the competition. The women's 4×200 Freestyle Relay yesterday demonstrated this: Sweden took an early and commanding lead in the first leg, then lost it to the Aussies, who had their top swimmers in the second and third legs. The Americans managed to stay within striking distance of the Aussies and were behind by only one second when Katie Ledecky took over, and the Americans won by two seconds.
It is interesting how the outcome is influenced by the weakest link in the relay. It's like a network where you may have consistently great performance in the data center, but those in remote offices are experiencing slow or inconsistent performance. This can impact productivity and even result in lost business (i.e., gold medals). Assuring consistently great performance across all links in the chain wins the gold!