I hope you enjoyed the amazing Super Bowl! There’s another game going on as organizations migrate workloads to the cloud. Migrating applications to the cloud is a balancing act between performance and unique cloud-related cost factors, including data egress, bandwidth, and compute time. These costs can mount quickly, especially when you are trying to optimize your applications for performance. Add to this challenge a mobile and remote workforce, where bandwidth and latency are likely to cause lost productivity, itself another hidden cost.
I recently interviewed Jason Schlueter from Communicus, who explained how NitroAccelerator addresses these issues for them and much more. NitroAccelerator has enabled Communicus to move SQL Server-based business intelligence applications from on-premises to the Microsoft Azure cloud. Now their analysts have access to the data they need from anywhere in the world. Without NitroAccelerator, these analysts were dealing with delays of 40 minutes or more to access this data. NitroAccelerator brought this delay down to a mere 2 or 3 minutes! This was not only a huge boost to productivity, but it also reduced direct costs through lower data and virtual machine charges. This interview made me feel like Nitrosphere won the Super Bowl!
Watch the interview below!
Pinal Dave has a great blog on how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMVs). We provided some sample scripts that query sys.dm_os_wait_stats to identify whether the problem lies with the client side of the application or with the network. We now have a white paper that shows how to drill down further by checking sys.configurations for the SQL Server network packet size. The network packet size setting does not actually affect the network layer; it changes the size of the Tabular Data Stream (TDS) packets that are then handed to TCP/IP for transmission.
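As a flavor of the approach (this is an illustrative sketch, not the exact script from the blog or white paper), a high ASYNC_NETWORK_IO wait count suggests SQL Server is waiting on the client or the network to consume result sets:

```sql
-- Illustrative example: inspect client/network-related waits.
-- High waiting_tasks_count and wait_time_ms for ASYNC_NETWORK_IO
-- indicate SQL Server is waiting on the client or the network.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = N'ASYNC_NETWORK_IO';
```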
The white paper shows how to optimize the TDS packet size manually, or you can simply use a tool like NitroAccelerator to make SQL Server faster on the network regardless of the packet settings. To learn more, check out DBA Tactics for Optimizing SQL Server Network Performance, written by Kenneth Fisher (@SQLStudies) and Robert L. Davis (@SQLSoldier), two SQL Server experts and bloggers who have also tested NitroAccelerator in their own labs.
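For reference (a hedged sketch, not the white paper's exact steps), the current network packet size can be read from sys.configurations and changed with sp_configure. Note that larger TDS packets consume more server memory per connection, so any change should be tested before production use:

```sql
-- Check the current network packet size (default is 4096 bytes):
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = N'network packet size (B)';

-- Change it (an advanced option), e.g. to 8192 bytes.
-- Test carefully: larger packets use more memory per connection.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'network packet size', 8192;
RECONFIGURE;
```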
I read an interesting article about Google using their DeepMind AI system to improve their power usage efficiency by 15% – which adds up to hundreds of millions of dollars of savings. Of course, DeepMind has been a big investment for Google, and finding areas for it to drive efficiency leads to immediate payback – not necessarily covering the entire investment, but savings that accumulate over time to those hundreds of millions. Like any good organization, I’m sure that Google started with metrics so they had a handle not just on what the costs were, but on where the biggest cost impacts were occurring. As the saying goes, “you can’t change what you don’t measure.” However, a lot of organizations get stuck in metrics mode and never get around to the work of actually optimizing – they are always measuring but never changing. Other organizations run on inertia and keep tweaking applications and infrastructure simply because they have teams dedicated to those functions. So developers optimize their applications, DBAs – the database, and network admins – the network. They use familiar tools and spend countless hours engaged in the process because that’s what they do. In many organizations, the combination of people and tools adds up to millions of dollars dedicated to this process.
Who is doing the cost-benefit analysis across these organizations to ensure there is a payback for all this activity? Yes, sometimes you need to change processes. Sometimes you need to change organizations. Sometimes you need to change tools. And sometimes you just need to look at solving an immediate problem. You want your organizations focused on big problems that truly require the attention of domain experts. For example, if your customers or end-users are complaining about poor application performance impacting their productivity, are you weighing the cost of that lost productivity against the combined cost of the people and tools needed to address the issue? Maybe it’s important to address the lost productivity now while also looking at how you can improve the supporting infrastructure over time. The longer it takes to address the issue, the more the combined costs grow: the lost productivity plus the time and money the supporting organizations spend analyzing and working on solutions. And, as with Google and their power costs, those combined costs can add up to millions of dollars. Unfortunately, organizational inertia can keep sucking those dollars forever.
It is important to have accountability and aligned incentives throughout an organization based on driving efficiency for the company. It should not just start and end with the CEO, CFO, CIO, but should extend to every level. The DBA or the Network Admin can have a huge impact on saving money for their companies by taking a new perspective on their organization and looking at the cost-benefits of how they address pressing issues for their customers.
Like many people, I’ve been watching the Rio Olympics every evening this week. It’s amazing to see these athletes, whose lifetime of training often culminates in just this one experience. Then there are others, like Michael Phelps, who have dominated across several Olympics. I found the men’s and women’s relays particularly interesting and enjoyable, as each country puts together a team of its best swimmers to compete as a group. A given team may have one particularly dominant swimmer, but it still may not win if the other team members are unable to at least keep up with the competition. This was demonstrated in the women’s 4×200 Freestyle Relay yesterday: Sweden took an early and commanding lead in the first leg, then lost the lead to the Aussies, who had their top swimmers in the second and third legs. The Americans managed to stay within striking distance of the Aussies, trailing by one second when Katie Ledecky took over, and the Americans went on to win by two seconds.
It is interesting how the outcome is influenced by the weakest link in the relay. It’s like a network where you may have consistently great performance in the data center, but those in remote offices are experiencing slow or inconsistent performance. This can impact productivity and even result in lost business (i.e., lost gold medals). Ensuring consistently great performance across all links in the chain wins the gold!
There is a good summary of SQL Server 2016 features in this TechCrunch article. A key highlight is the vast improvement in performance: “queries should execute 25 percent faster on the same hardware. Once you start making use of new features like SQL Server 2016’s in-memory updatable column stores, those speed-ups could hit 100x for some types of queries.” This kind of performance improvement is incredible and sets SQL Server apart from its competitors. However, as I discussed in my blog last week about network congestion, organizations with already congested networks, or with branches and users on low or variable bandwidth connections, may not realize the full benefit of these performance gains. This is akin to the great gains in CPU power made before the advent of SSD storage, when hard disk technology could not supply data fast enough for the CPU to consume, so techniques like on-disk caching were used. I believe that improvements like this will increase the need for network performance acceleration and other techniques that enable end-users to fully realize the benefits of the upcoming SQL Server 2016 release.
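To give a sense of the feature mentioned in the quote (a minimal sketch with a hypothetical table, not from the article), an updatable clustered columnstore index lets a table serve fast analytic scans while still accepting inserts and updates:

```sql
-- Hypothetical fact table used purely for illustration.
CREATE TABLE dbo.SalesFact (
    SaleID   BIGINT        NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
);

-- Store the table column-wise; the index remains updatable,
-- so the table supports both analytics and ongoing writes.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_SalesFact ON dbo.SalesFact;
```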
I read an interesting article this week about Sonoco, a large multinational packaging company, using containers to save on licensing. While the prime subject was interesting in itself, this quote from Nancy Lawson, the primary SQL Server DBA, caught my eye: “My main concern is that we have enough issues with network performance within our own data centers.” While nearly everyone is looking for ways to create a fatter pipe between data centers using tools like WAN accelerators, these techniques don’t reduce overall traffic within the network. So organizations layer on complex QoS and application prioritization algorithms to attempt to ensure reasonable performance for critical applications. This in turn leads to adding discovery tools, monitoring systems, and more and more complexity – not just in the application infrastructure, but in the management infrastructure for ensuring performance and availability. This complexity turns every infrastructure management decision into a strategic decision, not just because of the complexity itself, but because these become six- and seven-figure spending decisions. If you can reduce overall network traffic, you get several benefits, including:
- better application performance,
- reduced complexity of the application and management infrastructure,
- lower costs of bandwidth and infrastructure.
The only way to address this is at the endpoints (the sum of all the infrastructure supporting the applications: servers, desktops, laptops, etc.). Endpoints are where network traffic begins and ends. If you don’t address network performance at the root, the endpoint, you are forced to add enormous complexity to your environment. Implementing acceleration at the endpoint improves application network performance while also reducing complexity and overall congestion across the entire network (LAN, WAN, cloud).
Fred Johannessen, CEO