Pinal Dave has a great blog post on how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMVs). We provided some sample scripts for pulling data from sys.dm_os_wait_stats to identify whether the problem lies with the client side of the application or with the network. We now have a white paper that shows how to drill down further by checking SQL Server sys.configurations for the network packet size. The SQL Server network packet size does not actually affect the network layer; it changes the size of the Tabular Data Stream (TDS) packets, which are then handed to TCP/IP for transmission.
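For reference, here is a minimal sketch of the kind of checks described above. It is not the exact script from the blog post or the white paper, just an illustration of the two system views involved: high ASYNC_NETWORK_IO waits generally point at the client or the network rather than the database engine, and sys.configurations shows the TDS packet size currently in effect.

```sql
-- Network-related waits: large values here suggest the client application
-- or the network is the bottleneck, not SQL Server itself.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = N'ASYNC_NETWORK_IO';

-- Current TDS packet size setting (the default is 4096 bytes).
SELECT name, value, value_in_use
FROM   sys.configurations
WHERE  name = N'network packet size (B)';
```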
The white paper shows how to optimize the TDS packets manually, or you can simply use a tool like NitroAccelerator to make SQL Server faster on the network regardless of the packet settings. To learn more, check out DBA Tactics for Optimizing SQL Server Network Performance, written by Kenneth Fisher (@SQLStudies) and Robert L. Davis (@SQLSoldier), two SQL Server experts and bloggers who have also tested NitroAccelerator in their own labs.
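As an illustration only (the white paper covers the trade-offs in detail, and you should test before touching production), the manual approach looks roughly like this:

```sql
-- 'network packet size (B)' is an advanced option, so advanced options
-- must be visible before it can be changed.
EXEC sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Raise the server-wide TDS packet size (default 4096, maximum 32767 bytes).
-- Larger packets are not automatically better for every workload or network.
EXEC sp_configure N'network packet size (B)', 8192;
RECONFIGURE;
```

Clients can also override the size for a single connection, for example with the "Packet Size" keyword in an ADO.NET or ODBC connection string, rather than changing the server-wide default.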
I read an interesting article about Google using their DeepMind AI system to improve their power usage efficiency by 15% – which adds up to hundreds of millions of dollars of savings. Of course, DeepMind has been a big investment for Google, and finding areas where it can drive efficiency leads to immediate payback – not necessarily covering the entire investment, but savings that add up over time to those hundreds of millions. Like any good organization, I’m sure that Google started with metrics so they had a handle on not just what the costs were, but where the biggest cost impacts were occurring. As the saying goes, “you can’t change what you don’t measure”. However, a lot of organizations get stuck in metrics mode and never get around to the work of actually optimizing – they are always measuring but never changing. Other organizations continue on inertia and keep tweaking applications and infrastructure just because they have teams dedicated to those functions. So developers optimize their applications, DBAs – the database, and network admins – the network. They use familiar tools and spend countless hours engaged in the process because that’s what they do. In many organizations, the combination of people and tools adds up to millions of dollars dedicated to this process.
Who is doing the cost-benefit analysis across these organizations to ensure there is a payback for all this activity? Yes, sometimes you need to change processes. Sometimes you need to change organizations. Sometimes you need to change tools. And sometimes you just need to look at solving an immediate problem. You want your organizations focused on big problems that truly require the attention of domain experts. For example, if your customers/end-users are complaining about poor application performance impacting their productivity, are you balancing the cost of that lost productivity against the combined cost of the people and tools addressing the issue? Maybe it’s important to address the lost productivity now while also looking at how you can improve the supporting infrastructure over time. The longer it takes to address the issue, the more the combined costs of lost productivity and the time spent by the supporting organizations analyzing and working on solutions will grow. And, as with Google and their power costs, those combined costs can add up to millions of dollars. Unfortunately, organizational inertia can keep sucking those dollars forever.
It is important to have accountability and aligned incentives throughout an organization, all based on driving efficiency for the company. That should not start and end with the CEO, CFO, and CIO; it should extend to every level. The DBA or the network admin can have a huge impact on saving money for their companies by taking a new perspective on their organization and looking at the cost-benefit trade-offs of how they address pressing issues for their customers.
Like many people, I’ve been watching the Rio Olympics every evening this week. It’s amazing to see athletes whose lifetime of training often culminates in just this one experience. Then there are others, like Michael Phelps, who have dominated over several Olympics. I found the men’s and women’s relays particularly interesting and enjoyable, as each country puts together a team of its best swimmers to compete as a group. A given team may have one particularly dominant swimmer, but it still may not win if the other team members are unable to at least keep up with the competition. The women’s 4×200 Freestyle Relay yesterday demonstrated this: Sweden took an early and commanding lead in the first leg, then lost it to the Aussies, who had their top swimmers in the second and third legs. The Americans managed to stay within striking distance of the Aussies, were behind by one second when Katie Ledecky took over, and went on to win by two seconds.
It is interesting how the outcome is influenced by the weakest link in the relay. It’s like a network where performance in the data center is consistently great, but users in remote offices experience slow or inconsistent performance. That can hurt productivity and even result in lost business (i.e., gold medals). Assuring consistently great performance across all links in the chain is what wins the gold!