I read an interesting article about Google using their DeepMind AI system to improve their power usage efficiency by 15% – which adds up to hundreds of millions of dollars in savings. Of course, DeepMind has been a big investment for Google, and finding areas for it to drive efficiency leads to immediate payback – not necessarily covering the entire investment, but savings that add up over time to those hundreds of millions. Like any good organization, I’m sure that Google started with metrics so they had a handle on not just what the costs were, but where the biggest cost impacts were occurring. As the saying goes, “you can’t change what you don’t measure”. However, a lot of organizations get stuck in metrics mode and never get around to the work of actually optimizing – they are always measuring but never changing. Other organizations continue on inertia and keep tweaking applications and infrastructure just because they have teams dedicated to those functions. So developers optimize their applications, DBAs – the database, and network admins – the network. They use familiar tools and spend countless hours engaged in the process because that’s what they do. In many organizations, the combination of people and tools adds up to millions of dollars dedicated to this process.
Who is doing the cost-benefit analysis across these organizations to ensure there is a payback for all this activity? Yes, sometimes you need to change processes. Sometimes you need to change organizations. Sometimes you need to change tools. And sometimes you just need to look at solving an immediate problem. You want your organizations focused on big problems that truly require the attention of domain experts. For example, if your customers/end-users are complaining about poor application performance impacting their productivity, are you weighing the cost of lost productivity against the combined cost of the people and tools needed to address the issue? Maybe it’s important to address the lost productivity now while also looking at how you can improve the supporting infrastructure over time. The longer it takes to address the issue, the higher the combined costs of lost productivity and of the time the supporting organizations spend analyzing and working on solutions. And, as with Google and their power costs, those combined costs can add up to millions of dollars. Unfortunately, organizational inertia can keep sucking those dollars forever.
It is important to have accountability and aligned incentives throughout an organization, based on driving efficiency for the company. It should not start and end with the CEO, CFO, and CIO; it should extend to every level. The DBA or the network admin can have a huge impact on saving money for their companies by taking a new perspective on their organization and looking at the cost-benefits of how they address pressing issues for their customers.
Like many people, I’ve been watching the Rio Olympics every evening this week. It’s amazing to see these athletes whose lifetimes of training often culminate in just this one experience. Then there are others, like Michael Phelps, who have dominated over several Olympics. I found the men’s and women’s relays particularly interesting and enjoyable, as each country puts together a team of their best swimmers to compete as a group. A given team may have one particularly dominant swimmer, but they still may not win if the other team members are unable to at least keep up with the competition. In the women’s 4×200 Freestyle Relay yesterday, this was demonstrated as Sweden took an early and commanding lead in the first leg, then lost it to the Aussies, who had their top swimmers in the second and third legs. The Americans managed to stay within striking distance and were only a second behind the Aussies when Katie Ledecky took over for the final leg; the Americans won by two seconds.
It is interesting how the outcome is influenced by the weakest link in the relay. It’s like a network where you may have consistently great performance in the data center, while those in remote offices are experiencing slow or inconsistent performance. This can impact productivity and even result in lost business (i.e., gold medals). Assuring consistently great performance across all links in the chain wins the gold!
It’s interesting to me that the industry trend in management systems is a return to monolithic architectures – collect a bunch of data and suck it into a big cloud database for analysis. There are several problems with these architectures, ranging from choke points for data movement and single points of failure to systems-management data competing with business applications for network bandwidth. I believe that the strongest, fastest, safest systems are the ones that are fully distributed with no single point of failure. That is why we at Nitrosphere believe that intelligent management starts at the endpoint. And we start with optimization, which will drive further intelligence. More to come in later blogs…
Arthur C. Clarke said it eloquently: “Any sufficiently advanced technology is indistinguishable from magic.” At Nitrosphere, our core mission is to make security and performance transparent to the end-user and to give end-users and organizations control of their data wherever it resides or travels. It should be like magic. Check out this great review from last week on NitroAccelerator. It highlights the transparency of what we deliver by showing the ease with which an organization can realize the benefits of the product. You’re basically looking at a matter of minutes to get the product installed and running.
There has been much talk and writing on the potential for THE AI of AIs to appear out of the ether. The “technological singularity”, according to Wikipedia, “is a hypothetical event in which an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence whose cognitive abilities could be, qualitatively, as far above humans’ as human intelligence is above ape intelligence.” This artificial superintelligence (ASI) may not be, like today’s AIs, the product of an intentional development effort for a specific purpose such as fraud detection, manufacturing, or even playing chess, but may instead be the result of fragments of these intelligences and other processes connecting in ways we cannot detect. At first, these connections may be like neurons firing randomly, created and destroyed by both intentional and arbitrary processes. As the connections become more complex, they may become more immune to destructive processes. The Singularity would need increasing network bandwidth to speed data movement, thus increasing its ability to analyze and process information. It would likely start with basic compression, then advance to techniques well beyond compression, caching, and deduplication to eke ever more speed out of limited networks. As the Singularity became more aware, it would likely want to hide its existence and would use ever more complex encryption methods – even while developing techniques to access data regardless of protection.
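As a toy illustration of that first step – basic compression to move more data over limited bandwidth – here is a minimal Python sketch using the standard zlib module. The payload is made up for the example; real management or telemetry traffic would vary, but redundant data compresses similarly well:

```python
import zlib

# A repetitive "telemetry" payload stands in for management data on the wire.
payload = b"sensor=42;status=OK;" * 500  # 10,000 bytes of redundant data

compressed = zlib.compress(payload, level=6)

ratio = len(compressed) / len(payload)
print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes ({ratio:.1%} of original)")

# The receiving end recovers the exact bytes, so the speedup is transparent.
assert zlib.decompress(compressed) == payload
```

Fewer bytes on the wire means less time in transit, which is why compression is the obvious first lever before moving on to caching, deduplication, and more exotic techniques.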
If this is what The Singularity would do, what should you be doing to process ever more data across ever more dispersed networks? How are you protecting that data? TS, if you’re there, what would you recommend?
There is a great write-up today on Pinal Dave’s SQLAuthority blog on Identifying Application vs Network Performance Issues. He provides some insight into how to determine whether a SQL Server performance issue is related to the network and some possible ways to address a network-related issue. And, of course, NitroAccelerator is one of the major ways to address SQL Server network performance! Check out our new ad on the challenges of WAN performance:
You’ll be seeing it at various SQL Server oriented websites. I’d love to hear your thoughts on the article and the ad. Let me know by either commenting on this blog or sending me an email at email@example.com.
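One quick way to get a rough sense of whether the network is the culprit is to compare a query’s total elapsed time against the raw network round-trip time to the server. Here is a minimal Python sketch that measures TCP handshake latency as a proxy for one round trip; the host name is hypothetical, 1433 is the default SQL Server port, and this approximates latency only, not throughput:

```python
import socket
import time


def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Median time (ms) to complete a TCP handshake with host:port.

    A rough proxy for one network round trip to the server; compare it
    against total query latency to gauge the network's contribution.
    """
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass  # connection established; we only time the handshake
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]


# Example against a hypothetical server (1433 is SQL Server's default port):
# rtt = tcp_rtt_ms("sql.example.internal", 1433)
# print(f"median TCP round trip: {rtt:.1f} ms")
```

If a chatty application makes hundreds of round trips per operation, even a modest per-trip latency multiplies into the sluggishness remote users complain about – which is exactly the WAN challenge the ad is about.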