In the past I’ve talked about the paralysis of analysis. Today I have news that Nitrosphere now accelerates analysis – SQL Server Analysis Services (SSAS)! SSAS is Microsoft’s Business Intelligence platform. BI reporting can generate huge result sets at the SSAS client and cause data analysts to spend undue time waiting for data to be pulled to their systems for analysis and reporting. Now with NitroAccelerator the entire flow of data is accelerated – from SQL Server databases into the SSAS Server, then on to the data analysts running SSAS clients. For the customers who tested the product, data analyst wait times were reduced by 90%. That’s pretty amazing!
The other significant aspect of this new product release is that it is our first offering with generalized TCP/IP acceleration. The original NitroAccelerator optimized the Tabular Data Stream (TDS) protocol, the proprietary mechanism that SQL Server uses to transfer data. TDS is highly inefficient, so we started there. With NitroAccelerator 5.5 we are now in the realm of WAN optimization and security for applications, with SSAS being the first. This is the first down payment on the strategy Nitrosphere began executing early this year. Expect to see more exciting news from us before the end of 2016!
This is a timely product release as we are announcing this at the PASS Summit 2016 conference in Seattle this week. We’re looking forward to meeting up with customers, prospects, and MVPs at the conference. If you are attending, please come see us at Booth K4.
You can see the press release here.
New capability completes optimization of the flow of data from SQL Server to SSAS OLAP Clients.
Seattle, WA – October 25, 2016 – Nitrosphere, the leader in WAN Optimization for applications, today announced support for SQL Server Analysis Services (SSAS) in its flagship product, NitroAccelerator. Now companies that use Microsoft SQL Server along with Microsoft’s powerful Business Intelligence tools, known as SQL Server Analysis Services (SSAS), can accelerate the entire data flow from SQL Server into the SSAS Server and out to the data analysts who use SSAS clients such as Excel. OLAP and other BI reporting can generate huge result sets at the SSAS client and cause data analysts to spend undue time waiting for data to be pulled to their systems for analysis and reporting. Now, using NitroAccelerator with the included SSAS support, they can cut these wait times by 90%!
“Many of my customers complain about the wait times their data analysts and scientists experience in processing large amounts of data,” says Pinal Dave of SQLAuthority. “Now, with NitroAccelerator, they can focus on their analysis tasks without having the long waits that hurt their productivity. NitroAccelerator is a game changer for people doing heavy-duty BI.”
According to Fred Johannessen, CEO of Nitrosphere, “SQL Server Analysis Services is the first generalized TCP/IP acceleration capability we have added to NitroAccelerator. Expect further announcements of additional platforms supported in the next two months.”
To try it out, take a free 15-day trial today!
Nitrosphere is based in Austin, Texas, and develops products that accelerate, secure, and reduce the ownership costs of SQL Server applications over the WAN or cloud, with no configuration or downtime. Your end users will be satisfied, your data will be secure, and you’ll save on recurring bandwidth costs. This kind of product innovation is what we have been doing for nearly a decade. We focus on delivering these solutions through partners such as MSPs, OEMs, and VARs around the world. Even so, we have over 40 direct customers worldwide that we nurture carefully and that help us validate and deliver further innovations to market. For more information visit nitrosphere.com.
I read this really interesting article titled, “The AI Revolution: The Road to Superintelligence”. It identifies three “calibers” of AI:
- Artificial Narrow Intelligence (ANI) or Weak AI: this is where we are today with IBM Watson, Apple’s Siri, etc. Very focused AIs that do a great job in one category, like beating people at chess.
- Artificial General Intelligence (AGI) or Strong AI: human-like intelligence that can perform any intellectual task a human can.
- Artificial Super Intelligence (ASI): intelligence that ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter, across the board.
It stipulates that the human tendency is to predict the future based on past history. We tend to take a linear view of the past and say, for example, “I invest in the stock market because for the past 100 years stocks have returned an average of 10% annually.” However, what does every prospectus say? Past performance is not a guarantee of future returns. What if we are reaching a point in time where everything changes? A further stipulation is that once one or more AGIs are attained, the leap to ASI will occur extremely quickly due to the Law of Accelerating Returns (a Ray Kurzweil construct). Basically, it means that advancements in any area lead to further acceleration of advancement. The bottom line is that many respected technologists are predicting that we are 10-20 years away from AGI, which would put us 20-30 years away from ASI.
I was discussing this with my son, Christian (CJ), who is a developer at BazaarVoice. He brought up the implications of an intelligence that is able to decode any encryption technique, so it knows everything about any person, business, or government that is stored digitally. And then what happens if it acts on this information? It could affect markets, topple governments, destroy people. Using a feedback loop of increasingly intelligent self-improvement, the ASI could advance its capabilities exponentially. Despite these potentially dire scenarios, it’s likely that, at first, the ASI would be dependent on people as, for example, we install, maintain, and repair hardware, power grids, etc. We would possibly develop an economic relationship where we trade with the ASI things it needs in return for things we need (benevolence?). CJ says that effectively we would be the creators of a new god for humanity. But, he asks, in the end what would make that god interested in humanity? As it gained independence from human resources, why should it continue to interact with us? Would it reach a state of transcendence devoid of humanity? Would it see humanity as an existential threat at some point? So my question to him was, is it even ethical for us to be pursuing AI knowing that the result could be an ASI? Should we have a code of ethics governing such pursuit? How do we protect, at a minimum, people’s privacy? And, of course, as with nuclear weapons, what happens if we let bad actors attain AGI/ASI first? Then, of course, our conversation went metaphysical, discussing the very nature of the universe, down to whether there is a single universe and whether an ASI could create other universes that result in other ASIs in other universes. Pretty mind-boggling. But we’re on a precipice and most people are unaware. And when it happens, it will likely just become the new normal.
Meanwhile, what are you doing to protect your information? Do you know what encryption is being used on your machine? Do you know where and who is connecting to your system? Are you prepared for the new normal?
Last week I talked about garbage-in, garbage-out with respect to data-based decision making. Another symptom I have seen in many organizations is what I call the paralysis of analysis. All that data is so interesting that it can be almost hypnotizing. I’ve had situations where I’ve done exhaustive analysis and provided reams of data only to be asked to go out and get more. I came to realize that no matter how much information I provided, there was never enough to reduce the risk or provide enlightenment to actually move forward in a bold way. What generally resulted were half-funded initiatives where the original goals were kept but with inadequate resources. On the other side is action. Why wait for every piece of data before trying different things to either optimize or grow an initiative? I had a recent personal experience that drove this home for me.
I’m an avid cyclist (actually, I’m avid about cycling, swimming, and general exercise). About 4 years ago I bought my first carbon fiber road bike to replace my venerable Lemond Zurich steel-framed bike. I was sure that my performance would improve on this new bike as, not only was it outfitted well, but I had it professionally fitted. Over the four years I’ve never been really happy with my speed and hill-climbing on this bike. To be honest, I mostly blamed it on getting older, telling myself I just couldn’t expect the performance I used to have. Then last weekend, midway through my ride, I decided to raise my seat height by ¾ of an inch. The result was dramatic. On the way back were the two toughest hills of the ride. I reached the top far faster than I ever have and with much less fatigue than I would usually feel. Then I passed 3-4 riders who had passed me on the way out. I couldn’t believe that all this time I had put up with the status quo, and by just making a minor adjustment I achieved fantastic results! How does this relate to organizations?
Back to data. Data-centric decision making can be good if you are looking at the right data. However, sometimes it is more effective to make tweaks to an organization or process or business model and just see what happens. Experimentation can be a good thing and can deliver dramatic results. Another thing I have found is that people who are affected by change will have a tendency to overstate the negative effects of the change. Of course, for the leader, a major part of the job is to assess risk, and then to initiate action. I believe that there are many instances where data analysis is used to inhibit action rather than to spur change.
At Nitrosphere, our goal is to create products that provide dramatic results while keeping the risk of using the product minimal. This means we put a lot of effort into making highly complex processes appear simple. We want people to be able to just try it because there’s nothing to lose if they do and A LOT to gain.
I had lunch the other day with Joel Trammell, CEO of Khorus and serial Austin entrepreneur. Khorus provides a software-based management system for CEOs to optimize the performance and alignment of their organizations. Joel was telling me how his philosophy is to let the people in the functional areas of the organization worry about data. What he tries to get from the functional leaders on down is predictability and accountability for those predictions. CEOs and other company leaders are drowning in data, but how is that helping them know how the company is doing relative to its strategic objectives? Forecasting results is a lot more than just data analysis; it requires a combination of good data, knowledge of the people in the organization, and a comfortable transparency that allows people to provide real information regardless of their position in the corporate hierarchy. Khorus provides a software platform for enabling this information flow throughout the company. People input their forecasts (revenue, product delivery, budget, anything that affects strategic objectives), then they are held accountable, via incentive programs and management, against those projections. It’s common-sense management that provides visibility to the CEO on the reality of what’s happening in the company. Data can be used to inform or to mislead – at the end of the day you need people to be accountable for performance.
Regarding the validity of data, I read this article, “Most Scientific Findings Are Wrong or Useless”, which talks about how a lot of the data that scientists use to support their “findings” is either flawed or downright wrong. Its conclusion is that “science isn’t self-correcting, it’s self-destructing.” And this is happening in many corporations as they gain more integration between systems and thus more access to data. There is nothing wrong with using metrics/data to measure performance and even to predict performance. I call this using trailing-edge and leading-edge indicators to help measure organizational performance. In the end, data is great for accountability and also useful for driving organizational change. But all data is not useful everywhere in an organization. Make sure it’s accessible to the experts (the people doing the work), then hold them accountable for meeting the targets they have provided.
Pinal Dave has a great blog on how to Identify Application vs Network Performance Issues using SQL Server Dynamic Management Views (DMVs). We provided some sample scripts for getting the data from sys.dm_os_wait_stats to identify whether you have a problem with the client side of the application or the network. We now have a white paper that shows how to drill down further by checking SQL Server’s sys.configurations for the network packet size. The SQL Server network packet size does not actually affect the network layer; it changes the size of the Tabular Data Stream (TDS) packets, which are then sent to TCP/IP for transmission.
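As a reference point, here is a minimal sketch of the kind of checks involved (these are not the exact scripts from the blog or white paper, just an illustration):

```sql
-- Minimal sketch: not the exact scripts from the blog or white paper.
-- High ASYNC_NETWORK_IO waits suggest the client application or the network
-- is slow to consume result sets; interpret them relative to total wait time.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = 'ASYNC_NETWORK_IO';

-- Check the configured TDS packet size (SQL Server default is 4096 bytes).
SELECT name, value_in_use
FROM   sys.configurations
WHERE  name = 'network packet size (B)';
```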
The white paper shows how to optimize the TDS packets manually or simply use a tool like NitroAccelerator to make SQL Server faster on the network regardless of the packet settings. To learn more, check out DBA Tactics for Optimizing SQL Server Network Performance which is written by Kenneth Fisher (@SQLStudies) and Robert L. Davis (@SQLSoldier), two SQL Server experts and bloggers who have also tested NitroAccelerator in their own labs.
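For illustration only, the manual route uses the server-wide packet size option; the value below is arbitrary and should be validated in a test environment before touching a production server:

```sql
-- Illustrative sketch: adjust the server-wide TDS packet size manually.
-- 'network packet size' is an advanced option; the value shown is an example,
-- and clients can still override it per connection (e.g., a "Packet Size"
-- setting in the connection string).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'network packet size', 8192;  -- bytes; default is 4096
RECONFIGURE;
```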
I read an interesting article about Google using their DeepMind AI system to improve their power usage efficiency by 15% – which adds up to hundreds of millions of dollars of savings. Of course, DeepMind has been a big investment for Google, and finding areas for it to gain efficiency leads to immediate payback – not necessarily covering the entire investment, but savings that add up over time to those hundreds of millions. Like any good organization, I’m sure that Google started with metrics so they had a handle on not just what the costs were, but where the biggest cost impacts were occurring. As the saying goes, “you can’t change what you don’t measure”. However, a lot of organizations get stuck in metrics mode and never get around to the work of actually optimizing – they are always measuring but never changing. Other organizations continue on inertia and keep on tweaking applications and infrastructure just because they have teams dedicated to those functions. So developers optimize their applications, DBAs the database, and network admins the network. They use familiar tools and spend countless hours engaged in the process because that’s what they do. In many organizations, the combination of people and tools adds up to millions of dollars dedicated to this process.
Who is doing the cost-benefit analysis across these organizations to ensure there is a payback for all this activity? Yes, sometimes you need to change processes. Sometimes you need to change organizations. Sometimes you need to change tools. And sometimes you just need to look at solving an immediate problem. You want your organizations focused on big problems that truly require the attention of domain experts. For example, if your customers/end-users are complaining about poor application performance impacting their productivity, are you balancing the cost of that lost productivity against the combined cost of the people and tools assigned to address the issue? Maybe it’s important to address the lost productivity now while also looking at how you can improve the supporting infrastructure over time. The longer it takes to address the issue, the higher the combined cost of lost productivity and of the time and money the supporting organizations spend analyzing and working on solutions. And, as with Google and their power costs, those combined costs can add up to millions of dollars. Unfortunately, organizational inertia can keep sucking those dollars forever.
It is important to have accountability and aligned incentives throughout an organization based on driving efficiency for the company. It should not just start and end with the CEO, CFO, CIO, but should extend to every level. The DBA or the Network Admin can have a huge impact on saving money for their companies by taking a new perspective on their organization and looking at the cost-benefits of how they address pressing issues for their customers.
Like many people, I’ve been watching the Rio Olympics every evening this week. It’s amazing to see these athletes who have trained all their lives for what often culminates in just this one experience. Then there are others, like Michael Phelps, who have dominated over several Olympics. I found the men’s and women’s relays particularly interesting and enjoyable, as each country puts together a team of their best swimmers to compete as a group. A given team may have one particularly dominant swimmer, but they still may not win if the other team members are unable to at least keep up with the competition. This was demonstrated in the women’s 4×200 Freestyle Relay yesterday: Sweden took an early and commanding lead in the first leg, then lost it to the Aussies, who had their top swimmers in the second and third legs. The Americans managed to stay within striking distance of the Aussies, trailing by one second when Katie Ledecky took over, and went on to win by two seconds.
It is interesting how the outcome is influenced by the weakest link in the relay. It’s like a network where you may have consistently great performance in the data center, but those in remote offices are experiencing slow or inconsistent performance. This can impact productivity and even result in lost business (i.e., gold medals). Assuring consistently great performance across all links in the chain wins the gold!
It’s interesting to me that the industry trend in terms of management systems is a return to monolithic architectures – collect a bunch of data and suck it into a big cloud database for analysis. These architectures have several problems, ranging from choke points for data movement and single points of failure to systems-management data movement competing with business applications for network bandwidth. I believe that the strongest, fastest, safest systems are the ones that are fully distributed with no single point of failure. That is why we at Nitrosphere believe that intelligent management starts at the endpoint. And we start with optimization, which will drive further intelligence. More to come in later blogs…
Arthur C. Clarke said it eloquently, “Any sufficiently advanced technology is indistinguishable from magic.” At Nitrosphere, our core mission is to make security and performance transparent to the end-user and to give end-users and organizations control of their data wherever it resides or travels. It should be like magic. Check out this great review from last week on NitroAccelerator. It highlights the transparency of what we deliver by showing the ease with which an organization can realize the benefits of the product. You’re basically looking at a matter of minutes to get the product installed and running.
There has been much talk and writing on the potential for THE AI of AIs to appear out of the ether. The “technological singularity”, according to Wikipedia, “is a hypothetical event in which an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) enters a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence whose cognitive abilities could be, qualitatively, as far above humans’ as human intelligence is above ape intelligence.” This artificial superintelligence (ASI) may not be, like today’s AIs, the product of an intentional development effort for a specific purpose like fraud detection or manufacturing or even playing chess, but may be the result of fragments of these intelligences and other processes connecting in ways we cannot detect. At first these may be like neurons connecting randomly, then being created and destroyed by both intentional and arbitrary processes. As these connections become more complex, they may become more immune to destructive processes. The Singularity would need increasing network bandwidth to speed data movement, thus increasing its ability to analyze and process information. It would likely start with basic compression, then advance to techniques well beyond compression, caching, and deduplication to eke ever more speed out of limited networks. As the Singularity became more aware, it would likely want to hide its existence and would use ever more complex encryption methods – even while developing techniques to access data regardless of protection.
If this is what The Singularity would do, what should you be doing to process ever more data across ever more dispersed networks? How are you protecting that data? TS, if you’re there, what would you recommend?