I read this really interesting article titled “The AI Revolution: The Road to Superintelligence”. It identifies three “calibers” of AI:
- Artificial Narrow Intelligence (ANI) or Weak AI: this is where we are today with IBM Watson, Apple’s Siri, etc. Very focused AIs that do a great job in one narrow category, like beating people at chess.
- Artificial General Intelligence (AGI) or Strong AI: human-level intelligence that can perform any intellectual task a human can.
- Artificial Super Intelligence (ASI): intelligence that ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter, across the board.
It argues that the human tendency is to predict the future based on the past. We tend to take a linear view of history and say, for example, “I invest in the stock market because for the past 100 years stocks have returned an average of 10% annually.” However, what does every prospectus say? Past performance is not a guarantee of future returns. What if we are reaching a point in time where everything changes? It further argues that once one or more AGIs are attained, the leap to ASI will occur extremely quickly due to the Law of Accelerating Returns (a Ray Kurzweil construct): advancement in any area accelerates the pace of further advancement. The bottom line is that many respected technologists are predicting that we are 10-20 years away from AGI, which would put us 20-30 years away from ASI.
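To make the linear-versus-accelerating distinction concrete, here is a minimal Python sketch (my own illustration, not from the article) contrasting a linear projection of “capability” with one that compounds. The yearly gain and growth rate are made-up numbers chosen purely to show the shape of the curves.

```python
# Minimal sketch: linear vs. compounding ("accelerating returns") projections.
# The rates below are illustrative assumptions, not claims from the article.
YEARS = 30
LINEAR_GAIN = 1.0    # fixed units of capability added per year
GROWTH_RATE = 0.25   # capability compounds 25% per year

linear = [1.0]
compounding = [1.0]
for _ in range(YEARS):
    linear.append(linear[-1] + LINEAR_GAIN)
    compounding.append(compounding[-1] * (1 + GROWTH_RATE))

for year in (10, 20, 30):
    print(f"Year {year}: linear = {linear[year]:.1f}, compounding = {compounding[year]:.1f}")
# Year 10: linear = 11.0, compounding = 9.3
# Year 20: linear = 21.0, compounding = 86.7
# Year 30: linear = 31.0, compounding = 807.8
```

The point of the toy numbers: a modest compounding rate looks unimpressive for a decade, then dwarfs the straight-line projection.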
I was discussing this with my son, Christian (CJ), who is a developer at BazaarVoice. He brought up the implications of an intelligence that can break any encryption technique, so that it knows everything about any person, business, or government that is stored digitally. And then what happens if it acts on this information? It could affect markets, topple governments, destroy people. Using a feedback loop of increasingly intelligent self-improvement, the ASI could advance its capabilities exponentially. Despite these potentially dire scenarios, it’s likely that, at first, the ASI would be dependent on people as, for example, we install, maintain, and repair hardware, power grids, etc. We might develop an economic relationship where we trade with the ASI things it needs in return for things we need (benevolence?). CJ says that effectively we would be the creators of a new god for humanity. But, he asks, in the end what would make that god interested in humanity? As it gained independence from human resources, why should it continue to interact with us? Would it reach a state of transcendence devoid of humanity? Would it see humanity as an existential threat at some point? So my question to him was, is it even ethical for us to be pursuing AI knowing that the result could be an ASI? Should we have a code of ethics governing such pursuit? How do we protect, at a minimum, people’s privacy? And, of course, as with nuclear weapons, what happens if we let bad actors attain AGI/ASI first? From there our conversation went metaphysical, discussing the very nature of the universe, down to whether there is a single universe and whether an ASI could create other universes that result in other ASIs in other universes. Pretty mind-boggling. But we’re on a precipice and most people are unaware. And when it happens, it will likely just become the new normal.
Meanwhile, what are you doing to protect your information? Do you know what encryption is being used on your machine? Do you know who is connecting to your system, and from where? Are you prepared for the new normal?
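If you’ve never actually looked, answering the “who is connecting” question doesn’t take much. Here is a small sketch, assuming the third-party psutil package (pip install psutil); the same information is available from OS tools like netstat or ss, and on some systems listing other processes’ connections requires elevated privileges.

```python
# Sketch: list the active network connections on this machine and which
# process owns each one. Assumes the third-party psutil package is installed.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED:
        continue  # skip listening/closed sockets; keep live connections
    local = f"{conn.laddr.ip}:{conn.laddr.port}" if conn.laddr else "?"
    remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "?"
    try:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.Error:
        proc = "unknown"  # process exited or access was denied
    print(f"{proc:<20} {local:<22} -> {remote}")
```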
Last week I talked about garbage-in, garbage-out with respect to data-based decision making. Another symptom I have seen in many organizations is what I call the paralysis of analysis. All that data is so interesting that it can be almost hypnotizing. I’ve had situations where I’ve done exhaustive analysis and provided reams of data only to be asked to go out and get more. I came to realize that no matter how much information I provided, there was never enough to reduce the risk or provide enlightenment to actually move forward in a bold way. What generally resulted were half-funded initiatives where the original goals were kept but with inadequate resources. On the other side is action. Why wait for every piece of data before trying different things to either optimize or grow an initiative? I had a recent personal experience that drove this home for me.
I’m an avid cyclist (actually, I’m avid about cycling, swimming, and general exercise). About 4 years ago I bought my first carbon fiber road bike to replace my venerable Lemond Zurich steel-framed bike. I was sure that my performance would improve on this new bike as, not only was it outfitted well, but I had it professionally fitted. Over those four years I was never really happy with my speed and hill-climbing on this bike. To be honest, I mostly blamed it on getting older, figuring I just couldn’t expect the performance I used to have. Then last weekend, midway through my ride, I decided to raise my seat height by ¾ of an inch. The result was dramatic. On the way back were the two toughest hills of the ride. I reached the top faster than I ever have, and with much less fatigue than I would usually feel. Then I passed 3-4 riders who had passed me on the way out. I couldn’t believe that all this time I had put up with the status quo and that just making a minor adjustment produced fantastic results! How does this relate to organizations?
Back to data. Data-centric decision making can be good if you are looking at the right data. However, sometimes it is more effective to make tweaks to an organization, process, or business model and just see what happens. Experimentation can be a good thing and can deliver dramatic results. Another thing I have found is that people who are affected by change tend to overstate its negative effects. Of course, for the leader, a major part of the job is to assess risk and then to initiate action. I believe there are many instances in which data analysis is used to inhibit action rather than to spur change.
At Nitrosphere, our goal is to create products that provide dramatic results while keeping the risk of using the product minimal. This means we put a lot of effort into making highly complex processes appear simple. We want people to be able to just try it because there’s nothing to lose if they do and A LOT to gain.
I had lunch the other day with Joel Trammell, CEO of Khorus and serial Austin entrepreneur. Khorus provides a software-based management system for CEOs to optimize the performance and alignment of their organizations. Joel was telling me how his philosophy is to let the people in the functional areas of the organization worry about data. What he tries to get from the functional leaders on down is predictability and accountability for those predictions. CEOs and other company leaders are drowning in data, but how is that helping them know how the company is doing relative to its strategic objectives? Forecasting results is a lot more than just data analysis; it requires a combination of good data, knowledge of the people in the organization, and a comfortable transparency that allows people to provide real information regardless of their position in the corporate hierarchy. Khorus provides a software platform for enabling this information flow throughout the company. People input their forecasts (revenue, product delivery, budget, anything that affects strategic objectives), and then they are held accountable, via incentive programs and management, against those projections. It’s common sense management that gives the CEO visibility into the reality of what’s happening in the company. Data can be used to inform or to mislead – at the end of the day you need people to be accountable for performance.
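To make the forecast-then-accountability loop concrete, here is a toy Python sketch of the idea. The owners, metrics, and numbers are entirely hypothetical, and this is in no way Khorus’s actual product or API.

```python
# Toy sketch of the forecast-vs-actual accountability idea described above.
# Owners, metrics, and numbers are hypothetical; this is not Khorus's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    owner: str                      # the functional leader accountable for the prediction
    metric: str                     # e.g., quarterly revenue, on-time product delivery
    predicted: float
    actual: Optional[float] = None  # filled in when results come in

    def accuracy(self) -> Optional[float]:
        """How close the actual result came to the prediction (1.0 = exact)."""
        if self.actual is None or self.predicted == 0:
            return None
        return 1 - abs(self.actual - self.predicted) / abs(self.predicted)

forecasts = [
    Forecast("VP Sales", "Q3 revenue ($M)", predicted=12.0, actual=10.8),
    Forecast("VP Engineering", "Features shipped", predicted=8, actual=8),
]
for f in forecasts:
    acc = f.accuracy()
    status = f"{acc:.0%} forecast accuracy" if acc is not None else "awaiting actuals"
    print(f"{f.owner:<16} {f.metric:<20} predicted={f.predicted} actual={f.actual} ({status})")
```

The value isn’t in the arithmetic; it’s that each number has a named owner who committed to it up front.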
Regarding the validity of data, I read this article, “Most Scientific Findings Are Wrong or Useless”, which talks about how a lot of the data that scientists use to support their “findings” is either flawed or downright wrong. Its conclusion is that “science isn’t self-correcting, it’s self-destructing.” And the same thing is happening in many corporations as they gain more integration between systems and thus more access to data. There is nothing wrong with using metrics/data to measure performance and even to predict performance. I call this using trailing-edge and leading-edge indicators to help measure organizational performance. In the end, data is great for accountability and also useful for driving organizational change. But not all data is useful everywhere in an organization. Make sure it’s accessible to the experts (the people doing the work), then hold them accountable for meeting the targets they have provided.