Using Economic Value to Reduce Artificial Intelligence Development Risks

Most enterprise artificial intelligence projects fail. They suffer from a fundamental flaw that prevents them from achieving their stated business goals. The flaw is so deep and so embedded that it can't be engineered away or managed out as risk. The flaw is not poor training data or the wrong platform. It's not even project management. This flaw is more devastating than any of those could ever be. Enterprise AI projects fail because they never started with enough business value to support their cost of implementation.

Artificial intelligence is about making better decisions at scale, decisions made by computers rather than humans. AI is fundamentally designed to take over one of the most time-consuming processes humans have: decision making. To economically justify an AI program, therefore, we must start with an understanding of the business value that results when we make better decisions. Not just any decisions, but those that result in measurable actions. Measurable actions that result in better outcomes. It all starts with understanding value-based outcomes.

AI business value is not the only consideration when justifying a project. We also need to look at its cost: the economic impact of the effort we put in to realize the capability. If AI is to achieve its business end game, we need to ensure that the implementation cost is much less than the business benefit it achieves. This is common sense, but often overlooked. A question we struggle with in this area is, "How much more value does an AI project need to generate over its cost before we can justify starting the project?" I am glad you asked.

Best practice in the industry is that, at the start of an AI project, the baseline value-to-cost ratio should be at least 10 to 1. This means that for every 10x of business value created, the cost of realizing the program should not exceed 1x. This is the 10:1 model. It is a return anybody would agree to. Who wouldn't line up at a bank teller handing out ten dollars for every one dollar given? But there's a problem with this rule of thumb.

The problem is that humans overestimate value and underestimate costs all the time. The business benefits of AI projects are routinely overestimated by a factor of two: that original 10x in business value generates only 5x in real results. At the same time, these projects woefully underestimate the effort it takes to build them. Instead of that 1x in cost, real costs come in at least twice as high. At the end of an actual project, the business is achieving more of a 5-to-2 return on value (5:2). This is still a great return. Again, who wouldn't want to get $5 for every $2 given?
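To make that de-rating concrete, here is a minimal sketch of the arithmetic, assuming the factor-of-two estimation error in each direction described above:

```python
# De-rating the 10:1 rule of thumb for human estimation error.
planned_value, planned_cost = 10.0, 1.0  # the 10:1 starting model

value_overestimate = 2.0  # benefits come in at half the estimate
cost_underestimate = 2.0  # costs come in at twice the estimate

actual_value = planned_value / value_overestimate  # 5.0
actual_cost = planned_cost * cost_underestimate    # 2.0

print(f"{actual_value:.0f}:{actual_cost:.0f}")     # 5:2, i.e. a 2.5x return
```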

But estimating modern AI programs doesn't stop with the value-based economic model. We also need to economically manage risk across all stages of implementation, stages that run from a proof of value (POV), to a pilot, and into an enterprise deployment. Each of these stages should explicitly generate economic value against the effort it took to build it. Again, there are some new rules of thumb that increase the likelihood of economic success for AI projects.


An AI project starts with a proof of value phase. This phase is not a proof of concept (POC) or a proof of technology (POT). A POV explicitly demonstrates end-user economic value that can be scaled through the pilot and enterprise phases to achieve the targeted business results. Our economic value target for the POV phase is just 1% of the cost it takes to build. This gets the "RPM gauge" off the lower peg. It shows the engine is running. It is a minimal demonstration of real business value. So for every 1x of cost to implement a POV project, we are looking for 0.01x of value in return.

Next is the pilot phase. This stage is all about scaling the AI implementation demonstrated in the POV phase. It's not about implementing more AI features or functions. It's about demonstrating that deploying this minimal AI capability across a larger user base (a region, a class of product, etc.) can generate more revenue than the cost of doing so. In many cases, a pilot costs around 0.5x to deploy, with a targeted 1x of economic return. This provides a breakeven result under the same assumptions as above, should implementation costs run higher and benefits lower.

Finally, the enterprise stage is all about the mass rollout of the piloted AI capability across all targeted user groups (all regions, products, etc.). For this phase, the rule of thumb is that for an additional 0.1x in enterprise deployment costs, there should be another 2x in economic value generation. This extremely high return ratio rests on the assumption that there are no additional development costs. This stage is about deployment for value generation only.

Following this approach of proof of value, pilot, and enterprise deployments, each driven by a value return, we get an overall program return of about 2 to 1 (1.9:1). This is a reasonable net return for any global AI program, while risk is managed by evaluating each stage. The highest economic risk is limited to the POV phase, where only 6% of the project cost is committed before value is proven.
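Here is a quick sketch of that rollup, using the per-phase figures stated above (costs and values in the same arbitrary "x" units):

```python
# Phase-by-phase cost and value, per the rules of thumb above.
phases = {
    "proof of value": {"cost": 1.0, "value": 0.01},  # 1% of build cost returned
    "pilot":          {"cost": 0.5, "value": 1.0},   # breakeven-or-better scaling
    "enterprise":     {"cost": 0.1, "value": 2.0},   # deployment-only rollout
}

total_cost = sum(p["cost"] for p in phases.values())    # 1.6x
total_value = sum(p["value"] for p in phases.values())  # 3.01x

print(f"program return: {total_value / total_cost:.1f}:1")  # ~1.9:1
```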

Artificial intelligence is all about value, the same value generated by its human counterparts. AI projects fail because they do not explicitly start out by defining that economic value and ensuring the value-to-cost ratio is high enough to achieve a targeted risk-weighted return. In addition, to effectively manage AI development risk, each phase of the project needs its own phased value-to-cost targets. By managing to a value-based model, AI projects will sustain 10:1, 5:2, or at worst 2:1 returns, while exposing only 6% of the project cost before customer value is proven. Who wouldn't want that?


The Last Iceberg – How Artificial Intelligence Is Unlocking Humanity's Deep-Frozen Secrets

Icebergs are a common meme throughout the Internet. You see them everywhere, depicting everything from social media to human behavior. They are used to illustrate the knowledge we know, above the surface, and the things we don't know, below the surface. Icebergs are interesting. They're secretive. The ten percent we see is the literal tip of what is possible. Below the waterline, just out of sight, are dark secrets. Secrets that are often out of reach and unusable. That is, until now.

Artificial intelligence (AI) is changing the way we live. AI is doing more than just helping us find patterns in data or make better decisions; it's unlocking unexpected insights and extending our knowledge in ways that only humans were once capable of doing… capable of controlling. Before AI, human beings had to use their own minds to harvest knowledge from everyday life events. It's a hard process. A process that required countless hours of dedication just to discover one new meaningful insight that could lead to massive improvements in our lives. But this is now changing as we begin to rely on new cognitive technologies that generate knowledge for us… knowledge without us.

AI is melting those data icebergs. In essence, it is becoming the global warming of the knowledge age. It is unleashing their deep, hidden secrets, producing more knowledge that is then used to melt even more icebergs. It's an exothermic knowledge reaction, one that exponentially generates more insights than are consumed in the discovery process. And herein lies a devastating, potentially life-ending, problem.

As we rely, even over-rely, on new cognitive technologies, we lose our ability to discover new knowledge ourselves. The brain is an organ, and capabilities are lost when not used. Take, for example, the slide rule. Most people today do not know what a slide rule is, let alone how to use one. This simple mechanical device, seen in the hands of most engineers in the 1970s, can perform amazing mathematics. With just two opposing rules, one can do addition, subtraction, multiplication, division, logarithms, square roots, n-th roots, and more. There is little that can't be done mathematically on a slide rule. It requires no batteries, no internet connection, and does not fail. It is brilliant in its complex simplicity. But today almost nobody knows how to use it. Why?

[Figure 1: multiplication on a slide rule's C and D scales]

In the slide rule example, we have lost this cognitive ability as a society because we have outsourced it to other systems, like the calculator and the spreadsheet. These are productivity tools invented to help us unlock knowledge more efficiently. But the cost of using them is that we are no longer capable of exercising the part of the brain that used to physically discover insights through mechanical manipulation. Artificial intelligence is now accelerating this kind of cognitive decay.
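To see the principle the figure illustrates, here is a minimal Python sketch of how the C and D scales multiply: sliding one logarithmic scale along another adds physical lengths, and adding logarithms multiplies numbers. (The function name is mine, for illustration only.)

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply the way a slide rule's C and D scales do.

    Each distance along a scale is proportional to a logarithm,
    so adding two distances computes log10(a) + log10(b) = log10(a * b).
    """
    return 10 ** (math.log10(a) + math.log10(b))

print(slide_rule_multiply(2, 3))  # ~6.0 (a real slide rule reads about 3 digits)
```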

As humans rely more on AI to discover knowledge, we are slowly losing our own cognitive ability, our own mental capacity, to discover those insights ourselves. Our brains cognitively weaken. AI is, in essence, creating a defect in our executive functioning. Unchecked over time, we will become overdependent on AI to identify the new things that lead to a better life, eventually evolving to a point where we could literally die without this AI ability. Or even die because of it.

This uncontrolled release of knowledge can be a destructive, chaotic process. We see a similar dynamic with uranium, for example. With the right equipment, one can control how neutrons are absorbed by uranium isotopes, producing a stable reaction that generates life-giving energy. Left unchecked, however, the same neutrons interacting with the same isotopes can produce devastating nuclear events. Controlled reactions lead to life; uncontrolled reactions lead to death.

Can humans survive the chaos of a world where AI is unlocking more knowledge than humans can handle? A future world where the available knowledge is greater than the questions we can ask? Physics tells us we cannot. History shows us it is unlikely. AI unchecked, ungoverned, can be the nuclear weapon we use on ourselves, one that will eventually melt not only every last iceberg, but society itself.