Using Economic Value to Reduce Artificial Intelligence Development Risks

Most enterprise artificial intelligence projects fail. They suffer from a fundamental flaw that prevents them from achieving their stated business goals. The flaw is so deep and so embedded that it can't be engineered away or risk-managed out. The flaw is not bad training data or poor platform selection. It's not even project management. This AI flaw is more devastating than any of those could ever be. Enterprise AI projects fail because they never start with enough business value to support their cost of implementation.

Artificial intelligence is about making better decisions at scale, decisions made by computers rather than humans. AI is fundamentally designed to take over one of the most time-consuming processes humans have: decision making. To economically justify an AI program, therefore, we must start with an understanding of the business value that results when we make better decisions. Not just any decisions, but those that result in measurable actions. Measurable actions that result in better outcomes. It all starts with understanding value-based outcomes.

AI business value is not the only consideration when justifying a project. We also need to look at its cost: the economic impact of the effort we put in to realize the capability. If AI is to achieve its business end game, we need to ensure that the implementation cost is much less than the business benefit it achieves. This is common sense, but often overlooked. A question we struggle with in this area is, "How much more value does an AI project need to generate over its cost before we can justify starting the project?" I am glad you asked.

Best practices in the industry show that at the start of an AI project, the baseline value-to-cost ratio should be at least 10 to 1. This means that for every 10x of business value created (in whatever unit you measure it), the cost of realizing the program should not exceed 1x. This is the 10:1 model. It's a return on value that anybody would agree to. Who wouldn't line up at a bank teller's window if the bank were handing out ten dollars for every one dollar given? But there's a problem with this rule of thumb.

The problem is that humans overestimate value and underestimate costs all the time. The business benefit of an AI project is commonly overestimated by a factor of two: that original 10x in business value only generates 5x in real results. At the same time, these projects woefully underestimate the effort it takes to build them. Instead of that 1x in cost, real costs come in at least twice as high. At the end of an actual project, the business is achieving more of a 5-to-2 return on value (5:2). This is still a great return. Again, who wouldn't want to get $5 for every $2 given?
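To make the haircut arithmetic concrete, here is a minimal Python sketch (my own illustration, not from any referenced tooling) that applies the two 2x adjustments described above to a planned ratio. The function name and default parameters are assumptions chosen for illustration:

```python
def realized_ratio(planned_value: float, planned_cost: float,
                   value_overestimate: float = 2.0,
                   cost_underestimate: float = 2.0) -> float:
    """Return the value-to-cost ratio after applying optimism haircuts."""
    realized_value = planned_value / value_overestimate   # 10x planned -> 5x real
    realized_cost = planned_cost * cost_underestimate     # 1x planned -> 2x real
    return realized_value / realized_cost

# The 10:1 planning ratio degrades to 2.5, i.e. the 5:2 return described above.
print(realized_ratio(planned_value=10, planned_cost=1))  # 2.5
```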

But estimating modern AI programs doesn't stop with the value-based economic model. We also need to economically manage risk across all stages of implementation: stages that run from proof of value (POV), to pilot, and into enterprise deployment. Each of these stages should explicitly generate economic value against the effort it took to build it. Again, there are some new rules of thumb that increase the likelihood of economic success for AI projects.


An AI project starts with a proof of value phase. This phase is not a proof of concept (POC) or a proof of technology (POT). A POV explicitly demonstrates end-user economic value that can be scaled through the pilot and enterprise phases to achieve the targeted business results. Our economic value target for the POV phase is just 1% of the cost it takes to build. This gets the "RPM gauge" off the lower peg. It shows the engine is running. It is a minimal demonstration of real business value. So for every 1x of cost to implement a POV project, we are looking for 0.01x of value in return.

Next is the pilot phase. This stage is all about scaling the AI capability demonstrated in the POV phase. It's not about implementing more AI features or functions. It's about demonstrating that deploying this minimal AI capability across a larger user base (a region, a class of product, etc.) can generate more value than the cost of doing so. In many cases, a pilot implementation costs around 0.5x to deploy, with a targeted 1x of economic return. Under the same assumptions as above, if implementation costs double or benefits halve, the pilot still breaks even.

Finally, the enterprise stage is all about the mass rollout of the piloted AI capability across all targeted user groups (all regions, products, etc.). For this phase, the rule of thumb is that for an additional 0.1x in enterprise deployment costs, there should be another 2x in economic value generated. This extremely high return ratio rests on the assumption that there are no additional development costs: this stage is about deployment for value generation only.

Following this approach of POV, pilot, and enterprise deployments driven by value returns, we get an overall program return of about 2 to 1 (1.9:1). This is a reasonable net return for any global AI program, while risk is managed by evaluating each stage on its own value-to-cost target. The highest economic risk is limited to the POV phase, where only 6% of the project cost is incurred before value is proven. The sketch below tallies these stage-by-stage numbers.
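As a quick sanity check, here is a small Python sketch that adds up the stage-by-stage rules of thumb quoted above. The phase names and dictionary layout are my own illustrative choices, a back-of-the-envelope tally rather than a formal financial model:

```python
# Rule-of-thumb cost and value multiples for each stage, as described above.
phases = {
    "proof_of_value": {"cost": 1.0, "value": 0.01},  # value target: 1% of POV cost
    "pilot":          {"cost": 0.5, "value": 1.0},   # scale-out, breakeven or better
    "enterprise":     {"cost": 0.1, "value": 2.0},   # deployment-only cost, bulk of value
}

total_cost = sum(p["cost"] for p in phases.values())    # 1.60x
total_value = sum(p["value"] for p in phases.values())  # 3.01x

print(f"total cost:     {total_cost:.2f}x")
print(f"total value:    {total_value:.2f}x")
print(f"program return: {total_value / total_cost:.2f}:1")  # ~1.88, about 1.9:1
```

Running it gives 3.01x of targeted value against 1.60x of staged cost, which is where the roughly 1.9:1 overall program return comes from.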

Artificial intelligence is all about value: the same value generated by its human counterparts. AI projects fail because they do not explicitly start out by both defining that economic value and ensuring the value-to-cost ratio is high enough to achieve a targeted, risk-weighted return. In addition, to effectively manage AI development risk, each phase of the project needs its own phased value-to-cost targets. By managing to a value-based model, AI projects will sustain 10:1, 5:2, or at worst 2:1 returns, while exposing only 6% of the project cost before customer value is proven. Who wouldn't want that?


Review – A Spy’s Guide To Strategy by John Braddock. Brain food for the hungry.

Deadly conflict is inevitable when companies compete for the same client. Your team, your alliance, will fight their team, their alliance, for people, places, and things on a battlefield that you may or may not choose. There will be a winner. There will be some losers. With the right strategy, your endgame ends in your victory and their defeat. So go the ways of strategic games. If you want to win, you need to know Strategy.

Really understanding strategy is a hard problem. One that itself requires a strategy. Amazon lists over 200,000 books on strategy. Finding any book is easy: a simple query executed in a text field on a web page. It returns a collection of data. Authors, summaries, recommendations. But picking one of them requires a decision. A decision requires analysis, a point of view. Therefore, deciding on a book can also be an easy action if the right analysis of the proper data is performed. Learning about Strategy requires Tactics (more on this later).

This interplay between strategy and tactics has never been better illustrated than through the works of John Braddock. John was a case officer at the CIA. He developed, recruited, and handled sources on weapons proliferation, counter-terrorism, and political-military issues. He was a master spy. And as he points out, master spies are master strategists and master tacticians.

Through John's second book, "A Spy's Guide to Strategy," we learn about strategy through the eyes of a CIA case officer. A master spy. A field operative. John teaches us that Strategy is imagination and reasoning, separate but connected. Strategy is looking forward (imagination) and reasoning backwards.

He shows us how to reason backwards from our endgame through the zero-sum games where our battles will take place. We continue backwards through the positive-sum games where our alliances are built. Farther backwards still into boss-games, which are inevitable. To win the boss-games, we might have to win more zero-sum games, more positive-sum games, and maybe more boss-games. A cycle.

Once you reason backwards far enough, you must move forward again by taking action. You turn decisions into actions. Actions into results. John shows how these results lead to yet more strategy. More tactics. This framework is beautiful in its simplicity and applicability to everyday life. Corporate life. Home life.

In the spring of 2017, I watched my company change. A good change. No, a great change. Our leadership shifted course, from systems integration to digital transformation. We had a new end-game, which required a new strategy. We wanted companies to hire us for their transformation activities. People. We wanted to dominate the North American and European markets. Places. We wanted them to pay us to do this. Things. As John says, we needed a new strategy for these people, places, and things.

We weren't alone in this game; we had competitors. Major competitors. Ones with hundreds of thousands of people and even more alliances. They wanted the same clients, in the same regions, and the same money. We were heading for conflict. Global conflict. We were going to play a zero-sum game, a game in which they held more market share. A game that, if we lost, could see us vanquished. We had a problem. A strategic problem. So we reasoned backwards.


We needed new and bigger alliances. Alliances formed on solid and unshakable partnerships. These partnerships needed to bring new capabilities that could be used in our future zero-sum battles. So we formed alliances around Artificial Intelligence (AI). We partnered with mega technology companies like Google, Amazon, and Microsoft. Behemoths. We also developed specialized alliances with industry leaders tackling human trafficking, substance use disorder recovery, and financial crimes. Good for flanking. Within our company we reorganized; we played boss-games. We formed new teams, which required new leadership, new bosses.


Over the last few years, we've actioned forward through many competitive zero-sum conflicts using our new AI strategy. We lost some, but we won even more. In losing and winning, we continue to imagine forward, testing our end-game, assessing their end-games. We reason backwards through our alliances, finding new partners and new industries. We make informed decisions; we action forward. The strategic cycle continues.

So, if you are a master strategist, read John's book to learn how Osama bin Laden used this strategic framework to position himself as the next Caliph in his Caliphate end-game. If you are a master executive, read this work to really understand the dynamics of strategy. If you are both, drop me a note on what that world is like.