Using Economic Value to Reduce Artificial Intelligence Development Risks

Most enterprise artificial intelligence projects fail. They suffer from a fundamental flaw that prevents them from achieving their stated business goals, a flaw so deep and so embedded that it can’t be engineered away or risk-managed out. The flaw is not poor training data or poor platform selection. It’s not even project management. This AI flaw is more devastating than any of those could ever be. Enterprise AI projects fail because they never started with enough business value to support their cost of implementation.

Artificial intelligence is about making better decisions at scale, decisions made by computers rather than humans. AI is fundamentally designed to take over one of the most time-consuming processes humans have – decision making. To economically justify an AI program, therefore, we must start with an understanding of the business value that results from making better decisions. Not just any decisions, but those that result in measurable actions, and measurable actions that result in better outcomes. It all starts with understanding value-based outcomes.

AI business value is not the only consideration when justifying a project. We also need to look at its cost, the economic impact of the effort we put in to realize the capability. If AI is to achieve its business end game, we need to ensure that the implementation cost is much less than the business benefit it achieves. This is common sense, but it is often overlooked. A question we struggle with in this area is, “How much more value does an AI project need to generate over its cost before we can justify starting the project?” I am glad you asked.

Best practices in the industry show that at the start of an AI project the baseline value-to-cost ratio should be at least 10 to 1. This means that for every 10x of business value created, the cost of realizing the program should not exceed 1x. This is the 10:1 model, and it is a kind of return anybody would agree to. Who wouldn’t line up at a bank teller who was handing out ten dollars for every one dollar given? But there’s a problem with this rule of thumb.

The problem is that humans overestimate value and underestimate costs all the time. The business benefit of an AI project is routinely overestimated by a factor of two: that original 10x in business value only generates 5x in real results. At the same time, these projects woefully underestimate the effort it takes to build them. Instead of that 1x in cost, real costs are at least twice that. At the end of an actual project, the business is achieving more of a 5 to 2 return on value (5:2). This is still a great return. Again, who wouldn’t want to get $5 for every $2 given?
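As a quick sanity check, the arithmetic behind this haircut can be sketched in a few lines of Python. The 2x value overestimate and 2x cost overrun are the rule-of-thumb assumptions described above, not measured figures:

```python
# Rule-of-thumb haircut on a planned 10:1 AI program (assumptions from the text above)
planned_value, planned_cost = 10.0, 1.0   # the 10:1 starting model
value_overestimate = 2.0                  # benefits are typically overstated ~2x
cost_underestimate = 2.0                  # costs are typically understated ~2x

realized_value = planned_value / value_overestimate   # 10x -> 5x
realized_cost = planned_cost * cost_underestimate     # 1x  -> 2x

print(f"Planned return:  {planned_value / planned_cost:.1f}:1")    # 10.0:1
print(f"Realized return: {realized_value / realized_cost:.1f}:1")  # 2.5:1, i.e., 5:2
```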

But estimating modern AI programs doesn’t stop with the value-based economic model. We also need to economically manage risk across all stages of implementation, stages that run from proof of value (POV), to pilot, and into enterprise deployment. Each of these stages should explicitly generate economic value on the effort it took to build it. Again, there are some new rules of thumb that increase the likelihood of economic success for AI projects.


An AI project starts with a proof of value phase. This phase is not a proof of concept (POC) or a proof of technology (POT). A POV explicitly demonstrates end-user economic value that can be scaled through the pilot and enterprise phases to achieve the targeted business results. Our economic value target for the POV phase is just 1% of the cost it takes to build. This gets the “RPM gauge” off the lower peg; it shows the engine is running. It is a minimal demonstration of real business value. So for every 1x of cost to implement a POV project, we are looking to achieve 0.01x of value in return.

Next is the pilot phase. This stage is all about scaling the AI implementation demonstrated in the POV phase. It’s not about implementing more AI features or functions. It’s about demonstrating that deploying this minimal AI capability across a larger user base (a region, a class of product, etc.) can generate more value than the cost of doing so. In many cases, a pilot implementation costs around 0.5x to deploy with a targeted 1x of economic return. This provides for a breakeven result under the same assumptions as above, should implementation costs run higher and benefits lower.

Finally, the enterprise stage is all about the mass rollout of the piloted AI capability across all targeted user groups (all regions, products, etc.). For this phase, the rule of thumb is that for an additional 0.1x in enterprise deployment costs, there should be another 2x in economic value generation. This extremely high return ratio is conditioned on the assumption that there are no additional development costs; this stage is about deployment for value generation only.

Following this approach of proof of value, pilot, and enterprise deployments driven by a value return, we get an overall program return of about 2 to 1 (1.9:1). This is a reasonable net return for any global AI program while managing risk by evaluating each stage. The highest economic risk is limited to the POV phase, where only 6% of the project cost is incurred before value is proven.
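Putting the three stage targets together, the overall program return quoted above falls out of simple arithmetic. A minimal sketch, using only the stage cost and value multiples described in this post:

```python
# Stage targets from the rules of thumb above: (cost multiple, value multiple)
stages = {
    "proof_of_value": (1.0, 0.01),  # 1x cost, 1% value target
    "pilot":          (0.5, 1.0),   # 0.5x cost, 1x value target
    "enterprise":     (0.1, 2.0),   # 0.1x incremental cost, 2x value target
}

total_cost = sum(cost for cost, _ in stages.values())
total_value = sum(value for _, value in stages.values())

print(f"Total cost: {total_cost:.2f}x, total value: {total_value:.2f}x")
print(f"Overall program return: {total_value / total_cost:.1f}:1")  # ~1.9:1
```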

Artificial intelligence is all about value, the same value generated by its human counterparts. AI projects fail because they do not explicitly start out by both defining that economic value and ensuring the value-to-cost ratio is high enough to achieve a targeted, risk-weighted return. In addition, to effectively manage AI development risk, each phase of the project needs its own phased value-to-cost targets. By managing to a value-based model, AI projects can sustain 10:1, 5:2, or at worst 2:1 returns, while exposing only 6% of the project cost before customer value is proven. Who wouldn’t want that?


Critical Capabilities for Enterprise Data Science

In the article “46 Critical Capabilities of a Data Science Driven Intelligence Platform,” an original set of critical enterprise capabilities was identified. In enterprise architecture language, capabilities are “the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services.”(1) In essence, they describe the what of the activity, but not necessarily the how. While individually effective, the set was nevertheless incomplete. Below is an update in which several new capabilities have been added and others relocated. Given my emphasis on deep learning, composed of cognitive and intelligence processes, I have added genetic and evolutionary programming as a set of essential capabilities.


The implementation architecture has also been updated to reflect the application of Spark and SparkR.


46 Critical Capabilities of a Data Science Driven Intelligence Platform

Data science is much more than just a singular computational process. Today, it’s a noun that collectively encompasses the ability to derive actionable insights from disparate data through mathematical and statistical processes, scientifically orchestrated by data scientists and functional behavioral analysts, all supported by technology capable of scaling linearly to meet the exponential growth of data. One such set of technologies can be found in the Enterprise Intelligence Hub (EIH), a composite of disparate information sources, harvesters, Hadoop (HDFS and MapReduce), enterprise R statistical processing, metadata management (business and technical), enterprise integration, and insights visualization – all wrapped in a deep learning framework. However, while this technical stuff is cool, Enterprise Intelligence Capabilities (EIC) are an even more important characteristic that drives the successful realization of the enterprise solution.


In enterprise architecture language, capabilities are “the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services.”(1) In essence, they describe the what of the activity, but not necessarily the how. For a data science-driven approach to deriving insights, these are the collective sets of abilities that find and manage data, transform data into features capable of being exploited through modeling, model the structural and dynamic characteristics of phenomena, visualize the results, and learn from the complete round-trip process. The end-to-end process can be sectioned into Data, Information, Knowledge, and Intelligence.


Each of these atomic capabilities can be used by four different key resources to produce concrete intermediate and final intelligence products. The Platform Engineer (PE) is responsible for harvesting and maintaining raw data and ensuring well-formed metadata. For example, they would write the Python scripts used by Flume to ingest Reddit dialogue into the Hadoop ecosystem. The MapReduce Engineer (MR) produces features based on imported data sets; one common function is extracting topics through MapReduce-programmed natural language processing on document sets. The Data Scientist (DS) performs statistical analyses and develops machine learning algorithms. Time series analysis, for example, is often used by the data scientist as a basis for identifying anomalies in data sets. Taken all together, Enterprise Intelligence Capabilities can transform generic text sources (observations) into actionable intelligence through the intermediate production of metadata-tagged signals and contextualized events.
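To give a flavor of the MapReduce Engineer’s work, here is a minimal sketch of term extraction in the classic mapper/reducer style, with plain Python standing in for a Hadoop streaming job. The tokenization and stop-word list are illustrative assumptions, not the platform’s actual implementation:

```python
import re
from collections import Counter
from itertools import chain

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are"}  # illustrative only

def mapper(document: str):
    """Emit (term, 1) pairs for each non-stop-word token in a document."""
    for token in re.findall(r"[a-z']+", document.lower()):
        if token not in STOP_WORDS:
            yield token, 1

def reducer(pairs):
    """Sum counts per term, mimicking the reduce side of a streaming job."""
    counts = Counter()
    for term, count in pairs:
        counts[term] += count
    return counts

documents = [
    "Deep web signals are mostly noise",
    "Signals in the noise drive actionable intelligence",
]
term_counts = reducer(chain.from_iterable(mapper(d) for d in documents))
print(term_counts.most_common(5))
```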


Regardless of how data science is being used to derive insights, at the desktop or throughout the enterprise, capabilities become the building blocks for effective solution development. Independent of the actual implementation (e.g., there are many different ways to perform anomaly detection), they are the scalable building blocks that transform raw data into the intelligence needed to realize true actionable insights.

Deep Web Intelligence Platform: 6 Plus Capabilities Necessary for Finding Signals in the Noise


Over the last several months I have been involved with developing unique data science capabilities for the intelligence community, ones specifically based on exploiting insights derived from the open source intelligence (OSINT) found in the deep web. The deep web is World Wide Web (WWW) content that is not part of the Surface Web, which is indexed by standard search engines. It is usually inaccessible through traditional search engines because of the dynamic characteristics of its content and the impermanent nature of its URLs. Spanning over 7,500 terabytes of data, it is the richest source of raw material that can be used to build out value.


One of the more important aspects of intelligence is being able to connect multiple seemingly unrelated events together within a time frame amenable to making actionable decisions. This capability is the optimal blend of man and machine, enabling customers to know more and know sooner. It is in the low signals found in the deep web that one can use the behavioral sciences (psychology and sociology) to extract outcome-oriented value.


Data on the web is mostly noise, which can be unique but is often of low value. Unfortunately, the index engines of the world (Google, Bing, Yahoo) add only marginal value to the few data streams that matter to any valuation process. Real value comes from correlating event networks (people performing actions) through deep web signals, which are not the purview of traditional search engines.
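As a toy illustration of correlating event networks, the sketch below links entities that appear in different deep-web events falling within a shared time window. The entity names, sample events, and window size are made-up assumptions, not real intelligence data:

```python
from collections import defaultdict
from datetime import datetime, timedelta
from itertools import combinations

# Hypothetical extracted events: (timestamp, entities mentioned in that event)
events = [
    (datetime(2014, 1, 5), {"vendor_x", "forum_user_17"}),
    (datetime(2014, 1, 6), {"forum_user_17", "shipping_co"}),
    (datetime(2014, 3, 2), {"vendor_x", "shipping_co"}),
]

def correlate(events, window=timedelta(days=7)):
    """Link entities that appear in different events occurring within the same time window."""
    edges = defaultdict(int)
    for (t1, ents1), (t2, ents2) in combinations(events, 2):
        if abs(t2 - t1) <= window:
            for a in ents1:
                for b in ents2:
                    if a != b:
                        edges[tuple(sorted((a, b)))] += 1
    return dict(edges)

print(correlate(events))  # pairs of entities connected through nearby events
```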


These deep web intelligence capabilities can be achieved in part through the use of machine learning enabled, data science driven, and Hadoop-oriented enterprise information hubs. The platform supports the six plus essential capabilities for actionable intelligence operations:

1. Scalable Infrastructure – Industry standard hardware supported through cloud-based infrastructure providers that scales linearly with analytical demands.

2. Hadoop – Allows computation to occur next to data storage and enables schema-on-read – data is stored in its native raw format.

3. Enterprise Data Science – Scalable exploratory methods, predictive algorithms, and prescriptive and machine learning techniques.

4. Elastic Data Collection – In addition to pulling data from third party sources through APIs, bespoke data collection through scraping web services enables data analyses not possible within traditional enterprise analytics groups (a minimal sketch follows this list).

5. Temporal/Geospatial/Contextual Analysis – The ability to localize events to a region and a specific context during a specified time (past, present, future).

6. Visualization – Effective visualization that tailors actionable results to individual needs.

The Plus – data, Data, DATA. Without data, lots of disparate data, data science platforms are of no value.
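To make capabilities 2 and 4 concrete, here is a minimal, illustrative sketch of bespoke collection with schema-on-read storage: raw responses are landed untouched in their native format, and a schema is only imposed when the data is read for analysis. The URL, landing-zone path, and field names are hypothetical placeholders, not part of any actual platform:

```python
import json
import time
import urllib.request
from pathlib import Path

RAW_LANDING_ZONE = Path("/data/raw/osint")  # hypothetical landing zone (HDFS in practice)

def collect(url: str) -> Path:
    """Land the raw response untouched -- no schema is imposed at write time."""
    raw_bytes = urllib.request.urlopen(url, timeout=30).read()
    out_path = RAW_LANDING_ZONE / f"snapshot_{int(time.time())}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_bytes(raw_bytes)
    return out_path

def read_with_schema(path: Path) -> list:
    """Schema-on-read: interpret the raw file only when it is analyzed."""
    records = json.loads(path.read_text())
    # Project just the fields this analysis cares about (hypothetical field names).
    return [{"author": r.get("author"), "text": r.get("body")} for r in records]

# Usage (hypothetical endpoint):
# raw_file = collect("https://example.com/api/posts.json")
# events = read_with_schema(raw_file)
```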

Deep Web Intelligence Architecture 01

Today’s executive, inundated with TOO MUCH DATA, has limited ability to synthesize the trends and actionable insights that drive competitive advantage. Traditional research tools and internet and social harvesters do not correlate or predict trends; they look at hindsight or, at best, skim the surface of things. A newer approach, combining the behavioral analyses achievable through people with the machine learning found in scalable computational systems, can bridge this capability gap.

Enterprise Data Science (EDS) – Updated Framework Model


Companies continue to struggle with how to implement an organic and systematic approach to data science. As part of an ongoing trend to generate new revenues through enterprise data monetization, product and service owners have turned to internal business analytics teams for help, only to find their individual efforts fall very short of achieving business expectations. Enterprise Data Science (EDS), based on the proven techniques of the Cross Industry Standard Process for Data Mining (CRISP-DM), is designed to overcome most of the traditional limitations found in common business intelligence units.

The earlier post “Objective-Based Data Monetization: A Enterprise Approach to Data Science (EDS)” was an initial cut at describing the framework. It defines data monetization, hypothesis-driven assessments, the objective-based data science framework, and the differences between business intelligence and data science. While it was a good first cut, several refinements (below) have been made to better clarify each phase and their explicit interactions.

Data Science Architecture Insurance Prebind Example

In addition to restructuring the EDS framework and its insurance pre-bind data example (all the data that goes into quoting insurance policies), it was important to document the data science processes that come with an overall enterprise solution (below).

Data Science Process


Heilmeier Catechism: Nine Questions To Develop A Meaningful Data Science Project


As director of ARPA in the 1970s, George H. Heilmeier developed a set of questions that he expected every proposal for a new research program to answer. No exceptions. He referred to them as the “Heilmeier Catechism,” and they are now the basis of how DARPA (Defense Advanced Research Projects Agency) and IARPA (Intelligence Advanced Research Projects Activity) operate. Today, it’s equally important to answer these questions for any individual data science project, both for yourself and for communicating to others what you hope to accomplish.

While there have been many variants on Heilmeier’s questions, I still prefer to use the original catechism to guide the development of my data science projects:

1. What are you trying to do? Articulate your objectives using absolutely no jargon.

2. How is it done today, and what are the limits of current practice?

3. What’s new in your approach and why do you think it will be successful?

4. Who cares?

5. If you’re successful, what difference will it make?

6. What are the risks and the payoffs?

7. How much will it cost?

8. How long will it take?

9. What are the midterm and final “exams” to check for success?

Each question is critical in the chain of events leading to success, but numbers 3 and 5 are most aligned to the way business leaders think. Data science is fraught with failure; that is the nature of science. As such, business leaders are still a bit (truthfully, a lot) suspicious of how data science teams do what they do and how their results would integrate into the larger enterprise to solve real business problems. Part of the data science sales cycle, addressed by question 3, needs to address these concerns. For example, in the post “Objective-Based Data Monetization: A Enterprise Approach to Data Science (EDS),” I present a model for scaling out the results.

In terms of the difference a project makes (question 5), we need to be sure to cover the business as well as the technical differences. The business differences are the standard three: impact on revenue, margin (combined ratio for insurance), and market share. If there is no business value (data/big data economics), then your project is a sunk cost that somebody else will need to make up for.

Here is an example taken from a project proposed in the insurance industry. Brokers are third party entities that sell insurance products on behalf of a company. They are not employees and often operate under the governance of underwriters (employees who sell similar products). There are instances where brokers “shop” around looking to get coverage for a prospect that might carry above average risk (e.g., files too many claims, is in a high-risk business, etc.). They do this by manipulating answers to pre-bind questions (those asked prior to issuing a policy) in order to create a product that will not necessarily need underwriter review and/or approval. This project is designed to help stop this practice, which would improve the business’s financial fundamentals. Here is Heilmeier’s Catechism for the Pre-Bind Gaming Project:

1. What are you trying to do? Automate the identification of insurance brokers that use corporate policy pricing tools as a means to undersell through third party providers.

2. How is it done today? Corporate underwriters observe broker behaviors and pass judgement based on personal criteria.

3. What is new in your approach? Develop signature algorithms, based on the analysis of gamer/non-gamer pre-bind data, that can be implemented across enterprise product applications (a minimal sketch of this idea follows the list).

4. Who cares? Business executives – CEO, President, CMO, and CFO.

5. What difference will it make? In an insurance company that generates $350M in premiums at a combined ratio (margin) of 97%, addressing this problem could result in an additional $12M to $32M of incremental revenue while improving the combined ratio to 95.5%.

6. What are the risks and payoffs? Risks – not collecting or having access to relevant causal data reflecting the gamers’ patterns. Payoffs – improved revenue and combined ratios.

7. How much will it cost? Proof of concept (POC) will cost between $80K and $120K. Scaling the POC into the enterprise (implementing algorithms into 5 to 10 product applications) will cost between $500K and $700K.

8. How long will it take? The proof of concept (POC) will take between 8 and 10 weeks. Scaling the POC into the enterprise will take between 3 and 7 months.

9. What are the midterm and final checkpoints for success? The POC will act as the initial milestone that demonstrates gaming algorithms can be identified with existing data.
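To illustrate question 3, here is a minimal, hypothetical sketch of a pre-bind gaming signature: score each broker by how often submitted pre-bind answers are revised sharply downward in risk after an initial quote. The field names, threshold, and scoring rule are illustrative assumptions only, not the proposed production algorithm:

```python
from collections import defaultdict

# Hypothetical pre-bind submissions: (broker_id, application_id, declared_risk_score).
# Multiple rows per application represent revised answers to the pre-bind questions.
submissions = [
    ("broker_A", "app_1", 0.82), ("broker_A", "app_1", 0.41),  # large downward revision
    ("broker_A", "app_2", 0.77), ("broker_A", "app_2", 0.35),
    ("broker_B", "app_3", 0.55), ("broker_B", "app_3", 0.52),  # minor revision
]

def gaming_scores(rows, drop_threshold=0.25):
    """Fraction of a broker's applications whose risk score drops sharply after revision."""
    by_app = defaultdict(list)
    for broker, app, risk in rows:
        by_app[(broker, app)].append(risk)

    flagged = defaultdict(int)
    totals = defaultdict(int)
    for (broker, _), risks in by_app.items():
        totals[broker] += 1
        if max(risks) - risks[-1] >= drop_threshold:  # final answer much lower than peak
            flagged[broker] += 1
    return {broker: flagged[broker] / totals[broker] for broker in totals}

print(gaming_scores(submissions))  # e.g., {'broker_A': 1.0, 'broker_B': 0.0}
```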

Regardless of whether you use Heilmeier’s questions or another research topic development methodology (e.g., The Craft of Research), it is important to systematically address the who, what, when, where, and why of the project. While a firm methodology does not guarantee success, not addressing these nine questions is sure to put you on a risky path, one that will take work to get off of.


Five Graphical Perception Best Practices Every Data Scientist Should Know

Graphical perception – the visual encoding of data on graphs – is an important consideration in data exploration and presentation visualization. In their seminal work, “Graphical Perception: Theory, Experimentation and Application to the Development of Graphical Methods,” William Cleveland and Robert McGill lay the foundational theory for setting guidelines on graph construction.

Their graphical perception research enables the data scientist to maximize the likelihood of value transfer for the incurred study cost (AKA data monetization). As we encode information (relevant data) into graphics, the viewer has to decode the data and interpret the results. This is an asymmetric and error-prone knowledge transformation. Fortunately, Cleveland and McGill have identified several best practices that reduce the likelihood of viewer misperception (a minimal plotting sketch follows the list below).

Five Graphical Perception Best Practices:

  • Use common scales when possible – comparisons across different scales, especially offset ones, are hard
  • Use position comparisons on identical scales when possible
  • Limit the use of length comparisons – proportions are difficult to interpret
  • Limit pie charts – angular and curvature comparisons are hard to interpret
  • Do not use 3-D charts or shading
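Here is a minimal, illustrative sketch of the first two practices using matplotlib (the data is made up): the same series are shown once as position comparisons on a common scale and once as a pie chart, making it easy to see which encoding is faster to decode.

```python
import matplotlib.pyplot as plt
import numpy as np

categories = ["Q1", "Q2", "Q3", "Q4"]
series_a = np.array([12, 17, 9, 14])   # made-up values
series_b = np.array([10, 15, 13, 11])

fig, axes = plt.subplots(1, 2, figsize=(9, 4))

# Preferred: position comparisons on a single common scale.
x = np.arange(len(categories))
axes[0].plot(x, series_a, "o-", label="Product A")
axes[0].plot(x, series_b, "s--", label="Product B")
axes[0].set_xticks(x)
axes[0].set_xticklabels(categories)
axes[0].set_title("Position on a common scale (easy to decode)")
axes[0].legend()

# Discouraged: angular comparisons force the viewer to judge angles, not positions.
axes[1].pie(series_a, labels=categories, autopct="%1.0f%%")
axes[1].set_title("Pie chart (angles are hard to compare)")

plt.tight_layout()
plt.show()
```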

Elementary Perceptual Task


11 Steps To Finding Data Scientists

Data scientist recruiting can be a challenging task, but not an impossible one. Here are eleven tips that can get you going in the right recruiting direction:

1. Focus recruiting on universities with top-notch computer programming, statistics, and advanced science programs. For example, Stanford, MIT, Berkeley, and Harvard are some of the top schools in the world. Also consider a few other schools with proven strengths in data analytics, such as North Carolina State, UC Santa Cruz, University of Maryland, University of Washington, and UT Austin.

2. Look for recruits in the membership rolls of user groups devoted to data science tools. Two excellent places to start are R User Groups (for the open-source statistical tool favored by data scientists) and Python Interest Groups (for PIGies). Revolutions provides a list of known R User Groups, as well as information about the R community.

3. Search for data scientists on LinkedIn, many of whom have formed formal groups.

4. Hang out with data scientists at Strata, Structure:Data, and Hadoop World conferences and similar gatherings, or at informal data scientist “meet-ups” in your area. The R User Group Meetup Groups page is an excellent source for finding meetings in your particular area.

5. Talk with local venture capitalists (Osage, NewSprings, etc.), who are likely to have received a variety of big data proposals over the past year.

6. Host a competition on Kaggle (online data science competitions) and/or TopCoder (online coding competitions), the analytical and coding competition websites. One of my favorite Kaggle competitions was the Heritage Provider Network Health Prize – identifying patients who will be admitted to a hospital within the next year using historical claims data.

7. Candidates need to code. Period. So don’t bother with any candidate who doesn’t know at least one formal language (R, Python, Java, etc.). Coding skills don’t have to be at a world-class level, but they should be good enough to get by (hacker level).

8. The old saying that “we start dying the day we stop learning” is especially true of the data science space. Candidates need a demonstrable ability to learn about new technologies and methods, since the field of data science is changing exponentially. Have they earned certificates from Coursera’s Data Science or Machine Learning courses, contributed to open-source projects, or built an online repository of code or data sets (e.g., on Quandl) to share?

9. Make sure a candidate can tell a story with the data sets they are analyzing. It is one thing to do the hard analytical work, but another to provide a coherent narrative about key insights (AKA they can tell a story). Test their ability to communicate with numbers, visually, and verbally.

10. Candidates need to be able to work in the business world. Take a pass on those candidates who get stuck for answers on how their work might apply to your management challenges.

11. Ask candidates about their favorite analysis or insight. Every data scientist should have something in their insights portfolio, applied or academic. Have them break out the laptop (or iPad) and walk through their data sets and analyses. It doesn’t matter what the subject is, just that they can walk through the complete data science value chain.