The Last Iceberg – How Artificial Intelligence Is Unlocking Humanity's Deep-Frozen Secrets

Icebergs are a common meme throughout the Internet. You see them everywhere, from depictions of social media to human behavior. They are used to contrast the knowledge we have, above the surface, with the things we don't know, below the surface. Icebergs are interesting. They're secretive. The ten percent we see is the literal tip of what is possible. Below the waterline, just out of sight, are dark secrets. Secrets that are often out of reach and unusable. That is, until now.

Artificial intelligence (AI) is changing the way we live. AI is doing more than helping us find patterns in data or make better decisions; it is unlocking unexpected insights and extending our knowledge in ways that only humans were once capable of doing… capable of controlling. Before AI, human beings had to use their own minds to harvest knowledge from everyday life. It is a hard process. A process that required countless hours of dedication to discover a single meaningful insight that could lead to massive improvements in our lives. But this is now changing as we begin to rely on new cognitive technologies that generate knowledge for us… knowledge without us.

AI is melting those data icebergs. In essence, it is becoming the global warming of the knowledge age, unlocking their deep hidden secrets. AI is unleashing those hidden insights, producing more knowledge that is then used to melt even more icebergs. It is an exothermic knowledge activity, one that generates exponentially more insights than are consumed in the discovery process. And herein lies a devastating, potentially life-ending, problem.

As we rely, and over-rely, on new cognitive technologies, we lose our ability to discover new knowledge ourselves. The brain is an organ, and its capabilities are lost when not used. Take, for example, the slide rule. Most people today do not know what a slide rule is, let alone how to use one. This simple mechanical device, seen in the hands of most engineers in the 1970s, can perform amazing mathematics. With just two opposing rulers, one can do multiplication, division, logarithms, square roots, n-th roots, and more. It requires no batteries, no internet connection, and does not fail. It is brilliant in its complex simplicity. But today almost nobody knows how to use it. Why?

Figure 1: Multiplication using the C and D scales

For the slide rule, we have lost this cognitive ability as a society because we have outsourced it to other systems, like the calculator and the spreadsheet. These are productivity tools that were invented to help us unlock knowledge more efficiently. But the cost of using them is that we are no longer capable of exercising the part of the brain that once physically discovered insights through mechanical manipulation. Artificial intelligence is now accelerating this kind of cognitive decay.
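The trick behind the C and D scales can be sketched in a few lines of Python: a slide rule multiplies by adding logarithms, which is exactly what sliding one logarithmic scale along another does mechanically.

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply the way a slide rule does: add the logarithms
    (slide the C scale along the D scale), then take the antilog."""
    return 10 ** (math.log10(a) + math.log10(b))

product = slide_rule_multiply(3.0, 4.0)  # log10(3) + log10(4) = log10(12)
```

The same addition of logs, read off physical scales instead of computed, is why the device needs no batteries.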

As humans rely more on AI to discover knowledge, we slowly lose our own cognitive ability, our own mental capacity, to discover those insights ourselves. Our brains cognitively weaken. AI is, in essence, creating a defect in our executive functioning. Unchecked over time, we will become over-dependent on AI to identify the new things that lead to a better life, eventually evolving to a point where we could literally die without this AI ability. Or even die because of it.

This uncontrolled release of knowledge can be a destructive, chaotic process. We see a similar outcome with uranium. With the right equipment, one can control how neutrons are absorbed by uranium isotopes, producing a stable reaction that generates life-giving energy. Left unchecked, however, the same neutrons interacting with the same isotopes can produce devastating nuclear events. Controlled reactions lead to life; uncontrolled reactions lead to death.

Can humans survive the chaos of a world where AI is unlocking more knowledge than humans can handle? A future world where the available knowledge is greater than the questions we can ask? Physics tells us we cannot. History shows us it is unlikely. AI unchecked and ungoverned can be the nuclear weapon we use on ourselves, one that will eventually melt not only every last iceberg, but society itself.

America is Roadkill on China’s Path to Artificial Intelligence Dominance


“That’s a dead armadillo,” I said, pointing with my finger as my arm hung out the car window. Back in the day, my dad would take our family on vacation drives along iconic Route 66. We drove for hours during the day, stopping at bizarre roadside attractions and sleeping the night in one of those teepee motels. Every summer it was the same drive. The same attractions. The same lodging. The only thing that changed was the roadkill, the lifeless flattened animal bodies, that my sister and I would try to identify as we motored along. As much as they were all different, they were all the same. Their tiny bodies were slower than the massive cars and trucks plowing along the highway. They never had a chance.

America faces a similar challenge as we navigate the competitive roadscape leading toward an artificially intelligent world. As a country, America stands at the side of a heavily travelled global AI highway, tepidly stepping out into strategic traffic and then back onto the tactical breakdown lane. Back to a slow pace and a safer place. All the while, massive AI achievements from other countries are zipping by with amazing speed, pushing us further back onto the side of the road, keeping us from making our move, and giving us a false sense of safety. Wait there long enough, and we end up like the dead armadillos of my childhood.

America is dangerously lagging other parts of the world when it comes to treating AI as a strategic asset. China, for example, seeks to dominate the global AI industry. We do not. They are treating the development of AI as an arms race, building massive government-supported industries that drive toward their strategic endgame: own AI, around the world, and have the resources to support it. We have no stated strategy. To support these goals, and to win the inevitable zero-sum competitive games with America, China has released a national AI development strategy: a set of capabilities, partners, and alliances intended to grow a China-centric $23B AI industry by 2020 and a $59B industry by 2025. Local and state governments are supporting this strategy as well, creating educational and delivery alliance partners. China’s 1.4 billion people are a data gold mine for building AI. For China, this is a national strategic initiative. A Pax Americana of Asian AI. We don’t have one. They do. That’s an American problem. A strategic problem.

The lack of a national strategic program matters because AI is a unique strategic resource. It is not like oil, water, or food. Traditional strategic resources do not beget more of the resource. Having a reserve of oil does not in itself generate more oil. Those resources are finite and consumed. AI is different. AI produces more AI. AI is an exothermic resource, generating more than it consumes. It produces more knowledge, more insights, more advantages for the user. Having a strategic AI lead means one can produce more AI in the future, faster than those who don’t have it or are just starting.

John Boyd, a United States Air Force colonel, studied the tactical effects of strategically out-thinking your enemy. He determined that when one operates at a faster tempo or rhythm than the adversary, you win and they lose the zero-sum competitive game. AI is a catalyst for faster tempos and rhythms. But unlike other processes, like the OODA loop that Boyd studied, AI exponentially improves its results with each cycle, each evolution. This limits effective counterattacks and limits the effective transformations that could equalize future competitive engagements. He who owns AI owns the world.

America needs a National AI Strategy (NAiS). We need to treat AI as a strategic resource, just as we do oil, uranium, and electricity. We need a clear endgame that results in us driving AI, in all places, with the resources to sustain it. America needs to build bigger and badder AI capabilities than our enemies, whoever they are and wherever they exist. We need to create effective AI partnerships and strong, dominating AI alliances. We need the strength to dominate the AI roadscape. Sustain a faster AI tempo. Doing anything less will be catastrophic. Doing less will jeopardize our way of life. Doing less could have our children one day saying, “Look, Daddy, is that American leadership dead on the side of the AI superhighway?”

Review – A Spy’s Guide to Strategy by John Braddock. Brain food for the hungry.

Deadly conflict is inevitable when companies compete for the same client. Your team, your alliance, will fight their team, their alliance, for people, places, and things on a battlefield that you may or may not choose. There will be a winner. There will be losers. With the right strategy, the endgame ends in your victory and their defeat. So go the ways of strategic games. If you want to win, you need to know strategy.

Really understanding strategy is a hard problem. One that itself requires a strategy. Amazon lists over 200,000 books on strategy. Finding a book is easy. It’s a simple query executed in a text field on a web page. It returns a collection of data: authors, summaries, recommendations. But picking one of them requires a decision. A decision requires analysis, a point of view. Deciding on a book can therefore also be easy, if the right analysis of the proper data is performed. Learning about strategy requires tactics (more on this later).

This interplay between strategy and tactics has never been better illustrated than through the work of John Braddock. John was a case officer at the CIA. He developed, recruited, and handled sources on weapons proliferation, counter-terrorism, and political-military issues. He was a master spy. And as he points out, master spies are master strategists and master tacticians.

Through John’s second book, “A Spy’s Guide to Strategy,” we learn about strategy through the eyes of a CIA case officer. A master spy. A field operative. John teaches us that strategy is imagination and reasoning, separate but connected. Strategy is looking forward (imagination) and reasoning backwards.

He shows us how to reason backwards from our endgame, through the zero-sum games where our battles will take place. We continue backwards through positive-sum games, where our alliances are built. Farther backwards still into boss-games, which are inevitable. To win the boss-games, we might have to win more zero-sum games, more positive-sum games, and maybe more boss-games. A cycle.

Once you reason backwards far enough, you must move forward again by taking action. You turn decisions into actions. Actions into results. John shows how these results lead to yet more strategy. More tactics. This framework is beautiful in its simplicity and applicability to everyday life. Corporate life. Home life.

In the spring of 2017, I watched my company change. A good change. No, a great change. Our leadership shifted course, from systems integration to digital transformation. We had a new endgame, which required a new strategy. We wanted companies to hire us for their transformation activities. People. We wanted to dominate the North American and European markets. Places. We wanted them to pay us to do this. Things. As John says, we needed a new strategy for these people, places, and things.

We weren’t alone in this game; we had competitors. Major competitors. Ones with hundreds of thousands of people and even more alliances. They wanted the same clients, in the same regions, and the same money. We were heading for conflict. Global conflict. We were going to play a zero-sum game, a game in which they held more market share. A game that, if we lost, could see us vanquished. We had a problem. A strategic problem. So we reasoned backwards.


We needed new and bigger alliances. Alliances formed on solid and unshakable partnerships. These partnerships needed to bring new capabilities that could be used in our future zero-sum battles. So we formed alliances around artificial intelligence (AI). We partnered with mega technology companies like Google, Amazon, and Microsoft. Behemoths. We also developed specialized alliances with industry leaders in countering human trafficking, substance use disorder recovery, and financial crimes. Good for flanking. Within our company we reorganized; we played boss-games. We formed new teams, which required new leadership, new bosses.


Over the last several years, we’ve actioned forward through many competitive zero-sum conflicts using our new AI strategy. We lost some, but we won even more. In losing and winning, we continue to imagine forward, testing our endgame and assessing theirs. We reason backwards through our alliances, finding new partners and new industries. We make informed decisions; we action forward. The strategic cycle continues.

So, if you are a master strategist, read John’s book to learn how Osama bin Laden used this strategic framework to position himself as the next caliph in his Caliphate endgame. If you are a master executive, read this work to really understand the dynamics of strategy. If you are both, drop me a note on what that world is like.

Critical Capabilities for Enterprise Data Science

In the article “46 Critical Capabilities of a Data Science Driven Intelligence Platform,” an original set of critical enterprise capabilities was identified. In enterprise architecture language, capabilities are “the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services.”(1) In essence, they describe the what of the activity, but not necessarily the how. While individually effective, the set was nevertheless incomplete. Below is an update in which several new capabilities have been added and others relocated. Given my emphasis on deep learning, composed of cognitive and intelligence processes, I have added genetic and evolutionary programming as a set of essential capabilities.


The implementation architecture has also been updated to reflect the application of Spark and SparkR.


46 Critical Capabilities of a Data Science Driven Intelligence Platform

Data science is much more than a singular computational process. Today, it is a noun that collectively encompasses the ability to derive actionable insights from disparate data through mathematical and statistical processes, scientifically orchestrated by data scientists and functional behavioral analysts, all supported by technology capable of scaling linearly to meet the exponential growth of data. One such set of technologies can be found in the Enterprise Intelligence Hub (EIH), a composite of disparate information sources, harvesters, Hadoop (HDFS and MapReduce), enterprise R statistical processing, metadata management (business and technical), enterprise integration, and insights visualization, all wrapped in a deep learning framework. However, while this technical stuff is cool, Enterprise Intelligence Capabilities (EIC) are an even more important characteristic driving the successful realization of the enterprise solution.


In enterprise architecture language, capabilities are “the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services.”(1) In essence, they describe the what of the activity, but not necessarily the how. For a data science-driven approach to deriving insights, these are the collective sets of abilities that find and manage data, transform data into features capable of being exploited through modeling, model the structural and dynamic characteristics of phenomena, visualize the results, and learn from the complete round-trip process. The end-to-end process can be sectioned into Data, Information, Knowledge, and Intelligence.


Each of these atomic capabilities can be used by several key resources to produce concrete intermediate and final intelligence products. The Platform Engineer (PE) is responsible for harvesting and maintaining raw data, ensuring well-formed metadata. For example, they would write the Python scripts used by Flume to ingest Reddit dialogue into the Hadoop ecosystem. The MapReduce Engineer (MR) produces features based on the imported data sets. One common function is extracting topics through MapReduce-programmed natural language processing on document sets. The Data Scientist (DS) performs statistical analyses and develops machine learning algorithms. Time series analysis, for example, is often used by the data scientist as a basis for identifying anomalies in data sets. Taken together, Enterprise Intelligence Capabilities can transform generic text sources (observations) into actionable intelligence through the intermediate production of metadata-tagged signals and contextualized events.
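As a minimal sketch of the time series anomaly detection mentioned above (the specific method is my assumption; the original names no algorithm), a rolling z-score flags points that deviate sharply from the trailing window's mean:

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's
    mean by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A mostly flat signal with one obvious spike at index 10.
signal = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0, 25.0, 10.1]
flagged = zscore_anomalies(signal)  # -> [10]
```

A production pipeline would run this kind of scoring over features the MR engineer has already extracted, but the statistical core is no more than this.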


Regardless of how data science is being used to derive insights, at the desktop or throughout the enterprise, capabilities become the building blocks for effective solution development. Independent of actual implementation (e.g., there are many different ways to perform anomaly detection), they are the scalable building blocks that transform raw data into the intelligence needed to realize true actionable insights.

Deep Web Intelligence Platform: 6 Plus Capabilities Necessary for Finding Signals in the Noise


Over the last several months I have been involved in developing unique data science capabilities for the intelligence community, ones specifically based on exploiting insights derived from the open source intelligence (OSINT) found in the deep web. The deep web is World Wide Web (WWW) content that is not part of the Surface Web, which is indexed by standard search engines. It is usually inaccessible to traditional search engines because of the dynamic nature of its content and the non-persistent nature of its URLs. Spanning over 7,500 terabytes of data, it is the richest source of raw material that can be used to build out value.


One of the more important aspects of intelligence is being able to connect multiple seemingly unrelated events together within a time frame amenable to making actionable decisions. This capability is the optimal blend of man and machine, enabling customers to know more and know sooner. It is only in the low signals found in the deep web that one can use the behavioral sciences (psychology and sociology) to extract outcome-oriented value.


Data on the web is mostly noise, which can be unique but is often of low value. Unfortunately, the index engines of the world (Google, Bing, Yahoo) add marginal value to the very few data streams that matter to any valuation process. Real value comes from correlating event networks (people performing actions) through deep web signals, which are not the purview of traditional search engines.
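One minimal way to picture "correlating event networks" is to link events that share an actor; the events and the linking rule below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical events: (event_id, actor, action) tuples harvested as signals.
events = [
    ("e1", "actor_a", "registered domain"),
    ("e2", "actor_b", "posted listing"),
    ("e3", "actor_a", "posted listing"),
    ("e4", "actor_c", "sent payment"),
]

def correlate(events):
    """Build an event network: an edge joins two events with a common actor."""
    by_actor = defaultdict(list)
    for event_id, actor, _ in events:
        by_actor[actor].append(event_id)
    edges = set()
    for ids in by_actor.values():
        edges.update(combinations(sorted(ids), 2))
    return edges

network = correlate(events)  # -> {("e1", "e3")}
```

Real correlation would join on many attributes (aliases, locations, time windows) rather than a single actor field, but the output is the same kind of artifact: a network of otherwise unrelated-looking events.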


These deep web intelligence capabilities can be achieved in part through the use of machine learning-enabled, data science-driven, Hadoop-oriented enterprise information hubs. The platform supports the six-plus essential capabilities for actionable intelligence operations:

1. Scalable Infrastructure – Industry-standard hardware, supported through cloud-based infrastructure providers, that scales linearly with analytical demands.

2. Hadoop – Allows computation to occur next to data storage and enables schema on read – data is stored in its native raw format.

3. Enterprise Data Science – Scalable exploratory methods, predictive algorithms, and prescriptive and machine learning techniques.

4. Elastic Data Collection – In addition to pulling data from third-party sources through APIs, bespoke data collection through scraping web services enables data analyses not possible within traditional enterprise analytics groups.

5. Temporal/Geospatial/Contextual Analysis – The ability to localize events to a region, within a specific context, during a specified time (past, present, future).

6. Visualization – Effective visualization that tailors actionable results to individual needs.

The Plus – data, Data, DATA. Without data, lots of disparate data, data science platforms are of no value.
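The schema-on-read idea in capability 2 can be sketched as follows (a toy simplification; a real Hadoop deployment applies the same idea at cluster scale): raw records are stored untouched, and structure is imposed only when a query reads them.

```python
import json

# Store raw records untouched (schema-on-write would force a fixed table here).
raw_store = [
    '{"user": "u1", "text": "hello", "ts": 1}',
    '{"user": "u2", "ts": 2}',  # missing "text" field -- still storable as-is
]

def read_with_schema(raw_store, fields):
    """Impose a schema only at read time, tolerating missing fields."""
    return [
        {f: rec.get(f) for f in fields}
        for rec in (json.loads(line) for line in raw_store)
    ]

rows = read_with_schema(raw_store, ["user", "text"])
```

Because nothing is thrown away at ingest, a later analysis can come back and read the same raw store with a different, richer schema.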

Figure: Deep Web Intelligence Architecture

Today’s executives, inundated with TOO MUCH DATA, have limited ability to synthesize the trends and actionable insights that drive competitive advantage. Traditional research tools and internet and social harvesters do not correlate or predict trends. They look at hindsight or, at best, stay at the surface of things. A newer approach, combining the behavioral analyses achievable by people with the machine learning found in scalable computational systems, can bridge this capability gap.

Insurance Product Recommender (IPR) – Growing Value Through Insights


The insurance industry can grow additional revenue by incorporating recommender systems into existing applications and across enterprise solutions as a means of monetizing product-related data. Some have estimated that national insurance companies that have not implemented this category of data monetization could add 10% to the top line at current combined ratios (i.e., margin). For a small business insurance company with $500M in premiums, this means potential additional premium growth of $50M on existing operations.
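The arithmetic behind that claim is simple (the 10% uplift is the estimate cited above, not a guarantee):

```python
def projected_uplift(annual_premiums: float, uplift_rate: float) -> float:
    """Projected additional premium from recommender-driven cross-selling."""
    return annual_premiums * uplift_rate

# $500M book of premiums, 10% estimated top-line uplift.
extra = projected_uplift(500_000_000, 0.10)  # -> 50,000,000.0
```

Because the uplift applies at the current combined ratio, the added premium flows through at existing margins rather than requiring new operations.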

Predictive systems like recommender systems have become an extremely important source of new revenue growth over the last few years. Recommender systems are a type of “information filtering system” that seeks to predict the rating or preference a user would give to an item (such as music, books, or movies) or social element (e.g., people or groups) they have not yet considered, using a model built from the characteristics of the items (content-based approaches) or the user’s social environment (collaborative filtering approaches). A familiar example is Amazon’s “Customers Who Bought This Item Also Bought” section, which identifies other products (books, electronics, etc.) thought to be of interest to the prospect as well.
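A minimal sketch of the collaborative filtering approach (the purchase histories below are fabricated for illustration): recommend the items favored by users whose past purchases look most like yours.

```python
from math import sqrt

# Hypothetical purchase histories: user -> set of product IDs.
purchases = {
    "u1": {"liability", "property", "cyber"},
    "u2": {"liability", "property"},
    "u3": {"auto"},
}

def cosine(a, b):
    """Cosine similarity between two sets of purchased items."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user, purchases):
    """Score unseen items by the similarity of the users who bought them."""
    seen = purchases[user]
    scores = {}
    for other, items in purchases.items():
        if other == user:
            continue
        sim = cosine(seen, items)
        for item in items - seen:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

suggestions = recommend("u2", purchases)  # "cyber" ranks first
```

An Insurance Product Recommender would replace these sets with pre-bind, claims, and product-purchase data, but the core ranking logic is the same.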


Because of their direct impact on top-line revenue through the organic use of enterprise data, recommender systems are increasingly being used in other industries, such as e-commerce websites, giving businesses a strategic advantage over businesses without them. These systems involve industry-specific predictive models, heuristic search, data collection, user interaction, and model maintenance, developed through the emerging field of data science and supported by new big data platforms. No two are the same, and each becomes part of a competitive differential that can only be attained by analyzing data internally across the enterprise and externally throughout social networks.

One example that could transform the insurance industry is the Insurance Product Recommender (IPR). This data science-driven, risk-profiling, recommender-based application helps underwriters and brokers identify industry-specific client risks and then pinpoint cross-selling and up-selling opportunities by offering access to collateral insurance products, marketing materials, and educational materials that support the complete sales cycle. As more products are sold to an ever-increasing customer base, the recommendations become more reliable, resulting in exponentially increasing revenue realization.

The insurance industry, driven by the economic calculus of maximizing combined ratio (margin) for a given risk profile secured at a given premium (revenue), can use predictive systems like recommender systems to optimize value by leveraging existing untapped data sources (e.g., pre-bind, claims, product purchases). These systems can become the pathway to increased client retention (renewals) and client satisfaction while growing risk coverage wallet share within the industry.

Data Monetization: 30 percent of businesses will monetize data and information assets in 4 years

There are three data points driving the business discussion around big data:

1. Only 1% of the world’s data is being analyzed (IDC), while at the same time 100% of the data is costing companies CapEx and OpEx every day.

2. Consumers and businesses are beginning to recognize that the insights locked in data reflecting personal usage, location, profile, and activity have a tangible market value. This is especially true when you apply the Power of Three principle to data sets.

3. As a result, 30% of businesses will monetize their data and information assets by 2016 (Gartner), up from today’s 10% baseline.

As big data management consultants and data scientists, working with lines of business, begin to address these drivers, we should expect the following solutions to fundamentally change the way we monetize our businesses (today mostly through applications and people):

:: Companies will look to drive incremental revenue by placing their point-of-sale (POS), internal social, relationship-oriented, and other data online for business partners to subscribe to

:: Companies will launch ventures that package and resell publicly available data (creating new data sets and insights), or use it to launch new information-based products

:: Information resellers will arise to help organizations develop and execute data and information asset monetization strategies

:: Information product managers will lead these efforts internally, identifying, creating, and making operational new services built from data

:: New information architectures, focused on monetized data services (the Quadrification of Big Data), will emerge, since traditional business intelligence products and implementations are not well suited to analyzing and sharing data in a subscription-based manner. This will transform platform companies that produce data into data insights companies that have platforms

This type of monetization strategy can open new revenue doors without significant changes to existing platform and/or services investments. The nice thing about information product management is that it leverages most of the platform/service development to date. New immediate revenue can come through the sale and/or licensing of de-personalized data (loyalty, POS, social, etc.) to third parties.

Secondary revenue streams, which can come later in the implementation phase, come from combining existing data sets with other third-party data (transactional, social, etc.) in order to identify orthogonally conflated services (see the Power of Three). This capability comes at a marginal incremental cost and could be outsourced to cost-competitive data science teams.

We are at a tipping point for the realization of value from data-oriented services (big data, data science, etc.). Those that see limited growth opportunities in traditional application and services development are already well underway in this data science transformation. For those that don’t see the need to monetize their data and information assets, it may be an Extinction Level Event (ELE) that is competitively unavoidable.