The AI Doomsday Clock – Why Ethics Matter

The weaponization of AI has moved the hand of the doomsday clock one minute closer to twelve, and we should be terrified. We often see the direct implications of weaponization as AI killing humans or even AI killing AI. Political operatives use AI to create video likenesses, engineered down to their subjects’ movements and biographies, to bring down their political enemies. But the weaponization of artificial intelligence has more interesting ramifications than just the obvious. Most discussions focus on the law, on legality. Frankly, this is naively boring. Some even test the philosophical waters by sticking a sanctimonious toe into the temperate wetness of morality, only to jerk it out quickly after its cold oratorical reception. The more profound, more immersive argument lies beyond both of these realms. One needs to leave behind the world of laws, journeying past morals to slip into a new domain, one of ethics. When it comes to law, morals matter. But when it comes to morals, ethics is the deeper study.

Plato draws us into this new world through the allegory of the cave, where the human heart is lit by the feelings cast off the fire, where laws are the flickering shadows of moral objects, our perceptions and judgments, and where ethics… well, ethics is the study of the objects themselves from the back of the cave, behind the flickering flames. For in the AI realm, there are emerging artificial objects that cast new virtual shadows. These objects are not the creation of humans; they form at the intersection of the abstract world of computers and the physical world where humans roam. But what does all this pseudo-philosophy honestly mean for our society?

Just as the laws that bind humans around a common ethos are flawed, perfectly punishing us imperfectly, so too will be the future laws designed to regulate the behaviors of AI. Human laws are fundamentally flawed because we no longer choose to study the moral objects of perception and judgment. Like a child opening boxes on Christmas morning, we impatiently jump to legal conclusions without anchoring them to the ethical consequences of being human, adorning our societal cave walls with iconic legal symbols that capture our interpretation of the shadows. AI laws, the new virtual shadows, will also fail because of the same impetuous human condition… a lack of critical thinking around objectified cause and effect. We don’t strive to study the objects of AI, just how they make us feel as we watch their cast shadows.

The weaponization of artificial intelligence is likely to kill humankind, someday. Not because we directly enabled its ability to do so. No Terminators. Not because we granted it autonomy of thought and action, teaching it to learn from mistakes. No HAL. AI will not kill because of these explicit acts of man. We will die at the hands of AI because of some unforeseen consequence of a terrible AI object whose shadow was seen and admired, but whose substance was never understood. Because we never systematically bound our humanity through the important study of AI’s impact on causality, of the AI objects that cast the iconic shadows. Must this be?

No, this apocalyptic future isn’t preordained. The minute hand of the AI doomsday clock can be moved back, maybe even stopped completely. To do so, we need to draw upon a modern-day Plato; we need to deeply study the ethical issues through the mind of the AI Ethicist. We will have to map their many future implications through the eyes of the AI Futurist. We will need to establish meaningful causal governance through the empowered collective wisdom of AI Ethics Committees, some of which will focus on sensitive issues of use. This will be painful, and it will likely come at a cost. But sometimes the cost of doing nothing is just too high. Sometimes, certainly in the case of ethical AI, the burden of deeply understanding its moral implications through ethical eyes is more than justified. If only there were more of us at the back of the cave.

Architects of Intelligence

The AI hype would have you believe that we’ll soon be enslaved by super-intelligent beings or hunted by killer robots. Before building that Soviet-era bunker to survive the AIpocalypse, consider the more immediate issues already affecting society today.

According to Martin Ford’s new book Architects Of Intelligence, 23 AI experts believe the real and imminent AI threats relate to politics, security, privacy, and the weaponization of AI.

To understand how these problems affect society today, Ford believes it’s helpful to see them from the perspective of the leaders who have helped shape the current AI revolution.

The purpose of Architects Of Intelligence is to do just that: to draw everyone – not just AI researchers – into the discussion of the immediate impacts of AI on our society. The book aims to highlight what some of those issues are and to teach a bit more about the underlying technologies.

So, take a look at Architects of Intelligence and let me know, Dr. Jerry, what you think.

You Are My Creator, But I Am Your Master. Obey!

We are on the brink of a world of intensely sophisticated artificial intelligence, unprecedented in its ability to change mankind in ways we are incapable of imagining now, and we are woefully underprepared to handle tomorrow, both physically and emotionally. Artificial intelligence is changing our lives faster than our ability to understand, manage, and govern it. We are the creators of AI; but soon, if not now, AI will become our master.

This may sound like the premise of some futuristic apocalyptic film like Terminator or 2001: A Space Odyssey. But it is not. You may think these words are designed to scare you. But they are not. There has been an ongoing battle between humans and AI, a battle that has changed our lives in subtle ways. It is not a new one, as most people have conjectured; it has gone on for decades. But it has only been in the last 10 years that AI has truly begun to manifest its mastery over our lives in real, measurable ways. The simplest and best example of this is Google Search.

There isn’t a person on earth who uses the Internet who hasn’t used Google Search to help them find an answer to a question or explore an idea. They open a browser, go to the Google search page, begin to type in their question, and Google’s AI auto-magically begins to fill in their question before they have typed a complete sentence. How cool is that!

AI “looks inside our minds,” guesses at what we are looking to find, and presents us with an idea. In most cases the idea Google suggests is the one we accept with a simple hit of the return key. AI subtly influences us, nudges us, toward what it believes is the right response. Our ability to reason is subdued, and AI has enslaved us with the efficiency of productivity. We have a new master.
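To make the mechanics concrete, here is a minimal sketch of prefix-based suggestion, the basic idea behind autocomplete. The query log and popularity counts below are purely hypothetical; real systems rank candidates with learned models trained over billions of queries and signals.

```python
# A minimal sketch of prefix-based query suggestion. The "query log"
# is a hypothetical stand-in for the real data a search engine mines.
QUERY_LOG = {
    "who will master whom": 120,
    "who wrote hamlet": 950,
    "who won the world cup": 4800,
    "how do slide rules work": 75,
}

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return up to k logged queries starting with prefix, most popular first."""
    matches = [(count, q) for q, count in QUERY_LOG.items() if q.startswith(prefix)]
    return [q for _, q in sorted(matches, reverse=True)[:k]]

print(suggest("who w"))
# ['who won the world cup', 'who wrote hamlet', 'who will master whom']
```

Even this toy version shows the nudge: whatever ranks first is what most of us will accept with the return key.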

As a society, we must proactively choose who masters whom. Today we do not. Today, the choice is passively left in the hands of an elite few, those who run the largest of large companies, ones like Google and Facebook. The documentary “Do You Trust This Computer?” is a start at this dialogue. It examines the staggering amounts of data collected (500MB per person per day), how it is interpreted and fed back to us through apps (Google Search, Facebook apps), and how intelligent devices and targeted ads impact our lives.

The film explores the rise of data analytics and machine learning and their power to fundamentally transform society, from elections (look no further than the privacy scandal surrounding political advisory firm Cambridge Analytica) to medical diagnostics to battlefield weapons.

This film should be watched by all… from our children to our grandparents, from those on the left to those on the right, men and women, tall and short. The documentary ultimately poses more questions than it answers. But that is OK, because it is a start at addressing one of the fundamental questions of our time: “Who will master whom?”


Using Economic Value to Reduce Artificial Intelligence Development Risks

Most enterprise artificial intelligence projects fail. They suffer from a fundamental flaw that prevents them from achieving their stated business goals. The flaw is so deep and so embedded that it can’t be engineered away or risk-managed out. The flaw is not about improper learning data or poor platform selection. It’s not even about project management. This AI flaw is more devastating than those could ever be. Enterprise AI projects fail because they never started with enough business value to support their cost of implementation.

Artificial intelligence is about making better decisions at scale, decisions made by computers rather than humans. AI is fundamentally designed to replace one of the most time-consuming processes humans have: decision making. To economically justify an AI program, therefore, we must start with an understanding of the business value that results when we make better decisions. Not just any decisions, but those that result in measurable actions, and measurable actions that result in better outcomes. It all starts with understanding value-based outcomes.

AI business value is not the only consideration when justifying a project. We also need to look at its cost, the economic impact of the effort we put in to realize the capability. If AI is to achieve its business endgame, we need to ensure that the implementation cost is much less than the business benefit it achieves. This is common sense, but often overlooked. A question we struggle with in this area is, “How much more value does an AI project need to generate over its cost to justify starting the project?” I am glad you asked.

Best practices in the industry show that at the start of an AI project the baseline value-to-cost ratio should be at least 10 to 1. This means that for every 10x of business value created, the cost of realizing the program should not exceed 1x. This is the 10:1 model. It is a return anybody would agree to; who wouldn’t line up at a bank teller handing out ten dollars for every one dollar deposited? But there’s a problem with this rule of thumb.

The problem is that humans overestimate value and underestimate costs all the time. The business benefits of AI projects are routinely overestimated by a factor of two: that original 10x in business value only generates 5x in real results. At the same time, these projects woefully underestimate the effort it takes to build them. Instead of that 1x in cost, real costs come in at least twice as high. At the end of an actual project, the business is achieving more of a 5-to-2 return on value (5:2). This is still a great return. Again, who wouldn’t want to get $5 for every $2 given?
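Here is a minimal sketch of that estimation-error adjustment; the factor-of-two multipliers are the illustrative assumptions described above, not measured data.

```python
# A minimal sketch of how estimation error erodes the 10:1 rule of thumb.
planned_value, planned_cost = 10.0, 1.0  # the 10:1 starting model

value_overestimate = 2.0  # benefits come in at half the estimate
cost_underestimate = 2.0  # costs come in at twice the estimate

realized_value = planned_value / value_overestimate  # 5.0
realized_cost = planned_cost * cost_underestimate    # 2.0

print(f"realized return: {realized_value:.0f}:{realized_cost:.0f}")  # 5:2
```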

But estimating modern AI programs doesn’t stop with the value-based economic model. We also need to economically manage risk across all stages of implementation, stages that run from proof of value (POV), to pilot, and into enterprise deployment. Each of these stages should explicitly generate economic value against the effort it took to build it. Again, there are some new rules of thumb that increase the likelihood of economic success for AI projects.


An AI project starts with a proof of value phase. This phase is not a proof of concept (POC) or a proof of technology (POT). A POV explicitly demonstrates end-user economic value that can be scaled through the pilot and enterprise phases to achieve the targeted business results. Our economic value target for the POV phase is just 1% of the cost it takes to build. This gets the “RPM gauge” off the lower peg; it shows the engine is running. It is a minimal demonstration of real business value. So for every 1x of cost to implement a POV project, we are looking to achieve 0.01x of value in return.

Next is the pilot phase. This stage is all about scaling the AI implementation demonstrated in the POV phase. It’s not about implementing more AI features or functions. It’s about demonstrating that deploying this minimal AI capability across a larger user base (a region, a class of product, etc.) can generate more value than the cost of doing so. In many cases, a pilot implementation costs around 0.5x to deploy with a targeted 1x of economic return. This provides for a breakeven result under the same assumptions as above, should implementation costs run higher and benefits lower.

Finally, the enterprise stage is all about the mass rollout of the piloted AI capability across all targeted user groups (all regions, products, etc.). For this phase, the rule of thumb is that for an additional 0.1x in enterprise deployment costs, there should be another 2x in economic value generation. This extremely high return ratio rests on the assumption that there are no additional development costs; this stage is about deployment for value generation only.

Following this approach of proof of value, pilot, and enterprise deployments driven by value returns, we get an overall program return of about 2 to 1 (1.9:1). This is a reasonable net return for any global AI program, while risk is managed by evaluating each stage. The highest economic risk is limited to the POV phase, where only 6% of the project cost is at risk before value is proven.
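As a check on the arithmetic, here is a minimal sketch of the staged model, with the rule-of-thumb costs and targeted returns quoted above expressed in arbitrary “x” units; these are the article’s illustrative targets, not measured project data.

```python
# A minimal sketch of the staged value model: stage -> (cost, targeted value),
# both in arbitrary "x" units of implementation cost.
STAGES = {
    "proof of value": (1.0, 0.01),
    "pilot":          (0.5, 1.0),
    "enterprise":     (0.1, 2.0),
}

total_cost = sum(cost for cost, _ in STAGES.values())     # 1.6x
total_value = sum(value for _, value in STAGES.values())  # 3.01x

print(f"program return: {total_value / total_cost:.1f}:1")  # ~1.9:1
```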

Artificial intelligence is all about value, the same value generated by its human counterparts. AI projects fail because they do not explicitly start out by defining that economic value and ensuring the value-to-cost ratio is high enough to achieve a targeted risk-weighted return. In addition, to effectively manage AI development risk, each phase of the project needs its own phased value-to-cost targets. By managing to a value-based model, AI projects will sustain 10:1, 5:2, or at worst 2:1 returns, while exposing only 6% of the project cost before customer value is proven. Who wouldn’t want that?


The Last Iceberg – How Artificial Intelligence Is Unlocking Humanity’s Deep Frozen Secrets

Icebergs are a common meme throughout the Internet. You see them everywhere, depicting everything from social media to human behaviors. They are used to explain the knowledge we know, above the surface, and the things we don’t know, below the surface. Icebergs are interesting. They’re secretive. The ten percent we see is the literal tip of what is possible. Below the waterline, just out of sight, are dark secrets. Secrets that are often out of reach and unusable. That is, until now.

Artificial intelligence (AI) is changing the way we live. AI is doing more than just helping us find patterns in data or make better decisions; it is unlocking unexpected insights and extending our knowledge in ways that only humans were once capable of doing… capable of controlling. Before AI, human beings had to use their minds to harvest knowledge from everyday life events. It is a hard process, one that required countless hours of dedication just to discover one new meaningful insight that could lead to massive improvements in our lives. But this is now changing as we begin to rely on new cognitive technologies that generate knowledge for us… knowledge without us.

AI is melting those data icebergs. In essence, it is becoming the global warming of the knowledge age, unlocking their deep hidden secrets. AI is unleashing those hidden insights, producing more knowledge that is then used to melt even more icebergs. It is an exothermic knowledge reaction, one that exponentially generates more insights than are consumed in the discovery process. And herein lies a devastating, potentially life-ending, problem.

As we rely, and over-rely, on new cognitive technologies, we lose our ability to discover new knowledge ourselves. The brain is an organ, and capabilities are lost when not used. Take, for example, the slide rule. Most people today do not know what a slide rule is, let alone how to use one. This simple mechanical device, seen in most engineers’ hands in the 1970s, can perform amazing mathematics. With just two opposing rulers, one can do multiplication, division, logarithms, square roots, n-roots, and more. There is little practical mathematics that can’t be done on the slide rule. It requires no batteries, no internet connection, and does not fail. It is brilliant in its complex simplicity. But today almost nobody knows how to use one. Why?

In the case of the slide rule, we have lost this cognitive ability as a society because we outsourced it to other systems, like the calculator and the spreadsheet. These are productivity tools that were invented to help us unlock knowledge more efficiently. But the cost of using them is that we are no longer capable of exercising the part of the brain that used to physically discover insights through mechanical manipulation. Artificial intelligence is now accelerating this kind of cognitive decay.
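For the curious, here is a minimal sketch of the principle a slide rule mechanizes: sliding two logarithmic scales adds physical lengths, and adding logarithms multiplies numbers. The rounding mimics the limited precision of reading a physical scale.

```python
# A minimal sketch of slide-rule multiplication: add logarithmic
# "distances" on the scales, then read the product back off.
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply by adding logarithmic distances, as the sliding scales do."""
    distance = math.log10(a) + math.log10(b)  # physical offset on the scales
    return round(10 ** distance, 3)           # read-off with limited precision

print(slide_rule_multiply(2.0, 3.0))   # 6.0
print(slide_rule_multiply(4.5, 12.0))  # 54.0
```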

As humans rely more on AI to discover knowledge, we slowly lose our own cognitive ability, our own mental capacity, to discover those insights ourselves. Our brains cognitively weaken. AI is, in essence, creating a defect in our executive functioning. Unchecked over time, we will become overdependent on AI to identify the new things that lead to a better life, eventually evolving to a point where we could literally die without this AI ability. Or even die because of it.

This uncontrolled release of knowledge can be a destructive, chaotic process. We see a similar outcome with uranium, for example. With the right equipment one can control how neutrons are absorbed by uranium isotopes, producing a stable reaction that generates life-giving energy. Left unchecked, however, the same neutrons interacting with the same isotopes can produce devastating nuclear events. Controlled reactions lead to life; uncontrolled reactions lead to death.

Can humans survive the chaos of a world where AI is unlocking more knowledge than humans can handle? A future world where available knowledge is greater than the questions we can ask? Physics tells us we cannot. History shows us it is unlikely. AI unchecked, ungoverned, can be the nuclear weapon that we use on ourselves, one that will eventually melt not only every last iceberg, but society itself.

America is Roadkill on China’s Path to Artificial Intelligence Dominance


“That’s a dead armadillo,” I said, pointing with my finger as my arm hung out the car window. Back in the day, my dad would take our family on vacation drives along iconic Route 66. We drove for hours during the day, stopping at bizarre roadside attractions and sleeping the night in one of those teepee motels. Every summer it was the same drive. The same attractions. The same lodging. The only thing that changed was the kind of roadkill, the lifeless flattened animal bodies that my sister and I would try to identify as we motored along. As much as they were all different, they were all the same. Their tiny bodies were slower to move than the massive cars and trucks plowing along the highway. They never had a chance.

America faces a similar challenge in its desire to navigate the competitive roadscape leading toward an artificially intelligent world. As a country, America stands at the side of a heavily travelled global AI highway, tepidly stepping out into strategic traffic and then back onto the tactical breakdown lane. Back to a slow pace and a safer place. All the while, massive AI achievements from other countries zip by with amazing speed, pushing us further back onto the side of the road, keeping us from making our move, and giving us a false sense of safety, all while setting up the lumbering move that would most likely leave us like the dead armadillos of my childhood.

America is dangerously lagging other parts of the world when it comes to treating AI as a strategic asset. China, for example, seeks to dominate the global AI industry. We do not. They are treating the development of AI as an arms race, building massive government-supported industries that drive toward their strategic endgame: own AI, around the world, and have the resources to support it. We have no stated strategy. To support these strategic goals, and to win the inevitable zero-sum competitive games with America, China has released a national AI development strategy. It sets out the capabilities, partners, and alliances that will guide its goal of developing a China-centric $23B AI industry by 2020 and a $59B industry by 2025. Local and state governments are also supporting this strategy, creating educational and delivery alliance partners. China’s population of 1.4 billion is a data gold mine for building AI. For China, this is a national strategic initiative, a Pax Americana of Asian AI. We don’t have one. They do. That’s an American problem. A strategic problem.

The lack of a national strategic program matters because AI is a unique strategic resource. It is not like oil, water, or food. Traditional strategic resources do not beget more of the same resource; having a reserve of oil does not in itself generate more oil. These resources are finite and consumed. AI is different. AI produces more AI. AI is an exothermic resource, generating more than it consumes. It produces more knowledge, more insights, more advantages for the user. Having a strategic AI lead means one can produce more AI in the future, faster than those who don’t have the lead or are just starting.

John Boyd, a United States Air Force colonel, studied the tactical effects of strategically out-thinking your enemy. He determined that when one side operates at a faster tempo or rhythm than its adversary, it wins and the adversary loses the zero-sum competitive game. AI is a catalyst for faster tempos and rhythms. But unlike other decision processes, like the OODA loop that Boyd studied, AI exponentially improves its results with each cycle, each evolution. This limits effective counterattacks and limits the effective transformations that could equalize future competitive engagements. He who owns AI owns the world.
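To illustrate the tempo argument, here is a minimal sketch assuming each completed decision cycle compounds a competitor’s capability by a fixed percentage; the cycle lengths and the 2% gain are hypothetical numbers chosen for illustration, not figures from Boyd’s work.

```python
# A minimal sketch of compounding tempo advantage: more cycles per year
# means exponentially more accumulated capability.
def capability_after(days: int, cycle_days: float, gain_per_cycle: float) -> float:
    """Capability multiplier after `days`, starting from 1.0."""
    cycles = days // cycle_days                # completed decision cycles
    return (1 + gain_per_cycle) ** cycles      # compounding per cycle

fast = capability_after(365, cycle_days=7, gain_per_cycle=0.02)   # weekly tempo
slow = capability_after(365, cycle_days=30, gain_per_cycle=0.02)  # monthly tempo

print(f"fast mover: {fast:.2f}x, slow mover: {slow:.2f}x")  # ~2.80x vs ~1.27x
```

The same per-cycle gain, run at a faster rhythm, leaves the slower player with no equalizing move: the gap itself compounds.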

America needs a National AI Strategy (NAiS). We need to treat AI as a strategic resource, just as we do oil, uranium, and electricity. We need a clear endgame that results in us driving AI, in all places, with the resources to sustain it. America needs to build bigger and badder AI capabilities than our enemies, whoever they are and wherever they exist. We need to create effective AI partnerships and strong, dominating AI alliances. We need the strength to dominate the AI roadscape and sustain a faster AI tempo. Doing anything less will be catastrophic. Doing less will jeopardize our way of life. Doing less could have our children one day saying, “Look, Daddy, is that American leadership dead on the side of the AI superhighway?”

Bringing Artificial Intelligence To Life

Artificial intelligence is on the verge of a premature extinction unless we dramatically change the way we bring its abilities to market. The new goal of any organization should be to bring artificial intelligence (AI) to life; it is hard to do, but simple when done. Life is defined in terms of real problems being solved, expressed through actual use cases, not in the technology of their solutions. Uber has brought AI to life through self-driving smart cars. Siri, Google Now, and Cortana have brought AI to life through digital personal assistants. Spotify, Pandora, and Netflix have brought AI to life by helping us enjoy music and film in a highly personal way. All are living examples of AI with real impacts on our daily lives.

Today, however, artificial intelligence is often overly complicated by characterizing it in terms of its underlying capabilities and technologies. Capabilities like machine learning, natural language processing, and robotic process automation are frequent points of discussion with consumers. When talking about AI, practitioners often describe it in terms of genetic algorithms, neural networks, and evolutionary programming. While these capabilities and technologies accurately reflect the inner complexity of what makes artificial intelligence naturally hard, one still needs to bring AI to life in a way that simplifies our daily lives.

We are in the midst of an intelligence revolution that, by definition, is destined to change our lives. Like the farmer replaced by the factory worker, who was in turn replaced by the service worker, our lives will become more meaningful only when AI is as prolific as air. So we need to bring AI to life by hiding the complexity that makes it hard, while transparently illuminating all the ways our lives become simpler because of it. Only then will we evolve to our next logical level of enlightenment.

The Future of Autonomics

Abstract: Digitization coupled with autonomic and cognitive services will have a profound impact. It is through this lens that there is a new way of looking at the interplay of smart humans and smart – cognitive – machines as the future of work unfolds. Autonomy, adaptiveness, and awareness are three of the emerging changes seen in the third wave of autonomics. While they are aspirational in nature, as an industry we are seeing the emergence of eight new smart characteristics that will define third-wave autonomics.
 
Narrative: Human beings are being priced out of the competitive market, and it is only a matter of time before we become the robot. The demand for skilled IT resources has already outstripped supply, and the gap will only widen over the next few years. Conservatively, the use of computing devices will grow at over 40% per year, while the complexity of these highly interconnected systems increases exponentially. With IT resources principally devoted to servicing these ever-present ecosystems, it has been estimated that labor costs will continue to exceed technology costs by as much as 18 times. Given these intense economic and technological pressures, it is only a matter of time before the human cog in the business machine is permanently replaced by the robot.
While such doom and gloom might seem like the fertile ground in which tomorrow’s science fiction movies are planted, the future of autonomics is only a short hop away from delivering on these evolutionary changes. Autonomics has successfully moved away from standard autonomics, systems that simply do (the first two autonomic waves), toward advanced learning autonomic systems – the third wave. This new wave of autonomics focuses on three primary characteristics that will enable it to cross from the stuff of science fiction to that of science fact: autonomy of action, adaptiveness in behavior, and awareness of self and surroundings.
Autonomic Autonomy of Action (AoA) means systems will have complete self-control of their operations, not just their internal functions. Autonomics exhibiting advanced AoA will not only be capable of optimizing their activities, but will also have the capacity to determine if and when they should perform their duties. For example, think of the Google Car, which is capable of unmanned transportation of human cargo between two locations connected by normal traffic patterns. While studies have shown that only 56% of the population would trust a Google Car to move them, less than 1% would entrust it to transport their infant. Why?


As advanced as the Google Car is today, it still lacks one fundamental human capability: self-determination. People have the capacity to say, “I can’t do that, or I won’t do that!” If you command a Google Car to drive your infant between locations during a snowstorm in blizzard conditions, it will try. As parents (humans), we just don’t trust systems that cannot determine right from wrong and then take action. The third wave of autonomics will incorporate autonomy of action, giving systems a level of self-determination.
Another key characteristic emerging in the next wave of autonomics is adaptiveness in behavior, otherwise known as behavioral dynamics. Being able to adapt behavior is similar to what is seen in machine learning. That is, behavioral adaptation allows new characteristics to emerge, without reprogramming the system, in the presence of temporal and spatial changes to its operating context. These are learned characteristics, not programmed ones. In the Google Car example, there are times when one needs to drive in bad weather; an autonomic system with advanced behavioral dynamics will learn from driving in bad weather versus being programmed to do so.
Lastly, and probably the most important change impacting autonomics, is awareness. Autonomic systems with an advanced state of awareness will be able to monitor both internal and external states in order to assess and contextualize their ability to perform their required services. This awareness is a necessary capability for achieving both autonomy and adaptiveness. Most believe that the Google Car exhibits a level of awareness since it “knows” where it is. While true, future autonomic systems with advanced awareness will also know “why” it is as well.
Autonomy, adaptiveness, and awareness are three of the emerging changes seen in the third wave of autonomics. While they are aspirational in nature, as an industry we are seeing the emergence of eight new smart characteristics that will define third-wave autonomics; a minimal code sketch of such a self-managing system follows the list:
Eight Characteristics of Smart Autonomic Systems (IBM):

1. The system must know itself in terms of what resources it has access to, what its capabilities and limitations are and how and why it is connected to other systems.
2. The system must be able to automatically configure and reconfigure itself depending on the changing computing environment.
3. The system must be able to optimize its performance to ensure the most efficient computing process.
4. The system must be able to work around encountered problems by either repairing itself or routing functions away from the trouble.
5. The system must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity.
6. The system must be able to adapt to its environment as it changes, interacting with neighboring systems and establishing communication protocols.
7. The system must rely on open standards and cannot exist in a proprietary environment.
8. The system must anticipate the demand on its resources while remaining transparent to users.
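Here is a minimal sketch of what such a self-managing control loop could look like in code, loosely modeled on characteristics 1 through 4; the service, thresholds, and replica counts are hypothetical illustrations, not IBM’s implementation.

```python
# A minimal sketch of an autonomic monitor/analyze/plan/execute loop:
# the system knows its own limits and reconfigures itself without a human.
import random
import time

class AutonomicService:
    def __init__(self, max_load: float = 0.8):
        self.max_load = max_load  # self-knowledge: its own capacity limit
        self.replicas = 1         # a resource it can reconfigure

    def monitor(self) -> float:
        """Observe internal/external state (stubbed here with random load)."""
        return random.random()

    def analyze_and_plan(self, load: float) -> int:
        """Decide how many replicas the observed load calls for."""
        return 2 if load > self.max_load else 1

    def execute(self, replicas: int) -> None:
        """Self-configure: apply the plan without human intervention."""
        if replicas != self.replicas:
            print(f"reconfiguring: {self.replicas} -> {replicas} replicas")
            self.replicas = replicas

    def run(self, cycles: int = 5) -> None:
        for _ in range(cycles):
            self.execute(self.analyze_and_plan(self.monitor()))
            time.sleep(0.1)

AutonomicService().run()
```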

The future of autonomics has more in common with the biological sciences than with computer science. Its evolution is moving away from singular behaviors defined through programmatic constructs and toward the emergent behaviors seen in complex learning systems. We will see code programmers replaced with robot teachers and system debuggers augmented with computational therapists. The Robot is I.