Insurance Product Recommender (IPR) – Growing Value Through Insights


The insurance industry can grow additional revenue by incorporating Recommender Systems into existing applications and across enterprise solutions as a means of monetizing product-related data. Some have estimated that national insurance companies that have not yet implemented this category of data monetization could add 10% to the top line at current combined ratios (AKA margin). For a small business insurance company with $500M in premiums, that means potential additional premium growth of $50M on existing operations.
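To make that arithmetic explicit, here is a back-of-the-envelope illustration in Python. The 10% uplift and $500M premium base are the figures cited above, applied uniformly for illustration only.

```python
# Back-of-the-envelope estimate of the data-monetization uplift described above.
# Assumes the 10% figure applies uniformly to the existing premium base at the
# same combined ratio; both numbers are the illustrative figures from the text.
existing_premiums = 500_000_000   # $500M in written premiums
estimated_uplift = 0.10           # 10% top-line growth estimate

additional_premium = existing_premiums * estimated_uplift
print(f"Potential additional premium: ${additional_premium:,.0f}")  # $50,000,000
```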

Predictive systems, like the Recommender System, have become an extremely important source of new revenue growth over the last few years. Recommender systems are a type of "information filtering system" that seeks to predict the rating or preference a user would give to an item (such as music, books, or movies) or social element (e.g., people or groups) they have not yet considered, using a model built from the characteristics of an item (content-based approaches) or the user's social environment (collaborative filtering approaches). A familiar example is Amazon's "Customers Who Bought This Item Also Bought" section, which identifies other products (books, electronics, etc.) thought to be of interest to the prospect as well.
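As a concrete illustration of the collaborative filtering approach, here is a minimal item-item similarity sketch in Python. The purchase matrix and item names are hypothetical toy data; a production system would operate on far larger, sparser matrices with more sophisticated similarity and ranking models.

```python
# Minimal item-item collaborative filtering sketch, in the spirit of
# "Customers Who Bought This Item Also Bought": items whose purchase patterns
# are most similar to a given item are recommended alongside it.
import numpy as np

# Rows = users, columns = items; 1 = purchased, 0 = not purchased (toy data).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])
items = ["thriller_novel", "cookbook", "sci_fi_novel", "biography"]

def similar_items(item_index, top_n=2):
    """Rank the other items by cosine similarity of their purchase columns."""
    target = purchases[:, item_index]
    scores = []
    for j in range(purchases.shape[1]):
        if j == item_index:
            continue
        other = purchases[:, j]
        denom = np.linalg.norm(target) * np.linalg.norm(other)
        sim = float(target @ other / denom) if denom else 0.0
        scores.append((items[j], sim))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

# Customers who bought the cookbook also tended to buy these:
print(similar_items(items.index("cookbook")))
```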


Because of their direct impact on top-line revenue through the organic use of enterprise data, recommender systems are increasingly being used in other industries, such as e-commerce, giving the businesses that deploy them a strategic advantage over those that do not. These systems involve industry-specific predictive models, heuristic search, data collection, user interaction, and model maintenance, developed through the emerging field of data science and supported by new big data platforms. No two are the same, and each becomes part of a competitive differential that can only be attained by analyzing data internally across the enterprise and externally throughout social networks.

One example that could transform the insurance industry is the Insurance Product Recommender (IPR). This data science-driven, risk-profiling, recommender-based application helps underwriters and brokers identify industry-specific client risks, then pinpoint cross-selling and up-selling opportunities by offering access to collateral insurance products, marketing materials, and educational materials that support a complete sales cycle. As more products are sold to an ever-increasing customer base, the recommendations become more reliable, resulting in an exponential increase in revenue realization.
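To make the IPR idea concrete, here is a hypothetical sketch of the cross-sell logic: map a client's industry to typical risk exposures, then recommend coverages the client does not already carry. The industry, risk, and product mappings below are purely illustrative, not actuarial, and a real IPR would learn these associations from enterprise data rather than hard-code them.

```python
# Hypothetical cross-sell sketch: industry -> typical risks -> candidate
# coverages the client does not yet carry. Mappings are illustrative only.
INDUSTRY_RISKS = {
    "restaurant": ["fire", "liquor_liability", "employee_injury", "spoilage"],
    "contractor": ["jobsite_injury", "equipment_theft", "professional_error"],
}

RISK_TO_PRODUCT = {
    "fire": "property",
    "liquor_liability": "liquor_liability",
    "employee_injury": "workers_comp",
    "spoilage": "equipment_breakdown",
    "jobsite_injury": "workers_comp",
    "equipment_theft": "inland_marine",
    "professional_error": "errors_and_omissions",
}

def recommend_products(industry, current_products):
    """Suggest coverages for industry-specific risks not yet insured."""
    risks = INDUSTRY_RISKS.get(industry, [])
    candidates = {RISK_TO_PRODUCT[r] for r in risks}
    return sorted(candidates - set(current_products))

# Example: a restaurant that only carries property and general liability.
print(recommend_products("restaurant", {"property", "general_liability"}))
# -> ['equipment_breakdown', 'liquor_liability', 'workers_comp']
```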

The insurance industry, driven by an economic calculus to maximize margin (i.e., minimize the combined ratio) for a given risk profile secured at a given premium (revenue), can use predictive systems like Recommender Systems to optimize value by leveraging existing untapped data sources (e.g., pre-bind data, claims, product purchases). These systems can become the pathway through which increased client retention (renewals) and client satisfaction are achieved while growing risk-coverage wallet share within the industry.
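For readers less familiar with the underwriting metric, here is a short sketch of that economic calculus: the combined ratio is incurred losses plus expenses divided by earned premium, and a value below 100% indicates an underwriting profit. The figures in the example are illustrative only.

```python
# Combined ratio = (incurred losses + expenses) / earned premium.
# A value below 1.0 (100%) means the book is profitable on underwriting alone.
def combined_ratio(incurred_losses, expenses, earned_premium):
    return (incurred_losses + expenses) / earned_premium

# Illustrative figures, not company data.
earned_premium = 500_000_000
incurred_losses = 300_000_000
expenses = 150_000_000

cr = combined_ratio(incurred_losses, expenses, earned_premium)
print(f"Combined ratio: {cr:.0%}")           # 90%
print(f"Underwriting margin: {1 - cr:.0%}")  # 10%
```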

Refactoring Insurance/Reinsurance Catastrophe Modeling using Big Data

The Catastrophe Modeling ecosystem used in insurance and reinsurance is a good example of a traditional computational platform under assault from the exponential changes seen in data. Not only are commercially available simulation and modeling tools incapable of closing the forecasting capability gaps in the near future, but most organizations are not addressing the needed changes in the human factor (data scientists and functional behavioral analysts). The net effect for insurance/reinsurance companies that rely on these old-school techniques is 1) reduced accuracy in understanding the physical effects of catastrophic events, 2) reduced precision in quantifying the direct and indirect costs of a catastrophe, and 3) increased blind spots for new and emergent catastrophic events, arising both from combinations and permutations of existing events and from the creation of entirely new ones.


The quadrafication of big data (infrastructure, tools, exploratory methods, and people) is having a positive impact on these kinds of ecosystems. I believe we can use the big data reference architecture as the basis for refactoring traditional catastrophe simulation, modeling, and financial analysis activities. Using platforms like Pneuron, we can help these organizations more effectively map computationally complex MDMI (multi-data, multi-instruction) workstreams into disaggregated process maps that run in a MapReduce format, potentially reusing some of the existing simulation models. They would get the benefit of their a priori knowledge (models, tools) while dealing with the growth in data sets. Just a few thoughts.
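As a sketch of what that disaggregation might look like, the toy example below recasts a per-event loss calculation as a map phase (one simulation per event) and a reduce phase (a portfolio roll-up). The event catalogue and damage function are placeholders; in practice the map step would wrap an existing vendor model rather than a toy vulnerability curve.

```python
# Toy MapReduce-style recasting of a catastrophe-loss workstream:
# map = run one event through a damage function, reduce = roll up losses.
from functools import reduce

# Placeholder event catalogue: (event_id, hazard_intensity, exposed_value).
event_catalogue = [
    ("hurricane_01", 0.8, 2_000_000),
    ("hurricane_02", 0.4, 5_000_000),
    ("quake_01",     0.6, 3_000_000),
]

def simulate_event(event):
    """Map phase: run one event through a (placeholder) damage function."""
    event_id, intensity, exposure = event
    damage_ratio = min(1.0, intensity ** 2)   # stand-in for a real vulnerability curve
    return (event_id, damage_ratio * exposure)

def aggregate(total, event_loss):
    """Reduce phase: accumulate per-event losses into a portfolio total."""
    return total + event_loss[1]

per_event_losses = list(map(simulate_event, event_catalogue))
portfolio_loss = reduce(aggregate, per_event_losses, 0.0)

print(per_event_losses)
print(f"Aggregate modeled loss: ${portfolio_loss:,.0f}")
```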

One last note – this is an exercise in science, not engineering, or even systems integration. The practices that make for excellent enterprise architecture, requirements development, or even software engineering are of very little use here (beyond critical thinking). To solve this problem, one must be willing to fail, fail early, and fail often. It is only through these failures that the true realization of Big Data Cat Modeling capabilities will be found.