A new paradigm for property risk data & analytics

The market is beginning to see a separation between those companies relying on basic data and those innovating and targeting profitable niches with next-generation data. (Credit: Garrykillian/Stock.adobe.com)

The continued technological evolution in the insurance industry, particularly the digitization of core processes, migration to the cloud, and the explosive growth of insurtech companies, is laying the foundation for a new, transformative era of property risk data and analytics.

The market is beginning to see a separation between companies that rely on basic data and those that are innovating and targeting profitable niches with next-generation data resources. The changes, and their impact on underwriting and claims, are so significant that insurers must ask themselves: Is your P&C data doing you right or doing you wrong?

It’s a critical question to ask right now, and one that business consultants and insurance industry veterans say could determine the direction and success of your business over the next 10 to 20 years. Your P&C business’ future will be shaped largely by the strategic choices you make today about how data is sourced and applied.

To give a brief illustration of the shift, consider that most of the fire risk data and models insurers rely on today are based on 50-year-old assumptions, and that most current systems evaluate risk and price policies based on a property’s or business’s ZIP code, a practice that dates back to the 1980s.

Yet a house in one area of a ZIP code may have a completely different wildfire, flood, or crime risk than a property elsewhere in the same ZIP code, and underwriters may examine only a few data points on a residential or commercial property. The technology and data now exist to look at 1,000+ property risk data points for every single property in the U.S., including up-to-date aerial imagery and geospatial information that is far more comprehensive and precise.
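The granularity argument above can be sketched in a few lines of code. This is a toy illustration only, not an actual carrier model: the field names, thresholds, and weights are invented, but it shows why two homes in the same ZIP code can score very differently once parcel-level attributes are in play.

```python
# Hypothetical parcel-level wildfire score: higher means riskier.
# Attribute names and weights are invented for illustration.
def wildfire_score(parcel: dict) -> float:
    score = 0.0
    # Proximity to dry brush/wildland matters far more than the ZIP code.
    if parcel["brush_distance_ft"] < 100:
        score += 50
    elif parcel["brush_distance_ft"] < 1000:
        score += 20
    # Response capability: distance to the nearest fire station, capped.
    score += min(parcel["fire_station_miles"] * 5, 25)
    return score

# Two properties in the same ZIP code, very different risk profiles.
same_zip = [
    {"address": "1 Urban Ct",  "brush_distance_ft": 5280, "fire_station_miles": 0.5},
    {"address": "9 Canyon Rd", "brush_distance_ft": 40,   "fire_station_miles": 4.0},
]
for p in same_zip:
    print(p["address"], wildfire_score(p))
```

A ZIP-level rating would assign both addresses the same number; the parcel-level sketch separates them by more than an order of magnitude.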

There are hundreds of data points available to carriers that most are leaving on the table. Lightning damage is a billion-dollar claims category with risk data readily available, yet few insurers consider lightning data when assessing a property. And while the exact location and distance of the nearest fire station and fire hydrant are available and have a notable impact on the estimated extent of fire damage, a recent evaluation found that 30% of legacy systems did not know the location of a property’s nearest fire station.

When your data is sparse and inaccurate, you’re taking unnecessary risks. When your data is missing or unreliable, the investment and faith you put into risk models and analytics go to waste.

The availability and quality of property and casualty data have improved dramatically, and become far more cost-effective, over the last few years. The ability to store or access such data via the cloud, together with the ready availability of APIs, means data can be delivered instantly and consumed by business users such as underwriters.

There are dozens of companies and literally thousands of data points available to increase your understanding of properties, customers, and related risks. To give just a small sampling of the unique data points that can significantly improve understanding of risk and pricing, you can now have immediate access to (a) a property’s distance to the exact locations of fire stations and fire hydrants; (b) which properties contain underground storage tanks; (c) the distance to PFAS sites, Superfund sites or toxic release facilities; and (d) building permits.
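One way these external data points get consumed in practice is by enriching an application record keyed on the address, so the interview only has to cover what remains unknown. The sketch below is hypothetical: in production the lookup would be a vendor API call, and every field name and value here is invented for illustration.

```python
# Invented stand-in for an external property-data source, keyed by address.
EXTERNAL_DATA = {
    "123 Main St, Springfield": {
        "fire_hydrant_distance_ft": 210,
        "fire_station_miles": 1.2,
        "underground_storage_tank": False,
        "superfund_site_miles": 8.7,
        "open_building_permits": 1,
    },
}

def enrich_application(app: dict) -> dict:
    """Merge known external data points into the applicant's record;
    applicant-supplied answers win on any conflict."""
    external = EXTERNAL_DATA.get(app["address"], {})
    return {**external, **app}

app = {"address": "123 Main St, Springfield", "applicant": "J. Doe"}
enriched = enrich_application(app)
print(sorted(enriched))
```

The same merge pattern serves both risk screening and prefill: five fields the applicant never has to be asked about arrive already populated.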

The insurance industry is often considered cautious in its adoption of new technologies, but when it comes to next-generation data, the industry is starting to move faster. With the commoditization of both personal and commercial lines of business putting pressure on executives and the bottom line, data and analytics are proving to be a difference maker and a competitive advantage.

And your access to data today is better, faster and much more cost-effective.

P&C insurers, particularly in the United States and Europe, are investing heavily in data and analytics to improve all aspects of their business from application profiles to risk selection and pricing, with underwriting being the main strategic focus.

In a recent study, the global consulting firm McKinsey contended that underwriting excellence and pricing sophistication are the two key traits underlying the success of insurance industry leaders. According to the study, best-in-class insurers are “putting distance between themselves and competitors” by applying advanced data and analytics in underwriting. They cite these insurers reducing loss ratios by three to five points, increasing new business premiums by 10%-15%, and improving retention by 5%-10%.

As the McKinsey report states, “external data is the fuel that can ignite the value of analytics.” By leveraging advanced data and analytics, insurers can gain deeper insight into risks and energize the insurance lifecycle, from application profiles to risk selection and pricing. We are seeing companies at the forefront put a focus on incremental improvements, particularly in the areas of:

  • Risk Selection: By combining internal data with the right mix of external data, and integrating that data seamlessly into the risk selection process, insurers can better screen applicants. With next-generation data and analytics, an insurer can select good risks and avoid the risks it does not want to underwrite. The idea is not to eliminate losses completely, but to eliminate highly identifiable and highly probable losses.
  • Prefill: Another point of the customer journey that can be made significantly more efficient and effective using the right next-generation data is in the interview and screening process. Using traditional data systems, the screening and interview process can be cumbersome, but with next-generation data integration, you can match and prefill data for prospects and customers quickly and inexpensively. Minimizing the number of questions you need to ask a potential customer or client can dramatically help speed up and smooth the screening and sales process.
  • Pricing: With quick access to the right internal data, cross-analyzed against a broad array of external data, insurers can more effectively make their case to regulators on pricing, and can more accurately price policies to reflect the actual inherent risk. With most current systems, a property owner in a ZIP code with an F wildfire rating, perhaps a home in an urban area of that ZIP code, likely pays the same premium as a home in genuine peril of wildfire because of dry brush within feet of the structure. A customer in a significantly more fire-prone property should be paying more than one in a low-risk property.
  • Marketing: Marketing is one of the few remaining greenfield areas to which insurers have yet to apply advanced data and analytics. Being a leader in applying next-generation data to marketing can make a real competitive difference, particularly for small and medium-sized insurers. Marketing should be seen as the starting point of risk selection: if you’re smarter and more targeted about who you market to, you will produce stronger, less risky and more profitable leads.
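The pricing point above can be made concrete with a toy rating sketch. The base rate, distance tiers, and multipliers below are invented; real filed rates require actuarial work and regulator approval. The point is simply that a parcel-level factor replaces one flat ZIP-wide rate with several defensible price points.

```python
# Hypothetical parcel-level pricing factor based on brush proximity,
# in place of a single flat rate for the whole ZIP code.
BASE_ANNUAL_PREMIUM = 1200.00  # invented base rate

def wildfire_multiplier(brush_distance_ft: float) -> float:
    if brush_distance_ft < 100:
        return 1.8   # dry brush within feet of the home
    if brush_distance_ft < 2640:
        return 1.2   # brush within half a mile
    return 0.9       # low exposure, e.g. the urban core of the same ZIP

# Three homes in one F-rated ZIP code get three different premiums.
for dist in (40, 1000, 5280):
    print(dist, round(BASE_ANNUAL_PREMIUM * wildfire_multiplier(dist), 2))
```

Under the flat ZIP rating, all three homes would pay the same amount; here the low-exposure urban home pays less and subsidizes no one.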

It is important to remember that data is a living, breathing thing: it is always evolving and growing.

John Siegman, the co-founder of Hazard Hub. (Courtesy photo)

And there is tremendous value in integrating greater property risk data closely into the underwriting process, particularly when you combine accurate external data with your own data to drive insurance insights.

The market is starting to become more aware of these advantages and move in this direction — and it’s likely to represent a major paradigm shift in the industry. Being ahead of the curve with your property risk data and analytics can be a real competitive advantage for early innovators.

John Siegman is the co-founder of Hazard Hub, a property risk data company that was acquired by Guidewire in mid-2021. He is now a senior executive at Guidewire helping to lead the direction of the HazardHub solution and guiding P&C insurance clients in innovating their data integration into critical processes.

Opinions expressed here are the author’s own.
