Why insurance companies need a new approach to observability

Mobile apps, virtual appraisals, digital signatures and online claims submissions are just a few of the digital innovations reinventing the claims value chain and helping insurers meet rising service expectations. And it seems insurance companies have gotten the memo: an estimated 60% are already using digital technology to improve the customer experience. According to J.D. Power, 84% of customers who filed auto insurance claims in the past several years used digital tools at some point during the process – a proportion that will only continue to grow as customer expectations evolve.

Digital innovation presents a key opportunity for insurance companies to stand apart from the competition. But as customer adoption of digital technologies takes off, paradoxically, satisfaction with some digital services is actually declining. There are several reasons for this, one being that by its very nature, filing an insurance claim is often a complex, drawn-out process. Online claims submission forms typically comprise numerous steps, including uploads of ‘heavy’ content like photos and videos that can slow the process down or even cause crashes.

To best support customers (often during difficult times), insurance companies must maximize digital performance – speed and availability.

However, this is an extremely difficult challenge. Many digital services in the insurance industry incorporate external third-party functionality, which adds feature richness but introduces performance risk: if a third party suffers a performance degradation, it can drag down the entire host site. In addition, like their counterparts in other industries, insurance leaders are increasingly leveraging the cloud, often as part of hybrid environments, which may create efficiencies but also adds complexity. In these environments, when an app’s performance begins to falter, it can be difficult to pin down the root cause precisely.


To ensure the reliability of these ever-expanding digital experiences and manage the growing complexity, many insurance organizations have implemented observability – the practice of gauging application and system health by analyzing the external data those systems generate. The challenge is that as customers interact more and more through digital channels, the result is an explosion of application and system data. Ironically, the very data that is supposed to help identify and fix emerging anomalies and hotspots faster often becomes too overwhelming and cumbersome to wade through and derive actionable meaning from. Not surprisingly, even though we have more data than ever at our disposal, recent outage analyses have found that the overall costs and consequences of unplanned downtime are increasing.

In this context, insurance companies’ observability approaches must evolve to better harness and leverage the mountains of data being generated.

No more ‘centralize then analyze’ – Observability architectures have traditionally been built on a ‘centralize then analyze’ approach, meaning data is aggregated in a central monitoring platform before users can query or analyze it. The thinking behind this approach is that data becomes contextually richer the more of it you can gather and correlate in one place. Building an architecture this way may have worked well in a previous era, when data volumes were comparatively small. But given the volumes of data now being generated – the vast majority of which is never used – organizations can no longer afford to aggregate all of their data in expensive, ‘hot’ storage tiers for analysis. Instead, data needs to be analyzed and correlated in smaller volumes, in a less expensive structure.
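
As a rough illustration of what ‘smaller volumes, less expensive structure’ can look like in practice – the schema and field names below are hypothetical, not drawn from any particular vendor – telemetry can be reduced to compact aggregates close to where it is produced, so only summaries travel to the expensive hot tier:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class MinuteSummary:
    """Compact aggregate forwarded instead of raw events (hypothetical schema)."""
    service: str
    total_requests: int
    error_count: int
    p95_latency_ms: float

def summarize_minute(service: str, events: list) -> MinuteSummary:
    """Reduce one minute of raw events to a small summary at the source."""
    latencies = sorted(e["latency_ms"] for e in events)
    p95 = latencies[int(0.95 * (len(latencies) - 1))] if latencies else 0.0
    errors = sum(1 for e in events if e["status"] >= 500)
    return MinuteSummary(service, len(events), errors, p95)

# Only the summary travels to the central platform; raw events stay local
# until they are compressed and tiered off to cheaper storage.
```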


Analyze all data at its source – To keep down the storage costs associated with a central repository, many organizations have resorted to indiscriminately discarding certain data sets. While it’s true that the vast majority of data is never used, anomalies and problems can crop up anytime and anywhere – so if you’re arbitrarily omitting data, you’re leaving yourself open to missing something. By analyzing data in smaller chunks, ideally at its source (rather than in a central repository), you can effectively survey all of your data. Once analyzed, data can then be relegated to a lower-cost storage tier for safekeeping, ultimately producing significant savings. In fact, some organizations find they don’t need a central repository at all.
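
One way to picture this workflow (a minimal sketch with made-up file paths and an assumed JSON-lines log format, not a reference to any specific product): analyze a log batch where it lands, forward only the small result set, and relegate the raw file to a cheaper tier:

```python
import gzip
import json
import shutil
from pathlib import Path

def analyze_then_tier(log_path: Path, cold_dir: Path) -> dict:
    """Scan a local log file at its source, then move the raw data to cheap storage."""
    status_counts = {}
    with log_path.open() as f:
        for line in f:
            record = json.loads(line)  # one JSON event per line (assumed format)
            status = record.get("status", "unknown")
            status_counts[status] = status_counts.get(status, 0) + 1

    # Relegate the raw file to a lower-cost tier (here, a local 'cold' directory
    # standing in for object storage) after analysis, instead of indexing it hot.
    cold_dir.mkdir(parents=True, exist_ok=True)
    archived = cold_dir / (log_path.name + ".gz")
    with log_path.open("rb") as src, gzip.open(archived, "wb") as dst:
        shutil.copyfileobj(src, dst)
    log_path.unlink()

    return status_counts  # small result set forwarded for correlation, not raw data
```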

Enhance speed by reducing reliance on downstream pipes and systems – Another challenge of the ‘centralize then analyze’ approach is that it can lead to clogged data pipelines and overstuffed central repositories, which slow down significantly and take much longer to return query results. A further benefit of analyzing data in smaller increments, at its source, is that organizations become much more nimble at real-time analytics – identifying growing hotspots and their root causes faster, which is critical to reducing mean time to repair (MTTR). In addition, if you’re analyzing data at its point of origin and that data is throwing errors, you know immediately where the problem lies.
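
To make the MTTR point concrete, here is a hedged sketch – the window size and threshold are purely illustrative – of a sliding-window error-rate check running at the point of origin, so a growing hotspot is flagged the instant the data is produced rather than after it has traversed a pipeline:

```python
import time
from collections import deque
from typing import Optional

class SourceErrorMonitor:
    """Flags an error-rate spike where the data originates (illustrative thresholds)."""

    def __init__(self, window_seconds: int = 60, error_rate_threshold: float = 0.05):
        self.window_seconds = window_seconds
        self.error_rate_threshold = error_rate_threshold
        self.events = deque()  # (timestamp, is_error) pairs

    def record(self, is_error: bool, now: Optional[float] = None) -> bool:
        """Record one request outcome; return True if the window breaches the threshold."""
        now = time.time() if now is None else now
        self.events.append((now, is_error))
        # Evict events that have fallen out of the sliding window.
        while self.events and self.events[0][0] < now - self.window_seconds:
            self.events.popleft()
        errors = sum(1 for _, err in self.events if err)
        return errors / len(self.events) >= self.error_rate_threshold

# Usage sketch: call record() inline in the service that handles claim submissions;
# a True return value means the source itself has already flagged the hotspot,
# with no round trip through a downstream pipeline or central index.
```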

Now, think of your typical insurance customer attempting to file a claim online. Chances are they are already stressed by the incident that prompted the claim in the first place. Imagine them coming face to face with a slow, clunky digital service at that very moment. That does not bode well for your brand, does it? Increasingly, success in the insurance industry will be measured not only by how well you handle your customers’ unfortunate circumstances but also by your ability to deliver consistently superior digital experiences – and that is going to require a more agile observability approach.
