The insurance industry is undergoing a massive, tech-driven shift, and the next decade will be crucial in deciding the sector's future. Industry leaders have a major role to play, particularly in adopting disruptive technologies across the value chain, from underwriting to policy servicing and claims settlement.
According to Data Bridge Market Research, the AI-in-insurance market is expected to reach $6.92 billion by 2028, growing at a CAGR of 24.05 percent over the forecast period of 2021 to 2028. The sector's growth is expected to be fueled by AI technologies including machine learning, deep learning, natural language processing (NLP) and robotic automation.
Below, we discuss how insurance companies are leveraging AI, along with some use cases, challenges, and solutions.
AI and NLP adoption
For any business working in the insurance space, the first step is to list all the sub-processes within the value chain, rather than attempting to solve the complete value chain or a chunk of processes at once. Each sub-process should then be evaluated on three parameters: size, wider applicability, and complexity. Based on these parameters, the right processes should be prioritised for a minimum viable product (MVP).
For example, a use case that involves extraction from two to three document types, such as email submissions in underwriting, can offer volume, complexity, and wider applicability.
It is important to ensure the first use case is successful, as it paves the path for others. Once the first MVP implementation succeeds, a roadmap should be created for multiple AI-based proofs of value (POVs), integrating these use cases to deliver enhanced efficiency, effectiveness and customer experience.
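The prioritisation framework above can be sketched as a simple weighted score. This is a minimal illustration, not a prescribed methodology: the `UseCase` fields, the 1-to-5 scales, and the weights are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    size: int           # transaction volume, scored 1 (low) to 5 (high)
    applicability: int  # reuse across lines of business, scored 1 to 5
    complexity: int     # implementation difficulty, scored 1 to 5

def mvp_score(uc: UseCase) -> float:
    # Favour high volume and wide applicability; penalise complexity.
    # Weights (0.4 / 0.4 / 0.2) are illustrative, not prescriptive.
    return 0.4 * uc.size + 0.4 * uc.applicability - 0.2 * uc.complexity

candidates = [
    UseCase("Email submission (underwriting)", size=5, applicability=4, complexity=3),
    UseCase("Invoice extraction", size=3, applicability=3, complexity=2),
    UseCase("Handwritten claim forms", size=2, applicability=2, complexity=5),
]

# Rank candidates for the first MVP by descending score.
ranked = sorted(candidates, key=mvp_score, reverse=True)
for uc in ranked:
    print(f"{uc.name}: {mvp_score(uc):.2f}")
```

Under these assumed weights, the email-submission use case scores highest, which matches the article's suggestion to prioritise it for the first MVP.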
Challenges in deploying AI-at-scale
Many global insurance companies’ technology and data science teams are exploring multiple generic products to solve structural problems. However, such products tend to reach a saturation point after a few easy, quick wins. Due to the limited capabilities of these generic products, some of the leading companies are struggling to deploy AI at scale, and are now looking at solving the next set of business challenges related to unstructured, handwritten, video and voice data.
The major roadblocks in deploying AI at scale are discussed below.
Building comprehensive solutions to address these challenges is easier said than done. An end-to-end AI implementation spans many tech systems, from ingestion out of document management systems to final posting into business applications such as the policy administration system (PAS). While developing solutions, it is best to plan for upgrades or changes to all dependent systems to avoid last-minute hurdles. Timing and system flexibility are therefore critical for smooth AI implementation.
Moreover, successful AI implementation requires contributions from various resources, such as AI-NLP data scientists, data and tech engineers, and business and project managers. As we move to solve the next level of challenges, it is important that tech teams upskill themselves and learn business nuances (for example, understanding underwriters' instructions). A deep understanding of business nuances will enable solutions that can address business complexities and multi-user functionality.
Today, the market expects 100 percent automation, or straight-through processing, from AI solutions, and current generalised products have been able to deliver it, albeit only for simple problems. In my opinion, this expectation of 100 percent automation is precisely why these products remain limited to straightforward cases.
The way out of this problem is to accept that machines cannot independently learn and solve every problem; they require human assistance. A well-known example that illustrates this is self-driving cars, or autonomous vehicles.
While AI-NLP solutions do their job with high precision, some instances are far too complex for machines to interpret. A common example is underwriting risk for customers who have submitted partial or contradictory information. Human intervention is required in such cases to process contextual information. Thus, human-in-the-loop enables 'assisted' ingestion of outputs, with a human reviewing them and applying business judgement.
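The human-in-the-loop pattern described above can be sketched as a confidence-based router: each extracted field carries a model confidence score, and low-confidence fields are queued for an underwriting assistant. The field names, scores, and the 0.85 threshold here are assumptions for illustration, not values from the article.

```python
def route(
    extraction: dict[str, tuple[str, float]], threshold: float = 0.85
) -> tuple[dict[str, str], dict[str, str]]:
    # Split extracted fields into straight-through results and those
    # needing human review, based on model confidence.
    auto = {k: v for k, (v, conf) in extraction.items() if conf >= threshold}
    review = {k: v for k, (v, conf) in extraction.items() if conf < threshold}
    return auto, review

fields = {
    "insured_name": ("Acme Logistics Ltd", 0.97),  # clear extraction
    "policy_limit": ("2,000,000", 0.55),           # contradictory documents
}
auto, review = route(fields)
print("straight-through:", auto)
print("needs human review:", review)
```

The design choice is that the machine never silently guesses on ambiguous input; it escalates, which is what makes less-than-100-percent automation acceptable in practice.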
Use cases to consider
There are multiple use cases that can be considered, including invoices, contracts, statements of values, endorsements, etc.
The business submission process in the underwriting space is one such use case. It offers size, wider applicability and moderate-to-high complexity, and can be prioritised over other use cases. However, the process requires interpretation of emails and various unstructured documents (application, quote, proposal, etc.). To extract information from multiple documents, numerous NLP models are required. Once these NLP models are created, they can be applied to a wider canvas for delivering AI at scale.
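The per-document-type extraction described above can be sketched as a dispatcher that merges fields from each document into one submission record. In production each extractor would be a trained NLP model; the regex stubs, field names, and sample documents below are purely illustrative assumptions.

```python
import re

# One lightweight extractor per document type. Real systems would plug in
# trained NLP models here; regex stubs just illustrate the dispatch pattern.
def extract_from_email(text: str) -> dict:
    m = re.search(r"insured[:\s]+([A-Za-z ]+)", text, re.I)
    return {"insured": m.group(1).strip()} if m else {}

def extract_from_quote(text: str) -> dict:
    m = re.search(r"limit[:\s]+\$?([\d,]+)", text, re.I)
    return {"limit": m.group(1)} if m else {}

EXTRACTORS = {"email": extract_from_email, "quote": extract_from_quote}

def process_submission(documents: list[tuple[str, str]]) -> dict:
    # Merge fields extracted from each (doc_type, text) pair into one record.
    record: dict = {}
    for doc_type, text in documents:
        record.update(EXTRACTORS.get(doc_type, lambda _: {})(text))
    return record

docs = [
    ("email", "Please quote. Insured: Acme Logistics Ltd"),
    ("quote", "Requested limit: $2,000,000 per occurrence"),
]
print(process_submission(docs))
```

Because each document type has its own extractor, new types (endorsements, statements of values) can be added without touching the rest of the pipeline, which is what lets the same models be "applied to a wider canvas".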
Also, the submission process for a transaction may stretch over a few months. AI can automate the process and reduce the cycle time to a few days. In addition, the AI solution interprets emails and attached documents, giving underwriting assistants the requisite information to review or modify before completing the transaction.
Final thoughts
To successfully implement AI and NLP solutions in the insurance segment, companies should adopt a use-case prioritisation framework based on size, wider applicability, and complexity.
Companies should first re-draft their AI-at-scale roadmap, as generic products have limited scope. Tech teams, including AI data scientists and data and tech engineers, should deepen their domain understanding. In addition, AI-at-scale solution designs should be flexible and well thought through, accounting for dependent systems. Lastly, human-in-the-loop review is essential for any AI implementation.
This article is written by a member of the AIM Leaders Council, an invitation-only forum of senior executives in the Data Science and Analytics industry.