January 2024

Ringing in the New Year with a Chat Bot: A 2024 generative AI update for insurers

By
  • Jeff Heaton
  • Robert Jirsaraie
In Brief

AI-powered large language models (LLMs) could produce significant cost savings across the insurance pipeline, in areas such as underwriting manuals, claims resolution procedures, and policy documents and communications. What’s new with LLMs in 2024?

Combined with new techniques, such as retrieval-augmented generation (RAG) and prompt engineering, LLMs could produce significant cost savings related to underwriting manuals, claims resolution procedures, and policy documents and communications. Here, we highlight several key advancements in LLMs and detail how users can most effectively apply these new technologies within the insurance industry. 

What’s new with LLMs? 

One of the emerging features among LLMs is their ability to handle different types of input modalities. Whereas communication with prior versions of ChatGPT was limited to text-based interactions, GPT-4 enables users to upload documents with their prompts, allowing the model to assist with image-based tasks or answer questions about specific CSV files. Equipping LLMs with multiple types of information should empower them to yield more nuanced responses tailored to the specific needs of an individual user.
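As a rough sketch of what multimodal input looks like in practice, the snippet below pairs a text prompt with an image in a single request using the OpenAI Python client. The claims-photo scenario and image URL are our own illustration, not a documented use case.

```python
# Sketch: sending an image alongside a text prompt via the OpenAI
# Python client. The model name and image URL are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a vision-capable GPT-4 variant
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the damage shown in this claims photo."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/claim-photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```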

This heightened level of customization in model inputs is complemented by improvements that give users greater control over the responses LLMs generate. For instance, users can instruct GPT-4 to return its responses in JSON or another structured format, greatly improving reliability and consistency. This capability is particularly useful for web development, data analytics, and machine learning, functions that LLMs are already streamlining, with even greater automation likely as these models evolve into standalone generative AI platforms. ChatGPT is at the forefront of this transformation, given that GPT-4 can execute code and perform web searches in real time. As LLMs advance in 2024, so too will their potential to address a variety of insurance-related tasks.
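A minimal sketch of JSON-constrained output using the OpenAI Python client appears below. The extraction task and field names are illustrative, not part of any standard schema.

```python
# Sketch: requesting structured JSON output with the OpenAI client's
# JSON mode. The extraction task and field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    response_format={"type": "json_object"},  # constrain output to valid JSON
    messages=[
        {"role": "system",
         "content": "Extract policy_number, effective_date, and premium "
                    "from the text. Respond in JSON."},
        {"role": "user",
         "content": "Policy PN-1042 takes effect 2024-02-01 with an "
                    "annual premium of $1,250."},
    ],
)

record = json.loads(response.choices[0].message.content)
print(record["policy_number"], record["premium"])
```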


Insurance use cases 

LLMs could serve as a key tool for many facets of the insurance industry, including actuarial assessments, policy documentation, and underwriting. Actuaries could use LLMs to generate code that will augment predictive modeling. Further, the ability to upload user data could prove useful for exploring the relationships among variables and extracting key insights, which ultimately may lead to better risk models.  

The natural language processing abilities of LLMs could facilitate a variety of tasks: 

  • Insurance professionals could use these models to ensure consistency across policies, improve readability, and perform multi-language translations. Anthropic’s Claude 2 currently processes the largest amount of text within a single prompt.  
  • Using more advanced techniques, LLMs could help ensure policy documents adhere to regulatory standards or legal requirements.  
  • Underwriters could use LLMs to summarize many layers of text and pinpoint relevant information across a large database of policies, as sketched after this list.  
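As one sketch of the summarization case, the snippet below applies a simple map-reduce pattern: summarize each policy document, then summarize the combined partial summaries. It assumes the OpenAI Python client; the policy excerpts and the summarize helper are placeholders of our own.

```python
# Sketch: map-reduce summarization across many policy documents.
# The policy texts below are placeholders.
from openai import OpenAI

client = OpenAI()

def summarize(text: str, focus: str) -> str:
    """Ask the model for a short summary of one piece of text."""
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system",
             "content": f"Summarize the policy excerpt, focusing on {focus}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

policies = ["...policy text 1...", "...policy text 2..."]  # placeholders

# Map: summarize each document; reduce: combine the partial summaries.
partials = [summarize(p, "exclusions and coverage limits") for p in policies]
overview = summarize("\n\n".join(partials),
                     "common exclusions across policies")
print(overview)
```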

These are only a few of the ever-increasing applications for LLMs, which in aggregate would greatly improve operational efficiencies. With the right application, LLMs could be an essential tool for nearly every aspect of the insurance industry. 

In this video presentation first published on Actuview, 69É«ÇéƬ's Jeff Heaton and Kyle Nobbe provide an introduction to AI and discuss how companies might train AI tools to derive greater insights from their own proprietary codebases and datasets.

Most effective approaches to LLMs

Users have four options for incorporating their data into LLMs, each involving a different degree of complexity and computational cost.

Train. The most challenging and expensive of these options is to train an LLM from scratch, which requires an immense volume of training data and computing power to find the optimal configuration for a novel neural network. 

Fine-tune. Given these enormous costs, a more common approach is to take a model that has been pretrained for natural language processing and fine-tune its weights using proprietary data and corporate knowledge. Although this approach produces a customized model tailored to the needs of its developer, the computing resources required for retraining can still be an outsized expense, even for large companies.
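A minimal sketch of the hosted fine-tuning route appears below, assuming the OpenAI fine-tuning API and a pre-built JSONL file of chat-formatted training examples. GPT-4 fine-tuning was not generally available at the time of writing, so the sketch targets gpt-3.5-turbo.

```python
# Sketch: hosted fine-tuning with the OpenAI API. The JSONL file of
# chat-formatted training examples is assumed to exist already.
from openai import OpenAI

client = OpenAI()

# Upload proprietary training examples (one chat per JSONL line).
training = client.files.create(
    file=open("underwriting_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job against a tunable base model.
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```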

RAG. Retrieval-augmented generation (RAG) is a newer, cost-effective technique for embedding more information into a single prompt. RAG works by encoding the text of a document into numerical representations (embeddings) and aggregating the resulting vectors into a database. This vector database captures semantic relationships across the document, and users can search it for the pieces of information most relevant to a given prompt. By augmenting prompts with retrieved context, users give LLMs what they need to yield customized responses without directly tuning the model’s weights.  
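A minimal end-to-end RAG sketch follows, assuming OpenAI embeddings and numpy. The three policy snippets stand in for a real vector database, and the question is our own example.

```python
# Sketch: a minimal retrieval-augmented generation (RAG) loop using
# OpenAI embeddings and cosine similarity. Documents are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Flood damage is excluded unless the flood rider is attached.",
    "Claims must be filed within 60 days of the loss event.",
    "The annual deductible for tier-2 policies is $2,500.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(
        model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)  # the "vector database" for this sketch

question = "How long do policyholders have to file a claim?"
q_vector = embed([question])[0]

# Retrieve the document most similar to the question (cosine similarity).
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(np.argmax(scores))]

# Augment the prompt with the retrieved context before generation.
answer = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "system", "content": f"Answer using this context: {context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```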

Prompt engineering. The most straightforward and inexpensive way to use LLMs is prompt engineering, where individual users explore and carefully craft prompts that yield the most useful responses.
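A simple illustration of an engineered prompt for an underwriting triage task appears below. The template, risk labels, and worked example are hypothetical, intended only to show common prompt-engineering elements: a role, explicit rules, and a demonstration.

```python
# Sketch: a structured prompt template illustrating common prompt-
# engineering elements (role, constraints, one worked example).
PROMPT = """You are an assistant for life insurance underwriters.

Task: classify the applicant note below as LOW, MEDIUM, or HIGH risk
and give a one-sentence justification.

Rules:
- Use only the information in the note.
- If the note is ambiguous, answer UNKNOWN rather than guessing.

Example:
Note: "Non-smoker, age 34, no chronic conditions reported."
Answer: LOW - no elevated risk factors are mentioned.

Note: "{note}"
Answer:"""

print(PROMPT.format(note="Smoker, age 58, treated hypertension."))
```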

AI opportunities and limitations  

LLMs recently gained the ability to interpret and analyze CSV files. By examining specific datasets, these models can provide customized insights and generate tailored code snippets for data analysis or visualization, enabling a more contextual and data-specific approach. For example, based on the structure, type, and trends within a CSV file, an LLM can suggest relevant statistical analyses, identify appropriate data cleaning methods, or even predict potential outcomes. This level of customization is pivotal for making more-informed decisions and simplifying the initial stages of data exploration.  
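For instance, the kind of snippet an LLM might generate after inspecting a claims CSV could look like the following. The file name and column names are hypothetical.

```python
# Sketch: the kind of exploratory snippet an LLM might generate after
# inspecting a claims CSV. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("claims_2023.csv", parse_dates=["loss_date"])

# Basic data-cleaning steps suggested from the file's structure.
df = df.drop_duplicates(subset="claim_id")
df["paid_amount"] = df["paid_amount"].fillna(0)

# A trend the model might flag: monthly paid losses by line of business.
monthly = (
    df.groupby([df["loss_date"].dt.to_period("M"), "line_of_business"])
      ["paid_amount"].sum()
      .unstack(fill_value=0)
)
print(monthly.tail())
```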

However, it is important to distinguish this LLM approach from Automated Machine Learning (AutoML). AutoML focuses on automating the end-to-end process of applying machine learning to real-world problems: AutoML systems can select the best model, tune hyperparameters, and perform feature selection, all with minimal human intervention. In contrast, LLM-generated code for data analysis is more about guiding and assisting with specific tasks than automating the entire process. A hybrid approach combining the two could be highly beneficial: LLMs could assist with initial data understanding and preprocessing, while AutoML could take over for model selection and optimization. This synergy could lead to more efficient, accurate, and accessible machine learning workflows, making advanced data analysis more approachable for non-experts and more efficient for experienced practitioners. 
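A rough sketch of such a hybrid workflow follows, with scikit-learn's GridSearchCV standing in for a full AutoML system. The dataset, column names, and preprocessing choices are hypothetical examples of what an LLM might recommend.

```python
# Sketch: a hybrid workflow in which LLM-suggested preprocessing feeds
# an automated model search (GridSearchCV stands in for full AutoML).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("claims_2023.csv")  # hypothetical dataset

# Step 1 (LLM-assisted): preprocessing an LLM might recommend after
# inspecting the file: drop the ID column, one-hot encode, impute.
X = pd.get_dummies(df.drop(columns=["claim_id", "is_fraud"]))
X = X.fillna(X.median(numeric_only=True))
y = df["is_fraud"]

# Step 2 (AutoML-style): automated model/hyperparameter search.
search = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```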

Generative AI technologies are also likely to undergo a drastic convergence. Currently, distinct neural networks are used for image generation, text generation, audio generation, and self-driving vehicles. Future technology will likely unify these applications as the field moves toward general AI capabilities. General AI would function similarly to our own brains, with one technology accomplishing nearly all tasks. And while LLMs do not reason, at least not to the degree humans do, computational reasoning is another key advancement that may be on the horizon.  

Conclusion

All indications point to 2024 being another year of rapid growth for generative AI. Generative AI will be as impactful as the internet revolution, and it is important for insurance professionals to embrace these emerging technologies in this new era. 

Meet the Authors & Experts

Jeff Heaton
Vice President, Data Science, Data Strategy and Infrastructure

Robert Jirsaraie
Assistant Data Scientist