RWA 2025/2026


Congratulations, Robert Kinsey!
Our 2025/2026 RWA Winner


Title: The New Frontier: Underwriting in the Age of Artificial Intelligence

As someone who grew up listening to Isaac Asimov audiobooks, and later watching his works interpreted by various filmmakers and television producers, I can admit that the “rise” of artificial intelligence (AI) is something that has worried me for years. Now, thanks to incredible advances in technology, we have reached a place where artificial intelligence has become an integral part of our daily lives. Whether it is asking Alexa to play our favorite music in the morning or asking ChatGPT for travel suggestions on a budget, it is clear we have reached a point of no return with AI. The impact is undeniable, even for a healthy skeptic such as me. Given the growing integration of artificial intelligence into our personal lives, it was only a matter of time before we began incorporating these tools into our business functions as well.

When it comes to artificial intelligence and insurance underwriting, the opportunities appear to be endless. The increased use of third-party data collection and analytics has brought our industry to a point where an overwhelming number of underwriting decisions can be made without the need for fluids. When I started underwriting in 2016, most of my cases required a paramedical exam, blood profile, and urinalysis, especially when dealing with conditions such as diabetes or hypertension. Although we used some of the tools that have since become staples in life underwriting, such as prescription history data, we still relied on traditional underwriting methods to make most of our final decisions. Our underwriting engine could make automated decisions only for the best cases among a relatively small population of our younger applicants. Fast forward to 2025, and the landscape has shifted so significantly that multiple life insurance companies are approving medically substandard risks on an automated basis. The data we rely on has become so intricate that I can see the date a client was diagnosed with diabetes, the doctor who made the diagnosis, the tests run on that date, the results of those tests, and the medication then prescribed to treat the condition, all without ordering medical records or additional lab testing. With the advent of Large Language Models (LLMs), Optical Character Recognition (OCR), and Natural Language Processing (NLP), we can broaden the use of these existing resources even further. For example, AI will allow us to review and interpret digitized handwritten documents without the need for human eyes. Theoretically, AI could review handwritten doctor’s notes, compare them against underwriting rules, and reach an underwriting decision, all without human help. What would have been considered impossible a decade ago is on the brink of becoming commonplace.

Now that we know the possibilities of artificial intelligence, how do we adjust to the changing landscape? As I stated previously, when it comes to AI, I started as a skeptic. I am well aware of how useful artificial intelligence has become; however, I still struggle with its use in my daily dealings. Far too many times I have Googled the purpose of an unfamiliar medication or medical condition only to have Gemini direct me to a wholly unrelated topic or come to an erroneous conclusion. As with any technology, AI is not without its faults. But the frustration I experience with LLMs affecting my daily searches is only a slight annoyance; I can work around poor search engine results with relative ease. What does concern me, as both an underwriter and an insurance customer, is how these same models may affect the accuracy of underwriting decisions across the industry. These models are constantly improving, but there will be significant growing pains in integrating them with our existing underwriting tools and practices. We should use established methods, such as underwriting studies and audits, to verify the decisions being made by artificial intelligence and ensure we are not sacrificing accuracy in the name of technological advancement. AI-driven underwriting decisions will likely prove statistically more consistent than human ones, but a consistently incorrect decision is far more costly in the long term than a one-time human mistake. I believe human safeguards will become even more valuable as the use of artificial intelligence grows, and we should train both experienced and inexperienced underwriters to identify errors as these tools evolve.

In the fast-paced environment we currently live in, we must be able to adapt at the same speed as the technology we use.

Another issue that will arise with the increased use of artificial intelligence is our ability to justify our underwriting decisions. I am not in the room with the decision makers who determine how much we use artificial intelligence in our underwriting; however, I believe this issue is the most significant hurdle to the widespread implementation of AI in underwriting. AI has the ability to make decisions – with the right data sources and prompts – but I do not believe AI can yet explain its decisions well enough to satisfy consumers or regulatory agencies. In due time, I believe AI will be able to “show its work” effectively enough to satisfy any regulatory concerns. Nevertheless, as someone who has worked with risk scores before, I am worried about the potential for unexplainable decisions as artificial intelligence becomes more accepted. LLMs have already shown a propensity to produce inaccurate information because they cannot verify their own answers. In a commercial setting, these models should be thoroughly tested and vetted before they are given responsibility for underwriting decisions. Unfortunately, the companies that stand to benefit financially from the expansion of AI tools will not advise against their implementation. On the contrary, they may suggest we use the tools even when they are not quite ready, because it is in their best interest. As underwriters, we must be diligent in making sure current underwriting standards are met. Furthermore, we must speak up when vendors attempt to cut us out of the decision-making process. There is nothing wrong with artificial intelligence making decisions, but those decisions should not be biased, unexplainable, or improperly evaluated. Otherwise, we risk, as an industry, falling victim to the same issues that have beset other industries that became too reliant on unvetted, unprepared artificial intelligence tools.

As we move into the age of artificial intelligence, we should not be pessimistic about its possibilities. As we have already seen with our current underwriting tools and platforms, there is plenty of potential with AI, both in its current use and in ways that have not yet been explored.

As underwriters, we must continue to use our established expertise to ensure that artificial intelligence helps us make better underwriting decisions in a timelier fashion, rather than slightly quicker but less accurate decisions with significantly worse consequences down the road. We should not be afraid of the future; we should embrace it. But we should be aware of all the possibilities, good and bad, that come with artificial intelligence. We are risk assessors, and that ability to assess risk should not be limited to underwriting decisions. We live in an incredibly exciting time where the possibilities seem endless; it is now up to us to harness these tools for our own gain, our companies’ gain, and our customers’ gain. We are in a position to “boldly go where no one has gone before,” and we should take full advantage of the opportunity.