Artificial intelligence is changing insurance - underwriters aren't sure they trust it

The tech is getting integrated into both insurers’ work – and that of their clients


By Chris Davis

As artificial intelligence reshapes industries, its impact on underwriting and risk assessment in professional and executive liability insurance is becoming impossible to ignore. A study by Capgemini found that 62% of executives believe AI is improving underwriting quality and reducing fraud. But despite these benefits, only 43% of underwriters trust AI-generated decisions.

“We’re definitely seeing a lot of discussion around AI and its use,” said Nirali Shah, partner and head of D&O, US at McGill and Partners. “The technology is evolving quickly - both in adoption and functionality - allowing it to take on more complex tasks than ever before.”

Yet for insurers evaluating professional and executive liability risks, AI cuts both ways: it is a tool for improving their own work processes, and a new exposure on which they must advise clients.

The rise of ‘AI-washing’ in corporate disclosures

As companies incorporate AI into their operations, investor disclosures are coming under increasing scrutiny. A growing concern is the rise of “AI-washing” - a corporate trend where companies exaggerate or misrepresent their use of artificial intelligence.

“We had greenwashing a couple of years ago, where companies claimed to be more sustainable or environmentally conscious than they really were,” Shah said. “Now, we’re seeing AI-washing - companies talking about their use of AI or their research into it in a way that doesn’t necessarily reflect reality.”

That disconnect has already translated into legal action. In 2024, the number of securities class action lawsuits tied to AI-related disclosures nearly doubled.

“It’s still a relatively small number - 15 lawsuits out of more than 220 overall - but we’re expecting a wave of these claims,” Shah said. “How companies monitor their own risk exposure, and how they disclose their AI use to investors and shareholders, is going to become increasingly important.”

The open-source dilemma: innovation vs. security risks

A critical distinction for insurers evaluating AI risk is whether a company is using open-source or closed-source AI models.

“When you have a closed-source system, it’s easier to protect the data being used within it,” Shah explained. “That becomes increasingly important depending on your industry and your obligations around data privacy.”

Companies must weigh the competitive risks of using AI tools that expose proprietary data.

“Do you want some of this data out in the world, or is it proprietary to your company?” Shah asked. “Open-source models are almost always riskier, and companies need clear parameters around their use.”

Unauthorized AI use and the need for internal policies

Another emerging concern is unauthorized AI use within organizations.

“There’s been a lot of discussion about employees using AI tools like ChatGPT without permission,” Shah said. “Companies are now creating policies to regulate this use - ensuring that sensitive data isn’t being fed into AI models without oversight.”

Meanwhile, insurers are integrating AI into claims management, automating processes to boost efficiency. But Shah warns against over-reliance on automation, especially in financial lines insurance.

“AI is being used to handle claims and generate standardized responses,” she said. “But what happens when AI makes a call that doesn’t fit the specifics of the claim? Over-reliance on AI could lead to decisions that don’t align with the facts.”

An emerging regulatory challenge

As AI governance risks mount, corporate boards are paying closer attention.

“This is absolutely a growing issue,” Shah said. “It’s something I classify as an emerging risk that we need to pay attention to.”

While the US has yet to introduce AI-specific regulations, multinational companies must comply with varying standards worldwide.

“We work with a lot of global companies that have regulatory obligations across different jurisdictions,” Shah said. “How they disclose AI use, how they monitor it - these are critical questions. Are companies fully aware of how AI is being used internally? Have these tools been vetted and approved? These are the risks that insurers and corporate boards are now grappling with.”

As AI transforms the insurance industry, the challenge is clear: companies must balance the benefits of automation with the need for trust, transparency, and oversight.
