
Oversight, disclosure concerns dominate House panel discussion of AI use in insurance decisions


Florida lawmakers heard from insurance industry experts this week about whether the growing use of artificial intelligence will help Sunshine State consumers or just turbo-charge opaque decisions.

Some of the information they got was solid and specific. At other times, members of the five-person panel appeared either reluctant or unsure how to answer the questions posed to them.

The House Insurance and Banking Subcommittee, chaired by New Port Richey Republican Rep. Brad Yeager, convened the 90-minute fact-finding meeting, which included both optimism about efficiency and calls for better guardrails and clearer disclosure to policyholders.

Yeager framed the Tuesday hearing, held with no bills on the agenda, as groundwork for the 2026 Session after AI-curbing proposals stalled last spring.

“AI is here,” he told Florida Politics beforehand. “It’s on the rise in all facets of business and life, and we need to learn more about it.”

A recurring theme at Tuesday’s hearing was strategic opportunity. Gary Sullivan of the American Property Casualty Insurance Association, whose expertise spans insurance and cybersecurity matters, noted that insurers have used forms of AI for decades. But he said the arrival of generative tools has been a “game-changer,” enabling a shift from indemnification to proactive risk management.

With that come many opportunities, he said: triaging workloads, pairing AI with drone inspections, redesigning inspections to monitor risk patterns over time, probing clients’ vulnerabilities to cyberattacks, and supporting agents and brokers during the application process.

Gary Sullivan, Senior Director of Emerging Risk at the American Property Casualty Insurance Association, previously taught cyber risk management as a professor and worked for two major insurance carriers. Image via the Florida Channel.

As America’s workforce ages and the talent pipeline shrinks, he argued, AI can preserve institutional knowledge and keep productivity rising. Americans 65 and older now comprise 18% of the population, he said, up from 9% in the 1960s, leaving close to half as many capable workers to replace retirees.

“The societal challenge is not so much AI replacing jobs, including insurance jobs,” he said. “Rather, AI will be essential to make the shrinking workforce more productive to offset increasing financial and demographic burdens.”

Thomas Koval of Lakewood Ranch-based FCCI Insurance Group urged lawmakers to distinguish between generative AI — which is widely used to summarize, draft and provide probabilistic analyses — and agentic AI, which would make autonomous decisions.

The latter, he said, is not being relied upon for nuanced coverage calls, exclusions, underwriting or claim resolutions “because it simply cannot read and understand the nuances” of complex policies. By contrast, fraud detection is a “big use,” where models score red flags before a human decides whether to escalate.

Thomas Koval was a partner at an insurance defense law firm for 20 years before becoming General Counsel for FCCI Insurance Group, where he now serves on the Board of Directors and Officers. Image via the Florida Channel.

AI also now pervades customer service, HR and back-office tasks, plus marketing and product development. Koval emphasized the utility of “front-end guardrails,” compliance constraints built into algorithms to keep outputs within Florida’s insurance code, alongside “human intervention” on outcomes.

Paul Martin of the National Association of Mutual Insurance Companies said insurers began using AI for actuarial work in the 1990s, adding that today’s tools increase pricing precision, automate workflows, mitigate losses, and fight fraud and cyber incidents.

Martin pointed to the growing private flood insurance market as a case where data and AI helped carriers model risk better, bring products to market and, in some cases, price below government options.

He also argued Florida’s current statutes already govern AI.

“Any decision made or any action taken by an insurance company, be it a person, a human, an AI platform, whatever — all of that is governed by Florida law … irrespective of the source of the wrong decision,” he said. “If it’s an AI platform (or) a human that makes a mistake, the same law applies.”

Paul Martin, Vice President of State Affairs for the National Association of Mutual Insurance Companies. Image via the Florida Channel.

Several lawmakers pressed for consumer-facing impacts. Asked by Hollywood Democratic Rep. Marie Woodson whether the savings insurers may see through AI use would lead to lower premiums, Koval said efficiency should cut operating costs that factor into rates, “hopefully” producing savings over time.

Woodson followed with another query: If AI has been in use since the 1990s and speed saves money, where’s the relief? Sullivan pointed to targeted efficiencies, like AI use in aerial imagery that improves risk assessment and speeds post-disaster claims. Faster closeouts and fewer losses lead to competitive pricing, he said, but stopped short of promising across-the-board rate reductions.

Two lines of questioning focused on who should regulate AI and how transparent insurers should be. The first came from Tampa Republican Rep. Susan Valdés, who inquired about whether Florida’s Department of Financial Services (DFS) has an official position on regulating insurers’ AI use.

Sean Fisher, who runs DFS’s Division of Consumer Services, said his mandate is to protect consumers regardless of how a decision was made, but that the agency has not taken a policy position under CFO Blaise Ingoglia.

Fisher touted the Division’s relatively new “Ask DFS” website chatbot, which has handled more than 13,000 consumer interactions since last October. He added that since 2019, DFS has found just three complaints even alleging that AI delayed or undervalued a claim, but not for a comforting reason.

Sean Fisher, Director of the Division of Consumer Services within the Department of Financial Services, said that during Fiscal Year 2024-25, his agency answered more than 75,000 consumer calls, opened more than 39,000 new service requests or complaints, conducted over 25,000 mediations and referred more than 3,700 regulatory issues to other relevant agencies. Image via the Florida Channel.

“Unfortunately, consumers are not made aware AI has been used to underwrite their coverage, deny their claim or determine the amount of the offer being presented,” he said. “Therefore, consumers have not brought any AI-related issues to our attention. We really don’t know what (they) know and do not know about AI.”

When Valdés asked whether lawmakers or regulators should take the lead, Martin said the Office of Insurance Regulation should assess whether any gaps exist and that statutes already provide ample protection. Koval echoed a targeted approach, recommending that lawmakers allow existing oversight to work and then tailor fixes to specific problem areas.

Insurance Commissioner Michael Yaworsky, who leads the Office of Insurance Regulation, was originally confirmed to participate in Tuesday’s panel but dropped out for health reasons.

Jarrett Catlin is a policy adviser for TechNet, which describes itself as a national, bipartisan network of tech CEOs and senior executives that promotes the growth of American innovation. He attended Tuesday’s panel hearing remotely. Image via the Florida Channel.

Rep. Hillary Cassel, a Dania Beach Republican and insurance lawyer who sponsored bills last Session (HB 1433 and HB 1555) to mandate human input in health insurance claim decisions, said her proposals may have been “too targeted,” even as litigation elsewhere alleges AI-driven denials with minimal human review.

With that in mind, she asked whether the panelists recommend broad regulation or a piecemeal approach where corrections are employed industry-by-industry.

Jarrett Catlin of TechNet, a trade association representing senior technology executives, suggested the latter, citing legislation passed in other states that has since required retooling. He pointed to Colorado’s sweeping AI law, which was passed in 2024 but whose implementation has now been pushed to 2026, as evidence of how difficult it is to harmonize policy with industry practice across sectors.

Asked by Homestead Democratic Rep. Kevin Chambliss whether states require insurers to disclose when AI helps determine claims, Catlin said a recent Nebraska measure requires notice in cases of “consequential decisions.” Colorado’s law and another that Utah legislators enacted in 2024 require similar notification.

As the meeting wound down, Baker Republican Rep. Nathan Boyles returned to a practical point several panelists raised: If AI “can’t read” the fine print of complex policy language, as Koval asserted earlier, what chance does a homeowner have?

“Maybe at some point,” he said, “that’s something we should take a look at.”


