December 12, 2025
Experts from Insurance Compliance Services (ICS) addressed how brokers are being affected by the increasing use of artificial intelligence in the insurance space, and what this means for regulation and compliance.
In the first part of the webinar, Helen Dean (Head of Intermediary, North) introduced the concept of AI in Insurance.
As she explained, AI is no longer a future concept; it is already embedded in how the insurance market operates and in how customers expect to be served.
Helen highlighted several core use cases:
- Risk assessment & underwriting – using large datasets to price more accurately and consistently, and speed up decisions.
- Claims & fraud detection – verifying documents, spotting suspicious patterns (e.g. reused photos; see the sketch after this list), and accelerating claims handling.
- Preventative risk management – shifting from “detect and repair” to “predict and prevent” using driving, lifestyle and behavioural data.
- Compliance & cyber – monitoring data, flagging risks, and supporting regulatory compliance processes.
- Customer experience – chatbots, personalised product recommendations, and 24/7 digital support.
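As an aside, here is one illustration of what "spotting reused photos" can involve under the bonnet. This is a minimal, hypothetical sketch (not taken from the webinar) of one common technique, perceptual hashing, which flags near-duplicate claim images for human review. The libraries used (Pillow and ImageHash) and the file paths are assumptions for illustration only.

```python
# Illustrative sketch: flag claim photos that closely match previously
# submitted images, using perceptual hashing (one common approach).
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

def find_reused_photos(new_photos, known_photos, max_distance=5):
    """Return (new, known) path pairs whose perceptual hashes nearly match."""
    known_hashes = {path: imagehash.phash(Image.open(path)) for path in known_photos}
    matches = []
    for new_path in new_photos:
        new_hash = imagehash.phash(Image.open(new_path))
        for known_path, known_hash in known_hashes.items():
            # Subtracting two hashes gives the number of differing bits;
            # a small distance means the images look alike, even after
            # resizing or recompression.
            if new_hash - known_hash <= max_distance:
                matches.append((new_path, known_path))
    return matches

# Hypothetical usage: compare photos on a new claim against an archive.
for new, old in find_reused_photos(["claim_123/photo1.jpg"],
                                   ["archive/claim_045/photo2.jpg"]):
    print(f"Flag for human review: {new} closely matches {old}")
```

Note that the output here is a flag for a human claims handler rather than an automatic decision, consistent with the "humans in the loop" point Helen makes later in the session.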
Even where firms aren’t consciously “doing AI”, many tools already in use (from document review to customer service platforms) are AI‑enabled.
There are clear benefits for brokers and insurers in using AI. For example, it can enable faster processing and decision-making, greater consistency and potentially improved accuracy, and the automation of repetitive, low-value tasks, while supporting better use of data for insights and innovation.
Despite these sunny skies, there are also pitfalls. Helen noted that unfair outcomes can arise if the underlying data is biased. AI can also produce misinformation and "hallucinations": plausible but incorrect outputs. There is a further risk of staff becoming over-reliant on AI and merely rubber-stamping its output without due consideration.
Other risks include privacy and data security concerns around how customer data is gathered, shared and stored. There are also environmental considerations, with the hardware footprint of AI consuming significant amounts of water and energy. Additionally, the potential for misuse of AI, and for harmful or uncontrolled applications, remains high.
Practical ground rules for firms
Helen suggested that brokers and MGAs must build AI into their existing governance rather than bolt it on later.
Key steps include:
- Add AI to your risk register and policies – including which tools can be used and for what.
- Assign Senior Manager responsibility – and update Statements of Responsibilities where relevant.
- Start small and prove the concept – impact assessments, limited pilots, clear MI to monitor outcomes.
- Stress‑test for vulnerable customers and fairness – particularly where AI affects pricing, claims or eligibility.
- Keep humans in the loop – AI supports decisions; it does not replace accountability.
Communicating with the FCA (Bella Macfarlane)
ICS Head of London Markets, Bella Macfarlane, also tackled how to stay on the front foot with the FCA – especially as use of AI increases regulatory interest in data, models and customer outcomes.
The FCA’s “do the right thing” expectation boils down to:
- Take responsibility when things go wrong
- Pay redress where due
- Cooperate openly
- Fix root causes so it doesn’t happen again
According to Bella, firms that engage early and constructively are far more likely to avoid the toughest tools in the FCA’s kit (such as s166 reviews, VREQs or full enforcement action), or at least limit their impact.
She noted that, under Principle 11 and SUP 15, firms must tell the FCA about anything it would reasonably expect to know, including:
- serious issues affecting customers, financial soundness, or your ability to operate;
- significant breaches, frauds, or control failures; and
- major changes to your risk profile, including material new uses of AI affecting underwriting, pricing, claims or customer interactions.
If in doubt, it is often safer to notify with context than to say nothing. Many notifications are simply logged with no further action where the firm clearly has the issue under control.
Bella outlined what good practice in communications looks like:
- Be open, factual and timely – don’t minimise or obscure problems.
- Document decisions and near misses – including why you did or didn’t notify.
- Maintain a breach/incident register and clear MI.
- Treat significant AI developments as part of this same framework, not a separate “tech topic”.
Conclusions
AI will increasingly shape how brokers operate, price, advise and serve customers. At the same time, the FCA is sharpening its focus on data, models and outcomes.
For brokers, the winning approach is to adopt AI where it genuinely improves outcomes and efficiency, embed it firmly within risk, compliance and Senior Manager accountability, and stay proactive and transparent with the FCA when changes are significant.
This combination of controlled innovation and open regulatory engagement will be central to staying both competitive and compliant in the AI-enabled insurance market.
ICS can assist you in understanding and meeting the FCA's expectations in relation to your use and adoption of AI tools, and can direct you to relevant FCA resources. Please contact us on .