Brighter Consultancy Blog

AI in Actuarial and Finance

Written by Sarah Watkins | Mar 19, 2026 4:53:01 PM

AI has become increasingly intertwined with our daily lives. From streaming recommendations to social media feeds and navigation assistants, it’s transforming how we interact with the world. It’s also reshaping the actuarial and finance sector, with technologies such as machine learning, automation and generative AI changing how organisations approach risk modelling, pricing and reserving. With around 75% of financial firms already using AI, and more planning to do so, we look at how it’s affecting current actuarial functions, and at the questions the industry needs to ask itself about governance, regulatory compliance and the importance of maintaining professional expertise and judgement.

A transformation of core actuarial functions?

In the past, actuarial expertise depended on analysing financial theories, statistics and mathematical models to make assumptions about the future and see how they affect real-world problems. In the finance, insurance and pensions sectors, an actuary’s primary functions include risk assessment, financial forecasting, data analysis and regulatory compliance. With the introduction of AI, actuaries can now process far greater volumes of data to identify patterns that were previously overly time-consuming or difficult to detect.

Machine learning algorithms, for example, can now segment risks far more accurately by analysing a much wider range of demographic, behavioural and market variables, allowing insurers and other financial institutions to develop highly personalised pricing strategies that reflect individual risk profiles more closely. As Ronald Richman, FIA FASSA CERA CPCU, noted in his paper ‘Embracing AI: Transforming the Actuarial Profession’, AI-driven techniques have the potential to enable more granular risk segmentation and improve predictive capability in pricing models.
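As an illustration of the kind of segmentation described above, the sketch below fits a gradient-boosting classifier to a small synthetic dataset and converts predicted claim probabilities into pricing tiers. All features, figures and thresholds here are invented for illustration only; a real pricing model would use far richer data and rigorous validation.

```python
# Illustrative risk segmentation with gradient boosting (synthetic data).
# All features and figures below are invented; this is not a pricing model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
n = 500

# Hypothetical policyholder features: age, annual mileage, prior claims count.
age = rng.uniform(18, 80, n)
mileage = rng.uniform(2_000, 30_000, n)
prior_claims = rng.poisson(0.3, n)

# Synthetic "had a claim" label: here, higher mileage and prior claims
# increase risk while age decreases it (purely for demonstration).
logit = -3.0 + 0.00008 * mileage - 0.02 * (age - 18) + 0.8 * prior_claims
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, mileage, prior_claims])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Predicted claim probabilities can drive more granular pricing tiers.
risk_scores = model.predict_proba(X)[:, 1]
tiers = np.digitize(risk_scores, [0.1, 0.25, 0.5])  # four illustrative tiers
```

The tier boundaries are arbitrary; the point is that a fitted model turns many interacting variables into a single risk score that pricing can act on.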

Risk modelling is another area benefiting from this technology, with AI-driven analytical tools enabling actuaries to incorporate complex and unstructured datasets into predictive models. This can result in a better understanding of emerging risks, including evolving customer behaviours and climate-related events, and can also enhance stress testing and strategic planning.

Reserving and forecasting

Reserving is another important responsibility for actuaries, and many organisations are now introducing AI tools that can support the analysis of large volumes of claims data far more efficiently than in the past. A report by the Financial Reporting Council, the body that regulates auditors, accountants and actuaries, entitled ‘The use of Artificial Intelligence and Machine Learning in UK actuarial work’, notes that AI is increasingly used in forecasting, with its use most prevalent in general insurance. Machine learning models can scan large claims datasets to highlight anomalies or trends in historical data that may influence future liabilities, enabling actuaries to refine their assumptions.
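One way such anomaly screening might look in practice: the sketch below runs an isolation forest over a synthetic claims dataset and flags outlying claims for actuarial review. The data, distributions and contamination rate are all invented for illustration.

```python
# Illustrative anomaly screening of claims data (synthetic figures only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic claim records: (claim amount, settlement delay in days).
amounts = rng.lognormal(mean=8.0, sigma=0.5, size=300)  # ~ £3k typical
delays = rng.gamma(shape=2.0, scale=30.0, size=300)     # ~ 60 days typical
claims = np.column_stack([amounts, delays])

# Inject one clearly anomalous claim: very large and very slow to settle.
claims = np.vstack([claims, [np.exp(8.0) * 50, 1_000.0]])

# Fit an isolation forest; contamination sets the expected outlier share.
model = IsolationForest(contamination=0.02, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 = flagged for review, 1 = typical

flagged = np.where(flags == -1)[0]  # indices for an actuary to examine
```

In a real reserving workflow the flagged claims would go to an actuary for judgement, not be excluded automatically; the model only prioritises attention.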

AI also has the potential to be used in conjunction with established actuarial reserving methods to provide a ‘second opinion’. Running alongside traditional methods, the technology can offer alternative projections to help validate assumptions and identify potential bias or inconsistencies.
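To make the ‘second opinion’ idea concrete, the sketch below computes a basic chain-ladder projection, one of the traditional reserving methods referred to, on a tiny invented cumulative claims triangle. In practice, an ML model’s alternative projections would be compared against these ultimates to surface bias or inconsistencies; the triangle and all figures are purely illustrative.

```python
# Minimal chain-ladder sketch on an invented cumulative claims triangle.
# An ML model's alternative projections could be checked against `ultimates`.
import numpy as np

# Rows = accident years, columns = development periods; NaN = not yet observed.
tri = np.array([
    [100.0, 150.0, 175.0, 180.0],
    [110.0, 165.0, 190.0, np.nan],
    [120.0, 180.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = np.empty(n - 1)
for j in range(n - 1):
    # Volume-weighted development factor over rows with both periods observed.
    known = ~np.isnan(tri[:, j + 1])
    factors[j] = tri[known, j + 1].sum() / tri[known, j].sum()

# Roll each accident year's latest observed value forward to ultimate.
ultimates = np.empty(tri.shape[0])
for i in range(tri.shape[0]):
    last = int(np.max(np.where(~np.isnan(tri[i]))[0]))
    ultimates[i] = tri[i, last] * np.prod(factors[last:])
```

The value of the ‘second opinion’ comes from disagreement: where an ML projection diverges materially from the chain-ladder ultimate, the actuary investigates why before relying on either.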

In addition, generative AI, which encompasses deep-learning models that generate content based on the data on which they are trained, is also being incorporated into operational actuarial work such as automating regulatory reporting, summarising management information documents and generating communications. In theory, this frees up actuaries’ time, allowing them to focus on higher-value analytical work.

Regulation and governance

As AI becomes increasingly ubiquitous within the sector, regulators are paying closer attention to its use and potential misuse. The Financial Conduct Authority (FCA), for example, has already issued guidance on the use of AI, stating that it expects firms to ‘apply existing rules on governance, accountability and consumer protection to AI systems’ in a ‘safe and responsible’ manner.

In particular, regulators are concerned about transparency, model governance and the potential for bias within algorithms, particularly in complex machine learning systems, which can make it difficult for actuaries to explain exactly how a model has reached a specific conclusion. UK Finance, the trade association for firms providing credit, banking, markets and payment-related services, published a 2020 paper entitled ‘Trust, Context and Regulation: Achieving more explainable AI in financial services’, in which it stresses the importance of model explainability and governance when AI is used in regulated financial decisions.

Where transparency is lacking in regulated sectors such as insurance and pensions, firms can struggle to justify their modelling decisions to regulators, and there is increasing emphasis on implementing risk-based supervision and governance frameworks for actuarial AI applications. The FCA’s Chief Executive, Nikhil Rathi, recently wrote to the Prime Minister, Sir Keir Starmer, saying that new AI-specific rules were unlikely and that existing regulatory frameworks already addressed governance, risk management and accountability. However, as technologies such as large language models (LLMs) evolve and their use becomes more prevalent, stronger oversight, more robust validation processes and clearer documentation will be needed to ensure that they operate fairly and reliably.

Maintaining professional judgment

Despite the increasing use and capabilities of AI, actuarial work remains essentially human-driven: it still depends on professional judgment when selecting assumptions and assessing and interpreting data. Nor can AI fully account for factors such as regulatory change, economic uncertainty or behavioural trends. It should therefore complement the expertise of actuaries rather than replace it.

Mr Richman (as above) emphasises the importance of upholding professional standards that ensure compliance with requirements related to model understanding, fitness for purpose and bias avoidance and believes that this is essential to maintain the trustworthiness and integrity of the profession. AI models must be used responsibly, fairly and transparently and actuaries must recognise the limitations of automated systems to ensure that the decisions they take, which affect their customers or the financial stability of their organisation, are backed by the appropriate oversight.

In summary

The actuarial profession has always been an early adopter of technology, from spreadsheet programmes to advanced statistical software, and AI can be seen as simply the next stage in the profession’s drive to do its work better.

The challenge for the profession now lies not simply in adopting AI but in integrating it in a way that maintains reliability, transparency and ethical standards, underpinned by strong governance and informed professional judgment. AI has the potential to offer organisations profound insight into financial risk, but it must be tempered with human oversight and professional judgment to maintain the standards of which the profession is rightly proud.

If you’d like more information about how Brighter can help you responsibly integrate AI into your actuarial functions, contact us.