Author: Guy Shepherd
As the complexity of the models used by insurance enterprises continues to escalate, and the reliance placed on those models continues to grow, ensuring that models operate as intended comes under increasing scrutiny. Historically, model validation has typically been carried out using spreadsheets, or even by validating individual elements of a model by hand, but the size & complexity of today’s models make this approach difficult at best and frequently unrealistic.
The Importance of Model Validation
There are numerous drivers for model validation, which include but are not limited to the following:
- Accuracy and Reliability – Ensuring that a model’s output is accurate and reliable for the purpose intended and reducing the likelihood of potentially costly errors
- Regulatory Compliance – Conformity with specific regulatory standards, frameworks and guidance
- Stakeholder Confidence – Providing confidence to stakeholders – e.g., investors, management, regulators – that financial projections and risk assessments are robust and credible
- Model Performance – Over time, the assumptions and inputs used in financial models can become outdated or incorrect. Regular validation helps to ensure that models continue to perform as expected and remain relevant in changing market conditions.
- Transparency and Documentation – The validation process often involves thorough documentation and transparency, which helps in understanding the model’s mechanics, assumptions, and limitations. This is crucial for effective communication and informed decision-making.
- Error Detection – Validation can reveal errors in the model’s design, coding, or data input processes, allowing for corrections to be made before the model is used in the next cycle of business-critical decision-making.
In summary, model validation is essential to ensure that financial models are accurate, reliable, and fit for their intended purpose. It protects insurers from financial and reputational risks, ensures regulatory compliance, and enhances the overall quality of financial decision-making.
Approaches to Model Validation
There are a number of qualitative and quantitative approaches typically used to validate models; a selection of these is listed below. As some approaches are more robust and practical than others, insurers typically use a range of tactics to ensure ongoing confidence in models.
- Qualitative approaches – conceptual soundness, code & documentation reviews, expert judgement, peer reviews, first-principles testing, scenario analysis
- Quantitative approaches – secondary / shadow model testing (e.g. spreadsheets), benchmarking, back-testing, stress testing, use-case validation
It’s probably fair to say that no single approach can or should be used to validate models in today’s insurance ecosystem. A combination of qualitative and quantitative methods is generally accepted as model validation best practice.
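To make the quantitative side concrete, a minimal sketch of secondary / shadow model testing is shown below. The annuity calculation, case data, and tolerance are hypothetical, chosen purely for illustration; the point is that the "shadow" implementation computes the same quantity from first principles, independently of the primary model's closed-form approach, and any disagreement beyond tolerance is flagged for investigation.

```python
# Hypothetical shadow-model test: the same present-value calculation is
# coded two independent ways and the results compared within a tolerance.

def primary_annuity_pv(payment, rate, years):
    # "Primary" model: closed-form present value of an annuity-immediate
    return payment * (1 - (1 + rate) ** -years) / rate

def shadow_annuity_pv(payment, rate, years):
    # "Shadow" model: the same quantity rebuilt from first principles,
    # discounting each cash flow individually
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

def validate(cases, tolerance=1e-6):
    # Return the cases where primary and shadow results disagree
    # beyond the agreed tolerance
    failures = []
    for payment, rate, years in cases:
        p = primary_annuity_pv(payment, rate, years)
        s = shadow_annuity_pv(payment, rate, years)
        if abs(p - s) > tolerance:
            failures.append((payment, rate, years, p, s))
    return failures

cases = [(1000.0, 0.03, 10), (500.0, 0.05, 25), (1200.0, 0.015, 40)]
print(validate(cases))  # an empty list means the two models agree
```

In practice the comparison would run over full policy data rather than a handful of cases, and the tolerance would be set by materiality rather than floating-point precision, but the shape of the test is the same.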
Issues with traditional secondary model validation
By far the most common single approach to model validation is the use of a spreadsheet to test a mission-critical model. Historically, spreadsheets have been developed alongside the primary models (or retrospectively) to provide a seemingly independent view of some or all of the key calculations. There are several issues with this approach today:
- The spreadsheets are frequently developed using exactly the same methodology as the primary model, which can lead to systemic issues occurring in both models and a false sense of security
- Spreadsheets are generally slow and struggle to replicate all the functionality found in today’s models. As a result, it may only be possible to run isolated calculations or small blocks of data through the secondary models.
- As model features evolve it can be difficult to maintain transparency & lineage in the spreadsheet model, and therefore keep primary and secondary models in sync
- There is increased concern about any differences between live (primary) and validation (secondary) models, since even small differences in approach can lead to significant differences in model results, and it can be unclear whether those differences are due to issues with the primary or the secondary model.
While there are sometimes solutions to these problems, these are not necessarily simple to implement or retrofit into an operational environment.
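The last point deserves a worked illustration. The figures below are hypothetical, but they show how a single innocuous-looking methodological choice, annual versus monthly discounting, compounds into a material gap over a long projection, leaving neither model obviously "right":

```python
# Hypothetical illustration: one modelling choice (annual vs monthly
# discounting) applied to the same 40-year cash flow.

rate, years, cashflow = 0.04, 40, 1000.0

# Primary model discounts annually; secondary discounts monthly
annual_pv = cashflow / (1 + rate) ** years
monthly_pv = cashflow / (1 + rate / 12) ** (12 * years)

gap_pct = (annual_pv - monthly_pv) / annual_pv * 100
print(f"annual: {annual_pv:.2f}, monthly: {monthly_pv:.2f}, "
      f"gap: {gap_pct:.1f}%")
```

A gap of a few percent on a single cash flow may look small, but aggregated across a portfolio it can dwarf the genuine errors the validation exercise was meant to catch.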
Benefits of using Mo.net for Model Validation
An increasing number of clients are replacing their existing model validation capability with Mo.net-based solutions. Several clients have already successfully implemented model validation solutions using Mo.net and have highlighted the following features as enablers in this process.
- Out-of-the-box connectivity to the upstream and downstream data stores used by the primary models, allowing calculations to be validated without worrying about data issues
- Transparency and traceability of calculations at the holistic and atomic level
- Development and operational performance without having to use exotic hardware / distribution mechanisms
- Built-in change management and source control
- Automation of model operation to support rapid & regular revalidation activity
Clients have also indicated that by developing validation solutions using Mo.net they now have a ready-made and robust replacement for their legacy first-generation models & platforms. They have also been able to highlight any significant issues with legacy models & results before embarking on any migration journey.
Summary
By leveraging the existing, integrated features of the Mo.net platform, an increasing number of insurers are performing thorough and efficient financial model validation, ensuring that their models are robust, accurate, and reliable.