Author: Guy Shepherd
Over the last couple of months, we’ve been working with numerous global insurers seeking to replace their first-generation legacy modelling platforms with more modern technology. A common requirement emerging from these discussions is the desire to undertake future modelling tasks with greater performance, but without breaking the bank.
Historically, the year-on-year increases in CPU performance associated with Moore’s Law have generally kept pace with the need to undertake more modelling in greater detail. But the current and emerging modelling needs of the global insurance community now outstrip these evolutionary gains in CPU performance. This has led insurers and vendors to investigate a range of alternative techniques to provide the necessary performance increases. These include, but are not limited to:
- Highly scaled cloud-based compute
- GPU-based modelling
- Vector-based modelling
- Quantum computing
The use of GPUs (graphics processing units) has been suggested as a potential solution to financial modelling performance constraints for at least 15 years. The primary issue with GPUs is that they are specifically designed to perform a limited set of relatively simple operations associated with image manipulation very quickly and in parallel. However, over the last few years the concept of general-purpose computing on graphics processing units (or GPGPU) has become more realistic, with dedicated libraries available to help process elements of non-image-related problems using multi-core GPUs.
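To make the GPGPU idea concrete, the sketch below prices a simple maturity guarantee by Monte Carlo using a NumPy-style array library. This is purely an illustrative example, not part of any Mo.net API: the function name and parameters are our own invention. The point is the pattern GPGPU libraries enable, where the same array code runs on the GPU via CuPy when CUDA hardware is available, and falls back to NumPy on the CPU otherwise.

```python
# Hypothetical sketch of a GPGPU-friendly actuarial calculation.
# CuPy exposes a NumPy-compatible API, so identical array code can
# run on a GPU (if CUDA + CuPy are installed) or on the CPU.
try:
    import cupy as xp  # GPU arrays, when available
    ON_GPU = True
except ImportError:
    import numpy as xp  # CPU fallback with the same array interface
    ON_GPU = False


def mc_guarantee_value(s0, strike, rate, vol, years, n_paths, seed=42):
    """Estimate the value of a maturity guarantee max(strike - S_T, 0)
    by simulating terminal asset values under geometric Brownian motion.
    Each simulated path is independent, which is exactly the kind of
    embarrassingly parallel workload GPUs handle well."""
    xp.random.seed(seed)
    z = xp.random.standard_normal(n_paths)
    # Terminal asset value for every path, computed in one vectorised step
    s_t = s0 * xp.exp((rate - 0.5 * vol ** 2) * years + vol * years ** 0.5 * z)
    payoff = xp.maximum(strike - s_t, 0.0)
    # Discounted mean payoff across all paths
    return float(xp.exp(-rate * years) * payoff.mean())
```

Because the per-path work is identical and data-independent, the only change needed to move this calculation onto a GPU is which library provides the arrays; the modelling logic itself is untouched. Whether a real production model decomposes this cleanly is, of course, the open question.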
The fundamental question is whether GPUs could provide a step-change in the performance of financial models without incurring significant overheads, or whether the cost of converting models to utilise GPUs would be more wisely invested in scaling out more traditional CPU-based compute power.
At Software Alliance we have already started investigating the possibility of providing GPU support within a future version of the Mo.net kernel. Part of this work will also consider whether the tangible performance gains obtained from using GPUs are actually worth the effort.
We’d be interested in hearing from our client and partner network about whether GPU-based modelling is currently on your radar. We’ll keep you posted on our research and findings in future blog articles.