    Interface

    The Latest News & Insights from Software Alliance

1. Compute Stops Being the Constraint (Finally)

By 2035, raw compute is no longer something modellers think about. Cloud-native execution, massive parallelism, GPU acceleration, and on-demand elasticity mean that “Can we afford to run this?” quietly disappears as a question. What matters instead is how fast insight cycles complete, not how long individual runs take.

This isn’t about bigger models. It’s about orders of magnitude more exploration: thousands or millions of scenarios become routine rather than exceptional.
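The shift from “how long does a run take?” to “how fast do insight cycles complete?” shows up even in a toy sketch. The Python below is a minimal illustration, with `run_scenario` standing in for a real cash-flow projection (the function and its numbers are invented for the example); a real platform would fan work out to elastic cloud workers rather than local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(seed: int) -> float:
    # Hypothetical single-scenario projection returning a present value.
    # A real implementation would project cash flows under the scenario.
    return 1000.0 * (1 + (seed % 7) * 0.01)

def explore(n_scenarios: int) -> list[float]:
    # When compute is elastic, moving from 1,000 to 1,000,000 scenarios
    # is a parameter change, not a project. Threads stand in here for
    # the distributed workers a cloud-native engine would use.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_scenario, range(n_scenarios)))

if __name__ == "__main__":
    results = explore(10_000)
    print(len(results), "scenarios, mean PV =", sum(results) / len(results))
```

The point is the shape of the code, not the numbers: scenario count becomes a dial, and the interesting question becomes what you do with a million answers.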

Once computation becomes effectively infinite, the limiting factor moves somewhere much closer to home.

2. Models Become Modular, Not Monolithic

The 2035 model is unlikely to be a single engine. It’s a composition of components – mortality, lapses, expenses, assets, reinsurance, capital, management actions – each separable, swappable, and independently testable.

This modularity is what allows:

  • Continuous evolution without destabilising everything
  • Parallel development by different teams
  • AI to reason about cause and effect instead of just outputs

Monolithic “all-in-one” engines struggle here. Modular architectures thrive. This is one reason why platforms built around transparent, composable actuarial logic, like Mo.net, age better than those built around opaque execution pipelines.
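The idea of separable, swappable components can be sketched as a shared interface that each piece of actuarial logic implements. This is a minimal illustration with invented component names and flat decrements (real mortality and lapse bases would be age- and duration-specific), not a description of any particular platform’s design:

```python
from typing import Protocol

class Component(Protocol):
    """Anything that can project a policy state one step forward."""
    def project(self, state: dict) -> dict: ...

class Mortality:
    def __init__(self, rate: float):
        self.rate = rate
    def project(self, state: dict) -> dict:
        # Flat decrement for illustration only.
        return {**state, "in_force": state["in_force"] * (1 - self.rate)}

class Lapse:
    def __init__(self, rate: float):
        self.rate = rate
    def project(self, state: dict) -> dict:
        return {**state, "in_force": state["in_force"] * (1 - self.rate)}

def run_pipeline(components: list[Component], state: dict) -> dict:
    # Components are separable and swappable: reorder, replace, or
    # unit-test each one without touching the rest of the engine.
    for component in components:
        state = component.project(state)
    return state

state = run_pipeline([Mortality(0.01), Lapse(0.05)], {"in_force": 1000.0})
```

Because each component only sees the shared interface, a team can rewrite the lapse model, or an AI can trace which component moved a result, without opening up the rest of the engine.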

3. Structured Transparency Replaces Black Boxes

In 2035, transparency is not a philosophical preference but an operational requirement. When models are always on, feeding real decisions, nobody accepts “trust the engine” as an answer. Regulators, boards, and capital providers expect traceability – what changed, why it mattered, and where judgement entered.

This requires:

  • Explicit assumption structures
  • Machine-readable model logic
  • Built-in explainability, not bolt-on documentation

Ironically, this level of transparency is easier to achieve with disciplined platforms than with sprawling bespoke codebases.
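What an “explicit assumption structure” might look like in practice: the sketch below makes the value, its source, and the point where judgement entered part of the data itself. The field names and values are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Assumption:
    """A machine-readable assumption that carries its own audit trail."""
    name: str
    value: float
    source: str        # where the value came from
    approved_by: str    # where judgement entered
    effective: date

mortality_margin = Assumption(
    name="mortality_margin",
    value=1.10,
    source="2034 experience study",
    approved_by="Chief Actuary sign-off, Q1 2035",
    effective=date(2035, 1, 1),
)

# Because the structure is explicit, "what changed, why it mattered, and
# where judgement entered" becomes a query, not an archaeology exercise.
```

Documentation generated from structures like this is explainability built in; documentation written after the fact is the bolt-on kind the article warns against.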

4. AI Becomes a Modelling Co-Pilot, Not a Feature

AI in 2035 is not a separate tool you “use”, but an embedded capability:

  • Highlighting sensitivities before you ask
  • Surfacing non-linear behaviour automatically
  • Comparing today’s results to historical patterns
  • Drafting explanations, not conclusions

Critically, AI does not decide what assumptions are right but decides where your attention is most valuable. However, this only works if models are fast, structured, and consistent. AI doesn’t cope well with bespoke chaos. It amplifies both good architecture and bad.
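“Highlighting sensitivities before you ask” has a simple mechanical core: bump each assumption, measure the impact, rank the results. The sketch below shows that core with a deliberately toy reserve function (the model, assumptions, and bump size are all invented for illustration); it ranks where attention is valuable without deciding what the right assumption is:

```python
def rank_sensitivities(model, base: dict, bump: float = 0.01) -> list[tuple[str, float]]:
    """Bump each assumption by `bump` and rank assumptions by impact."""
    base_result = model(base)
    impacts = []
    for key, value in base.items():
        bumped = {**base, key: value * (1 + bump)}
        impacts.append((key, abs(model(bumped) - base_result)))
    # Largest impact first: this directs attention, it does not set assumptions.
    return sorted(impacts, key=lambda kv: kv[1], reverse=True)

def reserve(assumptions: dict) -> float:
    # Toy reserve, deliberately far more sensitive to mortality than lapse.
    return 1000 * assumptions["mortality"] + 10 * assumptions["lapse"]

ranked = rank_sensitivities(reserve, {"mortality": 1.0, "lapse": 1.0})
```

Note what this needs to work at scale: a model fast enough to re-run per assumption, and assumptions structured enough to enumerate – exactly the “fast, structured, and consistent” precondition above.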

5. Data Pipelines Become Boring

In 2026, data integration still consumes a significant amount of end-to-end modelling effort. But by 2035, data pipelines are dull, standardised, and reliable. Not because data got simpler, but because firms finally invested in:

  • Clean interfaces between data and models
  • Clear ownership of transformations
  • End-to-end lineage and business glossaries
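A “clean interface between data and models” can be as simple as a typed contract that carries its lineage with it. This is a sketch under invented names (`Extract`, `policy_extract`, the system identifiers), not a real feed specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Extract:
    """A dataset handed from the data layer to the model layer."""
    rows: tuple
    source: str      # upstream system of record
    transform: str   # the named, owned transformation that produced it
    as_of: str       # effective date of the extract

def policy_extract() -> Extract:
    # The model never reaches into raw source systems; it consumes a
    # contracted interface with end-to-end lineage attached.
    return Extract(
        rows=(("P001", 35, 100_000.0),),
        source="policy_admin.v2",
        transform="standard_policy_projection_feed",
        as_of="2035-01-01",
    )
```

The dullness is the feature: when every extract names its source and its transformation, ownership and lineage questions stop being investigations.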

When data stops being the daily firefight, modelling teams can finally focus on thinking again.

6. Governance Moves from Gates to Guardrails

Instead of governance being about approval gates, i.e. “has this run been signed off?”, it becomes about guardrails. This is a subtle but profound shift.

  • Which assumptions are allowed to move?
  • Which ranges trigger escalation?
  • What behaviour is automatically logged and explainable?

Technology enables this by making behaviour observable rather than controlled through friction. Fast models demand smarter governance, not heavier governance.
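The three guardrail questions above can be encoded directly. The sketch below assumes illustrative ranges and a single assumption; the point is that a move inside the sanctioned range proceeds, is observed, and only escalates when it should – no gate, no friction:

```python
GUARDRAILS = {
    # assumption: ((allowed range), (escalation range))
    "lapse_rate": ((0.02, 0.08), (0.01, 0.10)),
}

def check(assumption: str, value: float) -> str:
    """Guardrail rather than gate: moves are allowed, logged, and escalated."""
    allowed, escalate = GUARDRAILS[assumption]
    if allowed[0] <= value <= allowed[1]:
        return "ok"        # within the pre-agreed range: no sign-off needed
    if escalate[0] <= value <= escalate[1]:
        return "escalate"  # logged and routed for review; the run continues
    return "blocked"       # outside any sanctioned range
```

In a real platform every call would be logged automatically, which is what makes behaviour observable rather than controlled through friction.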

7. Human Interfaces Catch Up with Machine Speed

One of the least discussed enablers is interface design. In 2035, actuaries don’t scroll through output files. They interact with surfaces, ranges, and dynamic explanations. Visualisation isn’t just cosmetic. The model speaks in shapes and responses, not tables. Without this, even the fastest model is wasted.

Conclusion

Unfortunately, none of these technologies matter in isolation. The real enabler of the 2035 vision is coherence, i.e. models that are fast enough for exploration, structured enough for AI, transparent enough for trust, and governed enough for reality.

That’s why the future doesn’t belong to:

  • Fully bespoke open-source estates, or
  • Fully opaque vendor platforms

It belongs to modelling environments that blend discipline with freedom and treat technology as a way to remove friction, not add ceremony.



By 2035, no one in life insurance still talks about “running the model”.

That phrase belongs to an earlier era: a time when modelling was an event rather than a capability, when results arrived hours or days after questions were asked, and when insight lagged behind decision-making.

In 2035, modelling is simply there. Always on. Always available. And quietly shaping almost every material decision a life insurer makes.


Every few years, life insurance modelling circles back to a familiar idea: “Surely we can build this ourselves now?”

Open-source languages are mature. Cloud infrastructure is cheap and elastic. Numerical libraries are faster than ever. On the surface, the case for fully open-source financial modelling feels stronger than it ever has.

And yet, time and again, large-scale internal build attempts quietly stall, get re-scoped, or end up re-introducing vendor platforms through the back door. This isn’t because open source has failed actuarial modelling. It’s because life insurance modelling turns out to be much more than code.


Over the last decade a number of free/open-source database environments such as PostgreSQL and MySQL have emerged to challenge traditional players like Microsoft SQL Server and Oracle. Like PostgreSQL and MySQL, SQLite has found favour with lone developers using limited data sets or developing lightweight applications. Even users of SQL Server Express Edition have moved to SQLite where compatibility with the full edition of SQL Server isn’t a significant requirement.


The U.S. insurance industry is undergoing a major shift in how it calculates statutory reserves for fixed annuity products. Leading this transformation is VM-22, a new reserving standard that officially became mandatory in 2025 for many non-variable annuities. With its adoption, insurers are moving away from traditional formula-based methods and embracing a more nuanced, principle-based approach—one that better reflects the complexity and risks of modern annuity products.


In the life insurance industry, financial models—often built in complex spreadsheets—play a critical role in pricing, reserving, capital management, and strategic decision-making. These models are sophisticated, calculation-heavy, and require rigorous assumptions and projections over long time horizons. However, the very complexity that makes these spreadsheets powerful also makes them difficult to maintain, understand, audit, or transfer between teams—especially in the absence of proper documentation.
