Our interest rate risk models perform three important and related tasks: they allow us to measure, manage, and report our interest rate risk.

Regulatory literature is full of standards, pronouncements, best practices, and requirements, all designed to help us perform these tasks as accurately as possible. When rates shift, it’s imperative that we’re fully aware of, and anticipate, likely changes in our net interest income and profitability. Otherwise, we’re not really managing our risk.
But if that’s all we do, we’re missing a critical component of our asset liability process.
With the heavy emphasis on tying down the technical side of interest rate risk, it’s all too easy to forget one of the key truths: never forget you’re using a model.
Models come in all types and levels of complexity, from simple aggregate models to the most complex and sophisticated instrument-level models. But despite their differences, all of these models share one important characteristic:
They are models and, when interest rates change, they will all be wrong.
Some will be more accurate than others, but none of them fully captures the complicated reality of banking.
So if all models are wrong, how do we judge their relative accuracy? Three tools help:
  1. Validation
  2. Backtesting
  3. Sensitivity Testing
All model vendors should provide users with an independent third-party model validation opinion. This opinion confirms the mathematical accuracy of the model, its inputs, and its outputs. Having a validation opinion in hand relieves the banker of the need to examine and test the theoretical underpinnings of the model and allows the banker to focus on how it’s used in their own environment.
Backtesting is the process of comparing your model’s forecasts with the actual results achieved. Although this sounds simple, backtesting can be as detailed and rigorous as you’d like to make it. In fact, I’d recommend that most bankers consider ramping up their backtests. Here’s how I would approach it:
Start with your model’s static forecast. Static forecasts (required for all financial institutions) are just what the name implies: static. The balance sheet doesn’t grow, the balance sheet mix doesn’t change, and rates are unchanged. All cash flow runoff is reinvested in the exact same category of asset or liability from which it came.
While unrealistic for managing or forecasting in the normal sense, static modeling is extremely important for measuring interest rate risk. Because we hold so much constant, we can zero in on the changes driven solely by interest rate risk.
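To make the mechanics concrete, here’s a minimal sketch of a static projection. All balances, rates, and runoff speeds are hypothetical, and the one-line blended-rate calculation stands in for whatever cash flow engine your model actually uses. The key point it illustrates: even with market rates unchanged, runoff reinvests at today’s offering rate, which can differ from legacy book yields.

```python
# Hypothetical static NII projection: balances and mix held constant,
# runoff reinvested in the same category at its current offering rate.
categories = {
    # name: (balance, book_rate, current_rate, annual_runoff_pct)
    # Liabilities carry negative balances.
    "Fixed-rate loans":      (60_000_000, 0.055, 0.050, 0.20),
    "Investments":           (25_000_000, 0.030, 0.035, 0.15),
    "Non-maturity deposits": (-70_000_000, 0.010, 0.010, 0.00),
    "CDs":                   (-15_000_000, 0.040, 0.035, 0.50),
}

def static_nii(cats: dict) -> float:
    """12-month net interest income with balance sheet size and mix frozen."""
    nii = 0.0
    for name, (bal, book, current, runoff) in cats.items():
        # The runoff slice earns (or costs) the current rate;
        # the remainder stays at the legacy book rate.
        blended = book * (1 - runoff) + current * runoff
        nii += bal * blended
    return nii

print(f"Static 12-month NII: ${static_nii(categories):,.0f}")
```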
Take your prior-period forecast (I recommend one year prior) and adjust it for the actual period-end changes in balance sheet size, mix, and rate behavior. Once you do this, you have what I like to call the adjusted forecast.
The adjusted forecast is the forecast your model would have made had you known in advance, with certainty, exactly how the past year’s changes would play out. Compare this adjusted forecast (on both a dollar and a percentage basis) with your actual results and look for large or material unexplained variances, as in the sketch below.
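Here’s a hypothetical sketch of that comparison. The line items, figures, and 5% materiality threshold are all illustrative; set your own threshold by policy.

```python
# Compare the adjusted forecast with actual results; because the
# forecast is already restated for actual size, mix, and rate changes,
# any remaining variance is the unexplained portion worth investigating.
MATERIALITY_PCT = 0.05  # illustrative 5% threshold

line_items = {
    # item: (adjusted_forecast, actual)
    "Interest income":     (9_800_000, 10_150_000),
    "Interest expense":    (3_100_000, 3_050_000),
    "Net interest income": (6_700_000, 7_100_000),
}

for item, (forecast, actual) in line_items.items():
    dollar_var = actual - forecast
    pct_var = dollar_var / forecast
    flag = "INVESTIGATE" if abs(pct_var) > MATERIALITY_PCT else "ok"
    print(f"{item:<22} ${dollar_var:>+10,.0f}  {pct_var:>+7.2%}  {flag}")
```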
Both validation and backtesting focus on how your model works. Sensitivity testing, on the other hand, focuses on identifying where your model doesn’t work.
As we’ve mentioned before, your model assumptions are like fuel for your interest rate risk model engine. The most important key assumptions are your non-maturity deposit account (NMDA) average lives, NMDA rate-sensitivity betas, asset prepayment rates, and key rate drivers.
This is where we circle back to the idea that all models are wrong. Most models are fairly accurate when rates don’t move; no surprise there. It’s easy to forecast what happens when nothing changes: typically it’s just more of the same.
But as rates move more frequently, in larger increments, and in more complex ways (curve twists and slope changes), models begin to break down. The same is true of our assumptions.
Sensitivity testing seeks to identify in advance where your model begins to break down by comparing your base model run with runs based on increasingly large changes to your most important assumptions.
By design, these sensitivity tests will report wildly differing IRR results. But here’s where it gets really interesting: you must determine whether these results truly represent the likely outcomes of unlikely scenarios, or whether they expose shortcomings in the model itself. The sketch below shows the basic pattern.
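As a hedged illustration only: the toy NII function, balance sheet figures, rate shock, and base beta below are all assumptions, and a real sensitivity test would rerun your full model rather than a one-line formula. The pattern is the same either way: scale one key assumption through increasingly large changes and compare each run against the base.

```python
# Illustrative sensitivity test on one key assumption (the NMDA beta)
# under a +200 bp parallel rate shock. All inputs are hypothetical.
BASE_BETA = 0.30          # assumed base NMDA deposit beta
RATE_SHOCK = 0.02         # +200 bp parallel shock
ASSETS, DEPOSITS = 100_000_000, 85_000_000
ASSET_REPRICING = 0.60    # share of assets repricing within the horizon

def shocked_nii(beta: float) -> float:
    """Toy 12-month NII under the rate shock for a given deposit beta."""
    asset_income = ASSETS * (0.050 + RATE_SHOCK * ASSET_REPRICING)
    deposit_cost = DEPOSITS * (0.010 + RATE_SHOCK * beta)
    return asset_income - deposit_cost

base = shocked_nii(BASE_BETA)
for scale in (0.75, 1.25, 1.50, 2.00):   # increasingly large changes
    run = shocked_nii(BASE_BETA * scale)
    print(f"beta x{scale:.2f}: NII ${run:,.0f} "
          f"(vs base ${run - base:+,.0f})")
```

If a modest change to one assumption swings your reported IRR dramatically, that’s a signal to examine both the assumption and how the model handles it.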
Remember, no model will be completely accurate. That’s as true for your assumptions and how they’re handled as it is for the model overall. But by identifying the weak spots in advance, we’re all better prepared to manage our interest rate risk position if and when unexpected scenarios come to pass.
Contact us today for more information.