The best approach to model audit

At Mazars we have built our reputation in model audit by placing emphasis on a “shadow modelling” approach. This differs from the traditional reliance on inspection techniques, and this post summarises our take on the debate.

One of our competitors got in first by posting the definition of “model audit” on Wikipedia. The entry refers to a “debate” on alternative approaches as follows:

“As noted in the citations, Andy Hucknall of BDO LLP (and many others) argues that the combination of a detailed ‘bottom up’ all-cells review with ‘top-down’ analytical review gives the greatest assurance.

An approach including an ‘all cells’ review provides assurance on the detailed model logic, as issues related to options not currently active in the model, such as scenarios or alternative inputs, will not be identified where a reperformance-only approach is used; such issues will only be identified by carefully reviewing a model’s logic. That said, an ‘all cells’ review without ‘top down’ analytical review would lose context, so it is important to use both techniques.

An alternative approach, put forward by Jerome Brice of Mazars LLP, is that a focus on shadow modelling is a superior approach.”

While I am not sure it complies with one of Wikipedia’s five pillars, “neutrality” (!), I thought that rather than enter into an editorial war on that website, this was a better place to set out the arguments for our approach, i.e. shadow modelling. To do this I have ignored the wider elements of model audit, such as review of documentation, accounting and tax, to concentrate on the core model audit process of proving computational correctness.

What does an “all cells”, “inspection”, “cell-by-cell”, “tick and bash” review imply?

First, let’s be clear on what is meant by an “all cells” review. This style of review is also referred to as the “inspection”, “cell-by-cell” or “tick and bash” approach. From speaking to model auditors who have followed this method, which I’ll refer to as the “inspection” approach, I understand the process runs along these lines:

  1. Receive model
  2. Run software tools to confirm row consistency. This allows the analyst to identify “unique formulae” so that, in effect, only one column of coding is reviewed in detail (i.e., it isn’t really an “all cells” review)
  3. Print or otherwise list the unique formula in each row – this is your “test sheet”.
  4. Go to each row in the spreadsheet and inspect the formula “by eye”.
  5. If it’s correct, “tick” your test sheet; if not, mark it with a “cross”
  6. When you’ve been through the whole model, present your ticks and crosses to your manager
  7. The manager can review some or all of your ticks and crosses to see if they agree
  8. Present your findings to the client and ask them to present a model which corrects any issues
  9. Use software to detect any changes in the new model and for any changes run steps 1-8 iteratively
  10. When you’re happy or have agreed caveats with your client finalise your opinion
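
Step 2 above is typically automated. As a minimal sketch (the function names and the offset-based normalisation here are my own illustration, not any particular vendor’s tool), a row-consistency check can rewrite each A1-style formula into relative R1C1-style offsets, so that formulae copied across a row collapse to a single “unique formula” and any inconsistent cell stands out:

```python
import re

def col_to_num(col):
    """Convert a column label ('A', 'B', ..., 'AA') to a 1-based number."""
    n = 0
    for ch in col:
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

def normalise(formula, row, col):
    """Rewrite A1-style references as offsets from (row, col), R1C1-style,
    so that a formula copied across a row normalises to the same string.
    (Illustrative only: ignores absolute '$' refs, ranges and sheet names.)"""
    def as_offset(m):
        c, r = col_to_num(m.group(1)), int(m.group(2))
        return f"R[{r - row}]C[{c - col}]"
    return re.sub(r"([A-Z]+)(\d+)", as_offset, formula)

def unique_formulae(row_cells, row):
    """row_cells: list of (column_number, formula) pairs for one row.
    Returns {normalised_formula: [columns]} -- a consistent row has one key."""
    groups = {}
    for col, f in row_cells:
        groups.setdefault(normalise(f, row, col), []).append(col)
    return groups
```

On a consistent row the dictionary has a single entry, so the analyst inspects one formula per row; a second entry pinpoints exactly which cells break the pattern.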

The Wikipedia entry says this method works when considering scenarios or alternative inputs, so, and I am guessing, when looking at each unique formula the analyst needs to consider not only whether it works in the base case but also under any potential configuration of the model (a big ask!).

The Wikipedia entry recommends supplementing this with a “top down” analytical review, which could describe a number of tests but is presumably intended to catch any errors missed during the ticking exercise.
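
By way of illustration (the check names and data layout below are my own assumptions, not part of any published methodology), “top down” analytical tests of this kind amount to a handful of whole-model sanity checks that can be scripted:

```python
def top_down_checks(fs, tol=1e-6):
    """fs: dict mapping financial-statement lines to lists of period values.
    Returns the names of the high-level checks that fail in any period."""
    failures = []
    # The balance sheet should balance in every period.
    if any(abs(a - (l + e)) > tol
           for a, l, e in zip(fs["assets"], fs["liabilities"], fs["equity"])):
        failures.append("balance sheet does not balance")
    # A financeable project model should not show negative cash.
    if any(c < -tol for c in fs["closing_cash"]):
        failures.append("negative cash balance")
    return failures
```

Checks like these say nothing about which cell is wrong; they only flag that something, somewhere, needs a closer look.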

Finally, apparently “many others” argue the inspection approach gives the greatest assurance. As you might have guessed, Mazars, alongside leading competitors and many of our clients who have been able to contrast the strength of our review process, doesn’t argue this at all.

So what’s the shadow modelling approach and why is it better?

In a similar way to that set out for the inspection review, shadow modelling as carried out by Mazars can be summarised as:

  1. Receive model
  2. The inputs and outputs from the model are extracted (the outputs typically include all financial statements, funder covenants and shareholder return measures and other key outputs as agreed with the client)
  3. The analyst uses this information to prepare a shadow model; this is typically done in a proven template and any bespoke elements are built and reviewed separately
  4. The outputs of this shadow model are then compared to provide a full reconciliation to the model under test
  5. Any differences in results are investigated in full such that any issues in the client model are pinpointed on an individual cell level
  6. A report is written on the outstanding differences, which are highlighted either as potential errors or as issues requiring further explanation; this is then subject to a senior review
  7. The client responds to the report and provides an updated model which attempts to correct the outstanding issues
  8. On receipt of the update the shadow modelling exercise is updated and further iterations of review are undertaken until there are no unexplained discrepancies between the client model and the shadow model
  9. On this basis an opinion letter is then agreed with the client.
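
Steps 4 and 5 above amount to a line-by-line reconciliation of two sets of outputs. A minimal sketch (the data layout and tolerance are my own assumptions for illustration):

```python
def reconcile(client, shadow, tol=1e-9):
    """client, shadow: dicts mapping each output line to a list of period
    values. Returns (line, period, client_value, shadow_value) for every
    difference, pinpointing exactly where investigation is needed."""
    diffs = []
    for line in client:
        for period, (c, s) in enumerate(zip(client[line], shadow.get(line, []))):
            if abs(c - s) > tol:
                diffs.append((line, period, c, s))
    return diffs
```

Each remaining tuple is a concrete “Why?” question; the review is not finished until this list is empty or every entry has an agreed explanation.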

The key advantage of shadow modelling is that if we use the same inputs as the model under test we expect to get the same answer. If the answer in any cell which impacts upon model outputs is different, we keep asking the question ‘Why?’ until either any error in the model under test is amended or we have a complete explanation as to why differences arise. Contrast this with the inspection approach where the auditor is effectively comparing the model under test with a model in his or her head!

Another advantage of the shadow modelling process is that it creates an end-to-end audit trail. At the end of our review we have a shadow model which fully reconciles any differences in the model under test – this forms a full proof of model logic from inputs through to outputs. Contrast this with the inspection approach – at the end of the process all that you have is a printed list of formula with some ticks!

Also, by testing a model using a shadow modelling process we are able to update our review in a straightforward manner as the model being tested evolves; this also means we can test alternative input scenarios in the same way. While the Wikipedia entry suggests this is done more straightforwardly using inspection techniques, I really can’t see how.

There is a further, final and decisive disadvantage of the inspection approach – it is far too boring! The human brain is not designed to cope with repetitive checking, and errors will inevitably be missed. Research (http://panko.shidler.hawaii.edu/HumanErr/Index.htm) suggests that when looking to uncover logic errors “by eye”, detection rates are often 50% or less. The shadow modelling process, in contrast, is an intellectual challenge, and even if the shadow modeller should become bored, the process doesn’t allow short cuts: the absence of a proven end-to-end reconciliation is there for all to see.

Of course, the argument above is somewhat simplified as I am sure all model auditors rely on more than one test. We too make use of software tools where appropriate and we too attach significance to “top-down” review including a senior commercial review to make sure deals “stack up” – and I haven’t even touched on approaches to document review, accounting and tax or sensitivities.

In summary, we are convinced, and having completed over 500 model audits using shadow modelling we believe we have now proved, that shadow modelling is a more effective core methodology for proving financial model correctness than traditional inspection methods.

themodelauditor.com is the blogsite of the Mazars model audit team.

You can also follow us on twitter @themodelauditor
