One of the critical issues facing Government in establishing a pensions dashboard is “data readiness”. Currently it is in a relatively weak position to argue whether schemes are dashboard ready, since it has no way either to assess the quality of a scheme’s data or to assist schemes in self-assessment.
The reasons why Defined Contribution data might be wrong are numerous. In most cases, the administrator cannot be held responsible for what goes into the database that holds the member record.
Contributions are received from employers on behalf of staff and can easily be miscalculated. Work done by Ros Altmann and PensionSync suggests that there may be a high percentage of marginal miscalculations among SMEs using auto-enrolment. These errors are not a pension problem; they are a reward and payroll problem and should be treated separately.
There is nothing a pension administration team can do but accept payroll’s declaration that the contributions received are correct; yet if an error is made, the bulk of restitution falls to the blameless administrator.
Similarly, the contributions received from the DWP (formerly the DSS) by way of protected rights contracting-out rebates had to be taken as correct, there being no way of reconciling what came from the DSS Newcastle operation with people’s entitlements.
Even the contributions from HMRC, representing pension tax relief at source, were typically accepted to be the responsibility of the tax office and not the pension administrator.
But in some cases administrators do make mistakes, usually where manual processes are used. Examples are in the receipt of one-off contributions, in claims, and in member-generated fund switches (as opposed to automated fund switching as part of lifestyle de-risking).
The vast majority of data errors made by pension administrators occur in the administration of incoming contributions and of member-initiated changes (switches and claims).
It is these errors that can genuinely be called pension mistakes, and it is these that this article will concentrate on.
The current situation
I am writing with the privilege of having analysed over 1,000,000 contribution records held by administrators of UK DC schemes, both on behalf of trustees and within contract-based plans.
It is clear that the quality of the data held varies. Our evidence is based on the process we adopt to create AgeWage scores: we marry the contribution histories, and the units those contributions purchased, to the eventual unit holdings that determine the pot value. This process is common and allows us to work out an individual’s “internal rate of return” or “IRR”. IRRs are factual, but they don’t tell us whether the underlying data is right.
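For readers who want to see the mechanics, here is a minimal sketch of that calculation in Python. It is illustrative only: the function names, the annualised-compounding convention and the bisection solver are my own assumptions, not a description of AgeWage’s production code.

```python
from datetime import date

def irr(contributions, pot_value, valuation_date):
    """Money-weighted annual return: the rate r at which the
    contribution history grows to exactly the current pot value."""
    def future_value(r):
        # Grow each contribution from its payment date to the
        # valuation date at annual rate r.
        return sum(amount * (1.0 + r) ** ((valuation_date - paid).days / 365.25)
                   for paid, amount in contributions)

    # Bisection between -99% and +100% a year; for a positive
    # contribution history, future_value rises monotonically with r.
    lo, hi = -0.99, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if future_value(mid) < pot_value:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

history = [(date(2015, 1, 31), 200.0), (date(2016, 1, 31), 200.0),
           (date(2017, 1, 31), 200.0)]
print(f"Achieved IRR: {irr(history, 750.0, date(2020, 1, 31)):.2%}")
```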
To sense-check whether an IRR is reasonable, we apply a second test: the same contribution history is run through a synthetic benchmark fund created for us by Morningstar (the Morningstar UK pension index). This fund tracks the progress of the average DC investment on a daily basis going back to 1980 and enables us to work out a synthetic IRR: the return someone with the same contribution history would have got had they been an average investor.
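Continuing the sketch above, the benchmark test amounts to: buy units of the index on each contribution date, value them at today’s index level, and feed the resulting synthetic pot back through the same IRR calculation. The index levels below are invented for illustration; the real Morningstar index data is licensed.

```python
# Illustrative daily index levels, keyed by date. These numbers are
# made up for the example; they are not Morningstar data.
benchmark = {date(2015, 1, 31): 100.0,
             date(2016, 1, 31): 104.0,
             date(2017, 1, 31): 110.0,
             date(2020, 1, 31): 125.0}

def synthetic_pot(contributions, valuation_date):
    """Pot the member would hold had each contribution bought units
    of the benchmark index on the day it was paid."""
    units = sum(amount / benchmark[paid] for paid, amount in contributions)
    return units * benchmark[valuation_date]

# Reuses irr() and history from the sketch above.
pot = synthetic_pot(history, date(2020, 1, 31))
print(f"Synthetic IRR: {irr(history, pot, date(2020, 1, 31)):.2%}")
```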
For the vast majority of records we look at, the achieved IRR and the benchmark IRR are close enough for us to validate the data as “making sense”. But in a small proportion of cases the data does not make sense: the two IRRs diverge sufficiently for us to consider the divergence unlikely, and in some cases very unlikely. We categorise these cases as outliers, and when we deliver our reports to the holders of these data sets we list the outliers and report their rate as a percentage of the data set.
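Again as a sketch, the outlier test then reduces to a simple divergence check. The five-percentage-point threshold below is purely illustrative: the article does not say what divergence AgeWage treats as “unlikely”, and in practice the cut-off would presumably depend on the length and shape of the contribution history.

```python
def is_outlier(achieved_irr, synthetic_irr, threshold=0.05):
    """Flag a record when its achieved and synthetic IRRs sit more
    than `threshold` apart (threshold chosen for illustration)."""
    return abs(achieved_irr - synthetic_irr) > threshold

# Outlier rate across a toy data set of (achieved, synthetic) pairs.
pairs = [(0.042, 0.045), (0.031, 0.044), (0.190, 0.046)]
rate = sum(is_outlier(a, s) for a, s in pairs) / len(pairs)
print(f"Outlier rate: {rate:.1%}")  # 33.3% for this toy sample
```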
We have seen outlier rates as low as 0.2% of data and as high as 8% of data. Typically, the outlier rate is around 3%.
The application of this work
By treating data in this way, we can show trustees, employers, providers and eventually members where there are potential problems with data. Errors can then hopefully be fixed before they reach members, and this process, known as cleansing, can be carried out relatively easily once “outliers” have been identified. I say “relatively” because most errors aren’t as easy to fix as they are to identify, but failures in identification have, in the past, stopped many schemes getting to first base.
It would be possible for schemes, using this methodology, to self-assess their data and establish how dashboard ready they were. It would be helpful if Government could validate the process and make it available to pension data administrators. The results of such self-assessment would have some immediate advantages:
- Those people in DC schemes approaching the point when they want their money back could be assured that the money in their pot was “right”.
- Pension administrators could feel comfortable that their liabilities for errors were mitigated by this early-warning system.
- Trustees and provider IGCs could speak with authority in their chair statements about the quality of service being offered (a component of the value for money assessment).
- Government could have a clear view of schemes’ capacity to integrate with the pensions dashboard, with sensible results for those viewing their data.
- The Government’s regulators could have proper information on data quality and be able to manage situations where schemes were failing more accurately and earlier.
AgeWage makes this offer to Government and to the pension administration industry. We have found a way to process data in bulk, provide what we consider reliable metrics on data quality, and pull out “outliers” for inspection and potential data cleansing.
We welcome feedback on this idea, either as comments on this blog or directly to its author at email@example.com.