I’d value your feedback on a scoring system we are developing which aims to provide employers with a method of rating one pension proposition against another.
We want it used by employers looking to establish a new workplace scheme, and those who have an existing scheme and are wondering if it needs attention (a second opinion).
Clearly this will need some clarity from Government about what makes for good (the Quality Test).
Here are three questions for you, trusted readers:
1. Is this a fair method to assess workplace pension schemes?
2. Can we expect employers to engage in rating a series of propositions like this?
3. Would providers be comfortable offering pensions directly to companies choosing in this way?
Answers on a postcard (or better still in “comments”).
Case Study
Here is how an employer rated the Providers at a recent beauty parade we ran.
Provider A
| Attribute | Weighting | Out of | Score |
| --- | --- | --- | --- |
| Charge | 1 | 30 | 25 |
| Investment | 2 | 25 | 20 |
| Payroll/HR support | 3 | 20 | 10 |
| At Retirement | 4 | 12 | 5 |
| Member engagement | 5 | 8 | 3 |
| Security of proposition | 6 | 5 | 5 |
| Overall | n/a | 100 | 68 |
Provider B
| Attribute | Weighting | Out of | Score |
| --- | --- | --- | --- |
| Charge | 1 | 30 | 20 |
| Investment | 2 | 25 | 15 |
| Payroll/HR support | 3 | 20 | 15 |
| At Retirement | 4 | 12 | 3 |
| Member engagement | 5 | 8 | 8 |
| Security of proposition | 6 | 5 | 5 |
| Overall | n/a | 100 | 66 |
NOTES
As they say in boxing, Provider A won on points: the judge marked it 68/66. This seems a simple and elegant basis for taking and documenting the decision.
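The arithmetic behind the tables can be sketched in a few lines. This is an illustrative sketch only, assuming (as the case study does) that the "out of" caps sum to 100 so the overall score reads as a percentage; the attribute names, caps and scores are taken straight from the tables above.

```python
# Caps ("out of" column) for each attribute, summing to 100.
CAPS = {
    "Charge": 30,
    "Investment": 25,
    "Payroll/HR support": 20,
    "At Retirement": 12,
    "Member engagement": 8,
    "Security of proposition": 5,
}

def overall(scores: dict) -> int:
    """Sum the attribute scores, checking none exceeds its cap."""
    for attribute, score in scores.items():
        assert 0 <= score <= CAPS[attribute], f"{attribute} exceeds its cap"
    return sum(scores.values())

# Scores from the case-study tables.
provider_a = {"Charge": 25, "Investment": 20, "Payroll/HR support": 10,
              "At Retirement": 5, "Member engagement": 3,
              "Security of proposition": 5}
provider_b = {"Charge": 20, "Investment": 15, "Payroll/HR support": 15,
              "At Retirement": 3, "Member engagement": 8,
              "Security of proposition": 5}

print(overall(provider_a), overall(provider_b))  # 68 66
```

Run against the two providers above, this reproduces the 68/66 points decision.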
- Charge – overall impact of the charge (TER), including an assessment of the impact of a nominal per-capita charge (NOW) and a contribution charge (NEST)
- Investment – subjective view of the default plus the organisation of other fund options
- Payroll/HR support – mainly AE-related, but also at implementation and ongoing (note: we assume all providers will have excellent record keeping – this is now a hygiene factor)
- At Retirement – treating customers fairly; member support provided to all participants
- Member engagement – effort put in to provide staff with comms, including FE
- Security of proposition – how likely the provider is to still be in the game in 20 years' time
This is a slight development of the thinking on the six DC outcomes established in the Pensions Regulator's paper (Nov 2011).
We've moved on from "security of assets" to security "of proposition"; "getting higher contributions" maps onto member engagement, while "administration" maps onto "Payroll/HR support".
The essential difference between this and PQM is that this is about the scheme chosen and not about the sponsor and member covenant (the contribution structure).
———————————————————————————————————-
Variables
This system of rating is based on my personal view of the importance of each subject to my decision making; someone else might place Payroll/HR support at the top and rate charges as less important. Others would argue that member engagement should be higher.
Any fiduciary should be able to change the weighting order to suit their preferences but the default order should be set by the expert with conviction (in my case First Actuarial).
I don't think the attributes and the "out of" scores should be variables; they should be hard-coded into the process. Bespoking the attributes and the scoring system would be an operational disaster and smacks of our having no conviction. It would do away with the concept of guidance.
Exceptions
During the process of choosing, a small number of employers will become enthused and want to "go further into it". This might mean their going to a consultancy "after all" and paying fees for a second opinion, or for detailed help on investments, engagement or a full-on wrap proposition (for instance).
Exceptional companies should be given easy links to further assistance, something we think hard about at www.pensionplaypen.com.
Similarly, a system like this must point companies not just to mainstream providers but also to industry-specific workplace schemes such as SHPS, the Pension Trust and the Printing Industry scheme. Knowing what makes for good is one thing, but finding and implementing "good" is even more important.
In the months to come, we will build a machine that will help companies work out what makes for good and assess either an existing scheme (or workplace schemes available to them).
Output
The chief output is the overall percentage, as this gives the personal assessment of the person managing the staging process. If a company wants this rating done by a number of people (a committee), we should let them run it a number of times and save each result for them.
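One hedged sketch of how the committee option might work: each rater runs the scoring once, every result is saved, and a simple mean gives a combined view. The rater names and percentages here are purely illustrative, not from the case study.

```python
from statistics import mean

# Each committee member's saved overall percentage (illustrative figures).
committee_runs = {
    "HR director": 68,
    "Finance director": 62,
    "Payroll manager": 71,
}

saved = list(committee_runs.values())   # every individual result is kept
combined = mean(saved)                  # simple average across raters
print(f"Combined rating: {combined:.1f}%")  # Combined rating: 67.0%
```

Keeping the individual runs, rather than only the average, preserves the spread of opinion for the documentation trail.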
What if the employer can’t or won’t score?
This is a big conviction question. Should we have default positions on all providers? The way I'd like it to work is that we ask the employer to make their own decisions on "weightings" and "scores", but give each answer a "can't choose?" option which leads them to a default position.
(One snag with default positions comes with providers (insurers) who give variable responses. Our default view may be that xyz is generally the cheapest insurer, but what if abc comes in with a superquote when they are normally very expensive?)
If an "expert" is sophisticated enough to give a bespoke rating on attributes based on the response received, well and good, but I think this is expensive and risky.
Apart from the dangers of assuming a standard charge from those with variable choices, we should not force companies to adopt a single "provider view".
Employers using this methodology should be encouraged to think for themselves, which means either requiring them to fill in certain fields or giving them the strongest of warnings that adopting the default position will not be as accurate a way of assessing as personal engagement.
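The "can't choose?" mechanism described above could be sketched as a simple fallback: the employer's own score is used where given, and a house default fills the gap, flagged so the output can carry the accuracy warning. The default values here are hypothetical placeholders, not real provider views.

```python
# Hypothetical house-view defaults for attributes (illustrative only).
DEFAULTS = {
    "Charge": 22,
    "Investment": 18,
}

def resolve(attribute: str, employer_score):
    """Return (score, was_defaulted) for one attribute.

    A score of None means the employer ticked "can't choose?",
    so the house default is used and flagged for the warning.
    """
    if employer_score is None:
        return DEFAULTS[attribute], True
    return employer_score, False

print(resolve("Charge", None))  # (22, True)  - defaulted, warn the employer
print(resolve("Charge", 25))    # (25, False) - employer's own view
```

Flagging every defaulted answer keeps an audit trail of how much of a rating was personal engagement and how much was the house view.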
Of course, employer-specific scoring is valuable if it can be collected. It provides a "true view" of propositions (i.e. what the employer thinks) over a large sample of employers. This is about as good data as you can get on what employers really think (and a reasonable proxy for members, the closer you get to 2018).
If we can collect this data at a provider level the data becomes even more valuable as it informs not just the general debate about what makes for good, but also the internal development of each provider’s proposition.
Further advantages of a self-service, technology-led approach
A lot of this decision making will be imperfect. Even with a beauty parade this happens (I once spoke with an HR manager who had the casting vote on a provider selection and chose xyz on the colour of the presenter's tie).
We won’t have those distractions and we want to provide information to decision makers which is clear and easy to compare.
CONCLUSION
This blog is designed to encourage debate and then action. We can argue all day about this methodology, but in the end we need to adopt a way of doing things. Basing our assessment on a tweaked version of tPR's six DC outcomes is a smart move, as it ties in with Government thinking but allows an "expert" to remain a thought leader with a value proposition.
The scoring system that leads to a percentage rating seems about right to me, encouraging a range of engagements from an employer (based on interest and competence) with a relatively small degree of bad outcomes (the beauty parade method produces its own).
Of course the providers will be able to see our hand and we will need to be able to justify our ratings not just to employers but to providers (and the regulator) so we are talking “robust”. That said, “robust” is something we do pretty well!

