Robots do as they are told


The point about artificial intelligence is that it is artificial. Were we to replicate human intelligence, we would have to build in the biases that make us "moral". By moral, I mean concerned with the principles of right and wrong behaviour.

We are only able to program computers to think as we think, so the guidance that robots bring to decision making is limited to the intelligence of their makers and the biases inherent in their psyches.

Putting one's trust in a robot is intrinsically risky. The risk is that the robot knows better than the crowd, for if we make the robot representative of the crowd (a Marxist robot) then we exchange conviction for consensus and crown conformity our king.

And it is quite possible to do this. It is the essence of tracking. Tracking suggests that there is more wisdom in the crowd than in conviction, and, since crowd-sourced wisdom is the property of none of us, the price we pay for such wisdom should be no more than the price of collecting the data.

I would argue that the moral robot reflects a more fundamental definition of morality, the definition I was taught at school. Nietzsche defined morality as the herd instinct of individuals, a behavioural tracker if you like. Behind the word is the Latin "mos" (plural "mores"), which means no more than the "way" in which we do things.

As we all know, herds are susceptible to bad as well as good behaviours, and history tells us that these behaviours can be set by powerful and charismatic leaders. So too the financial behaviours of robots. We now have trackers that are smarter than others (if you believe in the theory of smart beta). The originators of the algorithmic tweaks are doubly smart if they can convince their customers not just that they have redefined the true market return but that the tweak can deliver sustained value!

Either the market will come back to them, in which case their idea was simply ahead of its time, or it doesn't, which suggests it was wrong. Any form of conviction carries the risk on the one hand that it is craziness and on the other that it has built-in obsolescence.

So it is obvious that conviction in a perfect market has no value and that robots are only as right as their ability to be 100% in tune with consensus. The only time a robot is wrong is when it mistakes the true nature of things. But alarmingly, the true nature of things is a slippery concept: we think we have it only to find it is elsewhere. Had we allowed the consensus that dominated German politics in the thirties to continue unchallenged, we would have embraced a moral system that would have eaten us.

This is my worry about robo-advice: not that the guidance we get is wrong, but that the paradigm that creates that guidance is wrong, and that everyone is wrong at the same time.

Throughout the time I have been operating as a financial adviser (over thirty years), I have seen consensual ideas that have proved horribly wrong. We believed that giving people the right to a transfer value from their defined benefit pension was inherently right; we believe the same of pension freedoms. We saw no risk in the low-cost endowment and trusted that market forces would ensure with-profits bonuses would be sustainable.

The repeated market failures that resulted from these misconceptions could (arguably should) have been avoided had someone called out the received ideas. But the Emperor's new clothes, even though they now appear embarrassingly denuded, did not get called (until it was too late).

Robots do as they are told. In their world the best lack all conviction and the worst are filled with a passionate intensity. It takes a bold programmer indeed to call the emperor’s new clothes and to provide guidance that bucks the market.

My worry is that the markets in which we are asking robo-advice to operate are not perfectly understood. If they are not fully understood, the risk may lie in a false consensus, and it is the greater for masquerading as the wisdom of the crowd. Nowhere is this risk greater than in the complex decision making needed for long-term pension decisions.

Whether the decision is the employer's, concerning the choice of workplace pension, or the individual's, on how to spend the savings pot, the assumptions that underpin the algorithm are fragile. Either they are subject to the bias of individuals and generated out of conviction, or they follow herd decision making and could be dressing the emperor in thin air.

This is why I welcome the scrutiny that the FCA will be paying robo-advisers in the months and years to come. If you are planning to launch, or already have, robo-advice that you have programmed, I advise you to get in touch with the FCA's innovation hub by phoning +44 (0)20 7066 4488, emailing innovationhub@fca.org.uk or simply visiting https://www.the-fca.org.uk/firms/project-innovate-innovation-hub/next-steps


About Henry Tapper

Founder of the Pension PlayPen, Director of First Actuarial, partner of Stella, father of Olly. I am the Pension Plowman.