
What do Self-driving cars have to do with Investing?

India loses more than 1.5 lakh people to road accidents every year - among the highest tolls in the world, even as a percentage of population.

Now suppose that, from tomorrow, all vehicles are replaced with self-driving ones. Let us say the death toll drops to 50,000.

What will the newspaper headlines be?

Think carefully before you read further.

Will the headline be, "Self-driving vehicles reduce death toll by two-thirds"?

Or will it be, "Self-driving vehicles kill 50,000 Indians every year"?

My guess is that it will be the latter.

Although they do not deal with this specific example, this is, in concept, a paradox that Daniel Kahneman, Olivier Sibony and Cass R. Sunstein discuss in their book 'Noise'.

They give example after example from various areas of human endeavour, even those far removed from conventional number crunching, where well-constructed algorithms consistently outperform human beings - and not just any human beings, but ones with considerable experience and expertise.

For example, deciding whether an accused should be granted bail appears to be a problem that only a human being with great judgement (pun intended) should tackle. The test is whether a person out on bail will go on to commit another crime.

Actual studies show that even a very simple algorithm, one that takes into account only one or two factors such as the age of the accused and their previous criminal record, outperforms skilled judges.

The same holds for areas like corporate recruitment or the setting of insurance premiums: simple, at times even simplistic, algorithms outperform experienced human beings.
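To see just how simple such a rule can be, here is a minimal, purely illustrative sketch in Python. The factors, weights and threshold are hypothetical choices made for this example, not the actual models used in the studies the book cites; the point is only that a fixed rule gives the same answer to the same inputs every time.

```python
# Purely illustrative: a toy two-factor "bail risk" rule of the kind the
# studies describe. The weights and threshold below are made up for this
# example, not taken from any real study.

def reoffence_risk_score(age: int, prior_offences: int) -> float:
    """Crude risk score: younger accused and longer records score higher."""
    age_factor = max(0, 40 - age)        # youth adds risk; zero for 40 and above
    record_factor = 5 * prior_offences   # each prior offence adds a fixed amount
    return age_factor + record_factor

def grant_bail(age: int, prior_offences: int, threshold: float = 25.0) -> bool:
    """Grant bail whenever the risk score is below a fixed threshold.

    The same inputs always give the same answer - no mood, weather or
    hunger enters the decision.
    """
    return reoffence_risk_score(age, prior_offences) < threshold

print(grant_bail(age=35, prior_offences=1))   # True  (score 10)
print(grant_bail(age=19, prior_offences=3))   # False (score 36)
```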

Why does this happen?

One of the reasons is that while we think humans bring expertise and nuanced judgement into the mix, what we forget is that they also bring in 'noise'.

What is noise? In simple terms, noise is undesirable variability in judgements, over and above any variability due to bias (e.g., a judge being biased for or against a particular gender, race or caste).

Bias is easily understood, but even without bias, different judges or insurance professionals will come up with different answers to the same question. Even worse, the same human being will come up with different answers depending on totally unrelated variables: whether they are hungry, what the weather is like, what mood they are in, and so on. We all understand this phenomenon at an individual level - our performance is affected by our daily mental makeup. Noise, therefore, exists both across different individuals in the same profession and within a single individual deciding similar cases on different occasions.
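To make the distinction concrete, here is a small illustrative calculation (the figures are invented, not from the book): bias shows up as the average judgement being off in one direction, while noise shows up as the spread of judgements around their own average.

```python
# Illustrative only: hypothetical premium quotes for the SAME insurance case
# from five underwriters, against a notional "correct" premium of 10,000.
import statistics

correct_premium = 10_000
quotes = [12_500, 9_000, 14_000, 11_500, 8_500]   # made-up numbers

mean_quote = statistics.mean(quotes)
bias = mean_quote - correct_premium   # systematic error: quotes run high on average
noise = statistics.stdev(quotes)      # spread of the quotes around their own mean

print(f"Average quote : {mean_quote:,.0f}")   # 11,100
print(f"Bias          : {bias:+,.0f}")        # +1,100
print(f"Noise (stdev) : {noise:,.0f}")        # about 2,329
```

Even if the bias were somehow corrected, the quotes would still disagree with one another by thousands - and that residual disagreement is the noise.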

It is this noise that algorithms or machine-led systems reduce, and that is what accounts for their superior performance.

While machine-led systems do not have all the expertise of a human professional, they also do not have this random variability or noise. The net effect is that the machine-led systems perform better, just as, in our example, the self-driving cars performed better than human drivers.

That being the case, what is the issue? Why don't we outsource many of these functions to machine- or computer-led systems?

The anomaly lies in how we judge the competing systems.

We intuitively know that human beings will make errors, but, consciously or not, we expect a machine-led system, say an Artificial Intelligence based system, to be error-free.

We are willing to ditch it at the first mistake: the first wrong diagnosis based on a mammogram, or the first accused out on bail who commits a crime. Never mind that doctors and judges are also error-prone.

In short, instead of evaluating whether the machine works better than the human being, we expect the machine system to be perfect.

This is completely irrational. The rational test is whether the machine system improves on the existing alternative, not whether it is completely error-free on a standalone basis.

This type of thinking leads to wrong choices, as we may abandon an algorithm or machine even if it is better than what was being done earlier.

That is why we started with the example of self-driving cars, where we are not willing to live with a single fatality even though human drivers make many more errors and cause many more deaths.

Moral of the story: when evaluating alternative processes or systems, always pause and think, especially about whether you are using the same yardstick to evaluate all the systems, or whether you have unrealistic expectations of one.

What does this have to do with investing? In investing as well, human beings are subject to many limitations: the amount of data they can process, plus a huge variety of cognitive biases such as Recency Bias, Survivorship Bias, Loss Aversion, Endowment Bias and many more.

All of these limit how well human beings can perform, because they tend to drag down performance over a period of time - and on top of that, of course, is the element of noise.

Try giving the same company details and financials to 5 experienced analysts and see whether all of them come up with the same analysis. It is almost impossible!

That is where a systematic approach helps. Even if the system merely does, in a mechanical and somewhat simplified fashion, what the analyst or fund manager claims to be doing, the very fact that it is done in a systematic manner, with no random moves or noise, means the system is likely to outperform the human being over a period of time.

However, does that mean the system will never make mistakes or never underperform in any time period? No.

But that is the wrong criterion to use in judging whether an artificial intelligence or machine learning system is good.

As in anything, if you ask the wrong question, you can never go right.

If your question is whether switching to an artificial intelligence system will eliminate mistakes, then the moment it makes one, you will abandon that system.

What you need to do is change the question and ask: Is this system an improvement on the earlier or existing systems? That is when you will get the right answer.

Therefore, the right way is to evaluate whether the systematic approach works better than the human approach over a period of time, rather than expecting the non-human approach to be perfect and to have all the answers all the time. That is in any case never possible in the market, as there are things that cannot be known in advance, no matter how much information you gather and how well you analyse it.

For example, we at First Global use a human-plus-machine model where most of the heavy lifting is done by the machine.

Does it have a perfect track record? No. But to the question "Is it doing a lot better than what human fund managers are doing, especially on a risk-adjusted basis?" the answer is an unequivocal YES.
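On the "risk-adjusted basis" point: a standard way to make such a comparison is the Sharpe ratio, which divides the excess return over the risk-free rate by the volatility taken to earn it. The numbers below are entirely hypothetical, used only to show the mechanics of the comparison; they are not First Global's, or anyone's, actual figures.

```python
# Hypothetical figures purely to illustrate a risk-adjusted comparison;
# these are NOT actual track-record numbers.
annual_return_systematic = 0.14   # 14% p.a. (assumed)
annual_return_human      = 0.15   # 15% p.a. (assumed)
volatility_systematic    = 0.10   # 10% annualised standard deviation (assumed)
volatility_human         = 0.20   # 20% annualised standard deviation (assumed)
risk_free_rate           = 0.06   # 6% p.a. (assumed)

def sharpe(annual_return: float, volatility: float, risk_free: float) -> float:
    """Sharpe ratio: excess return earned per unit of volatility."""
    return (annual_return - risk_free) / volatility

print(f"Systematic: {sharpe(annual_return_systematic, volatility_systematic, risk_free_rate):.2f}")  # 0.80
print(f"Human     : {sharpe(annual_return_human, volatility_human, risk_free_rate):.2f}")            # 0.45
```

In this illustration the human approach has the higher raw return, yet the systematic approach comes out well ahead once the risk taken to earn that return is accounted for.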

If you ask the wrong question, you can be led astray. Hence, always stop and think about what your end objective is and whether you are asking the right question to get there.

Elsewhere in the book, the authors have written more about how human-plus-machine systems work.

To learn more about human, machine and combination systems, and how to use them for investing:

From the desk of 

Devina Mehra

If you want any help at all in your wealth creation journey, or in managing your investments, just drop us a line via this link and we will be right by your side as your wealth advisor, super quick!

Or WhatsApp us on +91 88501 69753

Chat soon!
