Quantum Algorithms: The New Black Box in Portfolio Optimization. What Connects the Dots?


Mar 9, 2020

“Creativity is intelligence having FUN” – Albert Einstein

The quantum algorithm is a fun new field emerging in the finance town. The field capitalizes on artificial intelligence ("AI") software to spot arbitrage opportunities in portfolio optimization, and it is gaining wide popularity nowadays among asset managers, especially hedge funds. Many firms already utilize quantum algorithms in stock and bond trading. They consider their applied algorithms the firm's most valued black box, and they treat the algorithm's success recipe as strictly confidential. But why are algorithms in portfolio optimization that important? What connects the dots?

A quick and simple answer: they save transaction costs and time. OK, wait just a second, do they? Actually, that depends on the statistical and mathematical framework applied in the trading algorithm. In other words, it depends on the data sample size and the formula embedded in the algorithm to calculate the optimum return/risk bundle in the investor's portfolio. Theoretically, Mean-Variance (MV) is the most popular framework used to calculate the optimum return/risk trade-off. MV theory assumes that stock returns and variances are normally distributed, that preferences are quadratic functions, and that investors are homogeneous, rational, and risk-averse. Generally speaking, a theory loses its importance when its assumptions are violated in real-life scenarios. Statistically, in today's fast-paced world, most stock returns and variances depict non-normal distributions, and investors' preferences are heterogeneous: investors can be categorized as risk-averse, risk-neutral, or risk-seeking.

It is true that AI software could process sample sizes of millions or even billions of stock returns and variances to overcome the violation of normality. Yet this AI pro could be a con from the statistical perspective. It is not always "the bigger the better": very large samples may cause statistical biases, especially p-value problems, so the null hypothesis ends up rejected (or accepted) when it is genuinely true (or false). Consequently, processing big data for stock prediction could turn out time-consuming and expensive instead of reducing costs and time.
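To make the large-sample caveat concrete, here is a small simulation (the drift and volatility figures are illustrative assumptions, not market data): a one-sample t-test on simulated daily returns whose drift is economically negligible still clears the conventional 1.96 significance hurdle once the sample becomes huge, because the t statistic grows with the square root of n.

```python
import numpy as np

rng = np.random.default_rng(42)

def t_stat(x):
    """One-sample t statistic for H0: mean = 0."""
    x = np.asarray(x, dtype=float)
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

# Simulated daily "returns": a 1 bp drift against 2% volatility,
# i.e. an economically negligible edge (illustrative numbers).
drift, vol = 0.0001, 0.02
small = rng.normal(drift, vol, 500)
large = rng.normal(drift, vol, 5_000_000)

# With n large enough, the trivial drift becomes "statistically
# significant" -- significance without economic significance.
print(f"t (n=500): {t_stat(small):.2f}   t (n=5,000,000): {t_stat(large):.2f}")
```

The point is not that the big sample is wrong, but that a p-value below 0.05 on millions of observations can flag an "edge" far too small to survive transaction costs.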


So what alternative algorithm formula could be deployed efficiently? Extended Mean Gini (EMG) is another framework used in portfolio optimization; its merit is overcoming the non-normality of returns and variances by applying stochastic dominance techniques. Fortunately, this framework does not necessitate large samples when calculating the portfolio's minimum variance. The EMG formula is easy to embed in portfolio trading algorithms, as per the following:


Min(R) = -v * Cov[E(r), (1 - Rank)^(v-1)]


Where R is the Gini coefficient, v is the investor's risk-preference degree, Cov is the covariance, E(r) is the expected return of the risky portfolio, and Rank is the portfolio's individual returns ranked in descending order. Despite its simplicity in application, the EMG framework assumes investors' preferences to be risk-averse only. Consequently, I attempted to cater for different types of investors by adjusting the EMG formula to include a risk-seeking coefficient, as per the following:
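As a sketch of how the formula above could sit inside a trading algorithm, here is a minimal implementation. One normalization is assumed for illustration: Rank is taken as each return's ascending rank divided by n (the empirical CDF), so that 1 - Rank is the descending-rank fraction and stays inside [0, 1).

```python
import numpy as np

def extended_mean_gini(returns, v=2.0):
    """EMG risk term: Min(R) = -v * Cov[r, (1 - Rank)^(v - 1)].

    Rank is taken here as the ascending rank divided by n (the
    empirical CDF), so 1 - Rank is the descending-rank fraction in
    [0, 1) -- a normalization assumed for illustration. Larger v
    weights the worst outcomes more heavily (stronger risk aversion).
    """
    r = np.asarray(returns, dtype=float)
    n = len(r)
    rank = np.empty(n)
    rank[np.argsort(r)] = np.arange(1, n + 1)  # ascending ranks 1..n
    factor = (1.0 - rank / n) ** (v - 1.0)     # decreasing in r for v > 1
    return -v * np.cov(r, factor)[0, 1]        # sample covariance term
```

For a dispersed return series such as `[0.05, 0.01, -0.02, 0.03, -0.04]` this yields a positive risk number, and exactly zero for a constant series, as a risk measure should.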


Min(R) = 0.5 * v * Cov[E(r), (1 - Rank)^(ln v)]

Subject to

0 ≤ v ≤ 1 and 0 ≤ R ≤ 1.


Where ln v is the natural logarithm of the risk-seeking coefficient. The justification for using the natural logarithm of v is to obtain a measurement coefficient that captures the risk seeker's perception of a negative correlation between return and risk. Since v is constrained to lie between 0 and 1, ln v is always negative; in addition, ln v can be viewed as a substitute for the negative sign dropped from the original EMG equation. Adjusting the EMG equation by a constant factor of 0.5 can be rationalized from the Gini economic index: that concept requires the lognormal distribution to be divided by 2 to cater for the area below the Lorenz curve. If applied in AI algorithms, this variance-adjusted technique could save time and cost by making small data samples reliable and by catering for different investor types.
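A minimal sketch of the adjusted formula, under the same rank-normalization assumption as before, with one extra implementation choice: ranks are scaled by n + 1 rather than n so that 1 - Rank stays strictly inside (0, 1), keeping the negative exponent ln(v) well defined for every observation.

```python
import numpy as np

def adjusted_emg(returns, v):
    """Adjusted EMG for risk seekers: 0.5 * v * Cov[r, (1 - Rank)^ln(v)].

    The text constrains 0 <= v <= 1; this sketch requires the open
    interval (0, 1) because ln(0) is undefined. Ranks are scaled by
    (n + 1) -- an implementation choice -- so that 1 - Rank lies
    strictly inside (0, 1) under the negative exponent ln(v).
    """
    if not 0.0 < v < 1.0:
        raise ValueError("risk-seeking degree v must lie strictly in (0, 1)")
    r = np.asarray(returns, dtype=float)
    n = len(r)
    rank = np.empty(n)
    rank[np.argsort(r)] = np.arange(1, n + 1)     # ascending ranks 1..n
    factor = (1.0 - rank / (n + 1)) ** np.log(v)  # ln(v) < 0 flips the weighting
    return 0.5 * v * np.cov(r, factor)[0, 1]
```

Because ln(v) < 0 inverts the rank weighting, the best-performing observations receive the largest weights, which is exactly the risk seeker's perspective the adjustment is meant to capture.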


Finally, theories should connect real-life dots. Real-life applications should save money and also stand on solid theoretical ground. Spotting the right arbitrage opportunity in trading stocks, bonds, options, or any other financial instrument depends on how accurately asset managers incorporate return and variance variables in their algorithms. No one has a crystal ball, but part of the answer definitely lies in visualizing what connects the dots. Unleash your fun imagination, and the puzzle pieces will come together into the big picture.

This Article was published via TalkMarkets on 20/01/2020