Geometric vs. Arithmetic Mean

The geometric mean is a type of mean that indicates the central tendency of a set of numbers by using their product, as opposed to the arithmetic mean, which uses their sum.

For n numbers, the geometric mean is equal to (x1 * x2 * … * xn)^(1/n), whereas the arithmetic mean is equal to (x1 + x2 + … + xn) / n.

Investment managers often use arithmetic means when presenting performance metrics, a practice that has long been debated. The reason is that the arithmetic mean downplays the impact of a bad year on the total investment and does not accurately reflect the growth of the investment from initiation through the end of the measurement period.

For example, if someone invested $1 in a fund at its initiation (let’s assume 5-years ago) and the fund had the following annual returns:

[Table of annual returns omitted]

The arithmetic mean implies a positive return whereas the initial investment has actually been wiped out.
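
Since the original table is not shown, here is a minimal sketch in R with hypothetical annual returns that reproduce the same effect: a positive arithmetic mean, even though a -100% year wipes out the investment.

# hypothetical annual returns (the original table is not shown)
returns = c(0.5, 0.5, 0.5, 0.5, -1.0)

# arithmetic mean return: positive
arithmetic_mean = mean(returns)                               # 0.20, i.e. +20%

# geometric mean return based on growth factors (1 + r): -100%
geometric_mean = prod(1 + returns)^(1 / length(returns)) - 1

# value of $1 invested at initiation: wiped out
final_value = prod(1 + returns)                               # 0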

Of course, other measures such as the Sharpe ratio would have indicated the bad performance, and the volatility of the returns would have been very high in practice, but this extreme example still highlights the fact that the arithmetic mean can sometimes be fairly deceiving.


Transaction Cost Model

In order to calculate the realized returns, the total cost needs to be applied first. This consists of the broker's fees and the execution cost, which depends on the liquidity of the stock.

The broker functions as a transaction facilitator between the two parties that are willing to buy and sell a number of shares. The broker practically routes your order into an electronic network and charges a fee for this service. A commission fee is charged to each party, both when buying and when selling.

Often the broker charges a fixed fee per stock regardless of the number of units traded, but sometimes the commission is a percentage of the trade value with a dollar-amount cap.

Assuming fixed costs and that our trading strategy imposes the trade of 10 stocks, our cost will be the following:

  • Buy order: 10 x fixed fee
  • Sell order: 10 x fixed fee

As a result, the total cost is 20 x fixed fee and the hurdle rate is 20 x fixed fee / invested capital. For a fixed fee of $5 and invested capital of $10,000, the total cost is $100 and the hurdle rate is 1%.
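
In R, this arithmetic looks as follows (using the fee and capital values assumed above):

# hurdle rate for a fixed-fee commission model
fixed_fee = 5              # commission per stock per order, in dollars
n_stocks = 10              # stocks traded by the strategy
invested_capital = 10000   # in dollars

total_cost = 2 * n_stocks * fixed_fee          # buy leg + sell leg = $100
hurdle_rate = total_cost / invested_capital    # 0.01, i.e. 1%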

Fortunately, individual investors usually do not have to worry about the liquidity-dependent execution cost. S&P 500 stocks are relatively liquid and retail orders are too small to have any impact on the price. Brokerage firms also provide limit order capabilities that can guarantee a specific price or better (although not execution itself).

For large trades, you essentially get a volume-weighted average price (“VWAP”) since multiple blocks are utilized to fulfill the order. Also, the order itself applies pressure on the price, and the resulting slippage pushes the effective price progressively further away.
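
As a quick illustration, here is how a VWAP over hypothetical block fills would be computed (the prices and sizes are made up):

# VWAP across the blocks used to fill a large order (hypothetical fills)
prices = c(100.00, 100.05, 100.12)   # execution price per block
shares = c(500, 300, 200)            # shares filled per block
vwap = sum(prices * shares) / sum(shares)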

In conclusion, individual investors can focus on the broker's fees to estimate their total cost. In general, though, for the statistical arbitrage strategies presented in this blog, a transaction cost model of 1 basis point plus 1/2 of the bid-ask spread will typically be used for back-testing purposes.
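
A minimal sketch of one possible reading of that cost model in R, assuming the 1 basis point is charged on the trade value and half the quoted spread is paid per share:

# transaction cost model: 1 bp of trade value + 1/2 bid-ask spread per share
transaction_cost = function(price, shares, bid_ask_spread) {
  trade_value = price * shares
  trade_value * 0.0001 + shares * bid_ask_spread / 2
}

# example: 100 shares at $50 with a $0.02 spread
transaction_cost(price = 50, shares = 100, bid_ask_spread = 0.02)  # $1.50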

What is alpha?

Alpha, the first letter of the Greek alphabet, is used in finance as a measure of the return on an investment over and above an appropriate market index that functions as a benchmark.

Alpha is a function of time and is measured over a specific period. For example, if a fund has an alpha of x% over the last year, it means that it exceeded the benchmark by x% during that year.

A typical benchmark for equity investments in the U.S. is the S&P 500, whereas equity investments in international markets can be mapped onto an appropriate index – typically a prevailing ETF.

The realized returns can also be compared to the theoretical expected returns by utilizing an asset pricing model (e.g. Fama-French model or CAPM) – this alpha is called Jensen’s alpha. Moreover, a regression analysis of the actual returns against the Fama-French returns for instance can provide useful information on the persistence of alpha and the fraction that is not explained by any of the factors.
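
As a sketch of what that regression looks like in R, with simulated daily returns standing in for actual fund and factor data:

# Jensen's alpha under the CAPM via linear regression (simulated data)
set.seed(1)
rf = 0.0001                                        # daily risk-free rate
market = rnorm(252, mean = 0.0004, sd = 0.01)      # simulated market returns
fund = rf + 0.0002 + 1.2 * (market - rf) + rnorm(252, sd = 0.005)

model = lm(I(fund - rf) ~ I(market - rf))
jensens_alpha = coef(model)[1]   # the intercept is Jensen's alpha
summary(model)                   # the intercept's t-statistic hints at persistence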

The realized returns, which are calculated after applying trading and execution fees, are compared to the benchmark in order to conclude whether the strategy over- or under-performed on a relative scale. An extreme example: during a year in which the S&P 500 had a negative return of y%, by not investing at all you essentially beat the market by |y|%.

3d Plots – R Code

In this script, we create a 3d plot in R using the plot3D package. In this example, a sphere is generated, but multiple 3d shapes can be created by modifying the formulas for x, y, z.


# 3d plots

# clear environment and console
rm(list = ls())
cat("\014")

# install and load required packages
#install.packages("plot3D")

library("plot3D")

# initialize variables
theta = seq(from = 0, to = 2 * pi, length.out = 500)  # azimuthal angle
phi = seq(from = 0, to = pi, length.out = 500)        # polar angle
rho = 3                                               # sphere radius (arbitrary value)

# build the angle grid
M = mesh(theta, phi)
alpha = M$x
beta = M$y

# plot sphere using the spherical-to-Cartesian parametrization
surf3D(x = rho * cos(alpha) * sin(beta),
       y = rho * sin(alpha) * sin(beta),
       z = rho * cos(beta),
       colkey = FALSE, bty = "b2", main = "Sphere")

This is the output for 500 points generated for each variable:

[Output: 3d plot of a sphere]

Rarely Thankful to Lady Luck

A bias refers to the tendency to hold a partial perspective while refusing alternative points of view. There are many forms of biases that can be attributed to various personal traits or explained within different contexts.

It is very common for people to show bias in everyday life. One of the most interesting forms is crediting skill for what is pure luck when the outcomes are beneficial. This bias mainly refers to the mistaken belief that if something occurs more frequently than it normally would, credit should be given to a person's skill rather than accepting it as a series of random events.

A few years ago, one of my professors at UCLA asked my class to perform an interesting experiment. All students would toss a coin, and only those who got heads would proceed to the next round. The winner would simply be the person who managed to get only heads until everyone else had lost.

As a reminder, the probability of getting heads with a fair coin is 1/2 per toss. That means the probability of getting heads n times in a row is simply (1/2)^n, which of course decreases as n increases.
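
A quick way to see this in R: simulate a large classroom (the class size here is made up) and look at the longest streak of opening heads that shows up purely by chance.

# simulate each student's streak of heads before their first tails
set.seed(42)
n_students = 1024
streaks = rgeom(n_students, prob = 0.5)  # heads flipped before the first tails
max(streaks)                             # the "winner's" streak, purely by luck
# with 1024 students, a streak of about log2(1024) = 10 heads is expected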

Even though everyone would agree at the beginning of the experiment that getting heads when flipping a coin is a random event, as the rounds went by and my classmates advanced, their perspective gradually changed: round by round, less credit was given to luck and more credit was given to intense coin-flipping skills!

This funny experiment clearly demonstrates how easy it is to confuse skill with luck when one is exposed to only a limited number of random trials or competes against a limited number of contestants. The same concept applies to many other activities in life, including trading and investing, which our blog mainly focuses on.

A lot of times, it is very hard to have a track record that is long enough, and hence statistically significant, to conclude whether an investor's returns are based on skill or are just the product of pure luck. As with most things in life, the truth could be somewhere in between, and even though it is not generally much appreciated, some merit should be given to luck. The question will always be, though, what portion was skill and what portion was luck. Or does it even matter in the end?

The truth is that we have a hard time accepting that many events in our lives are just random – especially the ones that benefit us – and we put a lot of effort into explaining them somehow and believing that there is something special about the circumstances or our case. After all, people are rarely thankful to lady luck.

Poker Simulation – Probability of Hands

In this script, we simulate poker hands and estimate the probability of each hand coming up. Increasing the number of simulations makes the execution slower but the estimates more reliable. Finally, removing or adding players shifts the probabilities correspondingly.

## Poker simulator
## 10,000 hands with 2 players, neither ever folds

# clear environment and console
rm(list = ls())
cat("\014")

# install and load required packages
# install.packages('holdem')
library(holdem)

# inputs
n = 10000

# initialize variables
no_pair = rep(0, n)
one_pair = rep(0, n)
two_pairs = rep(0, n)
three_of_a_kind = rep(0, n)
straight = rep(0, n)
flush = rep(0, n)
full_house = rep(0, n)
four_of_a_kind = rep(0, n)
straight_flush = rep(0, n)

# simulate hands and track hands
for (i in 1:n) {
  # deal hole cards to the 2 players plus the board
  x1 = deal1(2)
  # evaluate each player's best hand (higher score = stronger hand)
  b1 = handeval(c(x1$plnum1[1,], x1$brdnum1), c(x1$plsuit1[1,], x1$brdsuit1))
  b2 = handeval(c(x1$plnum1[2,], x1$brdnum1), c(x1$plsuit1[2,], x1$brdsuit1))
  # classify the weaker of the two hands by its score bracket
  m = min(b1, b2)
  if (m <= 999999) {no_pair[i] = 1}
  if (m >= 1000000 && m < 2000000) {one_pair[i] = 1}
  if (m >= 2000000 && m < 3000000) {two_pairs[i] = 1}
  if (m >= 3000000 && m < 4000000) {three_of_a_kind[i] = 1}
  if (m >= 4000000 && m < 5000000) {straight[i] = 1}
  if (m >= 5000000 && m < 6000000) {flush[i] = 1}
  if (m >= 6000000 && m < 7000000) {full_house[i] = 1}
  if (m >= 7000000 && m < 8000000) {four_of_a_kind[i] = 1}
  if (m >= 8000000) {straight_flush[i] = 1}
}

# calculate the percentage of simulations in which each hand occurred
pr1 = mean(no_pair) * 100
pr2 = mean(one_pair) * 100
pr3 = mean(two_pairs) * 100
pr4 = mean(three_of_a_kind) * 100
pr5 = mean(straight) * 100
pr6 = mean(flush) * 100
pr7 = mean(full_house) * 100
pr8 = mean(four_of_a_kind) * 100
pr9 = mean(straight_flush) * 100

# print probabilities
paste0(pr1, '% probability of no pairs')
paste0(pr2, '% probability of one pair')
paste0(pr3, '% probability of two pairs')
paste0(pr4, '% probability of 3 of a kind')
paste0(pr5, '% probability of a straight')
paste0(pr6, '% probability of a flush')
paste0(pr7, '% probability of a full house')
paste0(pr8, '% probability of a 4 of a kind')
paste0(pr9, '% probability of a straight flush')


Statistical Arbitrage – Intro

Statistical Arbitrage Overview

  • Statistical arbitrage is a heavily quantitative and computational approach to equity trading, although other asset classes (fixed income, commodities or currencies) can be used as well.
  • In deterministic arbitrage, a sure profit can be obtained from being long some securities and short others.
  • In statistical arbitrage, there is a statistical mispricing of one or more assets based on the expected value of these assets.
  • By investigating historical data and extracting the appropriate statistics, you can conclude whether there is persistence in the returns produced.

Practically, statistical arbitrage refers to the identification of strategies that historically produced alpha (excess returns over a benchmark), supported by the appropriate statistics. As a result, statistical arbitrage relies heavily on back-testing and analyzing historical time series, which might require significant computational power.

In this blog, we are going to post numerous ideas/strategies for implementation and the respective statistics & performance analytics.


Is Algorithmic Trading Open to Everyone?

Algorithmic trading is a method of investing that relies on automating the process of placing orders. An underlying strategy signals when to open a long or short position, and the corresponding order is then placed through a broker. Positions are also closed when indicated by the strategy or based on a threshold for potential losses.

The process of establishing a working strategy involves the following steps:
  • Collecting data sets of historical stock prices (daily or intraday depending on your trading frequency).
  • Back-testing your strategy: test your strategy using historical data and estimate what your annual returns would have been, how these returns compare to a benchmark (e.g. the S&P 500 index) and what their volatility is (consider calculating the Sharpe ratio and other performance analytics; see the sketch after this list).
  • Consider leasing a server and acquiring a fast internet connection as well as a cloud service to store your data sets. Depending on the strategy, increased computational power might be required.
  • Opening a brokerage account with low commission fees. Estimate what is your hurdle rate to compensate for these expenses, and adjust your back-testing results to incorporate trading fees & commissions. What is the impact on the alpha?
  • Automating the process of placing and closing orders through your broker and the use of an API (Python would probably be a good choice for this step).
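
As a minimal sketch of the back-testing step in R, with simulated daily returns standing in for real strategy and benchmark data (all numbers are illustrative):

# back-test sketch with simulated daily returns
set.seed(1)
strategy = rnorm(252, mean = 0.0005, sd = 0.010)    # hypothetical daily strategy returns
benchmark = rnorm(252, mean = 0.0003, sd = 0.008)   # hypothetical daily benchmark returns

annual_return = prod(1 + strategy) - 1
annual_vol = sd(strategy) * sqrt(252)
sharpe_ratio = mean(strategy) / sd(strategy) * sqrt(252)   # assuming a zero risk-free rate
excess_return = annual_return - (prod(1 + benchmark) - 1)  # performance vs. the benchmark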
Nowadays, you can find a ton of strategies, code and online data sets for back-testing. Quite a few websites cover implementing and back-testing strategies, as well as commission-free trading, and they can be helpful as a starting point.
Overall, technology has made it relatively easy to perform back-tests, try multiple strategies and automatically place orders, making it possible for more people to participate in systematic trading.