
Tuesday, December 29, 2009

A note on index tracking

A couple of years ago, Ernie started an interesting discussion on his blog. A reader named 'L' had doubts about the ability of a small portfolio to track an index out of sample. And to be honest, he has a point: with only one stock in the portfolio you are most likely to get a random walk instead of tracking. On the other hand, holding N-1 of the index's N stocks should result in almost perfect tracking.
I decided to verify this and test tracking portfolios for the XLE. The results for the 5 'best' portfolios, each consisting of only 5 stocks, are given below.


Blue charts are the in-sample training data and red is true out-of-sample data (no re-balancing). Look how well three of them survived the 2008 debacle.
I must note that these portfolios probably don't have enough variance for profitable trading, but that should be easy to fix.
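For reference, here is a minimal sketch of how such a tracking portfolio can be fitted once a stock subset has been chosen; the variable names and window length are just placeholders, and the subset-selection step (picking the 'best' 5 stocks) is left out.

% Fit a small tracking portfolio by regressing the index on the chosen stocks.
% Assumes: xle (nDays x 1) index prices, stocks (nDays x N) member prices,
% idx - indices of the 5 stocks selected for the portfolio.
nTrain = 500;                         % length of the in-sample window
P = stocks(:, idx);                   % prices of the selected stocks

w = P(1:nTrain, :) \ xle(1:nTrain);   % least-squares weights, in-sample only

track = P * w;                        % portfolio value over the full history
err   = xle - track;                  % tracking error, for inspection

figure;
plot([xle track]);
legend('XLE', 'tracking portfolio');
title(sprintf('out-of-sample starts at day %i', nTrain));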

Friday, December 25, 2009

Overachievers vs underperformers

This paper got me thinking: market-neutral trading of two stock baskets seems like an elegant strategy. The 'long' basket should mainly consist of favorable stocks that perform above the index, and it is traded against a 'short' basket of underperformers. Each trade should then deliver the alpha of the long basket and minus the alpha of the short basket. Seems like a great idea, but how do you identify the right stocks? How consistent is the over- and underperformance?
 Here is my first shot at this.

In the graph above, the average 10-day performance of a stock is plotted relative to the XLE index. Some patterns can clearly be seen, but to be honest, I'm not really thrilled.
Maybe I'll try using Bollinger bands next time.

However, the cumulative returns seem much more interesting, though I don't have any particular use for them yet.


I think a better way to approach stock rating is to look at the stationarity of the tracking error.
But for now, I'll stick with building a cointegrating portfolio and trading it against a benchmark.
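For completeness, a minimal sketch of how the relative-performance series above could be computed; the variable names are assumptions, not the actual contents of analizeStock.m.

% Relative performance of one stock vs the XLE index.
% Assumes: p (nDays x 1) stock prices and xle (nDays x 1) index prices,
% aligned on the same dates.
win = 10;                                        % look-back window, days

rStock = p(win+1:end)   ./ p(1:end-win)   - 1;   % 10-day stock returns
rIndex = xle(win+1:end) ./ xle(1:end-win) - 1;   % 10-day index returns
relPerf = rStock - rIndex;                       % over/under-performance

cumRel = cumsum( log(p(2:end)./p(1:end-1)) - ...
                 log(xle(2:end)./xle(1:end-1)) );% cumulative relative return

figure;
subplot(2,1,1); plot(relPerf); title('10-day performance relative to XLE');
subplot(2,1,2); plot(cumRel);  title('cumulative relative return');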

Files: stocks_XLE.mat, lag.m, analizeStock.m

Wednesday, December 23, 2009

What does the Dickey-Fuller test (not) tell us?

I've been trying to build an index-cointegrating portfolio over the last couple of days. One of the crucial questions is which criterion to use for ranking stock combinations. The Dickey-Fuller test is the first option that comes to mind. I know it tests for stationarity, but that is just a part of what I'm looking for.
You see, even when you have a stationary time series, if it has very little variance it is also of very little use for trading, as you will probably not earn the transaction costs back (low variance is a sign of market efficiency; in an ideal efficient market variance would be zero, eliminating any arbitrage opportunities). On the other hand, if a series is non-stationary with a low drift but has plenty of variance, many good trading opportunities will exist.
To test the DF test I've simulated a combination of two time series: a stationary AR(1) with coefficient 0.95 and a random walk (unit root):
y  = (1-drift)*s + drift*d          

where
s - the stationary series (AR coefficient 0.95)
d - the non-stationary series (unit root)
Both series have variance alpha.

I've varied drift from 0 to 1 and alpha from 0.2 to 5, the (0, 5) combination being the most profitable one. Increasing the drift value effectively means a transition from a stationary signal to a random walk.
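For those who want to reproduce this, here is a minimal sketch of the simulation; this is not the actual drift_model.m, the parameter grid is just an example, and adftest requires the Econometrics Toolbox.

% Simulate y = (1-drift)*s + drift*d and record the DF test statistic.
% s: stationary AR(1) with coefficient 0.95, d: random walk (unit root).
nObs   = 1000;
drifts = 0:0.1:1;
alphas = [0.2 1 5];                      % innovation variance of both series
stat   = zeros(numel(drifts), numel(alphas));

for i = 1:numel(drifts)
    for j = 1:numel(alphas)
        e = sqrt(alphas(j)) * randn(nObs, 1);
        s = filter(1, [1 -0.95], e);                  % stationary AR(1)
        d = cumsum(sqrt(alphas(j)) * randn(nObs, 1)); % random walk
        y = (1 - drifts(i)) * s + drifts(i) * d;
        [h, pVal, stat(i, j)] = adftest(y);           % DF t-statistic
    end
end
% The columns of 'stat' (different alphas) come out nearly identical for each
% drift value: the test reacts to the unit root, not to the amount of variance.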
The results are in the figure below:




Just as I thought, the DF test cannot distinguish between different levels of variance.

I clearly need a better estimator, one that looks not for stationarity but for the variance-to-drift ratio.

Maybe it's time to blow the dust off the good old Fourier transform and look for high spectral peaks.
Probably an even better idea is to use wavelets for spectral decomposition and filtering, and then estimate the spectral density of each frequency band.
Any other ideas?
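In the meantime, a rough sketch of what I mean by the spectral-peak approach, run on a synthetic spread (all names and numbers here are made up for illustration):

% Crude 'peakiness' score of a spread's spectrum via the periodogram.
y = filter(1, [1 -0.9], randn(1000, 1));   % stand-in for a pair spread
y = y - mean(y);
n = numel(y);
P = abs(fft(y)).^2 / n;                    % raw periodogram
P = P(2:floor(n/2));                       % keep positive frequencies, drop DC
peakRatio = max(P) / mean(P)               % spectral peak relative to the mean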

Files: drift_model.m , semistat.m

Monday, December 21, 2009

Can open innovation give a competitive edge?

I believe it certainly can! Even more so, I believe a small company must use it to be able to compete against the 'big boys' (if you don't know what open innovation is, take a look at Wikipedia). My current full-time job is in the technology sector, at an R&D site about 1000 people strong. In my daily work I deal with smaller subcontractors, companies ranging from 10 to 150 employees. My observation over the past couple of years is that 'big' is not always beautiful and 'small' is not much better. But open innovation could change this in favor of smaller companies.
What are the strengths of a big R&D organization? Usually a decent research budget and a large knowledge pool. Projects are usually well organized, making it possible to handle very large ones (>50 FTE). I've also experienced that these advantages come at a cost: bureaucracy, poor communication, heavy overhead, 'political' decisions, etc. This is common to most large R&D organizations, as illustrated by the Dilbert cartoons. Reading them daily, I often find them spot-on, equally applicable to European and American companies alike.
Smaller companies have their own disadvantages. While being extremely agile and efficient, they often lack professional organization and broad know-how. If only they could get the knowledge they need....
This is where open innovation provides a competitive edge. I believe that the way research is done has been changing for the last couple of years, and the rate of change is increasing. Information is becoming readily available, making it easier to build on the ideas of others rather than reinvent the wheel.
I've started this blog with open innovation in mind. Here I'll try to:
- share my ideas. Many of them will be incomplete, impractical or just plain wrong. I really hope to get enough feedback to filter out the better ones.
- share specifics on things that don't work. In my professional work, 'don't do this' advice has proven to be the most valuable kind. Hopefully I'll help somebody save some time to come up with really great ideas (and then, of course, share them).
A great example of open innovation in action: today I came across this post. It really helped me pull together a couple of thoughts that have been floating around in my head for weeks. Thinking of these ideas as puzzle pieces, I now know that all the pieces are there. Now I can start the tedious work of putting them in place.

Tuesday, December 15, 2009

Where is the catch?

...is a question I usually ask myself when something seems to be going 'too well'. Up till now things have been looking much too easy, so this seems like the right time to start asking questions.
Here is what I did:
1. Got a list of 800+ ETFs into Matlab from Yahoo.
2. Ran a Dickey-Fuller test to establish potential trading pairs. Cointegration was only tested within a category, to avoid pairs with no 'physical' cointegration. Setting the threshold at a t-statistic of 3.9, I got about 900 pairs. Later I decided to ditch some categories like 'N/A' and 'Bear Market'.
3. Let the 'Duck' strategy loose on the pair pool and let it figure out the optimal Sharpe ratio. (I've decided to use animal names for strategy tracking. Of course, starting with 'not so cool' animals like rabbits and ducks is logical, since the early strategies won't be that great. I'll be moving on to panthers and tigers in the later stages of development.) To elaborate a bit on the Duck, which I'll post later: this strategy tracks the ratio of a pair, and the optimization is done on the moving-window length and the z-score threshold. For example, a trade is entered when the z-score of the ratio is higher than 1.5 and exited when it reverts back to zero (a sketch of this logic follows below). The Duck is plain and simple, with no fancy stuff like stop-losses or time limits. I've left some big holes in the backtesting, like not using separate training and test sets, not checking for data-snooping in detail, etc. But still, this step should give an *indication* of the returns achievable by the pairs. After this step I'll reduce the 900-pair set to a 'profitable' subset on which I'll continue my research.
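To illustrate the entry/exit logic from step 3, here is a minimal sketch; this is not the actual Duck code, and the window length, threshold and variable names are just placeholders.

% Minimal sketch of the 'Duck' entry/exit logic on a pair ratio.
% Assumes pA, pB: aligned price series (column vectors) of the two ETFs.
win    = 25;                    % moving window (to be optimized)
zEntry = 1.5;                   % entry threshold (to be optimized)

ratio = pA ./ pB;
z = zeros(size(ratio));
for t = win:numel(ratio)
    w    = ratio(t-win+1:t);
    z(t) = (ratio(t) - mean(w)) / std(w);   % rolling z-score of the ratio
end

pos = zeros(size(z));           % +1 = long the ratio, -1 = short the ratio
for t = 2:numel(z)
    pos(t) = pos(t-1);
    if pos(t-1) == 0 && abs(z(t)) > zEntry
        pos(t) = -sign(z(t));   % fade the deviation
    elseif (pos(t-1) == 1 && z(t) >= 0) || (pos(t-1) == -1 && z(t) <= 0)
        pos(t) = 0;             % exit when the z-score reverts to zero
    end
end

ret    = pos(1:end-1) .* diff(log(ratio));   % daily return of the ratio trade
sharpe = sqrt(250) * mean(ret) / std(ret)    % annualized Sharpe, no costs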

Good, quick and dirty, but let's take a look at the results.


The results produced by the Duck are very encouraging: almost all pairs have a positive Sharpe ratio, but the top of the list alarmed me. Take a closer look at the figure: Sharpe ratios well above 3 are possible even with a dumb duck strategy. Either I'm going to get rich much faster than expected, or I haven't thought about something important. I guess the latter.

Ok, let's look at our runner-up, the MZO.

This is where my bad feeling is justified: MZO has zero trading volume on many days! This would result in slippage (I guess) and all sorts of other trouble, like being unable to short the shares.
Now, what really confuses me is that the liquidity of an ETF should be based on the underlying stocks, as explained here. And MZO has some real heavyweights behind it.

The same goes for many of the promising ETF pairs: they seem to lack liquidity.
So now I'm a little confused: is it better to filter out the illiquid ETFs, or to start building my own stock baskets and trading them against each other or against the bigger ETFs?