Tuesday, February 28, 2012

Is WIG20 decoupling suggesting correction this time?



On January 11th this year I noted a significant decoupling of WIG20 from the S&P500. Since then, WIG20 has re-coupled and followed the path marked by the larger markets.

Today we can observe another quite noticeable decoupling of the Polish stock market from its larger counterparts.

However, while in January the probability was tilted toward WIG20 converging upwards with the S&P500, I would now suggest that WIG20 may actually be a harbinger of a wider equity correction this time.

To strengthen the call, it would be helpful to analyze the contributions of the WIG20 constituents as well as the performance of other emerging market indexes.

Just by comparing WIG20 to the broader WIG index, as well as to the indexes comprising smaller-cap companies - mWIG40 and sWIG80 - we may notice that WIG20 is leading the correction on the Polish market and the other indexes are following:


WIG20 also seems to plot the future path for the other emerging markets:


Nevertheless, it seems we are so far realizing scenario 2 for the Polish equity market in 2012, as described in the previous post ;)

Wednesday, February 22, 2012

You can retroactively simulate anything

A new systemic risk insurance fund was launched yesterday - Eurogeddon.

The fund bets on the collapse of the euro and a run on the European banks over the next three years.

According to the fund's marketing materials, Eurogeddon would have made a nearly +150% return at the time Lehman Brothers went bust, if only it had been active back then (it was launched only yesterday):

Chart: Simulation of the Eurogeddon hypothetical returns over the past five years

The simulation presented above shows purely hypothetical Eurogeddon returns, employing posterior knowledge of the events in the analyzed period, a macroeconomic view retroactively adapted to them, and a trading strategy devised by extrapolating past trends.

Therefore I hereby present an even better approach :)

I take a number of liquid assets, such as currencies and index and commodity futures, and take positions in them (both long and short) based on hindsight knowledge of the daily movements of these assets.

As you can imagine, the results over just a couple of years would have been amazing...
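A minimal sketch of such a hindsight-based "perfect knowledge" simulation, using purely synthetic asset returns and a selection rule of my own invention (each day, ride the single largest absolute move with the correct sign):

```r
set.seed(42)

n_days   <- 1000
n_assets <- 5

# synthetic daily returns for a basket of liquid assets
# (stand-ins for currencies, index and commodity futures)
returns <- matrix(rnorm(n_days * n_assets, sd = 0.01), nrow = n_days)

# perfect hindsight: each day pick the asset with the largest absolute
# move, long if it was up, short if it was down -- P&L is always positive
daily_pnl <- apply(returns, 1, function(r) abs(r[which.max(abs(r))]))

equity <- cumprod(1 + daily_pnl)
tail(equity, 1)  # absurdly large terminal wealth, as expected
```

Even with modest 1% daily volatility, always being on the right side of the biggest mover compounds into astronomical returns within a few years.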

Chart: Simulation of the returns of a hypothetical perfect knowledge fund over 4 years

Chart: Asset allocation of a hypothetical perfect knowledge fund

Chart: Simulated daily trades of a hypothetical perfect knowledge fund

Chart: Distribution of simulated daily trades

Isn't such a SIMULATED TRACK RECORD impressive? :)

UPDATE 2012-02-24

Thanks to some creativity, Eurogeddon is already up +2.20% YTD and +34.70% since its "inception":


Such an amazing result is possible because Eurogeddon was formed on the basis of an existing equity fund - one that was mostly long-biased, in opposition to Eurogeddon's declared strategy.

[---]

More information about the Eurogeddon fund: http://opera.pl/pl/oferta/fundusze/eurogeddon/co-wyroznia-eurogeddon/


Some Eurogeddon related links: https://pinboard.in/u:mjaniec/t:Eurogeddon/

[ R source code ]

Searching for Facebook value with differential evolution


I mentioned recently Facebook's IPO preparations and its desire to sell shares at a $100 billion valuation.

A little later I read an interesting analysis of the Facebook valuation, based on logistic growth model fitting, by Peter Cauwels and Didier Sornette.

Taking this opportunity, I decided to test brute-force logistic model fitting to the actual data, using both simulated annealing and differential evolution optimization methods as implemented in R.

The results of such an approach are a little, but not substantially, different from the Cauwels-Sornette results, at least in the differential evolution case:


> # p0, k, r
> x    # differential evolution
      par1       par2       par3 
  1.000000 920.729890   1.262142 
> x2   # simulated annealing
[1]   0.1005737 505.2465850   1.9097082
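A sketch of how such a fit can be set up. The user-count observations below are synthetic (not the actual Facebook data), and instead of a CRAN package I use a toy base-R differential evolution loop of my own, plus base `optim` with `method = "SANN"` for the simulated annealing leg:

```r
# logistic growth: p(t) = K / (1 + ((K - p0)/p0) * exp(-r*t))
logistic <- function(par, t) {
  p0 <- par[1]; K <- par[2]; r <- par[3]
  K / (1 + ((K - p0) / p0) * exp(-r * t))
}

# synthetic observations (millions of users) -- NOT actual Facebook data
set.seed(1)
t_obs <- 1:8
users <- logistic(c(1, 900, 1.2), t_obs) + rnorm(8, sd = 5)

# objective: sum of squared errors between model and observations
sse <- function(par) sum((logistic(par, t_obs) - users)^2)

# minimal differential evolution (DE/rand/1/bin), base R only
de <- function(fn, lower, upper, NP = 40, gens = 300, Fw = 0.8, CR = 0.9) {
  d   <- length(lower)
  pop <- t(replicate(NP, runif(d, lower, upper)))
  fit <- apply(pop, 1, fn)
  for (g in 1:gens) {
    for (i in 1:NP) {
      idx   <- sample(setdiff(1:NP, i), 3)
      mut   <- pop[idx[1], ] + Fw * (pop[idx[2], ] - pop[idx[3], ])
      trial <- ifelse(runif(d) < CR, mut, pop[i, ])
      trial <- pmin(pmax(trial, lower), upper)     # clip to bounds
      f     <- fn(trial)
      if (f < fit[i]) { pop[i, ] <- trial; fit[i] <- f }
    }
  }
  pop[which.min(fit), ]
}

best   <- de(sse, lower = c(0.1, 100, 0.1), upper = c(10, 2000, 5))
fit_sa <- optim(c(1, 500, 1), sse, method = "SANN")

round(best, 2)    # p0, K (carrying capacity), r (growth rate)
round(fit_sa$par, 2)
```

The carrying capacity K is the interesting output here: it caps the eventual user base, and hence any valuation built on user growth.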



Wednesday, February 15, 2012

Playing with intraday pair trading

Chart: Simulated intraday pair trading visualized

I consider a true hedge fund to employ market neutral strategies. Such strategies should be indifferent to the market conditions and not correlated with market performance.

Market neutral strategies usually employ statistical arbitrage. This mainly means two things: 

1) trades generate positive returns ON AVERAGE, 

2) trades usually have at least TWO OPPOSITE LEGS, such as a long position in an undervalued asset and a short position in an overvalued asset.

A classical example of such a statistical market neutral strategy is pair trading.

Pair trading is a simple strategy in principle. But you can implement it in very many different ways.

And the devil and the returns are in the details... :)
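To make the idea concrete, here is a minimal sketch of one common flavour - a z-score spread rule on two synthetic cointegrated prices. The entry threshold of 2 and the in-sample normalization are illustrative choices of mine, not anything from the post:

```r
set.seed(7)
n <- 500

# two synthetic cointegrated prices sharing a common random-walk factor
common <- cumsum(rnorm(n, sd = 0.5))
a <- 100 + common + rnorm(n, sd = 0.8)
b <-  50 + common + rnorm(n, sd = 0.8)

# z-score of the spread; in-sample mean/sd for brevity
# (a live system would use a rolling window to avoid look-ahead)
spread <- a - b
z <- (spread - mean(spread)) / sd(spread)

# long the spread (long a, short b) when it is cheap, short when rich,
# flat otherwise; the position is taken on the previous bar's signal
pos <- ifelse(z < -2, 1, ifelse(z > 2, -1, 0))
pnl <- head(pos, -1) * diff(spread)
sum(pnl)  # cumulative P&L of the spread trades
```

Because every trade is long one leg and short the other, the common market factor cancels out and the P&L depends only on the spread mean-reverting - which is exactly the market-neutral property described above.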

Wednesday, February 8, 2012

Analyzing draw-downs

Chart: Cumulative 500 random returns

Let's take a series of 500 randomly generated returns in the range 0.92 to 1.10 (i.e. -8% to +10%).

Now, let's try to calculate some basic potential loss statistics for this series based on the observed draw-downs:

Number of draw-down losing sequences:

> downsN
[1] 228

The first draw-down losing sequences:


> head(dd_stats)
     start stop length         loss
[1,]     1    1      1 -0.005949528
[2,]     2    3      2 -0.122208798
[3,]     4    4      1 -0.024784714
[4,]     5    6      2 -0.098330307
[5,]     7    8      2 -0.060649554
[6,]     9   11      3 -0.137897672

The worst draw-down losing sequence in terms of value:


> min(sapply(drawdowns, function(x) x$result-1))
[1] -0.2625977

The longest draw-down losing sequence:

> max(sapply(drawdowns, function(x) x$length))
[1] 6

Average loss of draw-down losing sequences:

> mean(sapply(drawdowns, function(x) x$result-1))
[1] -0.07221781

The bottom, peak and the last value of the process:

> min(cumreturns)-1; max(cumreturns)-1; cumreturns[length(cumreturns)]-1
[1] -0.0683741
[1] 46.30488
[1] 38.46427

UPDATE 2012-02-10

Chart: Cumulative random sequence of 500 returns [-8%, +10%] and draw-down line

Above I actually calculated the statistics for losing sequences, not draw-downs.

The correct results (for a different random process) should be:

> downsN # number of losing sequences
[1] 214
> 
> head(dd_stats)
     start stop length         loss
[1,]     1    1      1 -0.039665746
[2,]     2    3      2 -0.097175026
[3,]     4    4      1 -0.001900723
[4,]     5    6      2 -0.076761609
[5,]     7    8      2 -0.083846687
[6,]     9   10      2 -0.062338201
> 
> min(sapply(drawdowns, function(x) x$result-1)) # worst sequence
[1] -0.2556858
> 
> mean(sapply(drawdowns, function(x) x$result-1)) # average sequence
[1] -0.06519456
> 
> max(sapply(drawdowns, function(x) x$length)) # longest sequence
[1] 6
> 
> mean(sapply(drawdowns, function(x) x$length)) # average length of sequence
[1] 1.658915
> 
> # bottom, top, last result
> min(cumreturns)-1; max(cumreturns)-1; cumreturns[length(cumreturns)]-1
[1] -0.06119829
[1] 161.2614
[1] 85.06354
> 
> 
> min(dd) # worst draw down
[1] -0.5623983
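The draw-down line itself - as opposed to the losing-sequence statistics - is just the distance from the running peak of the equity curve. A minimal sketch (again my own reconstruction on a fresh random series, so the numbers differ from the output above):

```r
set.seed(123)
returns    <- runif(500, min = 0.92, max = 1.10)  # gross returns, -8% to +10%
cumreturns <- cumprod(returns)

# draw-down: relative distance from the running maximum of the equity curve
dd <- cumreturns / cummax(cumreturns) - 1

min(dd)        # worst draw-down
which.min(dd)  # the day on which it bottomed
```

The distinction matters: a draw-down can span many losing sequences interleaved with partial recoveries, which is why the worst draw-down above (-56%) is much deeper than the worst single losing sequence (-26%).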


R Source

Thursday, February 2, 2012

$100 billion Facebook IPO


Facebook filed yesterday to raise $5 billion in an IPO. The valuation is said to be between $75 and $100 billion. How does this compare to the public tech companies?


Data: Google Finance, Bloomberg

As Bloomberg notes, a $100B valuation is equivalent to 26.9x Facebook's SALES. Not EARNINGS. By comparison, Google's current P/E is 19.51 and its P/S is 4.06.
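As a quick sanity check, the P/S multiple can be inverted to back out the sales figure it implies (a one-line calculation of mine, not from the Bloomberg piece):

```r
valuation <- 100e9   # assumed $100 billion valuation
ps        <- 26.9    # price-to-sales multiple quoted by Bloomberg
valuation / ps       # implied trailing sales, roughly $3.7 billion
```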

At the time of its IPO, Google's P/S ratio stood at 8.7.