Sunday, 29 June 2014

What does the UK grid status data tell us about renewable energy sources?


Thanks to the data available at the UK National Grid Status website, we can watch the evolution of wind power generation in the United Kingdom since May 2009.

Over the last three years, peak wind generation has exceeded 20% of the country's total electricity demand.

Even though this is lower than the 50%+ seen in Denmark, the number still looks impressive. However...


Saturday, 21 June 2014

A first look at the potential arbitrage opportunities in the electricity market


As previously observed, electricity markets can be strange at times.

Especially when there are two (or more) separate markets dealing with virtually the same commodity.

Such a situation usually leads to arbitrage opportunities.

In the case of electricity, we often have a separation between the Day-Ahead Market (DAM) - where participants declare the amount of electricity they will provide or consume - and the balancing market - where differences between DAM declarations and actual delivery are settled.

When you are a renewable energy source (RES) producer, and forecasting your energy generation is not perfect, you face two possible scenarios:

  1. production excess - you have produced more than you declared (sold on the DAM)
  2. production deficit - you have produced less than you declared (sold on the DAM)
Let's focus on the latter - production deficit.

Regulations differ, but usually when you are short (in deficit), you must buy the "missing" energy (the amount declared/sold on the DAM minus the actual production). Unless you are able to do it on the Intraday Market (IDM), you must settle on the balancing market.

Normally the deficit prices are higher than the DAM prices, meaning that you pay for the errors in your production forecast.

For example, in Romania, the average difference between DAM and deficit prices was minus 98 RON / MWh (approx. minus 23 EUR / MWh) over the October 2013 - May 2014 period (some preliminary data used).

However, on rare occasions (<5% of the cases), deficit prices were lower than DAM prices.

Fig. Distribution of the differences between DAM and deficit prices in Romania, 2013-10 - 2014-05

Hence, for the moments when the DAM price is expected to be higher than the deficit price, selling as much energy as possible on the DAM (for example, your whole installed capacity) would bring you a profit (21 RON / MWh on average).

A "tiny" problem remains though. How to predict when DAM>deficit? :)
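The expected-value bookkeeping above can be sketched in R. The price series here are purely hypothetical stand-ins for aligned hourly DAM and deficit prices; the means and spreads are made up to loosely resemble the Romanian numbers quoted above:

```r
set.seed(42)

# Hypothetical hourly prices in RON/MWh, aligned by delivery hour
dam     <- rnorm(1000, mean = 180, sd = 40)
deficit <- dam + rnorm(1000, mean = 98, sd = 60)   # deficit usually above DAM

d <- dam - deficit

mean(d)            # average difference - negative, as in the Romanian data
mean(d > 0)        # share of hours where DAM > deficit (the rare <5% cases)
mean(d[d > 0])     # average profit per MWh in those rare hours
```

With real data, the whole arbitrage question reduces to forecasting the sign of `d` hour by hour.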

Fig. Differences between DAM and deficit prices by the hour

Fig. Average differences between DAM and deficit by months and hours
Note: all values negative, best -39 RON

Wednesday, 11 June 2014

Let the wind blow, or forecasting wind power

I mentioned recently that the production of renewable energy sources is difficult to predict. Let's examine some of the challenges.

The difference between production and forecast can fluctuate wildly, sometimes exceeding +/-50% of the installed wind farm capacity.

Chart: differences between production and P50 forecast
(P50 forecast is the average expected energy production)

The distributions of differences between production and forecast have high kurtosis, and both tails are pretty long. 

Chart: Distribution of differences between production and P50 forecast
(P50 forecast is the average expected energy production)

It seems the assumption of normally distributed production, customarily used in wind power forecasting, may not always be correct.
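One quick sanity check of that normality assumption is sample excess kurtosis, which should be near zero for Gaussian errors. A sketch, with simulated numbers standing in for real forecast errors:

```r
# Sample excess kurtosis: ~0 for a normal distribution, clearly positive
# for the fat-tailed error distributions described above.
excess.kurtosis <- function(x) {
  z <- (x - mean(x)) / sd(x)
  mean(z^4) - 3
}

set.seed(1)
fat.tailed <- rt(10000, df = 6)    # hypothetical fat-tailed forecast errors
gaussian   <- rnorm(10000)

excess.kurtosis(fat.tailed)        # clearly above 0
excess.kurtosis(gaussian)          # close to 0
```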

Chart: Assumed normal distribution of (long term) power production
(P50 forecast is the average expected energy production)

It shouldn't be surprising, then, that production often exceeds both the conservative P50 and the aggressive P90 forecast levels.

Chart: Production (gray bars) vs. P05-P90 forecast band (red and green dotted lines)

On the other hand, production significantly lower than forecast is not welcome either. Since forecasts are the basis of the planned production levels declared on the Day-Ahead Market (DAM), all deviations require costly balancing.

The forecast spread, or the width of the forecast band (the difference between P90 and P05), which should reflect the forecast uncertainty, does not always add value.

Chart: Forecast spread vs. production


Also, it doesn't help that in addition to the weather-related production forecast, one needs to deal with the planned availability of a plant. Planned availability is connected, among other things, with scheduled maintenance - but not everything always goes according to plan.

As a result, a plant operator needs to deal with an additional random variable. It sometimes happens that the plant stops producing long before the planned availability goes to zero, and starts producing while the planned availability still equals zero.

Chart: Production (black line) vs. planned availability (grey area)

Weather is a factor that affects both the production forecast and availability. While wind may be strong, suggesting high production, temperature may lead to operational problems that are not fully reflected in planned availability. As a result, one may experience a noticeable periodic disparity between forecast and production.

Chart: Average monthly difference between production and P50 forecast for different farms

Even with operational information, less than one year of data is too little to decide whether we are facing a seasonal effect that should be adjusted for in the following years, or a one-time event.

To get better alignment between forecast and production, the following directions seem promising:
  • use energy storage
  • develop better weather forecasting models
  • include farm operational data in the production forecast models
  • utilize prices, and price forecasts, where adequate
For more about wind forecasting, see: https://pinboard.in/u:mjaniec/t:res/t:wind_forecasting/

Sunday, 8 June 2014

Some curiosities of the European power markets


It should not be a surprise that power markets in Europe are amazingly complex.

There are many different market segments, and the rules governing those segments often differ substantially between countries.

Among the key market segments we have:

  • day ahead market (DAM) - where planned electricity deliveries and prices for the next day are set
  • intraday market (IDM) - where both electricity generators and users try to adjust their positions
  • balancing market - where differences between plans (forecasts) and physical delivery/consumption are settled
  • futures and derivatives market - where expected mid- and long-term prices are established
The growth of renewable energy sources (RES) such as wind and solar has put this structure to the test and also added a new market segment:
  • green energy rights market - where various green energy-related rights such as green certificates / certificates of origin / guarantees of origin are traded
The key problem with renewable energy sources is that they are intermittent (they work only when the required "fuel", such as wind or sun, is available) and their production is hard to forecast. This problem is exacerbated by the difficulties of energy storage.

This leads to many curious market behaviors like negative prices:

Fig. DAM prices on FELIX
Source: EEX

-2 EUR does not look too scary, but the minimum theoretical prices on EEX are -500 EUR for the DAM and... -10,000 EUR for the IDM!

Fortunately, over the recent year prices have "just" touched -250 EUR a couple of times:

Fig. Germany/Austria IDM 1yr
Source: EEX

I am going to examine the characteristics and behavior of the power exchanges and the impact of renewable sources in the next posts, so stay tuned :)


Saturday, 3 May 2014

AES CBC

A couple of months after my first post about AES encryption in R, I decided to update the code with the stronger AES-CBC mode.

Some minor errors were also corrected :)

I hope to start posting some more interesting stuff soon.

In the meantime, there is no beach in Madrid :)



[ R source ]

BTW: Does anyone know how to handle the whole encryption process in memory with R?
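For what it's worth, one way to keep the whole round trip in memory is sketched below with the openssl package; the passphrase, the sha256-based key derivation, and the use of serialize() are my own illustrative choices, not the code linked above:

```r
library(openssl)

# Derive a 256-bit key from a passphrase (sha256 of a raw vector -> 32 raw bytes)
key <- sha256(charToRaw("correct horse battery staple"))
iv  <- rand_bytes(16)                     # random initialization vector

msg <- serialize(mtcars, NULL)            # any R object as an in-memory raw vector
enc <- aes_cbc_encrypt(msg, key = key, iv = iv)
dec <- aes_cbc_decrypt(enc, key = key, iv = iv)

identical(unserialize(dec), mtcars)       # round trip without touching the disk
```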

UPDATE 2014-05-19
In-memory decryption problem solved thanks to r2evans! :)

Saturday, 15 February 2014

Visualizing option strategies with R


It has been a while since I wrote something about options.

Recently I found myself in need of a simple tool for visualizing option strategies.

An option strategy is a combination of a number of option positions - both long and short - and sometimes underlying assets (such as equities) - i.e. it has multiple "legs".

To calculate the payoff (*) of an option strategy, one needs to know:

  • types of assets used to construct a particular strategy - possibilities include CALL and PUT options, as well as underlying assets
  • type of positions opened - options can be bought or sold (written); underlying assets can be bought or sold short
  • option strike prices - every option has a price at which it "activates"; for example, a CALL option with a strike price of 100 generates profit only when the price of the underlying asset is above 100
  • option premiums - buying an option has a price, writing it generates some income
  • amount of assets utilized - for example you can buy more than 1 option
Transaction costs are not covered by my simplistic model.

(*) payoff is the value of the strategy at expiration; it should not be confused with the theoretical value derived from option pricing models such as Black-Scholes
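A minimal payoff calculator for this kind of leg list might look like the sketch below. This is my own illustration, not the linked R source; it assumes the assets.mat layout used in the examples that follow, with the strike column holding the purchase price for "base" assets:

```r
# Payoff at expiration of one leg, for a vector of underlying prices S
leg.payoff <- function(trans, type, strike, premium, amount, S) {
  value <- switch(type,
    base = S - strike,                        # long stock bought at `strike`
    call = pmax(S - strike, 0) - premium,
    put  = pmax(strike - S, 0) - premium)
  dir <- if (trans == "buy") 1 else -1        # selling (writing) flips the sign
  dir * amount * value
}

# Sum the legs of an assets.mat-style character matrix
strategy.payoff <- function(assets.mat, S) {
  legs <- sapply(1:nrow(assets.mat), function(i)
    leg.payoff(assets.mat[i, 1], assets.mat[i, 2],
               as.numeric(assets.mat[i, 3]), as.numeric(assets.mat[i, 4]),
               as.numeric(assets.mat[i, 5]), S))
  rowSums(matrix(legs, nrow = length(S)))
}

# Bull Call Spread example: loss capped at the net premium, profit capped above 45
S   <- seq(30, 60, by = 5)
bcs <- rbind(c("buy", "call", "40", "3", "1"),
             c("sell", "call", "45", "1", "1"))
strategy.payoff(bcs, S)   # -2 below 40, rising to +3 at and above 45
```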

Probably the simplest options-related investment strategy is the purchase of some stock hedged with a PUT option - as a result, one has unlimited profit potential, while the loss is limited with the help of the PUT option:

> assets.mat
     trans type   strike price amount
[1,] "buy" "base" "40"   "0"   "1"   
[2,] "buy" "put"  "35"   "2"   "1" 



A pure option strategy is the Bull Call Spread, where one simultaneously buys and sells CALL options with different strike prices:

> assets.mat
     trans  type   strike price amount
[1,] "buy"  "call" "40"   "3"   "1"   
[2,] "sell" "call" "45"   "1"   "1" 


An option strategy can consist of both long and short positions, with different amounts of options on opposing sides - like in the Call Backspread:

> assets.mat
     trans  type   strike price amount
[1,] "sell" "call" "40"   "4"   "1"   
[2,] "buy"  "call" "45"   "2"   "2" 


An option strategy can be composed of more than two legs - like in the Condor:

> assets.mat
     trans  type   strike price amount
[1,] "buy"  "call" "35"   "11"  "1"   
[2,] "buy"  "call" "55"   "1"   "1"   
[3,] "sell" "call" "40"   "7"   "1"   
[4,] "sell" "call" "50"   "2"   "1"  


Obviously there are many more different option strategies. You can learn about them at the following places:




And you can experiment with them - and any other option strategy ideas - using the R code I have prepared. Enjoy!




[ R Source ]

Thursday, 9 January 2014

Basics of scorecard model validation

Source: DW


Some time ago I wrote about Somers' D measure of association.

Somers' D is one of the many tools that can be used for validating scorecard models.

The task of a scorecard model is to assign scores, such as ratings, to the clients (in case of corporations usually called "counterparties") of a financial institution.

The ratings assigned by a scorecard model should reflect the probability of the client not meeting its obligations towards a bank or other financial institution.

As I showed in my previous post about the relation between probability of default and interest rates, a higher risk of default needs to be compensated by a higher interest rate.

Two aspects of a scorecard model are usually examined to assess the correctness of the model:

  1. discriminatory power
  2. calibration

Discriminatory power is about the model's ability to rank clients/counterparties according to their credit quality, i.e. to verify whether clients/counterparties of similar quality fall into similar rating classes.

Meanwhile, calibration verifies whether the ratings given are correct indicators of future defaults. In other words, here we compare the probability of default forecast by the model with actual ("true") defaults.

Tools commonly used for assessing a model's discriminatory power are the Receiver Operating Characteristic (ROC), the above-mentioned Somers' D, and the Gini coefficient.
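These three are closely related: for a binary default flag, Somers' D of the score with respect to the default indicator equals 2·AUC − 1, which is also the Gini coefficient (accuracy ratio) in this setting. A minimal rank-based sketch, assuming higher scores mean higher risk:

```r
# AUC via the rank-sum (Mann-Whitney) statistic; ties get average ranks
auc <- function(score, default) {
  r  <- rank(score)
  n1 <- sum(default == 1)
  n0 <- sum(default == 0)
  (sum(r[default == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

somers.d <- function(score, default) 2 * auc(score, default) - 1

# Perfectly discriminating scores give D = 1
somers.d(score = c(0.9, 0.8, 0.1, 0.2), default = c(1, 1, 0, 0))
```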

There are a number of tests for verifying model calibration. Probably the most often cited in the literature are:

  • binomial test
  • Hosmer-Lemeshow test
  • Spiegelhalter test
  • normal test
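The binomial test, for example, is available in base R. For a single rating class one checks whether the realized default count is consistent with the forecast PD; the numbers below are hypothetical:

```r
# Hypothetical rating class: 200 counterparties, forecast PD of 4%,
# 12 observed defaults - did the model underestimate the risk?
observed.defaults <- 12
class.size        <- 200
forecast.pd       <- 0.04

binom.test(observed.defaults, class.size, p = forecast.pd,
           alternative = "greater")
```

A small p-value would suggest that the class's PD is underestimated.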

In most cases, the scorecard validation tests require knowledge of previous scores and actual past defaults.

When previous scores are not available and cannot be retroactively calculated, some external ratings - such as those issued by credit agencies - may be used. An additional condition should be met in such a situation: the scores generated by the internal model need to be aligned with the external ratings.

As I noted before, actual default rates change over time. Meanwhile, most scorecard validation tools do not take these fluctuations into account. They assume that the future will be similar to the (averaged or recent) past.

In addition, the above-mentioned validation tools do not take into consideration the differences in characteristics between the past and current assets being evaluated by the scorecard model. It is assumed the model will take care of the differences. However, the assets being rated may have pretty complex internal structures hidden from the model.

Lastly, scorecard models do not usually say anything about the possible recovery rates and resolution times.

If you would like to play a little with scorecard validation tests, you may take a look at:

[ Scorecard validation tests in R ]

[ Scorecard validation tests examples in R ]

[ Measures of association in R ]


Sunday, 15 December 2013

Somers' D - two dirty implementations: R vs F#

A couple of days ago I started playing with F#.

Although I'm VERY far from being a skillful F# programmer, I am slowly moving forward.

Not long ago, I implemented one of the measures of association - Somers' D - in R.

Somers' D is sometimes used for testing concordance of internal and external risk scores / rankings.

I had some problems finding the right formula for the Somers' D asymptotic standard error, and when I finally found the solution, I didn't have much energy left to clean up my R code ;)

I thought that using this dirty code as a base for a Somers' D implementation in F# might bring interesting results. My intention was not to give R too large a head start over F#.

Still, the differences are visible in many places...

First of all, I was pretty surprised that basic matrix operations are not available in core F#.

It is necessary to add the F# PowerPack to work with matrices.

Even then, working with matrices in F# does not seem as natural as in R (or Matlab). Or, probably, I still know too little about F#.

A couple of examples:

constructing a matrix in R:

PD    <- c(0.05,0.10,0.50,1,2,5,25)/100
total <- c(5,10,20,25,20,15,5)/100

defaulted    <- total*PD
nondefaulted <- total*(1-PD)

n <- sum(total)

portfolio <- rbind(defaulted,nondefaulted)/n

constructing a matrix in F#:

#r "FSharp.PowerPack.dll"

let PD             = [ 0.05; 0.10; 0.50; 1.00; 2.00; 5.00; 25.00 ]
let counterparties = [ 5.; 10.; 20.; 25.; 20.; 15.; 5. ]

let groups = PD.Length // risk groups no.

let div100 x = x / 100.0

let PDprct = [ for x in PD do yield div100 x ]
let CPprct = [ for x in counterparties do yield div100 x ]

let n = CPprct |> Seq.sum

let defaulted    = [ for i in 1..groups do yield CPprct.[i-1]*PDprct.[i-1]/n ]
let nondefaulted = [ for i in 1..groups do yield CPprct.[i-1]*(1.0-PDprct.[i-1])/n ]

let x = matrix [ defaulted; nondefaulted ]

calculating WR/DR in R:

wr <- n^2 - sum(sapply(1:nrow(x), function(i) sum(x[i,])^2))

calculating WR/DR in F#:

let xr = x.NumRows

let rowSum (x : matrix) (i : int) = Array.sum (RowVector.toArray (x.Row(i-1)))

// WR / DR

let wr =

    let mutable mat_sum = 0.0

    for i in 1..xr do
        let row2  = rowSum x i ** 2.0
        mat_sum   <- mat_sum + row2

    n ** 2.0 - mat_sum

Later it gets a little better, but...

'A' function in R:

A <- function(x,i,j) {

  xr <- nrow(x)
  xc <- ncol(x)

  sum(x[1:xr>i,1:xc>j])+sum(x[1:xr<i,1:xc<j])

}

'A' function in F#:

let A (x : matrix) i j =

    let xr = x.NumRows
    let xc = x.NumCols

    let rowIdx1 = List.filter (fun x -> x>i) [ 1..xr ]
    let colIdx1 = List.filter (fun x -> x>j) [ 1..xc ]

    let rowIdx2 = List.filter (fun x -> x<i) [ 1..xr ]
    let colIdx2 = List.filter (fun x -> x<j) [ 1..xc ]

    let mutable Asum = 0.0

    for r_i in rowIdx1 do
        for r_j in colIdx1 do
            Asum <- Asum + x.[r_i-1,r_j-1]

    for r_i in rowIdx2 do
        for r_j in colIdx2 do
            Asum <- Asum + x.[r_i-1,r_j-1]

    Asum

As I mentioned at the beginning of the post, both pieces of code are "dirty". Also, I definitely know R better than F# (even if that may not be apparent from the R code above ;)

Still, F# seems to require more coding, and many "simple" operations (matrices...) may not be as easy in F# as they are in R.

I have yet to find where F# excels :)

[ dirty R code ]

[ dirty F# code ]


Wednesday, 11 December 2013

F# - even the longest journey begins with a single step. And there may be bumps along the way

Don't get me wrong. I have just started familiarizing myself with F# - a fairly new functional programming language developed with heavy involvement from Microsoft.

My intention has been to examine whether F# can be used for the various tasks I usually perform with R (http://www.r-project.org/).

As for now, F# looks pretty strange.

It is different in many ways from standard programming languages like C/C++. It is also different from R.

At this stage, learning it seems like solving a series of logic puzzles.

My (very early) F# code is definitely not optimal, but it may give a hint of what may come later.

Take for example a simple function for calculating return on investment in a bond, used in my previous post.

In R, the function looks like this:

# expected (discounted) return
pv <- function(fa,n,cr,rf) {
  -fa+sum(sapply(1:n, function(i) (fa*cr)/(1+rf)^i))+fa/(1+rf)^n
}

You can see the code in context here: http://pastebin.com/bFEHQQnM

Meanwhile, my F# equivalent is:


At least both functions return the same result :)

The nice thing about F# is that, although Microsoft did not include it in the free Visual Studio Express 2013, there is an online version of F# available. You can write and test your F# code there.

OK, so why might F# look strange? Just a couple of observations:
  • calculating power for floats and integers is handled differently - pown for integers and ** for floats
  • once a function is used with one type of argument - say int - you cannot use it again with any other type - say float
  • separate operations for adding a single element at the beginning of a list (::) and for joining the lists (@)
  • some symbol combinations (example: !!!) can have operations defined for them, but cannot be used between arguments, i.e. !!! 2 3 is fine, while 2 !!! 3 is not
I would like to stress again that I am at the very beginning of my journey with F#.

The peculiarities of F# have not discouraged me so far. I'd say it is quite the opposite - they have increased my hunger for learning more about this bizarre creature ;)




Tuesday, 10 December 2013

When interest rates go to infinity


CAUTION: This is my first post about corporate debt, so excessive simplifications and mistakes are highly probable. Comments welcome.

***

The standard formula for calculating the return on a bond with a 3-year maturity and an annual coupon of 5% tells us that we should expect a discounted return of around 13.6%, given the extremely low current "risk free" interest rate.

> pv(fa=100,n=3,cr=0.05,rf)
[1] 13.59618

Do you think it is adequate for the risk we are taking?

Actually it depends :)

Fig.: Probability of Default vs. Interest Rate curve

If we are unlucky and our bond defaults, we may actually lose approx. between 46% and 60% of our investment (assuming RR = 40% and RT = 1Y, see below).

> de(fa=100,di=0,cr=0.05,rv=0.4,rf,rl=1) # default in year one; no coupon payments
[1] -60.17087
> de(fa=100,di=1,cr=0.05,rv=0.4,rf,rl=1) # default after first coupon payment
[1] -55.36236
> de(fa=100,di=2,cr=0.05,rv=0.4,rf,rl=1) # default after second coupon payment
[1] -50.5744
> de(fa=100,di=3,cr=0.05,rv=0.4,rf,rl=1) # default after third coupon payment
[1] -45.80689
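The de() function itself is not shown in the post; reconstructing it from the outputs above, it appears to be something like the sketch below (a reconstruction, not necessarily the author's exact code):

```r
# Discounted loss given default in year di: coupons received until default,
# then the recovered amount rv*fa arriving rl years after the default.
# fa: face amount, cr: coupon rate, rv: recovery rate, rf: risk-free rate
de <- function(fa, di, cr, rv, rf, rl) {
  coupons  <- if (di > 0) sum(sapply(1:di, function(i) (fa*cr)/(1+rf)^i)) else 0
  recovery <- (rv*fa)/(1+rf)^(di+rl)
  -fa + coupons + recovery
}

rf <- 0.00429                                     # the 0.429% risk-free rate
de(fa=100, di=0, cr=0.05, rv=0.4, rf=rf, rl=1)    # reproduces approx. -60.17
```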

Three critical factors here are Probability of Default (PD), Recovery Rate (RR) and Resolution Time (RT).

The first tells us, how likely we are to lost all or part of our initial investment. 

The second - what part of the investment we could get back.

The third - when we can expect some of our money back after the default.


Averages may be misleading here. The default rate for speculative bonds surpassed 11% in the period. In addition, the intensity of defaults varies between geographies and industries.


According to Moody's, Resolution Time can take between 6 months and more than 3 years.

Let's focus on the Probability of Default - i.e. freeze all the other parameters: bond maturity = 3 years, Recovery Rate = 37%, Resolution Time = 1 year, and risk free (RF) interest rate = 0.429%.

The 5% annual coupon on our bond implies a Probability of Default of around 5.5%.

The estimation method used here means that if we had a large portfolio of identical bonds with an equal and constant PD of 5.5% and an annual coupon of 5%, we would finish our investment with a (discounted) return of zero - i.e. we have treated our coupon as a zero-profit interest rate.

A PD of 5.5% is clearly above the average historical default rate as recorded by Standard & Poor's. Hence, if we believe the actual PD will be lower, say 2%, we will make a profit. The zero-profit interest rate at a PD of 2% is 2.2%, so the difference (spread) between our coupon and the risk level is 2.8 pp.

The table below shows the zero-profit interest rates required for PDs between 0% and 10%:

          PD    IR
  [1,]   0.00  0.005
  [2,]   0.01  0.013
  [3,]   0.02  0.022
  [4,]   0.03  0.031
  [5,]   0.04  0.039
  [6,]   0.05  0.048
  [7,]   0.06  0.057
  [8,]   0.07  0.066
  [9,]   0.08  0.075
 [10,]   0.09  0.085
 [11,]   0.10  0.094

Reminder: debt maturity, RR, RT and RF are still frozen

Clearly, when the default rate increases, we should ask for a higher interest rate. However, as the chart at the beginning shows, the situation starts to get pretty hilarious after reaching a certain PD level. Around a PD of 60%, we need to ask for 100% interest. And beyond that, the required zero-profit interest rate goes to infinity...


[ R code used ]