Thursday, September 26, 2013

Elementary Expectation Maximization algorithm with R code

I have recently found a short article describing the Expectation Maximization algorithm.

The Expectation Maximization algorithm enables parameter estimation in probabilistic models with incomplete data.

It is closely related to Maximum Likelihood Estimation (MLE): while MLE requires complete data, Expectation Maximization can handle models with latent (unobserved) variables.

The parameter values estimated with the Expectation Maximization algorithm converge to the values calculated with MLE (strictly speaking, to a local maximum of the likelihood, which in this example coincides with the MLE).

For example, for a model with 10 (unfair) coin selections and 100 tosses per selection, we may get:

> heads
 [1] 11 14 72 66 63 73 65  9 65 68
> coin
 [1] 1 1 2 2 2 2 2 1 2 2

> # actual probabilities
> head.prob1.init
[1] 0.124
> head.prob2.init
[1] 0.704

> # MLE estimates (we know which coin was selected in every turn)
> head.prob1.est
[1] 0.113
> head.prob2.est
[1] 0.674

> # Expectation Maximization estimates (coin selection is unknown)
> prob1.maxexp
[1] 0.113
> prob2.maxexp
[1] 0.674
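
For reference, here is a minimal sketch of how such data could be simulated and the complete-data MLE computed. The variable names mirror the transcript above, but the seed is an arbitrary choice, so rerunning it will not reproduce those exact numbers.

set.seed(123)                 # arbitrary seed, for reproducibility only

n.tosses <- 100               # tosses per coin selection
n.draws  <- 10                # number of coin selections

# true head probabilities (the values shown in the transcript)
head.prob1.init <- 0.124
head.prob2.init <- 0.704

# pick a coin for every turn, then toss it n.tosses times
coin  <- sample(1:2, n.draws, replace = TRUE)
probs <- c(head.prob1.init, head.prob2.init)
heads <- rbinom(n.draws, n.tosses, probs[coin])

# complete-data MLE: the plain head fraction for each coin
head.prob1.est <- sum(heads[coin == 1]) / (n.tosses * sum(coin == 1))
head.prob2.est <- sum(heads[coin == 2]) / (n.tosses * sum(coin == 2))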

The above-mentioned article explains the basics of the Expectation Maximization algorithm well enough, so there is no need to repeat its content here.

What the article is missing is code showing an actual implementation of a simple version of this algorithm.

Hence I have written a short R script demonstrating what exactly is going on here.
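
Below is a minimal sketch of such an implementation. It continues from the simulated heads and n.tosses above, and it assumes both coins are selected with equal probability, so only the two head probabilities are estimated; the starting guesses, the iteration cap, and the convergence tolerance are arbitrary choices.

# EM for the two-coin mixture (uses heads and n.tosses from the sketch above)
prob1 <- 0.3                  # arbitrary, distinct starting guesses
prob2 <- 0.6

for (iter in 1:1000) {

  # E-step: posterior probability that each turn used coin 1,
  # given the current guesses (the equal selection priors cancel out)
  lik1 <- dbinom(heads, n.tosses, prob1)
  lik2 <- dbinom(heads, n.tosses, prob2)
  w1   <- lik1 / (lik1 + lik2)
  w2   <- 1 - w1

  # M-step: head fractions weighted by those posteriors
  prob1.new <- sum(w1 * heads) / (n.tosses * sum(w1))
  prob2.new <- sum(w2 * heads) / (n.tosses * sum(w2))

  # stop once the estimates no longer move
  if (abs(prob1.new - prob1) + abs(prob2.new - prob2) < 1e-8) break

  prob1 <- prob1.new
  prob2 <- prob2.new
}

prob1.maxexp <- prob1
prob2.maxexp <- prob2

Note that the two starting guesses must differ: starting both at the same value leaves EM stuck at a symmetric fixed point where the coins are indistinguishable.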

Enjoy! :)

***

Two earlier introductory posts:

For Dummies: Intro to Generalized Method of Moments

Cointegrated drunk and her dog visualized