Monthly Archives: August 2009

OpenOffice advanced fill down

When people come to me for statistical advice, they often bring Excel files containing data like the following, where values are only written down when they change. But further analysis often requires that each row is completely filled with the correct values. There is a basic “fill down” option in OpenOffice (CTRL+D), but to my knowledge there is unfortunately no extension for this case.

[Screenshot “filldown”: example table in which repeated values are left blank until they change]

Years ago I wrote the following lines in Visual Basic for a small company I was visiting. They even employed a student assistant to fill in these missing values in enormous, automatically generated Excel files. I did this in just a few minutes and of course free of charge (that is why it is so basic); I just couldn’t stand seeing people do such dreary tasks:

Sub MakroAdvancedFillDown()
  Dim cell As Range
  Dim old As Variant
  ' Walk down the column; whenever a cell is empty, fill it with
  ' the last non-empty value seen above it.
  For Each cell In Range("h1:h13215")
    If cell.Value = " " Or cell.Value = "" Then
      cell.Value = old
    End If
    old = cell.Value
  Next cell
End Sub

Last week someone came into my office with a really large table and the same problem, so I took an hour to learn StarOffice Basic and wrote the following macro:

SUB AdvancedFillDown
  ' Fill each empty cell in the current selection with the value of the cell above it.
  oSel = ThisComponent.getCurrentSelection()
  IF NOT oSel.supportsService("com.sun.star.sheet.SheetCellRange") THEN
    MsgBox "Error: Please select some cells for this macro."
    EXIT SUB
  END IF
  oAdr = oSel.getRangeAddress()
  oSheet = ThisComponent.CurrentController.ActiveSheet

  FOR j = oAdr.StartColumn TO oAdr.EndColumn
    FOR i = oAdr.StartRow TO oAdr.EndRow
      oCell = oSheet.getCellByPosition(j, i)
      IF oCell.getString() = "" AND i > 0 THEN
        ' Copy the cell directly above into the empty cell.
        oSource = oSheet.getCellRangeByPosition(j, i-1, j, i-1)
        oSheet.copyRange(oCell.getCellAddress(), oSource.getRangeAddress())
      END IF
    NEXT i
  NEXT j
END SUB

Now I can deal with this problem with just one click.

Random Correlation Matrices

Some time ago a colleague of mine, who has a new method for evaluating the cumulative distribution function of a multivariate normal distribution, wanted to compare the speed of his method with that of randomized quasi-Monte Carlo methods. On our way to lunch he asked me how to generate random correlation matrices, because the speed of his method depends strongly on the correlation matrix and he wanted to have some sort of average.

But what is a random correlation matrix?

Let’s first give a characterization of correlation matrices.

It is well known that for a matrix $C:=(c_{i,j})_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}$ there exist (multivariate normally distributed) random vectors $X,Y$ with \[\text{Cor}(X,Y):=\left(\frac{\text{Cov}(X_i,Y_j)}{\sqrt{\text{Var}(X_i)\text{Var}(Y_j)}}\right)_{1\leq i,j\leq n}=C\] if and only if

  1. $-1\leq c_{i,j}\leq 1$ for all $i,j\in\{1,\ldots,n\}$,
  2. $c_{i,i}=1$ for all $i\in\{1,\ldots,n\}$,
  3. $C$ is symmetric (therefore all eigenvalues $\lambda_1,\ldots,\lambda_n$ of $C$ are real)
  4. and all eigenvalues of $C$ are greater than or equal to zero.
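
As a small illustration (this helper is my addition, not part of the original post), the four conditions can be checked in R up to a numerical tolerance:

is.correlation.matrix <- function(C, tol = 1e-8) {
  # Illustrative check of the characterization above (not from the post).
  if (!isSymmetric(C, tol = tol)) return(FALSE)                 # condition 3
  if (any(abs(diag(C) - 1) > tol)) return(FALSE)                # condition 2
  if (any(abs(C) > 1 + tol)) return(FALSE)                      # condition 1
  all(eigen(C, symmetric = TRUE, only.values = TRUE)$values >= -tol)  # condition 4
}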

But what is the right notion of randomness for these matrices?

For example, let’s look at the orthogonal matrices: in many numerical applications we need random orthogonal matrices that are uniformly distributed with respect to the Haar measure (see http://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization).
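
One standard recipe for sampling such a matrix (sketched here in R as an aside, not taken from the post) is to take the QR decomposition of a matrix with independent standard normal entries and correct the signs of the columns:

random.orthogonal.matrix <- function(n) {
  # Sketch: random orthogonal matrix distributed according to the Haar measure.
  qrd <- qr(matrix(rnorm(n * n), n, n))   # QR decomposition of a Gaussian matrix
  Q <- qr.Q(qrd)
  Q %*% diag(sign(diag(qr.R(qrd))))       # fix the column signs so the distribution is Haar
}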

Unfortunately in our case there is no clear, natural notion of randomness. 🙁

Method 1 – Trial and Error: We generate a matrix fulfilling conditions 1, 2 and 3 of the characterization (such matrices are called pseudo correlation matrices) by drawing independent pseudo-random numbers uniformly distributed between -1 and 1 for the entries $c_{i,j}=c_{j,i}$, $1\leq i<j\leq n$, and setting the diagonal entries to one.

If this random symmetric matrix is positive semidefinite (i.e. all eigenvalues of $C$ are greater than or equal to zero), then we have the desired result. Otherwise we try again. Here is the corresponding R code:

random.pseudo.correlation.matrix <- function(n) {
  # Ones on the diagonal; the off-diagonal entries are filled in below.
  a <- diag(n)
  for (i in 1:(n-1)) {
    for (j in (i+1):n) {
      # Symmetric entries, uniformly distributed between -1 and 1.
      a[i,j] <- a[j,i] <- runif(1, -1, 1)
    }
  }
  return(a)
}

random.correlation.matrix.try.and.error <- function(n) {
  repeat {
    a <- random.pseudo.correlation.matrix(n)
    # Accept the matrix only if it is positive semidefinite.
    if (min(eigen(a)$values) >= 0) return(a)
  }
}

This approach is only reasonable for very small dimensions (try it with $n=6,7,8$).
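
To get a feeling for this, one can simply time a few calls (purely illustrative; expect the last one to take a while):

# Runtime grows rapidly with the dimension n.
system.time(random.correlation.matrix.try.and.error(6))
system.time(random.correlation.matrix.try.and.error(7))
system.time(random.correlation.matrix.try.and.error(8))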

Method 2 – Lift the Diagonal:

We denote by $I$ the identity matrix. If $C$ has the eigenvalues $\lambda_1\leq\ldots\leq\lambda_n$ then $(C+a\cdot I)$ has the eigenvalues $\lambda_1+a\leq\ldots\leq\lambda_n+a$, since $x$ is a solution of $\det(C-x\cdot I)=0$ if and only if $x+a$ is a solution of $\det(C+a\cdot I-x\cdot I)=\det(C-(x-a)\cdot I)=0$.

So we start again with a pseudo correlation matrix $C$, but instead of retrying when $C$ has negative eigenvalues, we lift the diagonal by $|\lambda_1|$ and obtain $C+|\lambda_1|\cdot I$, which is always positive semidefinite. After dividing by $1+|\lambda_1|$, which restores the ones on the diagonal, we have a correlation matrix which is “some kind of random”. 😉

Unfortunately the diagonal is accentuated and the smallest eigenvalue is always zero. We could avoid the second problem by adding $|\lambda_1|+b$, where $b$ is some random positive number, but the first problem remains.

make.positive.semi.definite <- function(a, offset=0) {
  # Lift the diagonal by |smallest eigenvalue| (+ offset), then rescale
  # so that the diagonal entries are 1 again.
  (a + (diag(dim(a)[1]) * (abs(min(eigen(a)$values)) + offset))) /
    (1 + (abs(min(eigen(a)$values)) + offset))
}

random.correlation.matrix.lift.diagonal <- function(n, offset=0) {
  a <- random.pseudo.correlation.matrix(n)
  make.positive.semi.definite(a, offset)
}

Method 3 – Gramian matrix – my favorite: Holmes [1] discusses two principal methods for generating random correlation matrices.

One of them is to generate $n$ independent pseudo-random vectors $t_1,\ldots,t_n$ distributed uniformly on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$ and to use the Gram matrix $T^tT$, where $T:=(t_1,\ldots,t_n)$ has $t_i$ as its $i$-th column and $T^t$ is the transpose of $T$.

To create the $t_i$ in R, we load the package mvtnorm, generate $\tau_i\sim\mathcal{N}(0,I)$ and set $t_i:=\tau_i/\|\tau_i\|$:

random.correlation.matrix.sphere <- function(n) {
  require("mvtnorm")
  t <- matrix(0, n, n)
  for (i in 1:n) {
    # Draw tau_i ~ N(0, I) and normalize it to unit length.
    tau <- rmvnorm(1, mean = rep(0, n))
    t[i,] <- tau / sqrt(sum(tau^2))
  }
  # Gram matrix: each entry is the inner product of two unit vectors.
  t %*% t(t)
}

Conclusion: There are further methods (e.g. generating a random spectrum first and then constructing a correlation matrix with these eigenvalues), not all of which are easy to implement. But just like the three methods given above, they are all unsatisfactory in some way, because we don’t really know how random correlation matrices should be distributed.

For my colleague an average calculation time only makes sense when he knows which kinds of correlation matrices occur in the applications. He decided to describe and compare the different cases individually.

But does it perhaps make sense to use random correlation matrices as test cases, or are the special cases more important? For example, random correlation matrices generated with methods 1 and 3 are singular only with probability zero.

Any critique, comments, suggestions or questions are welcome!

And for next time: given a correlation matrix $C$, how do we generate tuples of pseudo-random numbers following a given multivariate distribution with correlation matrix $C$?

Literature:

[1] Holmes, R. B. 1991. On random correlation matrices. SIAM J. Matrix Anal. Appl., Vol. 12, No. 2: 239-272.