Some time ago, a colleague of mine developed a new method for evaluating the cumulative distribution function of a multivariate normal distribution and wanted to compare its speed with that of randomized quasi-Monte Carlo methods. On our way to lunch he asked me how to generate random correlation matrices, because the speed of his method depends strongly on the correlation matrix and he wanted to have some sort of average.

**But what is a random correlation matrix?**

Let’s first give a characterization of correlation matrices.

It is well known that for a matrix $C:=(c_{i,j})_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}$ there exist (multivariate normally distributed) random variables $X,Y$ with \[\text{Cor}(X,Y):=\left(\frac{\text{Cov}(X_i,Y_j)}{\sqrt{\text{Var}(X_i)\text{Var}(Y_j)}}\right)_{1\leq i,j\leq n}=C\] if and only if

- $-1\leq c_{i,j}\leq 1$ for all $i,j\in\{1,\ldots,n\}$,
- $c_{i,i}=1$ for all $i\in\{1,\ldots,n\}$,
- $C$ is symmetric (therefore all eigenvalues $\lambda_1,\ldots,\lambda_n$ of $C$ are real)
- and all eigenvalues of $C$ are greater than or equal to zero.
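These four conditions are straightforward to check numerically. A small helper sketch (function name and tolerance are my own choices):

```r
# Check whether C satisfies the characterization above:
# entries in [-1,1], unit diagonal, symmetric, positive semidefinite.
is.correlation.matrix <- function(C, tol=1e-10) {
  all(abs(C) <= 1 + tol) &&                       # entries in [-1, 1]
    all(abs(diag(C) - 1) <= tol) &&               # ones on the diagonal
    isSymmetric(C, tol=tol) &&                    # symmetry
    min(eigen(C, symmetric=TRUE)$values) >= -tol  # no negative eigenvalues
}
```

For example, `is.correlation.matrix(diag(3))` is TRUE, while a symmetric matrix with an off-diagonal entry of 2 fails the first condition.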

But what is the right notion of randomness for these matrices?

For example, let’s look at orthogonal matrices. In many numerical applications we need random orthogonal matrices that are uniformly distributed with respect to the Haar measure (see http://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization).
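As an aside, such Haar-distributed orthogonal matrices can be generated via the QR decomposition of a matrix with i.i.d. standard normal entries; the sign correction of the diagonal of $R$ is needed to get exactly the Haar measure (a standard construction, sketched here with my own function name):

```r
# Haar-distributed random orthogonal matrix via QR decomposition
# of a Gaussian matrix; fixing the signs of diag(R) makes the
# distribution exactly uniform with respect to the Haar measure.
random.orthogonal.matrix <- function(n) {
  qr.decomp <- qr(matrix(rnorm(n * n), n, n))
  Q <- qr.Q(qr.decomp)
  Q %*% diag(sign(diag(qr.R(qr.decomp))), n)
}
```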

Unfortunately in our case there is no clear, natural notion of randomness. 🙁

**Method 1 – Try and Error:** We generate a matrix fulfilling no. 1, 2 and 3 of the characterization (these matrices are called pseudo correlation matrices) by generating independent pseudo-random numbers uniformly distributed between -1 and 1 for the entries $c_{i,j}=c_{j,i}$, $1\leq i<j\leq n$, and setting $c_{i,i}=1$.

If this random symmetric matrix is positive semidefinite (i.e. all eigenvalues of $C$ are greater than or equal to zero), then we have the desired result. Otherwise we try again. Here is the corresponding R code:

random.pseudo.correlation.matrix <- function(n) {
  a <- diag(n)
  for(i in 1:(n-1)) {
    for(j in (i+1):n) {
      a[i,j] <- a[j,i] <- runif(1, -1, 1)
    }
  }
  return(a)
}

random.correlation.matrix.try.and.error <- function(n) {
  repeat {
    a <- random.pseudo.correlation.matrix(n)
    if (min(eigen(a)$values) >= 0) return(a)
  }
}

This approach is only reasonable for very small dimensions (try it with $n=6,7,8$).
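To see why, one can estimate how the acceptance probability of the try-and-error approach collapses with the dimension (a self-contained simulation sketch; the helper fills the off-diagonal entries with `upper.tri`/`lower.tri` instead of explicit loops):

```r
# Estimate the acceptance rate of the try-and-error approach:
# the fraction of random pseudo correlation matrices that happen
# to be positive semidefinite.
acceptance.rate <- function(n, trials=2000) {
  ok <- 0
  for (k in 1:trials) {
    a <- diag(n)
    a[upper.tri(a)] <- runif(n * (n - 1) / 2, -1, 1)
    a[lower.tri(a)] <- t(a)[lower.tri(a)]   # mirror to keep a symmetric
    if (min(eigen(a, symmetric=TRUE)$values) >= 0) ok <- ok + 1
  }
  ok / trials
}

sapply(3:6, acceptance.rate)  # the rate drops steeply with n
```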

**Method 2 – Lift the Diagonal:**

We denote by $I$ the identity matrix. If $C$ has the eigenvalues $\lambda_1\leq\ldots\leq\lambda_n$, then $C+a\cdot I$ has the eigenvalues $\lambda_1+a\leq\ldots\leq\lambda_n+a$, since $x$ is a solution of $\det(C-x\cdot I)=0$ if and only if $x+a$ is a solution of $\det(C+a\cdot I-x\cdot I)=\det(C-(x-a)\cdot I)=0$.
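This shift of the spectrum is easy to confirm numerically (a quick sanity check, not part of the original argument):

```r
# Numerical check: the eigenvalues of C + a*I are exactly
# the eigenvalues of C shifted by a (order is preserved).
set.seed(1)
C <- matrix(rnorm(16), 4)
C <- (C + t(C)) / 2              # a random symmetric matrix
a <- 2.5
shifted <- eigen(C + a * diag(4))$values
stopifnot(max(abs(shifted - (eigen(C)$values + a))) < 1e-10)
```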

So we start again with a pseudo correlation matrix $C$, but instead of retrying when $C$ has negative eigenvalues, we lift the diagonal by $|\lambda_1|$ and obtain $C+|\lambda_1|\cdot I$, which is always positive semidefinite. After dividing by $1+|\lambda_1|$ we have a correlation matrix which is “some kind of random”. 😉

Unfortunately the diagonal is accentuated and the smallest eigenvalue is always zero. We could avoid the second problem by adding $|\lambda_1|+b$, where $b$ is some small random number, but the first problem remains.

make.positive.semi.definite <- function(a, offset=0) {
  (a + diag(dim(a)[1]) * (abs(min(eigen(a)$values)) + offset)) /
    (1 + abs(min(eigen(a)$values)) + offset)
}

random.correlation.matrix.lifted.diagonal <- function(n, offset=0) {
  a <- random.pseudo.correlation.matrix(n)
  make.positive.semi.definite(a, offset)
}

**Method 3 – Gramian matrix – my favorite:** Holmes [1] discusses two principal methods for generating random correlation matrices.

One of them is to generate $n$ independent pseudo-random vectors $t_1,\ldots,t_n$ distributed uniformly on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$ and to use the Gram matrix $T^tT$, where $T:=(t_1,\ldots,t_n)$ has $t_i$ as its $i$-th column and $T^t$ is the transpose of $T$.

To create the $t_i$ in R, we load the package mvtnorm, generate $\tau_i\sim\mathcal{N}(0,I)$ and set $t_i:=\tau_i/\|\tau_i\|$:

random.correlation.matrix.gramian <- function(n) {
  require("mvtnorm")
  t <- rmvnorm(n, sigma=diag(n))
  for (i in 1:n) {
    t[i,] <- t[i,] / sqrt(sum(t[i,]^2))
  }
  t %*% t(t)
}

**Conclusion:** There are further methods (e.g. generating a random spectrum and then constructing a correlation matrix with that spectrum), which are not all so easy to implement. But just like the three methods given here, they are all unsatisfactory in some way, because we don’t really know how random correlation matrices should be distributed.

For my colleague, an average of the calculation time only makes sense when he knows which kinds of correlation matrices occur in the applications. He decided to describe and compare the different cases individually.

But does it perhaps make sense to use random correlation matrices as test cases, or are the special cases more important? For example, random correlation matrices generated with methods 1 and 3 are singular only with probability zero.

Any critique, comments, suggestions or questions are welcome!

**And for the next time:** Given a correlation matrix $C$, how do we generate tuples of pseudo-random numbers following a given multivariate distribution with correlation matrix $C$?
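One common route for the multivariate normal case, sketched here without anticipating the next post, is the Cholesky factorization $C=LL^t$: if $Z$ has independent standard normal entries, then $LZ$ has correlation matrix $C$. The function name is my own:

```r
# Sketch: multivariate normal samples with correlation matrix C
# via the Cholesky factor (assumes C is positive definite).
rmvn.cholesky <- function(m, C) {
  L <- t(chol(C))                    # chol() returns the upper factor in R
  Z <- matrix(rnorm(nrow(C) * m), nrow(C), m)
  t(L %*% Z)                         # the m samples as rows
}
```

For a large sample, `cor(rmvn.cholesky(50000, C))` should be close to `C`.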

**Literature:**

[1] R. B. Holmes: *On random correlation matrices.* SIAM J. Matrix Anal. Appl., Vol. 12, No. 2: 239–272.

Jorge Durango Borja: Excuse my English, I am from Colombia. I believe you can help me with this:

We want to study the relationship between the $De$ of the sample and the proportion of components, $h/p$, needed to explain 90% of the total variability. We carried out a simulation study by generating random correlation matrices of dimension $p$ as follows:

1. The eigenvalues of the correlation matrix are drawn from a Beta(a, b) distribution, with a and b chosen from a grid in the interval $(0,3)^2$, obtaining 900 pairs of parameters a, b.

2. The values are normalized so that their sum is $p$. For each fixed value $p$ we generated 900 matrices. This process was performed for $p=40, 80, \ldots, 440$, so that 9900 correlation matrices were generated in total.

For each one of these matrices, we calculate $(h/p)\in[0,1]$ and $De(X)$. We observed that the relation between $h/p$ and $De(X)$ is a sigmoid, but in the interval $De(X)\in[0.1, 0.9]$ we can approximate it by the linear relation

$h/p = 0.8230 - 0.492\,De(X)$

Kornelius Rohmeyer (Post author): Take a look at Holmes or Bendel and Mickey, who propose the following:

Start with a diagonal matrix with the eigenvalues you are looking for:

D <- diag(rbeta(p, a, b))

Then take a random orthogonal matrix $M$ and create $B=MDM^t$. Unfortunately this matrix will be no correlation matrix. Therefore determine a product of elementary orthogonal rotation matrices $P$ such that $PBP^t$ has only ones along the diagonal. This is then a random correlation matrix with the wanted spectrum.
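A rough, self-contained sketch of this construction in R (function name mine; the rotation step is a plausible reconstruction of the idea, not Holmes' exact algorithm):

```r
# Random correlation matrix with a prescribed spectrum:
# form B = M D M' with M a random orthogonal matrix, then apply
# Givens rotations until the diagonal consists of ones. Each
# rotation is a similarity transform, so the spectrum is kept.
random.correlation.matrix.spectrum <- function(lambda) {
  p <- length(lambda)
  lambda <- p * lambda / sum(lambda)       # eigenvalues of a correlation matrix sum to p
  M <- qr.Q(qr(matrix(rnorm(p * p), p)))   # random orthogonal matrix
  B <- M %*% diag(lambda) %*% t(M)
  for (k in 1:(p - 1)) {
    i <- which(diag(B) < 1 - 1e-12)[1]     # diagonal entry to be raised to 1
    j <- which(diag(B) > 1 + 1e-12)[1]     # entry to compensate (trace stays p)
    if (is.na(i) || is.na(j)) break        # diagonal already all ones
    # choose tan(theta) such that the rotated matrix has 1 at (i,i):
    # t^2 (bjj - 1) + 2 t bij + (bii - 1) = 0
    bii <- B[i, i]; bjj <- B[j, j]; bij <- B[i, j]
    t. <- (-bij + sqrt(bij^2 - (bii - 1) * (bjj - 1))) / (bjj - 1)
    cs <- 1 / sqrt(1 + t.^2); sn <- t. * cs
    G <- diag(p)
    G[i, i] <- cs; G[j, j] <- cs; G[i, j] <- sn; G[j, i] <- -sn
    B <- G %*% B %*% t(G)
  }
  B
}
```

With `lambda <- rbeta(p, a, b)` this gives the spectrum-based generator discussed above, after normalizing the eigenvalues to sum to $p$.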

Jorge Durango: Can you help me improve this algorithm I wrote in R?

hp <- function(p) {
  #x <- seq(0.01, 2.99, length=30)
  #malla <- expand.grid(x, x)  ### grid (0,3)^2
  ### parameters of the beta distribution (a_i, b_i)
  alfa1 <- runif(900, 0, 3)
  beta2 <- runif(900, 0, 3)
  malla <- cbind(alfa1, beta2)  ### (0,3)^2
  bet <- function(x) rbeta(p, x[1], x[2])  ### random draws from the Beta distribution
  vp <- apply(malla, 1, bet)  ### eigenvalues
  sumas <- apply(vp, 2, sum)
  vpN <- p * vp / matrix(sumas, ncol=900, nrow=p, byrow=T)  ### normalized eigenvalues
  dete <- apply(vpN, 2, prod)
  De <- 1 - dete^(1/p)
  ### choice of the number of principal components (this line was
  ### garbled in the original comment; a plausible reconstruction)
  f1 <- function(x) (sum(cumsum(sort(x, decreasing=TRUE))/p < 0.7) + 1)/p
  propor <- apply(vpN, 2, f1)
  list("De" = De, "hsp" = propor)
}

hsp <- De <- numeric(0)
for(i in seq(40, 440, 40)) {
  hspi <- hp(i)
  hsp <- c(hsp, hspi$hsp)
  De <- c(De, hspi$De)
}
hsp; De

plot(hsp ~ De, pch=20, ylab="h/p", xlab="De(X)")  ### we observe that the relation between h/p and De is sigmoid

### the sigmoid function: its graph has the typical "S" shape
### often "sigmoid function" refers to the special case of the logistic function

### linear model approximation
AML <- aov(hsp ~ De)
AML$coefficients
summary(AML)

plot(De, hsp, pch=20, ylab="h/p", xlab="De(X)", xlim=c(0.1, 0.9))
abline(lm(hsp ~ De), v=c(0.1, 0.9))