Tag Archives: R

Using JavaGD in Java applications

Goal: Create a Java application with a JFrame that contains a JGDPanel from JavaGD and plot something via JRI.

Solution: The following code is a derivative of JGR code snippets under GPL-2:

import java.awt.Component;
import java.awt.event.WindowEvent;
import java.awt.event.WindowListener;

import javax.swing.JFrame;

import org.rosuda.JRI.Rengine;
import org.rosuda.javaGD.GDInterface;
import org.rosuda.javaGD.JGDBufferedPanel;

public class JavaGDExample extends GDInterface implements WindowListener {

    JFrame f;

    public void gdOpen(double w, double h) {
        if (f != null) gdClose();
        f = new JFrame("JavaGD Example");
        f.addWindowListener(this);
        c = new JGDBufferedPanel(w, h);
        f.getContentPane().add((Component) c);
        f.pack();
        f.setVisible(true);
    }

    public void gdClose() {
        super.gdClose();
        if (f != null) {
            c = null;
            f.removeAll();
            f.dispose();
            f = null;
        }
    }

    public static void main(String[] args) {
        Rengine engine = new Rengine(new String[] {"--vanilla"}, true, null);
        engine.eval(".setenv <- if (exists(\"Sys.setenv\")) Sys.setenv else Sys.putenv");
        // tell JavaGD to use this class as the device implementation
        engine.eval(".setenv(\"JAVAGD_CLASS_NAME\"=\"JavaGDExample\")");
        engine.eval("library(JavaGD)");
        engine.eval("JavaGD()");
        engine.eval("plot(rnorm(100))"); // plot something via JRI (illustrative)
    }

    /** listener response to "Close" - effectively invokes <code>dev.off()</code> on the device */
    public void windowClosing(WindowEvent e) {
        if (c != null) executeDevOff();
    }
    public void windowClosed(WindowEvent e) {}
    public void windowOpened(WindowEvent e) {}
    public void windowIconified(WindowEvent e) {}
    public void windowDeiconified(WindowEvent e) {}
    public void windowActivated(WindowEvent e) {}
    public void windowDeactivated(WindowEvent e) {}
}

Et voilà:


Of course you have to make sure that the JRI native library is in a directory listed in java.library.path, for example on my machine via:

java -Djava.library.path=/usr/local/lib/R/site-library/rJava/jri/ JavaGDExample

Also you have to set the environment variable R_HOME to the correct path, for example (assuming R is installed under /usr/local/lib/R):

R_HOME=/usr/local/lib/R

And of course you need the two jars javaGD.jar and JRIEngine.jar in your CLASSPATH.

Because it is autumn



branchvar <- 1
tree <- function(x, y, branch) {
  wadd <- 0.7
  if (branch > 0) {
    alpha <- atan2(y[2] - y[1], x[2] - x[1])
    len <- sqrt((y[2] - y[1])^2 + (x[2] - x[1])^2) * 0.6
    for (da in c(-wadd, wadd) * runif(2, 1 - branchvar / 2, 1)) {  # illustrative completion: the rest of the listing was lost
      xn <- c(x[2], x[2] + len * cos(alpha + da))
      yn <- c(y[2], y[2] + len * sin(alpha + da))
      lines(xn, yn, lwd = branch)
      tree(xn, yn, branch - 1)  # recurse into two shorter branches
    }
  } else {
    points(x[2], y[2], pch = 16, col = sample(heat_hcl(12), 1))  # autumn leaf
  }
}

You need library colorspace.

Shortcuts in R under Unix from the readline library

Under Unix you can use in R the advanced command-editing and command-history features that the GNU Readline Library provides.

Both Emacs and vi editing modes are available, and the Emacs-like keybindings are installed by default. Here are some Emacs keybindings that also work in R (from the documentation of the GNU Readline Library):

Copy and Paste:
Ctrl-u Cut from the cursor to the beginning of the line.
Ctrl-k Cut from the cursor to the end of the line.
Ctrl-w Cut from the cursor to the start of the word.
Ctrl-y Paste (yank) the most recently cut text.
Alt-y Rotate the kill-ring, and paste the new top. You can only do this if the prior command was Ctrl-y or Alt-y.

Ctrl-a Move to the start of the line.
Ctrl-e Move to the end of the line.
Alt-b Move back one word.
Alt-f Move forward one word.
Ctrl-b Move back one character.
Ctrl-f Move forward one character.

Ctrl-_ Undo the last changes.
Alt-r Undo all changes to the line.

Ctrl-l Clear the screen leaving the current line at the top of the screen.

Ctrl-r Incremental reverse search of history.
Alt-p Non-incremental reverse search of history.

If you want to use the vi-mode just press Ctrl+Alt+j and you can use the usual vi modes and commands (for example take a look at this vi-editing-mode-cheat-sheet).

If you want to start in vi mode by default, put the following line in your ~/.inputrc (which of course will also affect your bash etc.):

set editing-mode vi

Random Correlation Matrices

Some time ago, a colleague of mine who had developed a new method for evaluating the cumulative distribution function of a multivariate normal distribution wanted to compare the speed of his method with that of randomized quasi-Monte Carlo methods. While we were going to lunch, he asked me how to generate random correlation matrices, because the speed of his method depends strongly on the correlation matrix and he wanted to have some sort of average.

But what is a random correlation matrix?

Let’s first give a characterization of correlation matrices.

It is well known that for a matrix $C:=(c_{i,j})_{1\leq i,j\leq n}\in\mathbb{R}^{n\times n}$ there exist (multivariate normally distributed) random variables $X,Y$ with \[\text{Cor}(X,Y):=\left(\frac{\text{Cov}(X_i,Y_j)}{\sqrt{\text{Var}(X_i)\text{Var}(Y_j)}}\right)_{1\leq i,j\leq n}=C\] if and only if

  1. $-1\leq c_{i,j}\leq 1$ for all $i,j\in\{1,\ldots,n\}$,
  2. $c_{i,i}=1$ for all $i\in\{1,\ldots,n\}$,
  3. $C$ is symmetric (therefore all eigenvalues $\lambda_1,\ldots,\lambda_n$ of $C$ are real),
  4. and all eigenvalues of $C$ are greater than or equal to zero.
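For illustration, the four conditions translate directly into code; here is a small sketch in Python with NumPy (the helper name is my own):

```python
import numpy as np

def is_correlation_matrix(C, tol=1e-10):
    """Check the four conditions of the characterization above."""
    C = np.asarray(C, dtype=float)
    return (
        bool(np.all(np.abs(C) <= 1 + tol))              # 1. entries in [-1, 1]
        and np.allclose(np.diag(C), 1.0)                # 2. unit diagonal
        and np.allclose(C, C.T)                         # 3. symmetry
        and np.min(np.linalg.eigvalsh(C)) >= -tol       # 4. positive semidefinite
    )

print(is_correlation_matrix(np.array([[1.0, 0.5], [0.5, 1.0]])))   # True
print(is_correlation_matrix(np.array([[1.0, 0.99], [0.2, 1.0]])))  # False (not symmetric)
```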

But what is the right notion of randomness for these matrices?

For example, let’s look at the orthogonal matrices: in many numerical applications we need random orthogonal matrices distributed uniformly with respect to the Haar measure (see http://en.wikipedia.org/wiki/Orthogonal_matrix#Randomization).

Unfortunately in our case there is no clear, natural notion of randomness. 🙁

Method 1 – Trial and Error: We generate a matrix fulfilling nos. 1, 2 and 3 of the characterization (such matrices are called pseudo correlation matrices) by drawing independent pseudo-random numbers uniformly distributed between -1 and 1 for the entries $c_{i,j}=c_{j,i}$, $1\leq i<j\leq n$, and setting $c_{i,i}=1$.

If this random symmetric matrix is positive semidefinite (i.e. all eigenvalues of $C$ are greater than or equal to zero), then we have the desired result. Otherwise we try again. Here is the corresponding R code:

random.pseudo.correlation.matrix <- function(n) {
  a <- diag(n)
  for (i in 1:(n-1)) {
    for (j in (i+1):n) {
      a[i,j] <- a[j,i] <- runif(1, -1, 1)
    }
  }
  a
}

random.correlation.matrix.try.and.error <- function(n) {
  repeat {
    a <- random.pseudo.correlation.matrix(n)
    if (min(eigen(a)$values) >= 0) return(a)
  }
}

This approach is only reasonable for very small dimensions (try it with $n=6,7,8$).
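The reason becomes visible if you estimate the acceptance probability empirically; here is a quick sketch in Python with NumPy (function names are mine) that mimics the two R functions above:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_pseudo_correlation_matrix(n):
    """Symmetric matrix, unit diagonal, off-diagonal entries uniform in (-1, 1)."""
    a = rng.uniform(-1, 1, size=(n, n))
    a = np.triu(a, 1)
    return a + a.T + np.eye(n)

def acceptance_rate(n, trials=2000):
    """Fraction of pseudo correlation matrices that are positive semidefinite."""
    hits = sum(
        np.min(np.linalg.eigvalsh(random_pseudo_correlation_matrix(n))) >= 0
        for _ in range(trials)
    )
    return hits / trials

# the rate collapses rapidly as the dimension grows
for n in (3, 5, 7):
    print(n, acceptance_rate(n))
```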

Method 2 – Lift the Diagonal:

We denote by $I$ the identity matrix. If $C$ has the eigenvalues $\lambda_1\leq\ldots\leq\lambda_n$ then $C+a\cdot I$ has the eigenvalues $\lambda_1+a\leq\ldots\leq\lambda_n+a$, since $x$ is a solution of $\det(C-x\cdot I)=0$ if and only if $x+a$ is a solution of $\det(C+a\cdot I-x\cdot I)=\det(C-(x-a)\cdot I)=0$.

So we start again with a pseudo correlation matrix $C$, but instead of retrying when $C$ has negative eigenvalues, we lift the diagonal by $|\lambda_1|$ and obtain $C+|\lambda_1|\cdot I$, which is always positive semidefinite. After dividing by $1+|\lambda_1|$ we have a correlation matrix which is “some kind of random”. 😉

Unfortunately the diagonal is accentuated and the smallest eigenvalue is always zero. We could avoid the second problem by adding $|\lambda_1|+b$, where $b$ is some positive random number, but the first problem remains.

make.positive.semi.definite <- function(a, offset = 0) {
  lift <- abs(min(eigen(a)$values)) + offset
  (a + diag(dim(a)[1]) * lift) / (1 + lift)
}

random.correlation.matrix.lift.diagonal <- function(n, offset = 0) {
  a <- random.pseudo.correlation.matrix(n)
  make.positive.semi.definite(a, offset)
}

Method 3 – Gramian matrix – my favorite: Holmes [1] discusses two principal methods for generating random correlation matrices.

One of them is to generate $n$ independent pseudo-random vectors $t_1,\ldots,t_n$ distributed uniformly on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$ and to use the Gram matrix $T^tT$, where $T:=(t_1,\ldots,t_n)$ has $t_i$ as its $i$-th column and $T^t$ is the transpose of $T$.

To create the $t_i$ in R, we load the package mvtnorm, generate $\tau_i\sim\mathcal{N}(0,I)$ and set $t_i:=\tau_i/\|\tau_i\|$:

library(mvtnorm)

random.correlation.matrix.sphere <- function(n) {
  t <- rmvnorm(n, sigma = diag(n))       # rows are tau_i ~ N(0, I)
  for (i in 1:n) {
    t[i,] <- t[i,] / sqrt(sum(t[i,]^2))  # normalize: t_i lies on the unit sphere
  }
  t %*% t(t)                             # Gram matrix of the t_i
}

Conclusion: There are further methods (e.g. generating a random spectrum first and then constructing a correlation matrix from it), which are not all so easy to implement. But like the three methods given above, they are all unsatisfactory in some way, because we don’t really know how random correlation matrices should be distributed.

For my colleague, an average of the calculation time only makes sense when he knows which kinds of correlation matrices occur in the applications. He decided to describe and compare the different cases individually.

But does it perhaps make sense to use random correlation matrices as test cases, or are the special cases more important? For example, random correlation matrices generated with methods 1 and 3 are singular only with probability zero.

Any critique, comments, suggestions or questions are welcome!

And for the next time: given a correlation matrix $C$, how do we generate tuples of pseudo-random numbers following a given multivariate distribution with correlation matrix $C$?


[1] Holmes, R. B. 1991. On random correlation matrices. SIAM J. Matrix Anal. Appl., Vol. 12, No. 2: 239–272.