I provide examples of a duality between the Gaussian symplectic ensemble and the Gaussian orthogonal ensemble.
Showing posts with label random matrix theory.
Monday, December 14, 2015
Saturday, December 12, 2015
Gaussian Symplectic Ensemble
In this post I define the Gaussian Symplectic Ensemble (GSE). Often the GSE is defined using quaternions; here I use only complex numbers. I give precise definitions of the symplectic group, of self-dual matrices, and of the GSE, and present an algorithm to draw random matrices from the GSE.
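One standard quaternion-free construction can be sketched in NumPy (the blog's own code is in Mathematica; this translation and the name `sample_gse` are my own): in the complex representation a self-dual Hermitian matrix has the block form \( \begin{pmatrix} Z & W \\ -\bar{W} & \bar{Z} \end{pmatrix} \) with \( Z \) Hermitian and \( W \) complex antisymmetric, and its spectrum is doubly degenerate (Kramers degeneracy).

```python
import numpy as np

def sample_gse(n, rng):
    """Draw a 2n x 2n complex self-dual Hermitian matrix (GSE).

    Block form M = [[Z, W], [-conj(W), conj(Z)]] with Z Hermitian
    and W complex antisymmetric (W.T == -W).
    """
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Z = (X + X.conj().T) / 2          # Hermitian block
    W = (Y - Y.T) / 2                 # antisymmetric block
    return np.block([[Z, W], [-W.conj(), Z.conj()]])

rng = np.random.default_rng(0)
M = sample_gse(4, rng)
assert np.allclose(M, M.conj().T)     # Hermitian
eigs = np.linalg.eigvalsh(M)
# Kramers degeneracy: the sorted eigenvalues come in identical pairs
assert np.allclose(eigs[0::2], eigs[1::2])
```

The double degeneracy is a quick sanity check that the sampled matrix really is self-dual.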
Thursday, December 3, 2015
A virial theorem in the Gaussian Unitary Ensemble
In the Gaussian Unitary Ensemble of \( n \times n \) matrices, one can calculate the following expectation value
\begin{equation}\label{eq:20151201a}
\mathbb{E} \left[ \sum_{ i \neq j} \dfrac{1}{ \left( \lambda_i - \lambda_j \right)^2} \right] = \dfrac{1}{2} n (n-1)
\end{equation}
with \( \lambda_1 , \ldots, \lambda_n \) the eigenvalues of the random matrix \( H \). I have normalized the GUE such that
\( \mathbb{E} [ H_{ij} H_{kl} ] = \delta_{il} \delta_{jk} \). In this blog post, I check \eqref{eq:20151201a} with a Monte Carlo simulation in Mathematica.
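A sketch of such a Monte Carlo check (in NumPy rather than Mathematica; the sampler and names are my own) draws GUE matrices with the stated normalization and averages the left-hand side of \eqref{eq:20151201a}:

```python
import numpy as np

def sample_gue(n, rng):
    # GUE normalized so that E[H_ij H_kl] = delta_il delta_jk
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (X + X.conj().T) / np.sqrt(2)

def virial_estimate(n, trials, rng):
    total = 0.0
    for _ in range(trials):
        lam = np.linalg.eigvalsh(sample_gue(n, rng))
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)   # exclude the i == j terms
        total += np.sum(1.0 / diff**2)
    return total / trials

rng = np.random.default_rng(1)
n = 4
est = virial_estimate(n, 5000, rng)
print(est, "vs", n * (n - 1) / 2)   # the estimate fluctuates around 6
```

Note that the summand \( 1/(\lambda_i - \lambda_j)^2 \) is heavy-tailed (level repulsion makes small gaps rare, but each one contributes a large term), so the Monte Carlo average converges slowly.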
Friday, November 27, 2015
Illustration of the Dyson Ornstein-Uhlenbeck process
I define the Dyson Ornstein-Uhlenbeck process as
\begin{equation}\label{eq:20151125a}
dX_t = -\alpha X_t dt + H \sqrt{dt}
\end{equation}
with \( \alpha > 0 \) and \( H \) a random matrix from the Gaussian Unitary Ensemble of \( n \times n \) Hermitian matrices.
The eigenvalues \( \lambda_i(t) \) of \( X_t \) then have the following dynamics
\begin{equation}\label{eq:20151125b}
d\lambda_i = -\alpha \lambda_i dt + \sum_{ j \neq i} \frac{1}{\lambda_i - \lambda_j} dt + dB_i
\end{equation}
where \( B_1, \ldots, B_n \) are independent Brownian processes. In this post I illustrate the process \eqref{eq:20151125b} numerically.
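A possible numerical illustration of \eqref{eq:20151125b} is a direct Euler-Maruyama discretization (my own NumPy sketch; the parameters and names are assumptions, not the blog's code):

```python
import numpy as np

def dyson_ou_paths(lam0, alpha, dt, steps, rng):
    """Euler-Maruyama discretization of
    d lambda_i = -alpha*lambda_i dt + sum_{j != i} dt/(lambda_i - lambda_j) + dB_i"""
    lam = np.array(lam0, dtype=float)
    path = [lam.copy()]
    for _ in range(steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)      # exclude the j == i term
        drift = -alpha * lam + np.sum(1.0 / diff, axis=1)
        lam = lam + drift * dt + np.sqrt(dt) * rng.standard_normal(lam.size)
        path.append(lam.copy())
    return np.array(path)

rng = np.random.default_rng(2)
path = dyson_ou_paths(np.arange(5) - 2.0, alpha=1.0, dt=1e-5, steps=2000, rng=rng)
# eigenvalue repulsion keeps the ordering intact along the path
assert np.all(np.diff(path, axis=1) > 0)
```

The time step must be small, since the drift \( 1/(\lambda_i - \lambda_j) \) is stiff whenever two eigenvalues approach each other.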
Wednesday, November 25, 2015
Illustration of Dyson Brownian Motion
The Dyson Brownian motion is defined as
\begin{equation}\label{eq:20151124a}
X_{t + dt} = X_t + H \sqrt{dt}
\end{equation}
with \( H \) a random matrix from the Gaussian Unitary Ensemble of \( n \times n \) Hermitian matrices. It is well known that the dynamics of the eigenvalues \( \lambda_i(t) \) of \( X_t \) are described by the process
\begin{equation}\label{eq:20151124b}
d\lambda_i = \sum_{ j \neq i} \frac{1}{\lambda_i - \lambda_j} dt + dB_i
\end{equation}
where \( B_1, \ldots, B_n \) are independent Brownian processes. In this post I illustrate the process \eqref{eq:20151124b} numerically.
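One way to illustrate \eqref{eq:20151124b} numerically is to evolve the matrix process \eqref{eq:20151124a} directly, with a fresh GUE matrix at each step, and record the eigenvalues (a NumPy sketch; names and parameters are my own):

```python
import numpy as np

def sample_gue(n, rng):
    # GUE with E[H_ij H_kl] = delta_il delta_jk
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (X + X.conj().T) / np.sqrt(2)

def dyson_brownian_eigs(n, dt, steps, rng):
    """Evolve X_{t+dt} = X_t + H*sqrt(dt) and record the spectrum at each step."""
    X = np.zeros((n, n), dtype=complex)
    eigs = []
    for _ in range(steps):
        X = X + sample_gue(n, rng) * np.sqrt(dt)
        eigs.append(np.linalg.eigvalsh(X))
    return np.array(eigs)

rng = np.random.default_rng(3)
eigs = dyson_brownian_eigs(n=5, dt=1e-3, steps=500, rng=rng)
assert eigs.shape == (500, 5)
assert np.all(np.diff(eigs, axis=1) >= 0)   # eigvalsh returns sorted real eigenvalues
```

Plotting the columns of `eigs` against time shows the non-intersecting paths characteristic of Dyson Brownian motion.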
Thursday, November 19, 2015
Proof of a determinantal integration formula
While reading about random matrices I encountered the following formula in a blog post by Terence Tao.
If \( K ( x,y) \) is such that
- \( \int\! dx \ K(x,x) = \alpha \)
- \( \int\! dy \ K(x,y) K(y,z) = K(x,z) \)

then

\begin{equation*}
\int\! dx_n \ \det \left( K(x_i, x_j) \right)_{i,j=1}^{n} = \left( \alpha - n + 1 \right) \det \left( K(x_i, x_j) \right)_{i,j=1}^{n-1}
\end{equation*}
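A simple kernel satisfying both hypotheses is \( K(x,y) = \sum_{k=0}^{m-1} \psi_k(x) \psi_k(y) \) built from the orthonormal Hermite functions, with \( \alpha = m \). A NumPy check on a grid (my own illustration, not from Tao's post):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# grid and trapezoid weights on [-12, 12]
x = np.linspace(-12.0, 12.0, 4001)
w = np.full(x.size, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5

def psi(k, x):
    """k-th Hermite function, orthonormal on the real line."""
    coeff = np.zeros(k + 1)
    coeff[k] = 1.0
    norm = sqrt(2.0**k * factorial(k) * sqrt(pi))
    return hermval(x, coeff) * np.exp(-x**2 / 2) / norm

m = 4
Phi = np.array([psi(k, x) for k in range(m)])   # m x N matrix of function values
K = Phi.T @ Phi                                 # kernel evaluated on the grid

# hypothesis 1: int dx K(x,x) = alpha (= m here)
alpha = np.sum(w * np.diag(K))
# hypothesis 2: int dy K(x,y) K(y,z) = K(x,z)
KK = K @ (w[:, None] * K)
print(alpha, np.max(np.abs(KK - K)))   # close to 4 and close to 0
```

The trapezoid rule is extremely accurate here because the integrands decay to zero well inside the grid.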
Sunday, November 15, 2015
Spectral Density in the Gaussian Unitary Ensemble
In this post I perform numerical experiments on the spectral density in the Gaussian Unitary Ensemble (GUE).
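For large \( n \) the spectral density, with eigenvalues rescaled by \( \sqrt{n} \), approaches the Wigner semicircle \( \rho(x) = \frac{1}{2\pi}\sqrt{4 - x^2} \) on \( [-2, 2] \). A NumPy sketch of such an experiment (my own, assuming the normalization \( \mathbb{E}[H_{ij} H_{kl}] = \delta_{il}\delta_{jk} \) used elsewhere on this blog):

```python
import numpy as np

def sample_gue(n, rng):
    # GUE normalized so that E[H_ij H_kl] = delta_il delta_jk
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (X + X.conj().T) / np.sqrt(2)

rng = np.random.default_rng(4)
n, trials = 100, 200
lam = np.concatenate([np.linalg.eigvalsh(sample_gue(n, rng)) for _ in range(trials)])
scaled = lam / np.sqrt(n)                      # support tends to [-2, 2]

hist, edges = np.histogram(scaled, bins=40, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(4 - centers**2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))       # small deviation from the semicircle
```

Increasing \( n \) shrinks the finite-size deviations near the edges \( \pm 2 \).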
Saturday, October 31, 2015
On the moments of the Gaussian orthogonal ensemble
I am currently reading Mehta's book on random matrices and decided to implement a Mathematica program to calculate expectation values in the Gaussian orthogonal ensemble (GOE). This is the set of symmetric \( n \times n \) matrices \( H \) with probability measure
\begin{equation*}
P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right)
\end{equation*}
My program uses recursion based on Wick's theorem (also called Isserlis' theorem, according to Wikipedia), together with some rules for summing over indices in \( n \) dimensions. I used ideas from Derevianko's program Wick.m.
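A minimal sketch of such a Wick recursion (in Python rather than Mathematica; all names are mine): with the measure above the basic covariance is \( \mathbb{E}[H_{ij} H_{kl}] = \tfrac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) \), and moments like \( \mathbb{E}[\mathrm{tr}\, H^p] \) follow by pairing the factors and summing over index chains.

```python
import itertools

def cov(a, b):
    # GOE with weight exp(-tr(H^2)/2): E[H_ij H_kl] = (d_ik d_jl + d_il d_jk)/2
    (i, j), (k, l) = a, b
    return 0.5 * ((i == k) * (j == l) + (i == l) * (j == k))

def wick(factors):
    # Isserlis/Wick: pair the first factor with each remaining one and recurse
    if not factors:
        return 1.0
    if len(factors) % 2:
        return 0.0
    first, rest = factors[0], factors[1:]
    return sum(cov(first, rest[m]) * wick(rest[:m] + rest[m + 1:])
               for m in range(len(rest)))

def moment_tr_power(n, p):
    # E[tr H^p] = sum over cyclic index chains H_{i1 i2} H_{i2 i3} ... H_{ip i1}
    total = 0.0
    for idx in itertools.product(range(n), repeat=p):
        chain = [(idx[a], idx[(a + 1) % p]) for a in range(p)]
        total += wick(chain)
    return total

print(moment_tr_power(3, 2))   # n(n+1)/2 = 6
print(moment_tr_power(3, 4))   # fourth moment E[tr H^4]
```

This brute-force sum over indices is exponential in \( p \), so it only serves as a check for small matrices and low moments.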
Friday, October 30, 2015
Some expectation values in the Gaussian orthogonal ensemble
I calculate some expectation values if the probability measure is given by
\begin{equation*}
P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right)
\end{equation*}
Here \( H \) is a symmetric \( n \times n \) matrix and \( \mathcal{N} \) is the normalization factor. This is a special case of the Gaussian orthogonal ensemble (GOE).
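The most basic such expectation value is the covariance, which for this measure is \( \mathbb{E}[H_{ij} H_{kl}] = \tfrac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) \). A quick NumPy sanity check (my own, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 3, 200_000

# sample from the measure exp(-tr(H^2)/2):
# diagonal entries have variance 1, off-diagonal entries variance 1/2
A = rng.standard_normal((trials, n, n))
H = (A + np.transpose(A, (0, 2, 1))) / 2

# E[H_ij H_kl] = (d_ik d_jl + d_il d_jk)/2
print(np.mean(H[:, 0, 0] ** 2))          # close to 1
print(np.mean(H[:, 0, 1] ** 2))          # close to 1/2
print(np.mean(H[:, 0, 1] * H[:, 0, 2]))  # close to 0
```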
Sunday, October 25, 2015
Invariance of the Gaussian orthogonal ensemble
On page 17 of his book, Mehta proves the following result about the ensemble of symmetric \( n \times n \) matrices \( H \):
- If the ensemble is invariant under every transformation \( H \mapsto R H R^T \) with \( R \) an orthogonal matrix
- and if all components \( H_{ij}, i \le j \) are independent
then the probability measure is of the Gaussian form \eqref{eq:20151025e}. I prove here the converse, namely, that the probability measure \eqref{eq:20151025e} is invariant under transformations \( H \mapsto R H R^T \).
Gaussian orthogonal ensemble
I am currently reading Mehta's book on random matrices (the first edition because it is thinner than the third). I plan to write some blog posts while studying this book.
In chapter 2, Mehta defines the Gaussian orthogonal ensemble. This is the set of symmetric \( n \times n \) matrices \( H \) with probability density
\begin{equation}\label{eq:20151025a}
\prod_{ i \le j} dH_{ij} \ \exp \left( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr} H + c \right)
\end{equation}
with \( a, b \) and \( c \) constants. One can calculate that this density is invariant under the transformations
\begin{equation}\label{eq:20151025b}
H \mapsto R H R^T
\end{equation}
with \( R \) an orthogonal matrix.
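This invariance can also be checked numerically: the density \eqref{eq:20151025a} depends on \( H \) only through \( \mathrm{tr}(H^2) \) and \( \mathrm{tr}\, H \), both of which are unchanged under \( H \mapsto R H R^T \). A small NumPy sketch (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                                  # a symmetric matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthogonal R
Hp = Q @ H @ Q.T

assert np.allclose(Hp, Hp.T)                            # still symmetric
assert np.isclose(np.trace(Hp), np.trace(H))            # tr H invariant
assert np.isclose(np.trace(Hp @ Hp), np.trace(H @ H))   # tr H^2 invariant
```

So the exponent \( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr}\, H + c \) is invariant under conjugation by any orthogonal matrix.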
This is completely equivalent to the vector case, where the probability density is
\begin{equation}\label{eq:20151025c}
\prod_{ i} dx_i \ \exp \left( -a\ \sum_i x^2_i + c \right)
\end{equation}
This density is invariant under rotations
\begin{equation}\label{eq:20151025d}
x \mapsto R x
\end{equation}
One can see that \eqref{eq:20151025d} is the vector representation of the orthogonal group and \eqref{eq:20151025b} is the representation on symmetric matrices. Because symmetric matrices do not form an irreducible representation of the orthogonal group (one can subtract the trace), I wonder at this point whether one also studies something like a ''Gaussian orthogonal ensemble of traceless symmetric matrices''.