I am currently reading Mehta's book on random matrices and decided to implement a Mathematica program to calculate expectation values in the Gaussian orthogonal ensemble (GOE). This is the set of symmetric \( n \times n \) matrices \( H \) with probability measure
\begin{equation*}
P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right)
\end{equation*}
My program uses recursion based on Wick's theorem (also called Isserlis' theorem, according to Wikipedia), together with some rules for summing over indices in \( n \) dimensions. I used ideas from Derevianko's program Wick.m.
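The program itself is in Mathematica; as an illustration of the underlying recursion, here is a small Python sketch (the names and structure are mine, not Derevianko's). A moment of matrix entries is reduced by Wick's theorem to a sum over pairings of second moments, and trace moments are then obtained by summing over explicit indices:

```python
from math import prod

def pairings(entries):
    """Yield all perfect pairings of a list of matrix-entry index pairs."""
    if not entries:
        yield []
        return
    first, rest = entries[0], entries[1:]
    for i in range(len(rest)):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def cov(p, q):
    """Second moment <H_ab H_cd> = (delta_ac delta_bd + delta_ad delta_bc)/2
    for the measure exp(-tr(H^2)/2) on symmetric matrices."""
    (a, b), (c, d) = p, q
    return ((a == c) * (b == d) + (a == d) * (b == c)) / 2

def expectation(entries):
    """<H_{a1 b1} ... H_{ak bk}> via Wick's (Isserlis') theorem:
    a sum over all pairings of products of second moments."""
    if len(entries) % 2:
        return 0.0  # odd moments vanish for a centered Gaussian
    return sum(prod(cov(p, q) for p, q in pairing)
               for pairing in pairings(entries))

def expect_tr_H2(n):
    """<tr(H^2)> obtained by summing the Wick result over explicit indices."""
    return sum(expectation([(i, j), (j, i)])
               for i in range(n) for j in range(n))
```

For instance, `expect_tr_H2(n)` reproduces \( \langle \mathrm{tr}(H^2) \rangle = n(n+1)/2 \), and the same `expectation` routine handles higher moments such as \( \langle \mathrm{tr}(H^4) \rangle \) by listing four index pairs.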
Saturday, October 31, 2015
Friday, October 30, 2015
Some expectation values in the Gaussian orthogonal ensemble
I calculate some expectation values when the probability measure is given by
\begin{equation*}
P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right)
\end{equation*}
Here \( H \) is a symmetric \( n \times n \) matrix and \( \mathcal{N} \) is the normalization factor. This is a special case of the Gaussian orthogonal ensemble (GOE).
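As a first example (a check of my own, using only the measure above): splitting \( \mathrm{tr}(H^2) = \sum_i H_{ii}^2 + 2 \sum_{i < j} H_{ij}^2 \) shows that the diagonal entries are Gaussian with variance \( 1 \) and the off-diagonal entries with variance \( 1/2 \), which can be written compactly as
\begin{equation*}
\langle H_{ij} H_{kl} \rangle = \frac{1}{2} \left( \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} \right)
\end{equation*}
All higher expectation values then follow from Wick's theorem.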
Sunday, October 25, 2015
Invariance of the Gaussian orthogonal ensemble
On page 17 of his book, Mehta proves the following result about the ensemble of symmetric \( n \times n \) matrices \( H \):
- If the ensemble is invariant under every transformation \( H \mapsto R H R^T \) with \( R \) an orthogonal matrix
- and if all components \( H_{ij}, i \le j \) are independent,
- then the probability density is necessarily of the form \( \exp\left( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr}\, H + c \right) \) with \( a, b \) and \( c \) constants.
I prove here the converse, namely, the probability measure \eqref{eq:20151025e} is invariant under transformations \( H \mapsto R H R^T \).
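Both ingredients of this invariance can be checked numerically: the exponent is preserved because \( \mathrm{tr}\left( (RHR^T)^2 \right) = \mathrm{tr}(H^2) \), and the flat measure \( \prod_{i \le j} dH_{ij} \) is preserved because the linear map \( H \mapsto RHR^T \) has Jacobian determinant of absolute value \( 1 \) on the space of symmetric matrices. A small Python sketch of this check (my own, using NumPy; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# random orthogonal matrix from the QR decomposition of a Gaussian matrix
R, _ = np.linalg.qr(rng.standard_normal((n, n)))

# basis of symmetric matrices matching the coordinates H_ii and H_ij (i < j)
basis = []
for i in range(n):
    E = np.zeros((n, n)); E[i, i] = 1.0
    basis.append(E)
for i in range(n):
    for j in range(i + 1, n):
        E = np.zeros((n, n)); E[i, j] = E[j, i] = 1.0
        basis.append(E)

def coords(S):
    """Coordinates (H_ii, then H_ij for i < j) of a symmetric matrix."""
    return np.array([S[i, i] for i in range(n)]
                    + [S[i, j] for i in range(n) for j in range(i + 1, n)])

# matrix of the linear map H -> R H R^T in these coordinates
M = np.column_stack([coords(R @ E @ R.T) for E in basis])

jac = abs(np.linalg.det(M))  # should be 1: the flat measure is preserved

H = rng.standard_normal((n, n)); H = H + H.T
t1 = np.trace(H @ H)                          # tr(H^2) before the map
t2 = np.trace((R @ H @ R.T) @ (R @ H @ R.T))  # and after: equal by cyclicity
```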
Gaussian orthogonal ensemble
I am currently reading Mehta's book on random matrices (the first edition because it is thinner than the third). I plan to write some blog posts while studying this book.
In chapter 2, Mehta defines the Gaussian orthogonal ensemble. This is the set of symmetric \( n \times n \) matrices \( H \) with probability density
\begin{equation}\label{eq:20151025a}
\prod_{ i \le j} dH_{ij} \ \exp \left( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr} H + c \right)
\end{equation}
with \( a, b \) and \( c \) constants. One can check that this density is invariant under transformations
\begin{equation}\label{eq:20151025b}
H \mapsto R H R^T
\end{equation}
with \( R \) an orthogonal matrix.
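Explicitly, the invariance of the exponent in \eqref{eq:20151025a} follows in one line from \( R^T R = 1 \) and the cyclic property of the trace:
\begin{equation*}
\mathrm{tr}\left( (R H R^T)^2 \right) = \mathrm{tr}\left( R H R^T R H R^T \right) = \mathrm{tr}\left( R H^2 R^T \right) = \mathrm{tr}(H^2), \qquad \mathrm{tr}\left( R H R^T \right) = \mathrm{tr}(H)
\end{equation*}
The measure \( \prod_{i \le j} dH_{ij} \) is invariant as well, since \( H \mapsto R H R^T \) is a linear map with Jacobian determinant of absolute value \( 1 \).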
This is completely analogous to the vector case, where the probability density is
\begin{equation}\label{eq:20151025c}
\prod_{ i} dx_i \ \exp \left( -a\ \sum_i x^2_i + c \right)
\end{equation}
This density is invariant under rotations
\begin{equation}\label{eq:20151025d}
x \mapsto R x
\end{equation}
One can see that \eqref{eq:20151025d} is the vector representation of the orthogonal group and \eqref{eq:20151025b} is its representation on symmetric matrices. Because the symmetric matrices do not form an irreducible representation of the orthogonal group (one can split off the trace), I wonder at this point whether one also studies something like a ``Gaussian orthogonal ensemble on traceless symmetric matrices''.
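Concretely, the reduction comes from splitting a symmetric matrix into its traceless part and its trace part,
\begin{equation*}
H = \left( H - \frac{\mathrm{tr}\, H}{n}\, I \right) + \frac{\mathrm{tr}\, H}{n}\, I
\end{equation*}
Since \( R I R^T = I \), both parts transform separately under \( H \mapsto R H R^T \), and the traceless part stays traceless.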
Thursday, October 8, 2015
Sylvester, On Arithmetical Series, 1892
In Chebyshev's Mémoire, which is a famous paper in the history of number theory, Chebyshev obtains bounds on the function
\begin{equation*}
\psi(x) = \sum_{p^{\alpha} \le x} \log p
\end{equation*}
where the sum is over prime powers. Sylvester improved Chebyshev's bounds in his paper On Arithmetical Series by applying two strategies:
- Whereas Chebyshev analyzed the expression
\begin{equation*}
T(x) - T\left( \frac{x}{2} \right) - T\left( \frac{x}{3} \right) - T\left( \frac{x}{5} \right) + T\left( \frac{x}{30} \right)
\end{equation*}
where \( T(x) = \sum_{ 1 \le n \le x} \log n \), Sylvester analyzed more complex expressions, for example
\begin{equation*}
T(x) - T\left( \frac{x}{2} \right) - T\left( \frac{x}{3} \right) - T\left( \frac{x}{5} \right) + T\left( \frac{x}{6} \right) - T\left( \frac{x}{7} \right) + T\left( \frac{x}{70} \right) - T\left( \frac{x}{210} \right)
\end{equation*}
- Sylvester also applied an iterative method, so that he could strengthen the bounds he obtained in a recursive manner.
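The mechanism behind Chebyshev's bounds can be illustrated numerically. Using the identity \( T(x) = \sum_{n \le x} \psi(x/n) \), Chebyshev's combination expands into an alternating sum \( \sum_k c_k\, \psi(x/k) \) with \( c_1 = 1 \) and the nonzero \( c_k \) alternating in sign, which yields the sandwich \( \psi(x) - \psi(x/6) \le S(x) \le \psi(x) \). A Python sketch (function names are my own; an illustration of the mechanism, not Sylvester's method):

```python
from math import log, floor

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def psi(x):
    """Chebyshev's psi(x): sum of log p over prime powers p^a <= x."""
    m = floor(x)
    total = 0.0
    for p in primes_up_to(m):
        pk = p
        while pk <= m:
            total += log(p)
            pk *= p
    return total

def T(x):
    """T(x) = sum of log n for 1 <= n <= x."""
    return sum(log(k) for k in range(1, floor(x) + 1))

def chebyshev_combination(x):
    """S(x) = T(x) - T(x/2) - T(x/3) - T(x/5) + T(x/30)."""
    return T(x) - T(x / 2) - T(x / 3) - T(x / 5) + T(x / 30)
```

Running this for a moderate value such as \( x = 1000 \) confirms both the divisor identity for \( T \) and the sandwich inequality numerically.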