Gaussian Graphical Models and Inference

Today we are going to take a deep look at how to perform inference in systems where we suspect the data is jointly Gaussian.

Why do we care so much about Gaussian distributions, rather than any other continuous distribution? The normal distribution shows up everywhere: white noise in econometrics, log-returns of stock prices, noisy observations in control systems. Gaussian random variables also have several nice properties. One reason is computational: the sum of two independent Gaussians is also a Gaussian, conditioning and marginalizing a Gaussian yields a Gaussian, and so on. Another is the Central Limit Theorem, which suggests that the Gaussian is a sort of “attractor”: averages of many independent random variables tend toward a Gaussian.

The Intuition Behind the Multivariate Gaussian
Most people have a clear intuition about the standard normal distribution. If we have X \sim \mathcal{N}(\mu, \sigma^2), then random draws of X will on average have value \mu, and \sigma^2 gives us a sense of how spread-out the distribution is. The probability density function is, as everyone knows

p(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp \left\{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right\}

What does it mean to have a multivariate Gaussian? First, we can easily imagine having an n-dimensional vector \textbf{X} = [X_1, \ldots X_n]^T, where each X_i is drawn from its own Gaussian independently, i.e. X_i \sim \mathcal{N}(\mu_i, \sigma^2_i). Because each entry is independent, it’s clear that the probability density function for such a vector would be

p(X ; \mu,\sigma^2) = \prod_{i=1}^n p(x_i;\mu_i,\sigma_i^2) = \frac{1}{(2\pi)^{n/2} (\prod_{i=1}^n \sigma_i^2)^{1/2}} \exp \left\{-\frac{1}{2}\sum_{i=1}^n \frac{(x_i-\mu_i)^2}{\sigma_i^2}\right\}

We can simplify this by using matrix notation. In particular, let’s define

\Lambda=\begin{pmatrix}\sigma_1^2&&&\\&\sigma_2^2&&\\&&\ddots&\\&&&\sigma_n^2\end{pmatrix}

Then we can simplify the pdf for the multivariate Gaussian to be

p(\textbf{X} ; \mu,\Lambda) = \frac{1}{(2\pi)^{n/2}|\Lambda|^{1/2}} \exp \left\{-\frac{1}{2} (x-\mu)^T \Lambda^{-1} (x-\mu) \right\}

What if the elements of an n-dimensional vector are linear combinations of independent Gaussians? That is, what if the vector \textbf{Y} is such that

\textbf{Y}_i = A_{i1}X_1 + A_{i2}X_2 + \ldots + A_{in}X_n

where the X_1, \ldots, X_n are drawn from independent univariate Gaussians. As the above notation hints, let’s say that we have an invertible n \times n matrix \textbf{A} such that

\textbf{Y} = \textbf{A}\textbf{X}

What is p_Y(Y)? Well, this is just a change of variables. From calculus we know that if Y=f(X) and we know the pdf p_X(x), then the pdf of Y is given by p_Y(y) = p_X(f^{-1}(y)) \left|\det \frac{\partial x}{\partial y}\right|.

We note that \mathbf{A} already encodes the information that \Lambda does. To make computation cleaner, let’s assume that X_i \sim \mathcal{N}(\mu_i, 1) (in other words, that \Lambda = I is the identity matrix). Now we have

p_Y(Y; \mu) = \frac{1}{(2\pi)^{n/2}|I|^{1/2}\,|\det A| } \exp \left\{-\frac{1}{2} (A^{-1}Y-\mu)^T I^{-1} (A^{-1}Y-\mu) \right\}

=\frac{1}{(2\pi)^{n/2}|AA^T|^{1/2}} \exp \left\{-\frac{1}{2} (Y-A\mu)^T (AA^T)^{-1} (Y-A\mu) \right\}

What do we see here? By taking linear combinations of n independent Gaussian random variables X_i to obtain \textbf{Y}, we have a new Gaussian parameterized by mean \mu' = \mathbf{A\mu} and covariance matrix \Lambda' = \mathbf{AA^T}.

This is a subtle point that took me a while to understand: first, that covariance is a measure of linear dependence; second, that a multivariate Gaussian can be interpreted as a linear composition of independent Gaussians.
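To make this concrete, here is a minimal numpy sketch (the mixing matrix A and the means below are arbitrary illustrative choices). We draw many samples of X with identity covariance, mix them with A, and check that the empirical mean and covariance of Y match A\mu and AA^T.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
mu = np.array([1.0, -2.0, 0.5])           # means of the independent X_i ~ N(mu_i, 1)
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.5, -1.0, 3.0]])           # an arbitrary invertible mixing matrix

# Draw many samples of X with identity covariance, then mix them: Y = A X
X = mu + rng.standard_normal((100_000, n))
Y = X @ A.T

print("empirical mean ", Y.mean(axis=0))          # ~ A @ mu
print("predicted mean ", A @ mu)
print("empirical cov\n", np.cov(Y, rowvar=False)) # ~ A @ A.T
print("predicted cov\n", A @ A.T)
```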

Two Different Forms

We have now seen one form of the multivariate Gaussian. This is the \textbf{covariance form}:

p(X; \mu, \Lambda) = \frac{1}{(2\pi)^{n/2}|\Lambda|^{1/2}} \exp \left\{-\frac{1}{2} (X-\mu)^T \Lambda^{-1} (X-\mu) \right\}

We can do some algebra to obtain another form:

p(X; \mu, \Lambda) \propto \exp \left\{-\frac{1}{2} (X-\mu)^T \Lambda^{-1} (X-\mu) \right\}
\propto \exp \left\{-\frac{1}{2}X^T\Lambda^{-1}X +\frac{1}{2}X^T\Lambda^{-1}\mu +\frac{1}{2}\mu^T\Lambda^{-1}X \right\}
= \exp \left\{-\frac{1}{2}X^T\Lambda^{-1}X + (\mu^T\Lambda^{-1})X \right\}

Now define \mathbf{J} = \Lambda^{-1} and \mathbf{h} = J\mu. Then we can re-write the above as

p(X; \mu, \Lambda) \propto \exp \left\{-\frac{1}{2}X^T\mathbf{J}X + \mathbf{h}^TX \right\}

This is called the information form of a Gaussian. The parameter \mathbf{J} is known as the information, while \mathbf{h} is known as the potential. We will refer to a Gaussian parameterized in this form as being \mathcal{N}^{-1}(\mathbf{h}, \mathbf{J}).

Why do we need to switch forms for Gaussians? It turns out that each form has advantages and disadvantages for different tasks.

The covariance form is:

  • Easy to recover the mean and covariance. This comes directly from the parameterization.
  • Easy to marginalize. We can simply read off the relevant blocks: p(X_1)\sim \mathcal{N}(\mu_1, \Lambda_{11}).
  • Hard to perform conditioning. We can show that X_1 | X_2 = x_2 \sim \mathcal{N}(\mu_1 + \Lambda_{12}\Lambda_{22}^{-1}(x_2-\mu_2),\ \Lambda_{11}-\Lambda_{12}\Lambda_{22}^{-1}\Lambda_{21}).

On the other hand, the information form is:

  • Easy to perform conditioning. Indeed X_1 | X_2 = x_2 \sim \mathcal{N}^{-1}(h_1-J_{12}x_2,\ J_{11}). Intuitively, conditioning “shifts” the potential by the cross-terms between X_1 and X_2.
  • Hard to marginalize. We have X_1 \sim \mathcal{N}^{-1}(h_1-J_{12}J_{22}^{-1}h_2,\ J_{11}-J_{12}J_{22}^{-1}J_{21}), which requires inverting a block of J.
  • Easy to understand in terms of an undirected graphical model. We will see this shortly.

How do we convert between the covariance and information forms? We note that J = \Lambda^{-1} and h = J\mu. So if we knew the covariance form, we could go to the information form by performing a matrix inversion, and vice versa, but this seems expensive. Thankfully, Schur complements and the Matrix Inversion Lemma give clean closed forms for the block inverses that show up in these inference problems, which makes the conversion manageable.

Being able to convert between the two forms is clearly desirable. Conditioning, which is very important for inference, is quite easy in the information form. But if we ever want an estimate, then we want to know the mean, so we need to change to the covariance form. Techniques such as Kalman filtering and Gaussian belief propagation do exactly this.
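As a sanity check, here is a small numpy sketch (the particular \mu and \Lambda below are arbitrary positive-definite choices). It converts a covariance-form Gaussian to information form, conditions on the last coordinate in each form using the block formulas from the lists above, and verifies that the two answers agree.

```python
import numpy as np

# A small covariance-form Gaussian (arbitrary values; Lambda is positive definite).
mu = np.array([0.0, 1.0, -1.0])
Lam = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.5, 0.4],
                [0.3, 0.4, 1.0]])

# Covariance form -> information form.
J = np.linalg.inv(Lam)
h = J @ mu

# Condition on the last coordinate taking a fixed value, say x2 = 2.0.
idx1, idx2 = [0, 1], [2]
x2 = np.array([2.0])

# Information form: just pick out blocks of (h, J) -- no inversion needed.
J11 = J[np.ix_(idx1, idx1)]
J12 = J[np.ix_(idx1, idx2)]
h_cond = h[idx1] - J12 @ x2            # potential of x1 | x2
J_cond = J11                           # information of x1 | x2

# Covariance form: needs the Schur-complement formulas.
L11 = Lam[np.ix_(idx1, idx1)]
L12 = Lam[np.ix_(idx1, idx2)]
L22 = Lam[np.ix_(idx2, idx2)]
mu_cond = mu[idx1] + L12 @ np.linalg.solve(L22, x2 - mu[idx2])
Lam_cond = L11 - L12 @ np.linalg.solve(L22, L12.T)

# The answers agree once we convert back: mean = J_cond^{-1} h_cond, cov = J_cond^{-1}.
print(np.allclose(np.linalg.solve(J_cond, h_cond), mu_cond))   # True
print(np.allclose(np.linalg.inv(J_cond), Lam_cond))            # True
```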

Gaussian Graphical Models

It turns out that in the information form, a Gaussian random vector has a very natural interpretation as an undirected graphical model: the information matrix \textbf{J} encodes a minimal undirected I-map for \textbf{X}. We can see this quite naturally if we consider that

P(X) \propto \prod_{i \in V} \phi_i(X_i) \prod_{\{i,j\} \in E} \psi_{ij}(X_i, X_j)

and let

\phi_i(X_i) = \exp \left\{ -\frac{1}{2} X_i^2 J_{ii} + h_i X_i \right\}

\psi_{ij}(X_i, X_j) = \exp \left\{ -X_iX_jJ_{ij} \right\}

Thus, the information form very naturally yields pairwise clique potentials. We also have the following theorem about the pairwise conditional independencies.

Theorem 1: For X \sim \mathcal{N}^{-1}(h, J), X_i \perp X_j \mid X_{\text{rest}} if and only if J_{ij} = 0.

Proof (sketch): From the factorization above, the only term coupling X_i and X_j, once we condition on all the remaining variables, is the pairwise potential \psi_{ij}(X_i, X_j) = \exp\{-X_iX_jJ_{ij}\}. If J_{ij}=0 this potential is constant, so the conditional density factorizes into a function of X_i times a function of X_j, giving X_i \perp X_j \mid X_{\text{rest}}; conversely, if J_{ij} \neq 0 the cross-term prevents any such factorization.

Thus, the information matrix encodes the conditional independence structure as well.
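To see the sparsity pattern concretely, here is a small sketch (the numbers are arbitrary; J just needs to be symmetric positive definite). For a 5-node chain X_1 - X_2 - \cdots - X_5, the information matrix J is tridiagonal, reflecting that each variable is conditionally independent of everything but its neighbors, while the covariance \Lambda = J^{-1} is dense.

```python
import numpy as np

# Information matrix of a 5-node chain: J_ij is nonzero only for neighbors,
# so the graph structure can be read off J directly.
n = 5
J = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.8), 1)
     + np.diag(np.full(n - 1, -0.8), -1))

Lam = np.linalg.inv(J)   # covariance matrix

np.set_printoptions(precision=3, suppress=True)
print("J (sparse -- encodes the chain):\n", J)
print("Lambda (dense -- every pair is marginally correlated):\n", Lam)
```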

Application: Gauss Markov Process

Suppose we have a time series X_1, \ldots X_n where X_1 \sim \mathcal{N}(\mu_1, \Lambda_1) and

X_{i+1} = AX_i + u_i

and u_i \sim \mathcal{N}(0, \Lambda_u). This is a linear dynamical system where the next point is a linear transformation of the previous point, plus some Gaussian noise. Now what if we don’t observe X_i, but instead observe

Y_i = CX_i + v_i

where v_i \sim \mathcal{N}(0, \Lambda_v). This is the \emph{hidden Gauss Markov process}, which is at the heart of several financial time series models and other applications.

Can we recover the hidden X_i given the Y_i? This is a direct application of Kalman filtering, which we shall see later.
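To make the setup concrete, here is a minimal simulation of a scalar hidden Gauss-Markov process (the values of a, c, and the noise variances below are arbitrary illustrative choices). Only y is observed; recovering x from y is exactly the filtering problem just mentioned.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar hidden Gauss-Markov process:
#   x_{i+1} = a * x_i + u_i,   u_i ~ N(0, lam_u)
#   y_i     = c * x_i + v_i,   v_i ~ N(0, lam_v)
a, c = 0.95, 1.0
lam_u, lam_v = 0.1, 0.5
T = 200

x = np.empty(T)
y = np.empty(T)
x[0] = rng.normal(0.0, 1.0)          # x_1 ~ N(mu_1, Lambda_1) with mu_1 = 0, Lambda_1 = 1
for i in range(T):
    y[i] = c * x[i] + rng.normal(0.0, np.sqrt(lam_v))
    if i + 1 < T:
        x[i + 1] = a * x[i] + rng.normal(0.0, np.sqrt(lam_u))

# x is hidden; only y is observed.  Recovering x from y is the Kalman
# filtering/smoothing problem discussed in a later post.
print("first few hidden states :", np.round(x[:5], 3))
print("first few observations  :", np.round(y[:5], 3))
```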

Conclusion

The multivariate Gaussian is a commonly used way to model continuous distributions. We showed how the covariance matrix naturally arises from the construction of a Gaussian vector as a linear transformation of independent Gaussian random variables. We then introduced the information form of the multivariate Gaussian and showed how it makes conditioning easy and has an intuitive representation as an undirected graphical model. Finally, we introduced how the multivariate Gaussian can be used to model a linear dynamical system. In the future we will discuss Kalman filtering as a way to perform inference for such systems.

Introduction to Spectral Graph Theory

Today we will give an introduction to the ideas of spectral graph theory. In this era of big data, we often deal with graphs with millions of edges and vertices. Traditional graph algorithms often run too slowly on such graphs, but we have many powerful linear algebraic tools (such as eigenvalue/eigenvector computation). Thus, the main idea of spectral graph theory is to view a graph as a linear operator (i.e. a matrix) and use our understanding of linear algebra to study its properties. In short, the eigenvalues and eigenvectors of matrices associated with a graph reveal combinatorial properties of the graph.

We limit our attention to undirected, simple, loopless graphs. We can describe a graph as a triple (V, E, W), where V is the set of vertices, E is the collection of edges, and W gives a positive weight for each edge. In practice, a weight can model the strength of the interaction/similarity between two vertices. We define the adjacency matrix A_G of a graph (V,E,W) to be the n \times n square matrix with entries (A_G)_{ij} = \begin{cases} w_{ij} &\mbox{if } \{i,j\} \in E \\ 0 &\mbox{otherwise} \end{cases}. Note that A_G is a real symmetric matrix. Symmetry is useful because it allows us to consider the eigenvalue decomposition of the matrix: by the Spectral Theorem, a real symmetric matrix M has an orthonormal basis of eigenvectors, which allows us to write M = \sum_{i=1}^n \lambda_i v_i v_i^T.

We won’t study the adjacency matrix directly. Although it is the most natural matrix to associate with a graph, eigenvalues and eigenvectors give the most insight when they are used to understand a natural operator or quadratic form. Thus, the random walk matrix W_G = A_G D_G^{-1}, where D_G is the diagonal matrix of (weighted) vertex degrees d_i = \sum_j w_{ij}, is of greater interest (as we shall see, it has both such properties). We can intuitively understand the random walk matrix as describing a discrete-time process in which, at each time step, the mass at a vertex is distributed to its neighbors (in proportion to the edge weights). We are interested in studying the random walk matrix because it allows us to answer the following questions:

  • Does the marginal probability distribution of being at any of the vertices converge as the amount of time goes to infinity?
  • If so, how quickly does this happen?
  • How long does it take to reach a vertex j from vertex i by walking at random?

Unfortunately W_G is not symmetric, unless G is regular (in which case D_G just scales A_G). We can still reason about the spectral properties of W_G. Namely, the Perron-Frobenius Theorem guarantees that W_G has an eigenvalue \lambda_{pf} such that \lambda_{pf} \ge 0 and all the remaining eigenvalues \lambda of W_G have magnitude |\lambda| \le \lambda_{pf}. Furthermore, the eigenvector \pi associated with \lambda_{pf} is nonnegative and non-zero. We can actually deduce that \lambda_{pf} = 1.

Theorem: For a random walk matrix W_G we have \lambda_{pf} = 1.

Proof: From the equality W_G \pi = \lambda_{pf} \pi, note that \mathbf{1}^T W_G \pi = \lambda_{pf} (\mathbf{1}^T \pi) = \lambda_{pf}\sum_{i=1}^n \pi_i. Furthermore, \mathbf{1}^T W_G = \mathbf{1}^T because the columns of the random walk matrix sum to 1, so \mathbf{1}^T W_G \pi = \sum_{i=1}^n \pi_i. Thus it follows that \lambda_{pf} = 1, as long as \sum_{i=1}^n \pi_i \neq 0, which is always the case since \pi is nonnegative and non-zero. We can also guess the eigenvector \pi: indeed \pi_i = d_i, where d_i is the degree of vertex i. To prove this, note that W_G \pi = (A_G D_G^{-1})(D_G \mathbf{1}) = A_G \mathbf{1} = D_G \mathbf{1} = \pi.
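A small numerical check of the above (the graph, a 4-cycle with one chord, is an arbitrary example): the largest eigenvalue of W_G is 1, and the degree vector is a corresponding eigenvector.

```python
import numpy as np

# A small undirected graph: the 4-cycle 0-1-2-3-0 plus the chord 1-3 (unit weights).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
d = A.sum(axis=0)                  # degrees
W = A @ np.diag(1.0 / d)           # random walk matrix W_G = A_G D_G^{-1} (columns sum to 1)

eigvals = np.linalg.eigvals(W)
print("eigenvalues:", np.round(np.sort(eigvals.real), 3))   # largest is 1
print("W @ d == d ?", np.allclose(W @ d, d))                # degrees give the Perron eigenvector
```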

This is a start, but we would like to say something more about the other eigenvalues/eigenvectors of W_G. Although it is not a symmetric matrix, we can perform a simple change of basis to make it symmetric. In particular, D_G^{-1/2} W_G D_G^{1/2} is the same as D_G^{-1/2} (A_G D_G^{-1}) D_G^{1/2} = D_G^{-1/2} A_G D_G^{-1/2}, which is symmetric. Because this matrix is symmetric, it has an eigendecomposition \sum_{i=1}^n \lambda_i v_iv_i^T, and the vectors y_i = D_G^{1/2}v_i are eigenvectors of W_G with the same respective eigenvalues. However, these eigenvectors of W_G are not orthonormal; rather, we note that v_j^Tv_i = y_j^T D_G^{-1} y_i, so they are orthonormal with respect to the inner product defined by D_G^{-1}.

We’ve noted that the eigenvalues of D_G^{-1/2}A_G D_G^{-1/2} must have absolute value bounded by 1. With this in mind, it is often more convenient to consider the following matrix instead: L = I - D_G^{-1/2}A_G D_G^{-1/2}. We call this the normalized Laplacian, and it is extremely important in practice. Note that the eigenvalues of the normalized Laplacian are \nu_i = 1-\lambda_i for each eigenvalue \lambda_i of the random walk matrix, so we now have 0 \le \nu_i \le 2. Often we also consider the Laplacian matrix, which is just D_G-A_G = D_G^{1/2} L D_G^{1/2}. This is the normalized Laplacian conjugated by D_G^{1/2} (so no square roots appear in its entries) and is often easier to work with; its spectrum carries the same information, since the generalized eigenvalue problem (D_G-A_G)x = \nu D_G x has exactly the eigenvalues \nu_i, with eigenvectors related by x = D_G^{-1/2}v.

As one final definition, let us define the Rayleigh quotient of a vector x with respect to an n\times n matrix M to be \frac{x^TMx}{x^Tx}. One can show that the maximum Rayleigh quotient over all x equals the maximum eigenvalue of M (attained when x is a corresponding eigenvector), and similarly the minimum Rayleigh quotient equals the minimum eigenvalue. There are many useful bounds on the eigenvalues that we can obtain by considering the Rayleigh quotient of the Laplacian. We note that the Rayleigh quotient of the normalized Laplacian is given by \frac{v^T L v}{v^T v} = \frac{x^T (D_G-A_G) x}{x^TD_G x}, where x = D_G^{-1/2}v.
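Continuing with the same toy graph as above, the sketch below computes the normalized Laplacian, confirms that its eigenvalues lie in [0,2] with smallest eigenvalue 0, and checks the quadratic-form identity x^T (D_G - A_G) x = \sum_{i\sim j} w_{ij}(x_i-x_j)^2 (which also appears in the list of properties below).

```python
import numpy as np

# Same small graph as before: the 4-cycle with a chord, unit weights.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
d = A.sum(axis=0)
D_inv_sqrt = np.diag(d ** -0.5)

L_norm = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt     # normalized Laplacian
L = np.diag(d) - A                                   # Laplacian D_G - A_G

nu = np.sort(np.linalg.eigvalsh(L_norm))
print("normalized Laplacian eigenvalues:", np.round(nu, 3))   # all lie in [0, 2], smallest is 0

# Quadratic form check: x^T L x equals the sum of (x_i - x_j)^2 over edges.
x = np.array([0.3, -1.0, 2.0, 0.5])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(x @ L @ x, sum((x[i] - x[j]) ** 2 for i, j in edges))   # the two numbers match
```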

There are many interesting results that we can prove about the random walk matrix, its Laplacian, and the respective eigenvalues, and so here is a quick outline of some results. In a later post we will return to some of them in more detail.

Properties of the Laplacian L_G = D_G - A_G:

  • The constant vector \textbf{1} = (1, \ldots, 1) is an eigenvector of L_G with eigenvalue 0 for every graph G (this follows from A_G\textbf{1} = D_G\textbf{1}, as we showed above). Thus \nu_1 = 0 is the smallest eigenvalue of L_G.
  • We can interpret the ith entry of L_G v (the action of the Laplacian on a vector v) as (L_G v)_i = d_i \cdot (v_i - \text{the weighted average of } v \text{ over the neighbors of } i).
  • The quadratic form of the Laplacian satisfies x^T L_G x = \sum_{i \sim j} w_{ij}(x_i - x_j)^2. Intuitively, this measures the “smoothness” of x (remember we can view x as a real-valued function on the vertices); the quadratic form is small if x does not jump too much across any edge.

Properties of Eigenvectors/Eigenvalues for Specific Graphs

  • If G, H are two graphs whose Laplacian eigenvalues are \lambda_1, \ldots, \lambda_n for G and \mu_1, \ldots, \mu_m for H, then the product graph G\times H has eigenvalues \eta_{ij} = \lambda_i + \mu_j, with eigenvectors constructed as products of eigenvectors of G and H (the eigenvector for \eta_{ij} takes the value v_i(a)\cdot w_j(b) at the vertex (a,b)).

General Properties of Eigenvectors/Eigenvalues

  • We can view a vector as a map X:V\to \mathbb{R} from vertices to real numbers. Thus an eigenvector assigns a real number to every vertex in G. 
  • Note that by convention we refer to the eigenvalues of the adjacency matrix as \mu_1 \ge \mu_2 \ge \ldots \ge \mu_n but the eigenvalues of the Laplacian as \nu_1 \le \nu_2 \le \ldots \le \nu_n.
  • Recall that the greatest eigenvalue of the random walk matrix is 1; since W_G = A_G D_G^{-1} = \frac{1}{d}A_G when G is d-regular, we can see that in that case \mu_1 = d.
  • The second smallest eigenvalue of the Laplacian \nu_2 can be related to how connected the graph is; it is related to dividing the graph into two pieces without cutting too many edges. We can see this via Cheeger’s Inequality, which states that the conductance \theta_G of a graph satisfies \nu_2 / 2 \le \theta_G \le \sqrt{2\nu_2} (see the sketch just after this list).
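As promised, here is a small sketch comparing \nu_2 for a very well-connected graph and a barely-connected one (both graphs are arbitrary examples): the complete graph on 10 vertices has a large \nu_2, while the path graph, which can be disconnected by cutting a single edge, has a tiny one.

```python
import numpy as np

def nu2(A):
    """Second-smallest eigenvalue of the normalized Laplacian of adjacency matrix A."""
    d = A.sum(axis=0)
    D_inv_sqrt = np.diag(d ** -0.5)
    L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L_norm))[1]

n = 10
complete = np.ones((n, n)) - np.eye(n)                              # very well connected
path = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)     # barely connected

print("nu_2 (complete graph):", round(nu2(complete), 3))   # large: hard to cut in two
print("nu_2 (path graph):    ", round(nu2(path), 3))       # small: one edge separates it
```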

Non-Bayesian Parameter Estimation and Cramer-Rao Bound

In a previous post we discussed Bayesian Parameter Estimation, where we have a latent model parameter we would like to estimate, and we assume a prior distribution over possible model parameters. However, sometimes it is difficult or unnatural to assign a prior distribution to the latent variable of interest. Thus, a lot of theory has been developed for parameter estimation where the latent model parameter is not a random variable, but rather deterministic and unknown.

Let the hidden parameter of interest be x and suppose we have received observations \textbf{y} about x. We are interested in creating an estimator \hat{x} that minimizes the mean squared error:

\hat{x}(\cdot) = \text{argmin}_{f(\cdot)} E[(x-f(\textbf{y}))^2]

Since we have absolutely no knowledge about the true value of x, we would like our estimator \hat{x} to not depend on the parameters being estimated. We call such an estimator valid.

In general, finding a valid estimator that minimizes mean-square error is not always possible. Thus, we often put restrictions on the class of candidate estimators. We will restrict our attention to estimators that are unbiased (recall this means that E[\hat{x}(\textbf{y})] = x for every value of x). It turns out that for unbiased estimators, minimizing mean-square error is equivalent to minimizing the variance of the estimator.

Lemma 1: For an unbiased estimator \hat{x}, the mean-square error is equivalent to the variance of \hat{x}.

Proof of Lemma 1: Consider the scalar case for simplicity. The mean-square error is E[(\hat{x}-x)^2]. Since x is deterministic and the expectation is taken only over y, unbiasedness gives E[\hat{x}] = x, so E[(\hat{x}-x)^2] = E[(\hat{x}-E[\hat{x}])^2], which is exactly the variance of the estimator; we denote this common value by \lambda_x.

Thus, we focus our attention on estimators that are both valid (not depending on the parameter we are trying to estimate) and unbiased; we call such estimators admissible. Since in this case minimizing the mean-square error is equivalent to minimizing the variance of the estimator, we are now looking for minimum-variance unbiased (MVU) estimators.

The MVU estimator \hat{x}_{MVU}(\cdot) does not always exist, and even when it exists, it might be hard to find. There is no general way to find MVU estimators; however, there exist bounds on the performance of MVU estimators. The bound that we will now discuss is known as the Cramer-Rao bound. It gives a lower bound on the variance of any admissible (unbiased and valid) estimator. We present the scalar version of the bound.

Theorem 1 (Cramer-Rao bound): For any admissible estimator \hat{x}(\cdot), provided that p_y(y; x) satisfies the regularity condition:

E\left[\frac{\partial}{\partial x} \ln p_y(y;x)\right] =0,\quad \forall x

then we have the lower bound

\lambda_x \ge \frac{1}{J_y(x)}

where J_y(x) is the Fisher information in y about x:

J_y(x) = E\left[\left(\frac{\partial}{\partial x} \ln p_y(y;x)\right)^2\right]

Proof of Theorem 1 (sketch): Consider the correlation between e(y) = \hat{x}(y) - x and f(y) = \frac{\partial}{\partial x} \ln p_y(y;x). Using the regularity condition and the fact that \hat{x} is unbiased, one can show that Cov(e(y), f(y)) = 1. The Cauchy-Schwarz inequality [Cov(e(y), f(y))]^2 \le Var(e(y))\,Var(f(y)) then gives 1 \le \lambda_x J_y(x), which rearranges to the Cramer-Rao bound.

We note that the Fisher information cannot be computed for all problems, and the derivation also relies on the regularity condition on p_y(y;x). Also, although any estimator that satisfies the Cramer-Rao bound with equality is an MVU estimator, it is possible that no estimator meets the Cramer-Rao bound for all, or even any, x.

We call unbiased estimators that satisfy the Cramer-Rao bound with equality efficient. It can be shown that an estimator \hat{x} is efficient if and only if \hat{x}(y) = x + \frac{1}{J_y(x)} \frac{\partial}{\partial x} \ln p_y(y; x). Thus, if an efficient estimator exists, it is in fact unique. Furthermore, we can show the following:

Claim 1: If an efficient estimator \hat{x}_{eff}(\cdot) exists, it is the maximum likelihood estimator: that is  \hat{x}_{eff}(\cdot) = \text{argmax}_{x} p_y (y;x).

Although an efficient estimator, when it exists, is the ML estimator, if an efficient estimator does not exist then the ML estimator need not have any special properties.
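As a concrete example (a standard textbook case; the specific numbers below are arbitrary): when estimating the mean x of \mathcal{N}(x, \sigma^2) from n i.i.d. observations, the Fisher information is J_y(x) = n/\sigma^2, and the sample mean, which is also the ML estimate, is unbiased with variance exactly \sigma^2/n, so it meets the Cramer-Rao bound. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate the (deterministic, unknown) mean x of N(x, sigma^2) from n i.i.d. observations.
# Fisher information: J_y(x) = n / sigma^2, so any unbiased estimator has variance
# at least sigma^2 / n.  The sample mean achieves it.
x_true, sigma, n = 3.0, 2.0, 50
trials = 20_000

estimates = np.array([rng.normal(x_true, sigma, n).mean() for _ in range(trials)])

print("empirical variance of sample mean:", round(estimates.var(), 4))
print("Cramer-Rao lower bound sigma^2/n :", sigma**2 / n)   # essentially equal
```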

Median Finding Using Random Sampling

Today we will describe a randomized median-finding algorithm that, in expectation, uses fewer comparisons than any deterministic selection algorithm. The well-known “Median of Medians” algorithm uses O(n) comparisons in the worst case, but its constant factor leaves much to be desired. It turns out that any deterministic selection algorithm must use at least (2 + \epsilon)n comparisons. We are going to use the idea of random sampling to do better in expectation.

The motivation is simple: if we take a random sample of elements from the input, the median of the sample is “close” (in the ordinal sense) to the true median of the input. So we might hope to only carefully consider the elements close to the sample median.

Our algorithm will choose s samples with replacement (this will make our analysis easier). If s is not too big, then we can sort the samples, find their median, and then place “fences” slightly before and after the sampled median. The fences (which are items selected from the sample) will be constructed in such a way that, with high probability, the true median lies between them. If we compare all the items to at least one of the fence posts, then we can filter out all elements outside the fences. Finally, if there are few enough elements inside the fences, we can just sort them deterministically, and since we will know how many elements fall before and after the fences, we will know the rank of the true median among the elements that remain.

To do this we have to decide (a) how many elements s to sample and (b) how far away should the fence posts be. Both these choices have tradeoffs. Namely:

  1. We want s to be small, but large enough to be representative of all the elements.
  2. Similarly, we want the fence posts to be close together, but far enough apart that they will contain the median with high probability.

We will show that both (1) and (2) are satisfied when s=4n^{3/4}.

Lemma 1: The fences a and b, taken to be the elements at positions s/2-\sqrt{n} and s/2+\sqrt{n} of the sorted sample, will contain the median with high probability.

Proof of Lemma 1: First let x be the random variable counting how many of the s samples precede the true median. By linearity of expectation, E[x] = \frac{1}{2}s, and \sigma_x^2 = s(\frac{1}{2})(1-\frac{1}{2}) = \frac{1}{4}s \Rightarrow \sigma_x = \frac{1}{2}\sqrt{s}. We will now argue that the probability that the true median is not contained in [a,b] is small; this happens if the true median is less than a or greater than b. Note that a is greater than the true median if and only if fewer than \frac{s}{2}-\sqrt{n} of the samples precede the true median. Thus it suffices to show that Pr[x < \frac{s}{2}-\sqrt{n}] is small, and similarly for the right fence post b. By Chebyshev’s inequality (using s = 4n^{3/4}, so that \sqrt{n} = n^{1/8}\sigma_x),

Pr[|x-E[x]| \ge n^{1/8}\sigma_{x}] = Pr[|x-\frac{s}{2}| \ge \sqrt{n}] \le \frac{1}{n^{1/4}}

Thus, the probability that the true median lies outside the fences is at most \frac{1}{n^{1/4}}, which satisfies the with-high-probability condition.

Lemma 2: The number of elements within the fences is at most 4n^{3/4} with high probability.

Proof of Lemma 2: The analysis is similar to that of Lemma 1. In this case, we hope that a is greater than the element of rank \frac{n}{2}-2n^{3/4} (in the true ordering) and that b is less than the element of rank \frac{n}{2}+2n^{3/4}; if so, at most 4n^{3/4} elements lie between the fences. A similar Chebyshev argument shows that the probability of either bad event (a falling below the rank-(\frac{n}{2}-2n^{3/4}) element, or b falling above the rank-(\frac{n}{2}+2n^{3/4}) element) is O(\frac{1}{n^{1/4}}).

Thus, with high probability, the true median is within the fences and there are only O(n^{3/4}) elements inside them; we can sort these elements in \Theta(n^{3/4} \log n) time, which is negligible. The majority of the time is spent comparing all the items to the 2 fence posts, which seems to take 2n comparisons. However, if we see that an element e satisfies e < a (or e > b), then we do not have to bother comparing e with the other fence post. Thus, about half of the elements will only need 1 comparison instead of 2. Putting this together, we obtain an expected \frac{3}{2}n + o(n) comparisons for our algorithm!
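Here is one possible implementation of the scheme described above (a sketch rather than an optimized version: it follows the fence-post idea but falls back to fully sorting the input in the low-probability failure cases, and its exact constants differ slightly from the analysis).

```python
import math
import random

def randomized_median(items):
    """Median via random sampling, following the fence-post scheme sketched above.
    Falls back to sorting if one of the low-probability bad events happens."""
    n = len(items)
    s = min(n, int(4 * n ** 0.75))
    sample = sorted(random.choices(items, k=s))        # s samples WITH replacement

    # Fence posts around the sample median.
    lo = max(0, s // 2 - int(math.sqrt(n)))
    hi = min(s - 1, s // 2 + int(math.sqrt(n)))
    a, b = sample[lo], sample[hi]

    middle = [x for x in items if a <= x <= b]         # keep only elements between the fences
    num_below = sum(1 for x in items if x < a)

    k = (n - 1) // 2                                   # index of the (lower) median
    if not (num_below <= k < num_below + len(middle)) or len(middle) > 4 * n ** 0.75 + 2:
        return sorted(items)[k]                        # rare failure: just sort everything
    middle.sort()                                      # cheap: only ~n^{3/4} elements
    return middle[k - num_below]

data = [random.random() for _ in range(100_001)]
assert randomized_median(data) == sorted(data)[(len(data) - 1) // 2]
```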

First Post and Bayesian Parameter Estimation

It has been quite a long time since I last blogged (previously at eshyu.wordpress.com). I feel like I learn better by writing, so my goal is to write a blog post every day about at least one cool thing that I learned. I am learning so many things these days, and hopefully this will solidify my understanding of them!

I am quite sleepy right now, so this first post will be brief.

This first post is about Bayesian parameter estimation. Suppose there is some underlying model parameter \textbf{x} of which we only receive a noisy observation \textbf{y}. Given \textbf{y}, we would like to update our estimate of \textbf{x}. In the Bayesian framework, we begin with an a priori distribution p_{x}(\cdot) over the value of \textbf{x}. We also have an observation model which specifies the way our noisy observation \textbf{y} gives us information about \textbf{x}. This usually comes in the form of the distribution p_{y|x}(\cdot \mid \cdot), the conditional probability distribution of \textbf{y} given \textbf{x}.

Using Bayes’ Rule, we can compute our posterior belief about \textbf{x} as

p_{x|y}(x \mid y) = \frac{p_{y|x}(y \mid x)\,p_{x}(x)}{\int p_{y|x}(y \mid x')\,p_{x}(x')\,dx'}

However, we would like to take this a step further and make our best estimate of the value of \textbf{x}. But what does “best” mean? In the Bayesian framework, we specify a deterministic cost function C(a, \hat{a}), which is the cost of estimating the vector a as \hat{a}. With this, “best” means minimizing our average cost over all possible true parameter values \textbf{x} and observations \textbf{y}. We will now give three examples of cost criteria.

1. Minimum absolute-error (MAE): C(a,\hat{a}) = |a-\hat{a}|.

2. Minimum uniform cost (MUC): C(a,\hat{a}) = \begin{cases} 1 \quad |a-\hat{a}| > \epsilon \\ 0 \quad \text{otherwise} \end{cases}.

3. Mean square error (MSE): C(a,\hat{a}) = ||a-\hat{a}||^2. We often call the resulting estimator the Bayes Least-Squares (BLS) estimator. This one has the most interesting properties, which we shall elaborate on soon.

It turns out that the Bayes-optimal estimators for all three of these cost functions are simple, well-known characteristics of the posterior distribution. In particular, we can show the following (a small numerical sketch follows the list):

  1. The MAE-minimizing estimate is the median of the posterior p_{x|y}(x|y).
  2. The MUC-minimizing estimate (as \epsilon \to 0) is the mode of the posterior p_{x|y}(x|y), i.e. the maximum a posteriori (MAP) estimate.
  3. The BLS estimate is the mean of the posterior p_{x|y}(x|y).
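As promised, here is a small numerical sketch (the prior \mathcal{N}(1,4), the noise model y \mid x \sim \mathcal{N}(x,1), and the observed value y=3.2 are all arbitrary choices). We compute the posterior on a grid via Bayes’ Rule and read off its mean, median, and mode; for this Gaussian prior and Gaussian noise the three estimates coincide, while for a skewed posterior they would differ.

```python
import numpy as np

# Posterior for a Gaussian prior and Gaussian observation noise, evaluated on a grid.
grid = np.linspace(-10.0, 10.0, 20_001)
dx = grid[1] - grid[0]

prior = np.exp(-0.5 * (grid - 1.0) ** 2 / 4.0)        # p_x(x): N(1, 4), unnormalized
y = 3.2
likelihood = np.exp(-0.5 * (y - grid) ** 2 / 1.0)     # p_{y|x}(y | x): N(x, 1)

posterior = prior * likelihood                        # Bayes' Rule (numerator)
posterior /= posterior.sum() * dx                     # normalize so it integrates to 1

mean = (grid * posterior).sum() * dx                  # BLS estimate
cdf = np.cumsum(posterior) * dx
median = grid[np.searchsorted(cdf, 0.5)]              # MAE estimate
mode = grid[np.argmax(posterior)]                     # MAP estimate

# All three are ~ (4 * 3.2 + 1 * 1.0) / 5 = 2.76, since the posterior is Gaussian here.
print(round(mean, 3), round(median, 3), round(mode, 3))
```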