Information Criterion

This post will be my summary of the Akaike Information Criterion (AIC) and the Takeuchi Information Criterion (TIC). In particular, derivations of AIC and TIC are shown. If I come to understand more about the Generalized Information Criterion, I will cover it too.
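For reference, the familiar form that the AIC derivation arrives at is

\text{AIC} = -2\ln\hat{L} + 2k,

where \hat{L} is the maximized likelihood and k is the number of free parameters. TIC generalizes the penalty 2k to a trace of estimated information matrices, which reduces to 2k when the model is correctly specified.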

Continue reading

Handbook of Markov Chain Monte Carlo

Today I received my copy of “Handbook of Markov Chain Monte Carlo”. Up to now I have consulted “Monte Carlo Strategies in Scientific Computing” by Jun S. Liu. In this post I will give my first thoughts on the new book.

Liu's book is indeed awesome. The author himself has made many original contributions to sampling algorithms, and many of his ideas are explained in the book. The chapters on the Gibbs Sampler (chapter 6), General Conditional Sampling (chapter 7), and multi-chain MCMC (chapters 10 and 11) are excellent. But given that the book was written 10 years ago, many recent developments are missing, and some algorithms were not given enough space (in particular, Reversible Jump MCMC gets only 2 pages!).

Continue reading

Formulating the PCA

Today I have been thinking about how one can formulate the Principal Component Analysis (PCA) method. In particular, I want to reformulate PCA as the solution of a regression problem. The idea of reformulating PCA as the solution of a regression problem is useful in Sparse PCA, in which an L_1 regularization term is inserted into a ridge regression formula to enforce sparseness of the coefficients (i.e. the elastic net). There are at least two equivalent ways to motivate PCA. In this post I will first give a formulation of PCA based on orthogonal projection, and then discuss a regression-type reformulation of PCA.
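A minimal numerical sketch of the regression view (on synthetic data, with an arbitrary ridge penalty `lam` chosen for illustration): if we ridge-regress the first principal-component scores back on the centered data, the coefficient vector is the leading eigenvector of X^T X shrunk by a scalar factor, so renormalizing it recovers the first principal direction up to sign.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)  # center the data

# PCA via SVD: rows of Vt are the principal directions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0]  # first principal direction

# Regression view: ridge-regress the first PC scores y = X v1 on X.
# Since v1 is an eigenvector of X^T X, beta = d^2 / (d^2 + lam) * v1,
# i.e. proportional to v1 (the ridge only shrinks its length).
y = X @ v1
lam = 10.0
beta = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
beta /= np.linalg.norm(beta)  # renormalize to undo the shrinkage

print(np.allclose(abs(beta @ v1), 1.0))  # prints True: same direction up to sign
```

Sparse PCA then adds an L_1 term to this ridge objective so that the recovered direction has exactly zero loadings on some coordinates.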

Continue reading

RJMCMC in clustering

Slides from a 30-minute presentation. There are some mistakes in the slides.

Continue reading

Nonparametric Bayesian Seminar 1 : Notes

(The main purpose is just to write things down so I don't forget, so this will be messy. No figures.)

Paper: Introduction to Nonparametric Bayesian Models (Naonori Ueda, Takeshi Yamada)

Continue reading

Flows: the psychology of optimal experience

Continue reading

Ising model

Boltzmann distribution: \pi(\mathbf{x}) = \frac{1}{Z}e^{-U(\mathbf{x})/kT}
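A minimal sketch of sampling this distribution for the 2D Ising model with a Metropolis chain, assuming the standard nearest-neighbour energy U(\mathbf{x}) = -J \sum_{\langle i,j \rangle} x_i x_j with J = k = 1 and periodic boundaries (the lattice size and temperature below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16
spins = rng.choice([-1, 1], size=(L, L))  # random +/-1 initial configuration
beta = 1.0 / 2.27  # 1/kT, near the critical temperature T_c ~ 2.27

def sweep(spins, beta, rng):
    """One Metropolis sweep: L*L single-spin-flip proposals."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i,j): dU = 2 * s_ij * (sum of neighbours)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dU = 2.0 * spins[i, j] * nb
        # Accept with probability min(1, exp(-beta * dU)),
        # which leaves pi(x) = exp(-beta * U(x)) / Z invariant
        if dU <= 0 or rng.random() < np.exp(-beta * dU):
            spins[i, j] *= -1
    return spins

for _ in range(100):
    sweep(spins, beta, rng)
print(abs(spins.mean()))  # |magnetization| per spin after 100 sweeps
```

Single-spin flips mix slowly near T_c; cluster algorithms such as Swendsen–Wang (covered in MCMC texts like Liu's) are the usual remedy.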

Continue reading
