Sampling via Iterative Gaussianization

Speaker
Sifan Liu
We propose an algorithm that iteratively transforms a target distribution, specified by an unnormalized density, into a standard Gaussian. At each iteration, the target is rotated to make the coordinates as independent as possible, after which a mean-field variational inference step is applied to bring each marginal closer to a standard Gaussian. The effectiveness of each iteration depends critically on the choice of rotation. We show that a principled choice arises from the principal components of a covariance matrix formed from the relative score function of the target, leading to a natural PCA-type procedure. The resulting sequence of transformations progressively maps the target to a Gaussian, while the inverse transformation can be used to generate samples from the target. We establish convergence guarantees for Gaussian targets and demonstrate the effectiveness of the algorithm through numerical experiments on posterior sampling tasks. Compared to applying mean-field variational inference along the standard coordinate axes, using the proposed principal component axes yields substantial accuracy gains with negligible computational overhead. Compared to conventional normalizing flows, our approach achieves comparable flexibility with far fewer parameters and lower training cost.
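The abstract does not spell out the exact updates, so the following is a minimal numerical sketch of the idea under several stated assumptions: the relative score is taken to be the score of the target relative to a standard Gaussian, i.e. grad log p(x) + x; the rotation is the eigenbasis of the empirical second-moment matrix of that relative score; and the mean-field variational inference step is replaced by a simple per-coordinate affine standardization. The names target_score, push_score, gaussianize, and sample_target are hypothetical, and the toy correlated Gaussian stands in for a general unnormalized density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a zero-mean correlated Gaussian, so its score is available in closed
# form and exact samples can be drawn to estimate the relative-score covariance.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
Prec = np.linalg.inv(Sigma)

def target_score(x):
    """Rows of x are points; returns the score d/dx log p(x) = -x @ Prec for this target."""
    return -x @ Prec

def push_score(score_fn, U, mu, sigma):
    """Score of the pushed-forward density after the affine step y' = ((y @ U) - mu) / sigma."""
    def new_score(yp):
        y = (yp * sigma + mu) @ U.T          # invert the affine step
        return (score_fn(y) @ U) * sigma     # chain rule for the affine change of variables
    return new_score

def gaussianize(x, score_fn, n_iters=5):
    """Iteratively push samples x (and the density behind them) toward a standard Gaussian."""
    transforms = []
    for _ in range(n_iters):
        r = score_fn(x) + x                  # relative score w.r.t. N(0, I)  (assumed definition)
        C = r.T @ r / len(x)                 # empirical second-moment matrix of the relative score
        _, U = np.linalg.eigh(C)             # its principal axes define the rotation
        x = x @ U                            # rotate coordinates
        mu, sigma = x.mean(axis=0), x.std(axis=0)
        x = (x - mu) / sigma                 # per-coordinate affine stand-in for mean-field VI
        transforms.append((U, mu, sigma))
        score_fn = push_score(score_fn, U, mu, sigma)
    return x, transforms

def sample_target(n, transforms):
    """Generate approximate target samples by inverting the learned transforms on N(0, I) draws."""
    z = rng.standard_normal((n, len(transforms[0][1])))
    for U, mu, sigma in reversed(transforms):
        z = (z * sigma + mu) @ U.T
    return z

# Fit on exact samples of the toy target (in practice these would come from the
# current approximation rather than the target), then sample via the inverse map.
x0 = rng.multivariate_normal(np.zeros(2), Sigma, size=5000)
xg, transforms = gaussianize(x0, target_score)
samples = sample_target(5000, transforms)
print(np.cov(samples.T))                     # should be close to Sigma
```

For this Gaussian toy target the first rotation should align with the principal axes of the target covariance, so a single iteration essentially Gaussianizes it, and the printed covariance of the inverse-mapped draws should be close to Sigma; a genuine mean-field variational step, rather than the affine stand-in used here, would be needed for non-Gaussian marginals.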
Categories
Lecture/Talk, Panel/Seminar/Colloquium