
Working with Matthew Stephens and Nicholas Polson, I’m exploring various ideas in statistical shrinkage, selection, and sparsity, especially in a Bayesian framework.

The problem is whether we can find a penalty $\phi$ such that $\hat\mu_B$, the optimal Bayesian estimator under a given loss, is the solution to a regularized least squares problem with $\phi$ as the penalty. This framework of matching Tweedie’s formula to a proximal operator can potentially be generalized from normal means to exponential family likelihoods, with the specific formula changed accordingly.
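For the normal means case, the matching problem can be sketched as follows. This is my own minimal write-up of the idea, assuming the model $z \mid \mu \sim N(\mu, \sigma^2)$; the exact loss and scaling conventions are assumptions, not taken from the source:

```latex
% Tweedie's formula: the posterior mean in terms of the marginal density m(z)
\hat\mu_B(z) = \mathbb{E}[\mu \mid z] = z + \sigma^2 \frac{d}{dz} \log m(z).

% Proximal operator of a penalty \phi, scaled by \sigma^2
\operatorname{prox}_{\sigma^2 \phi}(z)
  = \arg\min_{u} \left\{ \tfrac{1}{2}(z - u)^2 + \sigma^2 \phi(u) \right\}.

% The matching problem: find \phi such that, for every observation z,
\operatorname{prox}_{\sigma^2 \phi}(z) = \hat\mu_B(z).
```

Under this framing, choosing the prior determines $\hat\mu_B$, and the question is which penalties $\phi$ make the proximal map reproduce it.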

$\ell_0$-regularized linear regression is NP-hard, yet under high SNR and high collinearity the single best replacement (SBR) algorithm, developed in the signal processing community, compares favorably with $\ell_1$ methods like the lasso and elastic net, $\ell_q,\ q \in (0,1)$ methods like BayesBridge, and the gold-standard spike-and-slab MCMC.
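As a rough illustration of the SBR idea (a minimal sketch of the greedy scheme, not the reference implementation): at each pass, evaluate every single flip of one coordinate into or out of the current support, and take the flip that most decreases the $\ell_0$-penalized least squares objective, stopping when no flip improves it.

```python
import numpy as np

def sbr(X, y, lam, max_iter=100):
    """Greedy single best replacement sketch for
    min_b 0.5*||y - X b||^2 + lam * ||b||_0, searching over supports."""
    n, p = X.shape

    def objective(S):
        # Least squares restricted to support S, plus the l0 penalty.
        if not S.any():
            return 0.5 * y @ y
        b, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        r = y - X[:, S] @ b
        return 0.5 * r @ r + lam * S.sum()

    support = np.zeros(p, dtype=bool)
    best = objective(support)
    for _ in range(max_iter):
        # Try flipping each coordinate in or out; keep the single best flip.
        vals = []
        for j in range(p):
            trial = support.copy()
            trial[j] = not trial[j]
            vals.append(objective(trial))
        j_best = int(np.argmin(vals))
        if vals[j_best] >= best - 1e-12:
            break  # no single replacement improves the objective
        support[j_best] = not support[j_best]
        best = vals[j_best]

    b = np.zeros(p)
    if support.any():
        b[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return b, support
```

A small usage example: with a strong sparse signal, the greedy flips first add the true variables and then stall, since adding a noise variable reduces the residual sum of squares by less than the penalty `lam`.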


This R Markdown site was created with workflowr