Center for Nonlinear Studies
Wednesday, March 31, 2010
3:00 PM - 4:30 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Iterative bias reduction for multivariate smoothers

Eric Matzner-Løber
Université Rennes

We present a general procedure for nonparametric multivariate regression smoothing that outperforms existing procedures such as MARS, additive models, projection pursuit, and $L_2$ additive boosting on both real and simulated datasets. In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters, which leads to a biased smoother. We still propose to use a classical nonparametric linear smoother, such as thin plate splines or kernel smoothers, but instead of focusing on optimally selecting the smoothing parameter, we fix it at some reasonably large value to ensure over-smoothing of the data. The resulting (base) smoother has small variance but substantial bias. We then iteratively correct the biased initial estimator by an estimate of its bias, obtained by smoothing the residuals. In univariate settings, we relate our procedure to $L_2$-Boosting. Rules for selecting the optimal number of iterations are also proposed and, based on empirical evidence, we propose one stopping rule. In the regression framework, when the unknown regression function $m$ belongs to the Sobolev space $\mathcal{H}^{(\nu)}$ of order $\nu$, we show that using a thin plate spline base smoother with the proposed stopping rule leads to an estimate $\hat m$ that converges to the unknown function $m$. Moreover, our procedure is adaptive with respect to the unknown order $\nu$ and converges at the minimax rate. We apply our method to both simulated and real data and show that it compares favourably with the existing procedures, with improvements in mean squared error of up to 30%. An R package is available.
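The core loop of the procedure is simple: an over-smoothed linear smoother with smoothing matrix $S$ gives a low-variance but biased base fit $\hat m_0 = S y$, and each iteration adds back a bias estimate obtained by smoothing the current residuals, $\hat m_k = \hat m_{k-1} + S(y - \hat m_{k-1})$, or equivalently $\hat m_k = (I - (I - S)^{k+1}) y$. The following is a minimal Python sketch of that iteration using a Gaussian kernel smoother; it is not the speaker's R package, and the function names, the bandwidth, and the iteration count are illustrative assumptions.

    import numpy as np

    def kernel_smoother_matrix(X, bandwidth):
        # Gaussian (Nadaraya-Watson) smoothing matrix; a deliberately
        # large bandwidth gives the over-smoothed, low-variance base
        # smoother the procedure starts from.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-d2 / (2.0 * bandwidth ** 2))
        return W / W.sum(axis=1, keepdims=True)   # rows sum to one

    def iterative_bias_reduction(X, y, bandwidth=2.0, n_iter=30):
        # Base fit: substantial bias, small variance.
        S = kernel_smoother_matrix(X, bandwidth)
        m = S @ y
        fits = [m]
        for _ in range(n_iter):
            bias_hat = S @ (y - m)   # smoothing the residuals estimates the bias
            m = m + bias_hat         # bias-corrected update
            fits.append(m)
        return fits                  # a stopping rule selects one of these

    # Illustrative use on synthetic bivariate data:
    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(200, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(200)
    fits = iterative_bias_reduction(X, y)

In this scheme the number of iterations effectively replaces the bandwidth as the smoothing parameter: too few iterations leave bias, too many reintroduce variance, which is why the stopping rule discussed in the talk matters.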

Host: Garrett Kenyon