
Seyoung Park

Tuesday, September 10, 2013
12:00 AM
438 WH

Interquantile regression with high dimensional covariates

Much work has been done on quantile regression analysis. Ordinary quantile regression estimation is asymptotically consistent when the number of predictors, p, is small, but it can be inconsistent when p grows with the sample size n. This motivates the use of penalization methods for model selection in quantile regression. Belloni and Chernozhukov (2011) consider l1-penalized quantile regression in high-dimensional sparse models and obtain non-asymptotic results. Jiang (2012) introduces a new estimator that estimates several quantiles simultaneously while penalizing inter-quantile differences as well as individual quantile coefficients, but provides no theoretical results for high-dimensional predictors. In this work, we consider joint quantile regression in high-dimensional sparse models, allowing both the number of quantiles, K, and the number of predictors, p, to grow with n. We provide asymptotic and non-asymptotic results on the penalized model selection and estimation procedures in the spirit of Jiang (2012), and study the oracle properties of adaptive penalization. We examine model selection consistency and stability across quantiles, and compare adaptive shrinkage with thresholding in the quantile regression framework.
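
For concreteness, the following is a sketch of the kind of joint penalized objective described above, in the spirit of Jiang (2012); the quantile levels tau_1, ..., tau_K, the check loss rho_tau, and the tuning parameters lambda_1, lambda_2 are generic notation, and the exact penalty weights and adaptive versions studied in the talk may differ:

\[
\min_{\beta^{(1)},\dots,\beta^{(K)}}\;
\sum_{k=1}^{K} \frac{1}{n}\sum_{i=1}^{n} \rho_{\tau_k}\!\left(y_i - x_i^{\top}\beta^{(k)}\right)
\;+\; \lambda_1 \sum_{k=1}^{K} \big\|\beta^{(k)}\big\|_1
\;+\; \lambda_2 \sum_{k=2}^{K} \big\|\beta^{(k)} - \beta^{(k-1)}\big\|_1,
\qquad
\rho_\tau(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big).
\]

Here the first penalty shrinks individual quantile coefficients and the second penalizes inter-quantile differences, encouraging coefficient stability across neighboring quantiles.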