Abstract
We consider model selection via the 𝓁0-penalty for high-dimensional sparse quantile regression models. Because the penalties take similar forms, this procedure is essentially equivalent to model selection via an information criterion. We treat linear models, additive models, and varying-coefficient models in a unified way and rigorously establish model selection consistency when the size of the relevant index set diverges to infinity. This setting is challenging to handle, and the theoretical novelty of our results matters because such information criteria are widely used in practice. We also consider two different setups and propose corresponding tuning parameters for the 𝓁0-penalty. The talk begins with a brief review of best subset selection and quantile regression.
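To make the selection procedure concrete, the following is a minimal illustrative sketch (not the authors' implementation): exhaustive best subset selection for linear quantile regression, where each candidate index set S is scored by the check loss plus a BIC-type 𝓁0-penalty proportional to |S|. The specific penalty constant log(n)/(2n), the Powell optimizer, and the synthetic data are all assumptions made for the demonstration.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check loss: sum of u * (tau - 1{u < 0})."""
    return np.sum(u * (tau - (u < 0)))

def fit_quantile(X, y, tau):
    """Fit linear quantile regression by directly minimising the check
    loss (derivative-free Powell method; adequate for tiny problems)."""
    def objective(beta):
        return check_loss(y - X @ beta, tau)
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares warm start
    res = minimize(objective, beta0, method="Powell")
    return res.x, res.fun

def best_subset_quantile(X, y, tau=0.5, max_size=None):
    """Exhaustive l0-penalised selection: minimise the criterion
    log(average check loss) + |S| * log(n) / (2n) over all candidate
    index sets S (an illustrative BIC-type penalty)."""
    n, p = X.shape
    max_size = max_size or p
    best = (np.inf, ())
    for k in range(1, max_size + 1):
        for S in itertools.combinations(range(p), k):
            _, loss = fit_quantile(X[:, S], y, tau)
            ic = np.log(loss / n) + k * np.log(n) / (2 * n)
            best = min(best, (ic, S))
    return best[1]

# Synthetic example: sparse truth with relevant index set {0, 2}.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.3, size=n)
print(best_subset_quantile(X, y))  # should recover the true index set {0, 2}
```

The exhaustive search is only feasible for small p; the point of the sketch is the criterion itself, whose 𝓁0-penalty term grows with |S| and so trades fit against model size exactly as an information criterion does.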