Department of Statistics
Dietrich College of Humanities and Social Sciences

On the ℓ1-ℓq Regularized Regression

Publication Date

February 2008

Publication Type

Tech Report

Author(s)

Han Liu and Jian Zhang

Abstract

In this paper we consider the problem of grouped variable selection in high-dimensional
regression using ℓ1-ℓq regularization (1 ≤ q ≤ ∞), which can be viewed as a natural
generalization of ℓ1-ℓ2 regularization (the group Lasso). The key condition is that
the dimensionality pn can increase much faster than the sample size n, i.e. pn ≫ n (in
our case pn is the number of groups), while the number of relevant groups remains small. The
main conclusion is that many good properties of ℓ1 regularization (the Lasso) carry over
naturally to the ℓ1-ℓq case (1 ≤ q ≤ ∞), even if the number of variables within each
group also increases with the sample size. With fixed design, we show that the whole
family of estimators is both estimation consistent and variable selection consistent,
under different conditions. We also establish the persistence result with random design
under a much weaker condition. These results provide a unified treatment for the whole
family of estimators ranging from q = 1 (the Lasso) to q = ∞ (iCAP), with q = 2 (the group
Lasso) as a special case. When no group structure is available, the analysis
reduces to existing results for the Lasso estimator (q = 1).
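For concreteness, the ℓ1-ℓq penalty discussed in the abstract is the sum over groups of the within-group ℓq norm of the coefficients. The sketch below is an illustration under that standard definition, not code from the report; the grouping and coefficient values are hypothetical. It shows how the penalty specializes to the Lasso (q = 1), the group Lasso (q = 2), and the iCAP-style (q = ∞) cases.

```python
import numpy as np

def l1_lq_penalty(beta, groups, q):
    """Sum over groups of the l_q norm of the within-group coefficients.

    beta   : 1-D array of regression coefficients
    groups : list of index arrays, one per group (hypothetical grouping)
    q      : 1, 2, ..., or np.inf
    """
    return sum(np.linalg.norm(beta[g], ord=q) for g in groups)

# Hypothetical example: 6 coefficients in 3 groups of 2.
beta = np.array([1.0, -2.0, 0.0, 0.0, 3.0, 4.0])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]

print(l1_lq_penalty(beta, groups, 1))       # q = 1: Lasso penalty, |beta|_1 = 10
print(l1_lq_penalty(beta, groups, 2))       # q = 2: group Lasso penalty, sqrt(5) + 0 + 5
print(l1_lq_penalty(beta, groups, np.inf))  # q = inf: iCAP-style penalty, 2 + 0 + 4
```

Because the within-group norm is not squared, an entire group can be set exactly to zero, which is what makes the penalty a grouped variable selection device rather than a plain shrinkage term.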