In this paper we consider the problem of grouped variable selection in high-dimensional regression using $\ell_1\text{-}\ell_q$ regularization ($1 \leq q \leq \infty$), which can be viewed as a natural generalization of $\ell_1\text{-}\ell_2$ regularization (the group Lasso). The key condition is that the dimensionality $p_n$ can increase much faster than the sample size $n$, i.e. $p_n \gg n$ (in our case $p_n$ is the number of groups), but the number of relevant groups is small. The main conclusion is that many good properties of $\ell_1$ regularization (the Lasso) naturally carry over to the $\ell_1\text{-}\ell_q$ cases ($1 \leq q \leq \infty$), even if the number of variables within each group also increases with the sample size. With a fixed design, we show that the whole family of estimators is both estimation consistent and variable selection consistent under different conditions. We also establish a persistence result with a random design under a much weaker condition. These results provide a unified treatment for the whole family of estimators, ranging from $q=1$ (the Lasso) to $q=\infty$ (iCAP), with $q=2$ (the group Lasso) as a special case. When no group structure is available, all the analysis reduces to existing results for the Lasso estimator ($q=1$).
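For concreteness, the $\ell_1\text{-}\ell_q$ estimator can be written as a penalized least squares problem of the following standard form; the exact normalization, the group notation $G_g$, and the tuning parameter $\lambda_n$ are introduced here for illustration and may differ from the paper's precise formulation:

$$
\hat{\beta} \;=\; \arg\min_{\beta} \; \frac{1}{2n}\,\|Y - X\beta\|_2^2 \;+\; \lambda_n \sum_{g=1}^{p_n} \|\beta_{G_g}\|_q,
$$

where $\beta_{G_g}$ denotes the subvector of coefficients belonging to group $g$. Setting $q=1$ recovers the ordinary Lasso penalty $\lambda_n \|\beta\|_1$, $q=2$ gives the group Lasso, and $q=\infty$ penalizes the maximum absolute coefficient within each group (iCAP). When every group contains a single variable, the penalty reduces to $\lambda_n \|\beta\|_1$ for all $q$, which is why the analysis collapses to the Lasso case in the absence of group structure.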