Several high-dimensional learning applications require the parameters to satisfy a “group sparsity” constraint, where clusters of coefficients are required to be simultaneously selected or rejected. The group lasso and its variants are common methods for solving problems of this form. Recently, in the standard sparsity setting, it has been noted that tighter convex relaxations than the ℓ1 norm can be obtained by using other regularizers, leading to better predictive performance. Motivated by these findings, we develop the group k-support norm to achieve group sparsity with better predictive accuracy. We derive an efficient algorithm for solving the resulting optimization problems, and show that our approach outperforms traditional methods for prediction and recovery under group sparsity.
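For context, the tighter relaxation referred to above is the k-support norm of Argyriou, Foygel, and Srebro (2012); the sketch below recalls its standard definition (not stated in the abstract itself) to fix notation. For $w \in \mathbb{R}^d$, it is the tightest convex relaxation of the set $\{w : \|w\|_0 \le k,\ \|w\|_2 \le 1\}$:
\[
\|w\|_k^{\mathrm{sp}}
\;=\; \min\Big\{ \sum_{g \in \mathcal{G}_k} \|v_g\|_2
\;:\; \operatorname{supp}(v_g) \subseteq g,\;
\sum_{g \in \mathcal{G}_k} v_g = w \Big\},
\]
where $\mathcal{G}_k$ denotes the collection of all subsets of $\{1,\dots,d\}$ of cardinality at most $k$. A natural group analogue, consistent with the construction named in the abstract, replaces the coordinate subsets in $\mathcal{G}_k$ by unions of at most $k$ predefined coefficient groups, so that selection acts on whole groups rather than individual coordinates.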